1
Kerth JL, Hagemeister M, Bischops AC, Reinhart L, Dukart J, Heinrichs B, Eickhoff SB, Meissner T. Artificial intelligence in the care of children and adolescents with chronic diseases: a systematic review. Eur J Pediatr 2024; 184:83. [PMID: 39672974] [PMCID: PMC11645428] [DOI: 10.1007/s00431-024-05846-3]
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) has shown potential for various applications in the medical field, particularly for diagnosing and managing chronic diseases among children and adolescents. This systematic review aims to comprehensively analyze and synthesize research on the use of AI for monitoring, guiding, and assisting pediatric patients with chronic diseases. Five major electronic databases were searched (Medline, Scopus, PsycINFO, ACM, Web of Science), along with manual searches of gray literature, personal archives, and reference lists of relevant papers. All original studies, as well as conference abstracts and proceedings, focusing on AI applications for pediatric chronic disease care were included. Thirty-one studies met the inclusion criteria. We extracted the AI method used, study design, population, intervention, and main results. Two researchers independently extracted data and resolved discrepancies through discussion. AI applications are diverse, encompassing, e.g., disease classification, outcome prediction, or decision support. AI generally performed well, though most models were tested on retrospective data. AI-based tools have shown promise in mental health analysis, e.g., by using speech sampling or social media data to predict therapy outcomes for various chronic conditions.
CONCLUSIONS: While AI holds potential in pediatric chronic disease care, most reviewed studies are small-scale research projects. Prospective clinical implementations are needed to validate its effectiveness in real-world scenarios. Ethical considerations, cultural influences, and stakeholder attitudes should be integrated into future research.
WHAT IS KNOWN: • Artificial intelligence (AI) will play a more dominant role in medicine and healthcare in the future, and many applications are already being developed.
WHAT IS NEW: • Our review provides an overview of how AI-driven systems might be able to support children and adolescents with chronic illnesses. • While many applications are being researched, few have been tested on real-world, prospective, clinical data.
Affiliation(s)
- Janna-Lina Kerth
- Dept. of General Pediatrics, Pediatric Cardiology and Neonatology, Medical Faculty, University Children's Hospital Düsseldorf, Heinrich Heine University, Moorenstr. 5, 40227, Düsseldorf, Germany
- Maurus Hagemeister
- Dept. of General Pediatrics, Pediatric Cardiology and Neonatology, Medical Faculty, University Children's Hospital Düsseldorf, Heinrich Heine University, Moorenstr. 5, 40227, Düsseldorf, Germany
- Anne C Bischops
- Dept. of General Pediatrics, Pediatric Cardiology and Neonatology, Medical Faculty, University Children's Hospital Düsseldorf, Heinrich Heine University, Moorenstr. 5, 40227, Düsseldorf, Germany
- Lisa Reinhart
- Dept. of General Pediatrics, Pediatric Cardiology and Neonatology, Medical Faculty, University Children's Hospital Düsseldorf, Heinrich Heine University, Moorenstr. 5, 40227, Düsseldorf, Germany
- Juergen Dukart
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Institute of Systems Neuroscience, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Bert Heinrichs
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Institute for Science and Ethics, University Bonn, Bonn, Germany
- Simon B Eickhoff
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Institute of Systems Neuroscience, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Thomas Meissner
- Dept. of General Pediatrics, Pediatric Cardiology and Neonatology, Medical Faculty, University Children's Hospital Düsseldorf, Heinrich Heine University, Moorenstr. 5, 40227, Düsseldorf, Germany
2
Ketola JHJ, Inkinen SI, Mäkelä T, Syväranta S, Peltonen J, Kaasalainen T, Kortesniemi M. Testing process for artificial intelligence applications in radiology practice. Phys Med 2024; 128:104842. [PMID: 39522363] [DOI: 10.1016/j.ejmp.2024.104842]
Abstract
Artificial intelligence (AI) applications are becoming increasingly common in radiology. However, ensuring reliable operation and the expected clinical benefits remains a challenge. A systematic testing process aims to facilitate clinical deployment by confirming the software's applicability to local patient populations and practices, its adherence to regulatory and safety requirements, and its compatibility with existing systems. In this work, we present our testing process, developed on the basis of practical experience. First, a survey and pre-evaluation are conducted: information requests are sent for potential products, and their specifications are evaluated against predetermined requirements. In the second phase, data collection, testing, and analysis are conducted. In the retrospective stage, the application undergoes testing with a pre-selected dataset and is evaluated against specified key performance indicators (KPIs). In the prospective stage, the application is integrated into the clinical workflow and evaluated with additional process-specific KPIs. In the final phase, the results are evaluated in terms of safety, effectiveness, productivity, and integration. The final report summarises the results and includes a procurement/deployment or rejection recommendation. The process allows termination at any phase if the application fails to meet essential criteria. In addition, we present practical remarks from our experiences in AI testing and provide forms to guide and document the testing process. The established AI testing process facilitates systematic evaluation and documentation of new technologies, ensuring that each application undergoes equal and sufficient validation. Testing with local data is crucial for identifying biases and pitfalls of AI algorithms to improve quality and safety, ultimately benefiting patient care.
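The phase-gated logic the abstract describes (pass each stage's KPIs or terminate with a rejection recommendation) can be sketched in a few lines. This is an illustrative sketch only, not the authors' actual forms or thresholds; the KPI names, threshold values, and the `run_testing_process` helper are hypothetical.

```python
# Phase-gated AI testing: each phase defines KPIs with minimum
# acceptable values; failing any KPI terminates the process.

def evaluate_phase(kpis, measured):
    """Return (passed, failed_kpis) for one testing phase.

    kpis:     {kpi_name: minimum_acceptable_value}
    measured: {kpi_name: observed_value}
    """
    failures = [k for k, threshold in kpis.items()
                if measured.get(k, float("-inf")) < threshold]
    return (len(failures) == 0, failures)

def run_testing_process(phases, results):
    """Run phases in order; stop at the first phase that fails."""
    for name, kpis in phases:
        passed, failures = evaluate_phase(kpis, results.get(name, {}))
        if not passed:
            return {"recommendation": "reject",
                    "stopped_at": name,
                    "failed_kpis": failures}
    return {"recommendation": "procure/deploy",
            "stopped_at": None,
            "failed_kpis": []}

# Hypothetical KPI thresholds for the retrospective and prospective stages.
phases = [
    ("retrospective", {"auc": 0.85, "sensitivity": 0.90}),
    ("prospective",   {"report_turnaround_gain": 0.10}),
]
results = {
    "retrospective": {"auc": 0.91, "sensitivity": 0.93},
    "prospective":   {"report_turnaround_gain": 0.04},  # misses the KPI
}
outcome = run_testing_process(phases, results)
print(outcome["recommendation"], outcome["stopped_at"])
```

The early-exit structure mirrors the paper's point that the process may be terminated at any phase once an application fails essential criteria, so later (more expensive, workflow-integrated) testing is never wasted on an application that already failed retrospectively.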
Affiliation(s)
- Juuso H J Ketola
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Satu I Inkinen
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Teemu Mäkelä
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- Suvi Syväranta
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Juha Peltonen
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Touko Kaasalainen
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Mika Kortesniemi
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
3
Wongveerasin P, Tongdee T, Saiviroonporn P. Deep learning for tubes and lines detection in critical illness: Generalizability and comparison with residents. Eur J Radiol Open 2024; 13:100593. [PMID: 39175597] [PMCID: PMC11338948] [DOI: 10.1016/j.ejro.2024.100593]
Abstract
Background: Artificial intelligence (AI) has proven useful for the assessment of tubes and lines on chest radiographs of general patients. However, validation on intensive care unit (ICU) patients remains imperative. Methods: This retrospective case-control study evaluated the performance of deep learning (DL) models for tubes and lines classification on both an external public dataset and a local dataset comprising 303 films randomly sampled from the ICU database. The endotracheal tubes (ETTs), central venous catheters (CVCs), and nasogastric tubes (NGTs) were classified into "Normal," "Abnormal," or "Borderline" positions by DL models with and without rule-based modification. Their performance was evaluated using an experienced radiologist as the standard reference. Results: The algorithm showed decreased performance on the local ICU dataset compared to the external dataset, with the area under the receiver operating characteristic curve (AUC) falling from 0.967 (95% CI 0.965-0.973) to 0.70 (95% CI 0.68-0.77). Significant improvement in the ETT classification task was observed after the model was modified to use the spatial relationship between line tips and reference anatomy, with the AUC increasing from 0.71 (95% CI 0.70-0.75) to 0.86 (95% CI 0.83-0.94). Conclusions: The externally trained model exhibited limited generalizability on the local ICU dataset. Therefore, evaluating the performance of externally trained AI before integrating it into critical care routine is crucial. A rule-based algorithm may be used in combination with DL to improve results.
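The study's headline metric, an AUC with a 95% confidence interval, can be reproduced conceptually in plain Python: the AUC equals the Mann-Whitney pairwise statistic (the probability a random positive case outscores a random negative case), and a percentile bootstrap gives the interval. The data, function names, and bootstrap settings below are illustrative and unrelated to the study's actual dataset.

```python
import random

def auc(labels, scores):
    """AUC via pairwise comparison (Mann-Whitney); ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC, resampling cases with replacement."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < len(ys):  # a resample must contain both classes
            stats.append(auc(ys, [scores[i] for i in idx]))
    stats.sort()
    return (stats[int(alpha / 2 * n_boot)],
            stats[int((1 - alpha / 2) * n_boot) - 1])

# Toy data: 1 = malpositioned tube/line, 0 = normal; scores are model outputs.
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.35, 0.4, 0.2, 0.1, 0.7, 0.6]
point = auc(labels, scores)
lo, hi = bootstrap_auc_ci(labels, scores, n_boot=500)
print(f"AUC {point:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

With only eight toy cases the bootstrap interval is very wide, which is itself the point the abstract makes implicitly: comparing AUCs across datasets is only meaningful with the accompanying confidence intervals.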
Affiliation(s)
- Pootipong Wongveerasin
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Trongtum Tongdee
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Pairash Saiviroonporn
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
4
Vos S, Hebeda K, Milota M, Sand M, Drogt J, Grünberg K, Jongsma K. Making Pathologists Ready for the New Artificial Intelligence Era: Changes in Required Competencies. Mod Pathol 2024; 38:100657. [PMID: 39542175] [DOI: 10.1016/j.modpat.2024.100657]
Abstract
In recent years, there has been increasing interest in developing and using artificial intelligence (AI) models in pathology. Although pathologists generally have a positive attitude toward AI, they report a lack of knowledge and skills regarding how to use it in practice. Furthermore, it remains unclear what skills pathologists would require to use AI adequately and responsibly. Adequate training of (future) pathologists is, however, essential for successful AI use in pathology. In this paper, we assess which entrustable professional activities (EPAs) and associated competencies pathologists should acquire in order to use AI in their daily practice. We make use of the available academic literature, including literature in radiology, another image-based discipline that is currently more advanced in terms of AI development and implementation. Although microscopy evaluation and reporting could be transferable to AI in the future, most of the current pathologist EPAs and competencies will likely remain relevant when using AI techniques and interpreting and communicating results for individual patient cases. In addition, new competencies related to technology evaluation and implementation will likely be necessary, along with knowing one's own strengths and limitations in human-AI interactions. Because current EPAs do not sufficiently address the need to train pathologists in expertise related to technology evaluation and implementation, we propose a new EPA, "using AI in diagnostic pathology practice," to enable pathology training programs to make pathologists fit for the new AI era, and outline its associated competencies.
Affiliation(s)
- Shoko Vos
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Konnie Hebeda
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Megan Milota
- Department of Bioethics and Health Humanities, University Medical Center Utrecht, Utrecht, the Netherlands
- Martin Sand
- Faculty of Technology, Technical University Delft, Delft, the Netherlands
- Jojanneke Drogt
- Department of Bioethics and Health Humanities, University Medical Center Utrecht, Utrecht, the Netherlands
- Katrien Grünberg
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Karin Jongsma
- Department of Bioethics and Health Humanities, University Medical Center Utrecht, Utrecht, the Netherlands
5
Cè M, Chiriac MD, Cozzi A, Macrì L, Rabaiotti FL, Irmici G, Fazzini D, Carrafiello G, Cellina M. Decoding Radiomics: A Step-by-Step Guide to Machine Learning Workflow in Hand-Crafted and Deep Learning Radiomics Studies. Diagnostics (Basel) 2024; 14:2473. [PMID: 39594139] [PMCID: PMC11593328] [DOI: 10.3390/diagnostics14222473]
Abstract
Although radiomics research has experienced rapid growth in recent years, with numerous studies dedicated to the automated extraction of diagnostic and prognostic information from various imaging modalities, such as CT, PET, and MRI, only a small fraction of these findings has successfully transitioned into clinical practice. This gap is primarily due to the significant methodological challenges involved in radiomics research, which emphasize the need for a rigorous evaluation of study quality. While many technical aspects may lie outside the expertise of most radiologists, having a foundational knowledge is essential for evaluating the quality of radiomics workflows and contributing, together with data scientists, to the development of models with a real-world clinical impact. This review is designed for the new generation of radiologists, who may not have specialized training in machine learning or radiomics but will inevitably play a role in this evolving field. The paper has two primary objectives: first, to provide a clear, systematic guide to the radiomics study pipeline, including study design, image preprocessing, feature selection, model training and validation, and performance evaluation; second, given the critical importance of evaluating the robustness of radiomics studies, to offer a step-by-step guide to the application of the METhodological RadiomICs Score (METRICS, 2024), a newly proposed tool for assessing the quality of radiomics studies. This roadmap aims to support researchers and reviewers alike, regardless of their machine learning expertise, in utilizing this tool for effective study evaluation.
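Checklist tools of this kind typically reduce to a weighted score: each satisfied item contributes its weight, and items rated "not applicable" drop out of the denominator, yielding a percentage. The sketch below illustrates only that generic scoring pattern; the item names and weights are hypothetical and are not those of the actual METRICS tool.

```python
# Hypothetical weighted quality-checklist scoring (illustrative only).
CHECKLIST_ITEMS = {
    "eligibility_criteria_defined": 2.0,
    "imaging_protocol_described":   3.0,
    "feature_robustness_assessed":  4.0,
    "separate_test_set_used":       5.0,
    "model_performance_with_ci":    3.0,
}

def quality_score(answers):
    """Percentage of the achievable weight that is satisfied.

    answers: {item: True/False/None}; None marks an item rated
    'not applicable', which is excluded from the denominator.
    """
    applicable = {k: w for k, w in CHECKLIST_ITEMS.items()
                  if answers.get(k) is not None}
    total = sum(applicable.values())
    earned = sum(w for k, w in applicable.items() if answers[k])
    return 100.0 * earned / total

score = quality_score({
    "eligibility_criteria_defined": True,
    "imaging_protocol_described":   True,
    "feature_robustness_assessed":  False,
    "separate_test_set_used":       True,
    "model_performance_with_ci":    None,  # not applicable
})
print(round(score, 1))
```

Weighting matters here: in this toy example, skipping the heavily weighted robustness item costs far more of the score than a lightly weighted reporting item would, which is how such tools encode that some methodological safeguards matter more than others.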
Affiliation(s)
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Andrea Cozzi
- Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale (EOC), Via Tesserete 46, 6900 Lugano, Switzerland
- Laura Macrì
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Francesca Lucrezia Rabaiotti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giovanni Irmici
- Breast Imaging Department, Fondazione IRCCS Istituto Nazionale dei Tumori, Via Giacomo Venezian 1, 20133 Milan, Italy
- Deborah Fazzini
- Radiology Department, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Gianpaolo Carrafiello
- Radiology Department, Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy
- Department of Oncology and Hematology-Oncology, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Michaela Cellina
- Radiology Department, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
6
Hesjedal MB, Lysø EH, Solbjør M, Skolbekken JA. Valuing good health care: How medical doctors, scientists and patients relate ethical challenges with artificial intelligence decision-making support tools in prostate cancer diagnostics to good health care. Sociol Health Illn 2024; 46:1808-1827. [PMID: 39037701] [DOI: 10.1111/1467-9566.13818]
Abstract
Artificial intelligence (AI) is increasingly used in health care to improve diagnostics and treatment. Decision-making tools intended to help professionals in diagnostic processes are developed in a variety of medical fields. Despite the imagined benefits, AI in health care is contested. Scholars point to ethical and social issues related to the development, implementation, and use of AI in diagnostics. Here, we investigate how three relevant groups construct ethical challenges with AI decision-making tools in prostate cancer (PCa) diagnostics: scientists developing AI decision support tools for interpreting MRI scans for PCa, medical doctors working with PCa and PCa patients. This qualitative study is based on participant observation and interviews with the abovementioned actors. The analysis focuses on how each group draws on their understanding of 'good health care' when discussing ethical challenges, and how they mobilise different registers of valuing in this process. Our theoretical approach is inspired by scholarship on evaluation and justification. We demonstrate how ethical challenges in this area are conceptualised, weighted and negotiated among these participants as processes of valuing good health care and compare their perspectives.
Affiliation(s)
- Maria Bårdsen Hesjedal
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
- Emilie Hybertsen Lysø
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
- Marit Solbjør
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
- John-Arne Skolbekken
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
7
Burti S, Banzato T, Coghlan S, Wodzinski M, Bendazzoli M, Zotti A. Artificial intelligence in veterinary diagnostic imaging: Perspectives and limitations. Res Vet Sci 2024; 175:105317. [PMID: 38843690] [DOI: 10.1016/j.rvsc.2024.105317]
Abstract
The field of veterinary diagnostic imaging is undergoing significant transformation with the integration of artificial intelligence (AI) tools. This manuscript provides an overview of the current state and future prospects of AI in veterinary diagnostic imaging. The manuscript delves into various applications of AI across different imaging modalities, such as radiology, ultrasound, computed tomography, and magnetic resonance imaging. Examples of AI applications in each modality are provided, ranging from orthopaedics to internal medicine, cardiology, and more. Notable studies are discussed, demonstrating AI's potential for improved accuracy in detecting and classifying various abnormalities. The ethical considerations of using AI in veterinary diagnostics are also explored, highlighting the need for transparent AI development, accurate training data, awareness of the limitations of AI models, and the importance of maintaining human expertise in the decision-making process. The manuscript underscores the significance of AI as a decision support tool rather than a replacement for human judgement. In conclusion, this comprehensive manuscript offers an assessment of the current landscape and future potential of AI in veterinary diagnostic imaging. It provides insights into the benefits and challenges of integrating AI into clinical practice while emphasizing the critical role of ethics and human expertise in ensuring the wellbeing of veterinary patients.
Affiliation(s)
- Silvia Burti
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
- Tommaso Banzato
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
- Simon Coghlan
- School of Computing and Information Systems, Centre for AI and Digital Ethics, Australian Research Council Centre of Excellence for Automated Decision-Making and Society, University of Melbourne, 3052 Melbourne, Australia
- Marek Wodzinski
- Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Krakow, 30059 Kraków, Poland; Information Systems Institute, University of Applied Sciences - Western Switzerland (HES-SO Valais), 3960 Sierre, Switzerland
- Margherita Bendazzoli
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
- Alessandro Zotti
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
8
Vandemeulebroucke T. The ethics of artificial intelligence systems in healthcare and medicine: from a local to a global perspective, and back. Pflugers Arch 2024. [PMID: 38969841] [DOI: 10.1007/s00424-024-02984-3]
Abstract
Artificial intelligence systems (ai-systems) (e.g. machine learning, generative artificial intelligence) in healthcare and medicine have been received with hopes of better care quality, more efficiency, lower care costs, etc. Simultaneously, these systems have been met with reservations regarding their impacts on stakeholders' privacy, on changing power dynamics, on systemic biases, etc. Fortunately, healthcare and medicine have been guided by a multitude of ethical principles, frameworks, and approaches, which also guide the use of ai-systems in healthcare and medicine, in one form or another. Nevertheless, in this article, I argue that most of these approaches are inspired by a local isolationist view of ai-systems, here exemplified by the principlist approach. Despite positive contributions to laying out the ethical landscape of ai-systems in healthcare and medicine, such ethics approaches are too focused on a specific local healthcare and medical setting, be it a particular care relationship, a particular care organisation, or a particular society or region. By doing so, they lose sight of the global impacts ai-systems have, especially environmental impacts and related social impacts, such as increased health risks. To meet this gap, this article presents a global approach to the ethics of ai-systems in healthcare and medicine which consists of five levels of ethical impacts and analysis: individual-relational, organisational, societal, global, and historical. As such, this global approach incorporates the local isolationist view by integrating it into a wider landscape of ethical consideration, so as to ensure ai-systems meet the needs of everyone everywhere.
Affiliation(s)
- Tijs Vandemeulebroucke
- Bonn Sustainable AI Lab, Institut für Wissenschaft und Ethik, Universität Bonn-University of Bonn, Bonner Talweg 57, 53113, Bonn, Germany
9
Chen Z, Chen C, Yang G, He X, Chi X, Zeng Z, Chen X. Research integrity in the era of artificial intelligence: Challenges and responses. Medicine (Baltimore) 2024; 103:e38811. [PMID: 38968491] [PMCID: PMC11224801] [DOI: 10.1097/md.0000000000038811]
Abstract
The application of artificial intelligence (AI) technologies in scientific research has significantly enhanced efficiency and accuracy but also introduced new forms of academic misconduct, such as data fabrication and text plagiarism using AI algorithms. These practices jeopardize research integrity and can mislead scientific directions. This study addresses these challenges, underscoring the need for the academic community to strengthen ethical norms, enhance researcher qualifications, and establish rigorous review mechanisms. To ensure responsible and transparent research processes, we recommend the following key actions:
- Development and enforcement of comprehensive AI research integrity guidelines that include clear protocols for AI use in data analysis and publication, ensuring transparency and accountability in AI-assisted research.
- Implementation of mandatory AI ethics and integrity training for researchers, aimed at fostering an in-depth understanding of potential AI misuses and promoting ethical research practices.
- Establishment of international collaboration frameworks to facilitate the exchange of best practices and the development of unified ethical standards for AI in research.
Protecting research integrity is paramount for maintaining public trust in science, making these recommendations urgent for the scientific community's consideration and action.
Affiliation(s)
- Ziyu Chen
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
- Changye Chen
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
- Guozhao Yang
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
- Xiangpeng He
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
- Xiaoxia Chi
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
- Zhuoying Zeng
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
- Chemical Analysis & Physical Testing Institute, Shenzhen Center for Disease Control and Prevention, Shenzhen, China
- Xuhong Chen
- The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen, China
10
Aden D, Zaheer S, Khan S. Possible benefits, challenges, pitfalls, and future perspective of using ChatGPT in pathology. Rev Esp Patol 2024; 57:198-210. [PMID: 38971620] [DOI: 10.1016/j.patol.2024.04.003]
Abstract
The much-hyped artificial intelligence (AI) model ChatGPT, developed by OpenAI, can greatly benefit physicians, especially pathologists, by saving time that can then be devoted to more significant work. Generative AI is a special class of AI model that uses patterns and structures learned from existing data to create new data. Utilizing ChatGPT in pathology offers a multitude of benefits, encompassing the summarization of patient records, promising prospects in digital pathology, and valuable contributions to education and research in this field. However, certain roadblocks must be addressed, such as integrating ChatGPT with image analysis, which could revolutionize the field of pathology by increasing diagnostic accuracy and precision. The challenges of using ChatGPT encompass biases from its training data, the need for ample input data, potential risks related to bias and transparency, and the potential adverse outcomes arising from inaccurate content generation. Generating meaningful insights from textual information will need to be paired with the efficient processing of different types of image data, such as medical images and pathology slides. Due consideration should also be given to ethical and legal issues, including bias.
Affiliation(s)
- Durre Aden
- Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India
- Sufian Zaheer
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
- Sabina Khan
- Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India
11
Temperley HC, O'Sullivan NJ, Mac Curtain BM, Corr A, Meaney JF, Kelly ME, Brennan I. Current applications and future potential of ChatGPT in radiology: A systematic review. J Med Imaging Radiat Oncol 2024; 68:257-264. [PMID: 38243605] [DOI: 10.1111/1754-9485.13621]
Abstract
This study aimed to comprehensively evaluate the current utilization and future potential of ChatGPT, an AI-based chat model, in the field of radiology. The primary focus is on its role in enhancing decision-making processes, optimizing workflow efficiency, and fostering interdisciplinary collaboration and teaching within healthcare. A systematic search was conducted in the PubMed, EMBASE and Web of Science databases. Key aspects, such as its impact on complex decision-making, workflow enhancement and collaboration, were assessed. Limitations and challenges associated with ChatGPT implementation were also examined. Overall, six studies met the inclusion criteria and were included in our analysis. All studies were prospective in nature. A total of 551 ChatGPT (versions 3.0 to 4.0) assessment events were included in our analysis. In the generation of academic papers, ChatGPT was found to output data inaccuracies 80% of the time. When ChatGPT was asked questions regarding common interventional radiology procedures, its answers contained entirely incorrect information 45% of the time. ChatGPT was seen to answer US board-style questions better when lower-order thinking was required (P = 0.002). Improvements were seen between ChatGPT 3.5 and 4.0 with regard to imaging questions, with accuracy rates of 61% versus 85% (P = 0.009). ChatGPT was observed to have an average translational ability score of 4.27/5 on the Likert scale regarding CT and MRI findings. ChatGPT demonstrates substantial potential to augment decision-making and optimize workflow. While ChatGPT's promise is evident, thorough evaluation and validation are imperative before widespread adoption in the field of radiology.
Affiliation(s)
- Hugo C Temperley: Department of Radiology, St. James's Hospital, Dublin, Ireland; Department of Surgery, St. James's Hospital, Dublin, Ireland
- Alison Corr: Department of Radiology, St. James's Hospital, Dublin, Ireland
- James F Meaney: Department of Radiology, St. James's Hospital, Dublin, Ireland
- Michael E Kelly: Department of Surgery, St. James's Hospital, Dublin, Ireland
- Ian Brennan: Department of Radiology, St. James's Hospital, Dublin, Ireland
12. Ciet P, Eade C, Ho ML, Laborie LB, Mahomed N, Naidoo J, Pace E, Segal B, Toso S, Tschauner S, Vamyanmane DK, Wagner MW, Shelmerdine SC. The unintended consequences of artificial intelligence in paediatric radiology. Pediatr Radiol 2024; 54:585-593. PMID: 37665368. DOI: 10.1007/s00247-023-05746-y.
Abstract
Over the past decade, there has been a dramatic rise in interest in the application of artificial intelligence (AI) in radiology. Originally only 'narrow' AI tasks were possible; however, with the increasing availability of data, combined with easy access to powerful computer processing, we are becoming more able to generate complex and nuanced prediction models and elaborate solutions for healthcare. Nevertheless, these AI models are not without their failings, and sometimes the intended use of these solutions may not lead to predictable impacts for patients, society or those working within the healthcare profession. In this article, we provide an overview of the latest opinions regarding AI ethics, bias, limitations, challenges and considerations that we should all contemplate in this exciting and expanding field, with special attention to how this applies to the unique aspects of a paediatric population. By embracing AI technology and fostering a multidisciplinary approach, it is hoped that we can harness the power AI brings whilst minimising harm and ensuring a beneficial impact on radiology practice.
Affiliation(s)
- Pierluigi Ciet: Department of Radiology and Nuclear Medicine, Erasmus MC - Sophia's Children's Hospital, Rotterdam, The Netherlands; Department of Medical Sciences, University of Cagliari, Cagliari, Italy
- Mai-Lan Ho: University of Missouri, Columbia, MO, USA
- Lene Bjerke Laborie: Department of Radiology, Section for Paediatrics, Haukeland University Hospital, Bergen, Norway; Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Nasreen Mahomed: Department of Radiology, University of Witwatersrand, Johannesburg, South Africa
- Jaishree Naidoo: Paediatric Diagnostic Imaging, Dr J Naidoo Inc., Johannesburg, South Africa; Envisionit Deep AI Ltd, Coveham House, Downside Bridge Road, Cobham, UK
- Erika Pace: Department of Diagnostic Radiology, The Royal Marsden NHS Foundation Trust, London, UK
- Bradley Segal: Department of Radiology, University of Witwatersrand, Johannesburg, South Africa
- Seema Toso: Pediatric Radiology, Children's Hospital, University Hospitals of Geneva, Geneva, Switzerland
- Sebastian Tschauner: Division of Paediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Dhananjaya K Vamyanmane: Department of Pediatric Radiology, Indira Gandhi Institute of Child Health, Bangalore, India
- Matthias W Wagner: Department of Diagnostic Imaging, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, ON, Canada; Department of Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Susan C Shelmerdine: Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, WC1H 3JH, UK; UCL Great Ormond Street Institute of Child Health, London, UK; NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK; Department of Clinical Radiology, St George's Hospital, London, UK
13. Mundinger A, Mundinger C. Artificial Intelligence in Senology - Where Do We Stand and What Are the Future Horizons? Eur J Breast Health 2024; 20:73-80. PMID: 38571686. PMCID: PMC10985572. DOI: 10.4274/ejbh.galenos.2024.2023-12-13.
Abstract
Artificial intelligence (AI) is defined as the simulation of human intelligence by a digital computer or robotic system and has become a major topic of current conversations. A subcategory of AI is deep learning, which is based on complex artificial neural networks that mimic the principles of human synaptic plasticity and layered brain architectures, and uses large-scale data processing. AI-based image analysis in breast screening programmes has shown non-inferior sensitivity, reduces workload by up to 70% by pre-selecting normal cases, and reduces recall by 25% compared to human double reading. Natural language programs such as ChatGPT (OpenAI) achieve 80% and higher accuracy in advising and decision making compared to the gold standard of human judgement. This does not yet meet the necessary requirements for medical products in terms of patient safety. The main advantage of AI is that it can perform routine but complex tasks much faster and with fewer errors than humans. The main concerns in healthcare are the stability of AI systems, cybersecurity, liability and transparency. More widespread use of AI could affect human jobs in healthcare and increase technological dependency. AI in senology is just beginning to evolve towards better forms with improved properties. Responsible training of AI systems with meaningful raw data, and scientific studies to analyse their performance in the real world, are necessary to keep AI on track. To mitigate significant risks, it will be necessary to balance active promotion and development of quality-assured AI systems with careful regulation. AI regulation has only recently been included in transnational legal frameworks: the European Union's AI Act, published in December 2023, was the first comprehensive legal framework. AI systems deemed to pose a clear threat to people's fundamental rights will be banned as unacceptable.
Using AI and combining it with human wisdom, empathy and affection will be the method of choice for the further, fruitful development of tomorrow's senology.
Affiliation(s)
- Alexander Mundinger: Breast Imaging and Interventions, Breast Centre Osnabrück, FHH Niels-Stensen-Kliniken, Franziskus-Hospital Harderberg, Georgsmarienhütte, Germany
- Carolin Mundinger: Department of Behavioural Biology, Institute for Neuro- and Behavioural Biology, University of Muenster, Muenster, Germany
14. Boverhof BJ, Redekop WK, Bos D, Starmans MPA, Birch J, Rockall A, Visser JJ. Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice. Insights Imaging 2024; 15:34. PMID: 38315288. PMCID: PMC10844175. DOI: 10.1186/s13244-023-01599-z. Open access.
Abstract
OBJECTIVE To provide a comprehensive framework for value assessment of artificial intelligence (AI) in radiology. METHODS This paper presents the RADAR framework, which has been adapted from Fryback and Thornbury's imaging efficacy framework to facilitate the valuation of radiology AI from conception to local implementation. Local efficacy has been newly introduced to underscore the importance of appraising an AI technology within its local environment. Furthermore, the RADAR framework is illustrated through a myriad of study designs that help assess value. RESULTS RADAR presents a seven-level hierarchy, providing radiologists, researchers, and policymakers with a structured approach to the comprehensive assessment of value in radiology AI. RADAR is designed to be dynamic and meet the different valuation needs throughout the AI's lifecycle. Initial phases like technical and diagnostic efficacy (RADAR-1 and RADAR-2) are assessed pre-clinical deployment via in silico clinical trials and cross-sectional studies. Subsequent stages, spanning from diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), require clinical integration and are explored via randomized controlled trials and cohort studies. Cost-effectiveness efficacy (RADAR-6) takes a societal perspective on financial feasibility, addressed via health-economic evaluations. The final level, RADAR-7, determines how prior valuations translate locally, evaluated through budget impact analysis, multi-criteria decision analyses, and prospective monitoring. CONCLUSION The RADAR framework offers a comprehensive framework for valuing radiology AI. Its layered, hierarchical structure, combined with a focus on local relevance, aligns RADAR seamlessly with the principles of value-based radiology. CRITICAL RELEVANCE STATEMENT The RADAR framework advances artificial intelligence in radiology by delineating a much-needed framework for comprehensive valuation. 
KEY POINTS • Radiology artificial intelligence lacks a comprehensive approach to value assessment. • The RADAR framework provides a dynamic, hierarchical method for thorough valuation of radiology AI. • RADAR advances clinical radiology by bridging the artificial intelligence implementation gap.
Affiliation(s)
- Bart-Jan Boverhof: Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- W Ken Redekop: Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Daniel Bos: Department of Epidemiology, Erasmus University Medical Centre, Rotterdam, The Netherlands; Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Martijn P A Starmans: Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Andrea Rockall: Department of Surgery & Cancer, Imperial College London, London, UK
- Jacob J Visser: Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
15. Akyüz K, Cano Abadía M, Goisauf M, Mayrhofer MT. Unlocking the potential of big data and AI in medicine: insights from biobanking. Front Med (Lausanne) 2024; 11:1336588. PMID: 38357641. PMCID: PMC10864616. DOI: 10.3389/fmed.2024.1336588. Open access.
Abstract
Big data and artificial intelligence are key elements in the medical field, as they are expected to improve accuracy and efficiency in diagnosis and treatment, particularly by identifying biomedically relevant patterns and facilitating progress towards individually tailored preventative and therapeutic interventions. These applications belong to current research practice that is data-intensive. While the combination of imaging, pathological, genomic, and clinical data is needed to train algorithms to realize the full potential of these technologies, biobanks often serve as crucial infrastructures for data-sharing and data flows. In this paper, we argue that the 'data turn' in the life sciences has increasingly re-structured major infrastructures, which were often created for biological samples and associated data, as predominantly data infrastructures. These have evolved and diversified over time in tackling relevant issues such as harmonization and standardization, but also consent practices and risk assessment. In line with this datafication, increased use of AI-based technologies marks the current developments at the forefront of big data research in life science and medicine, engendering new issues and concerns along with opportunities. At a time when secure health data environments, such as the European Health Data Space, are in the making, we argue that such meta-infrastructures can benefit both from the experience and evolution of biobanking and from the current state of affairs in AI in medicine, regarding good governance, social aspects and practices, and critical thinking about data practices, all of which can contribute to the trustworthiness of such meta-infrastructures.
Affiliation(s)
- Kaya Akyüz: Department of ELSI Services and Research, BBMRI-ERIC, Graz, Austria
16. Akin O, Lema-Dopico A, Paudyal R, Konar AS, Chenevert TL, Malyarenko D, Hadjiiski L, Al-Ahmadie H, Goh AC, Bochner B, Rosenberg J, Schwartz LH, Shukla-Dave A. Multiparametric MRI in Era of Artificial Intelligence for Bladder Cancer Therapies. Cancers (Basel) 2023; 15:5468. PMID: 38001728. PMCID: PMC10670574. DOI: 10.3390/cancers15225468. Open access.
Abstract
This review focuses on the principles, applications, and performance of multiparametric MRI (mpMRI) for bladder imaging. Quantitative imaging biomarkers (QIBs) derived from mpMRI are increasingly used in oncological applications, including tumor staging, prognosis, and assessment of treatment response. To standardize mpMRI acquisition and interpretation, an expert panel developed the Vesical Imaging-Reporting and Data System (VI-RADS). Many studies confirm the standardization and high degree of inter-reader agreement of VI-RADS in discriminating muscle invasiveness in bladder cancer, supporting its implementation in routine clinical practice. The standard MRI sequences for VI-RADS scoring are anatomical imaging, including T2-weighted (T2w) images, and physiological imaging with diffusion-weighted MRI (DW-MRI) and dynamic contrast-enhanced MRI (DCE-MRI). Physiological QIBs derived from analysis of DW- and DCE-MRI data, and radiomic image features extracted from mpMRI images, play an important role in bladder cancer. The current development of AI tools for analyzing mpMRI data and their potential impact on bladder imaging are surveyed. AI architectures are often implemented as convolutional neural networks (CNNs) focused on narrow, specific tasks. The application of AI can substantially impact bladder imaging clinical workflows; for example, manual tumor segmentation, which demands a high time commitment and suffers from inter-reader variability, can be replaced by an autosegmentation tool. The use of mpMRI and AI is projected to drive the field toward the personalized management of bladder cancer patients.
Affiliation(s)
- Oguz Akin: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Alfonso Lema-Dopico: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ramesh Paudyal: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Dariya Malyarenko: Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Hikmat Al-Ahmadie: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Alvin C. Goh: Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Bernard Bochner: Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Jonathan Rosenberg: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Lawrence H. Schwartz: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Amita Shukla-Dave: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
17. Davis MA, Lim N, Jordan J, Yee J, Gichoya JW, Lee R. Imaging Artificial Intelligence: A Framework for Radiologists to Address Health Equity, From the AJR Special Series on DEI. AJR Am J Roentgenol 2023; 221:302-308. PMID: 37095660. DOI: 10.2214/ajr.22.28802.
Abstract
Artificial intelligence (AI) holds promise for helping patients access new and individualized health care pathways while increasing efficiencies for health care practitioners. Radiology has been at the forefront of this technology in medicine; many radiology practices are implementing and trialing AI-focused products. AI also holds great promise for reducing health disparities and promoting health equity. Radiology is ideally positioned to help reduce disparities given its central and critical role in patient care. The purposes of this article are to discuss the potential benefits and pitfalls of deploying AI algorithms in radiology, specifically highlighting the impact of AI on health equity; to explore ways to mitigate drivers of inequity; and to enhance pathways for creating better health care for all individuals, centering on a practical framework that helps radiologists address health equity during deployment of new tools.
Affiliation(s)
- Melissa A Davis: Department of Diagnostic Radiology, Yale University School of Medicine, 789 Howard Ave, PO Box 20842, New Haven, CT 06520
- John Jordan: Stanford University School of Medicine, Stanford, CA
- Judy Yee: Montefiore Medical Center, Albert Einstein College of Medicine, New York, NY
- Ryan Lee: Jefferson Health, Philadelphia, PA
18. Alan R, Alan BM. Utilizing ChatGPT-4 for Providing Information on Periodontal Disease to Patients: A DISCERN Quality Analysis. Cureus 2023; 15:e46213. PMID: 37908933. PMCID: PMC10613831. DOI: 10.7759/cureus.46213. Open access.
Abstract
BACKGROUND Due to their ability to mimic human responses, anthropomorphic entities such as ChatGPT have a higher likelihood of gaining people's trust. This study aimed to evaluate the quality of information generated by ChatGPT-4, as an artificial intelligence (AI) chatbot, on periodontal disease (PD) using the DISCERN instrument. METHODS Using Google Bard, the topics related to PD with the highest search volume according to Google Trends were identified. An interactive dialogue was created by placing the topics in a standard question pattern. As a patient with PD, detailed information was requested from ChatGPT-4 regarding the relevant topics. The 'regenerate response' feature was not employed, and as new prompts were entered in the form of questions, the initial response generated by ChatGPT-4 was carefully considered for each topic. The response to each question was independently assessed and rated by two experienced raters using the DISCERN instrument. RESULTS Based on the total DISCERN scores, the quality of the responses generated by ChatGPT-4 was 'good', except for two responses that rater 2 scored as 'fair'. The 'treatment choices' section also received significantly lower scores from both raters than the other sections. In both weighted kappa and Krippendorff alpha measures, the strength of agreement varied from 'substantial' to 'almost perfect', and the correlation between values was statistically significant. CONCLUSION Despite some limitations in providing complete treatment-choice information according to the DISCERN instrument, ChatGPT-4 is considered valuable for PD patients seeking information, as it consistently offered accurate guidance in the majority of responses.
Affiliation(s)
- Raif Alan: Periodontology, Faculty of Dentistry, Canakkale Onsekiz Mart University, Canakkale, TUR
19. Najjar R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics (Basel) 2023; 13:2760. PMID: 37685300. PMCID: PMC10487271. DOI: 10.3390/diagnostics13172760. Open access.
Abstract
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It traces the evolution of radiology, from the initial discovery of X-rays to the application of machine learning and deep learning in modern medical image analysis. The primary focus of this review is to shed light on AI applications in radiology, elucidating their seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles that are inherent to AI-driven radiology-data quality, the 'black box' enigma, infrastructural and technical complexities, as well as ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. The conclusion underlines the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility.
Affiliation(s)
- Reabal Najjar: Canberra Health Services, Australian Capital Territory 2605, Australia
20. Naidoo J, Shelmerdine SC, Ugas-Charcape CF, Sodhi AS. Artificial Intelligence in Paediatric Tuberculosis. Pediatr Radiol 2023; 53:1733-1745. PMID: 36707428. PMCID: PMC9883137. DOI: 10.1007/s00247-023-05606-9.
Abstract
Tuberculosis (TB) continues to be a leading cause of death in children despite global efforts focused on early diagnosis and interventions to limit the spread of the disease. This challenge has been made more complex in the context of the coronavirus pandemic, which has disrupted the "End TB Strategy" and framework set out by the World Health Organization (WHO). Since the inception of artificial intelligence (AI) more than 60 years ago, interest in AI has risen, and more recently we have seen the emergence of multiple real-world applications, many of which relate to medical imaging. Nonetheless, real-world AI applications and clinical studies are limited in the niche area of paediatric imaging. This review article focuses on how AI, or more specifically deep learning, can be applied to TB diagnosis and management in children. We describe how deep learning can be utilised in chest imaging to provide computer-assisted diagnosis to augment workflow and screening efforts. We also review examples of recent AI applications for TB screening in resource-constrained environments, and we explore some of the challenges and the future directions of AI in paediatric TB.
Affiliation(s)
- Jaishree Naidoo: Envisionit Deep AI Ltd, Coveham House, Downside Bridge Road, Cobham, KT11 3EP, UK
- Susan Cheng Shelmerdine: Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK; UCL Great Ormond Street Institute of Child Health, London, UK; NIHR Great Ormond Street Hospital Biomedical Research Centre, London, UK
- Carlos F Ugas-Charcape: Department of Diagnostic Imaging, Instituto Nacional de Salud del Niño San Borja, Lima, Peru
- Arhanjit Singh Sodhi: Department of Computer Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
21. Lopez-Suarez N, Abraham P, Carney M, Castro AA, Narayan AK, Willis M, Spalluto LB, Flores EJ. Practical Approaches to Advancing Health Equity in Radiology, From the AJR Special Series on DEI. AJR Am J Roentgenol 2023; 221:7-16. PMID: 36629307. DOI: 10.2214/ajr.22.28783.
Abstract
Despite significant advances in health care, many patients from medically under-served populations are impacted by existing health care disparities. Radiologists are uniquely positioned to decrease health disparities and advance health equity efforts in their practices. However, literature on practical tools for advancing radiology health equity efforts applicable to a wide variety of patient populations and care settings is lacking. Therefore, this article seeks to equip radiologists with an evidence-based and practical knowledge tool kit of health equity strategies, presented in terms of four pillars of research, clinical care, education, and innovation. For each pillar, equity efforts across diverse patient populations and radiology practice settings are examined through the lens of existing barriers, current best practices, and future directions, incorporating practical examples relevant to a spectrum of patient populations. Health equity efforts provide an opportune window to transform radiology through personalized care delivery that is responsive to diverse patient needs. Guided by compassion and empathy as core principles of health equity, the four pillars provide a helpful framework to advance health equity efforts as a step toward social justice in health.
Affiliation(s)
- Nikki Lopez-Suarez: Universidad Central del Caribe School of Medicine, Bayamón, PR; Department of Radiology, Massachusetts General Hospital, 55 Fruit St, AUS-202, Boston, MA 02114
- Peter Abraham: Department of Radiology, University of California San Diego, San Diego, CA
- Madeline Carney: Department of Radiology, Massachusetts General Hospital, 55 Fruit St, AUS-202, Boston, MA 02114
- Arlin A Castro: Department of Radiology, Massachusetts General Hospital, 55 Fruit St, AUS-202, Boston, MA 02114
- Anand K Narayan: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI
- Marc Willis: Department of Radiology, Stanford Radiology, Redwood City, CA
- Lucy B Spalluto: Department of Radiology, Vanderbilt University Medical Center, Nashville, TN
- Efrén J Flores: Department of Radiology, Massachusetts General Hospital, 55 Fruit St, AUS-202, Boston, MA 02114
22. Walsh G, Stogiannos N, van de Venter R, Rainey C, Tam W, McFadden S, McNulty JP, Mekis N, Lewis S, O'Regan T, Kumar A, Huisman M, Bisdas S, Kotter E, Pinto dos Santos D, Sá dos Reis C, van Ooijen P, Brady AP, Malamateniou C. Responsible AI practice and AI education are central to AI implementation: a rapid review for all medical imaging professionals in Europe. BJR Open 2023; 5:20230033. PMID: 37953871. PMCID: PMC10636340. DOI: 10.1259/bjro.20230033. Open access.
Abstract
Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and radiography are on the frontline of AI implementation because of the use of big data for medical imaging and diagnosis for different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that there is harmonious collaboration between different professional groups, and that educational provisions are customised for all involved. This paper outlines key principles of ethical and responsible AI, highlights recent educational initiatives for clinical practitioners and discusses the synergies between all medical imaging professionals as they prepare for the digital future in Europe. Responsible and ethical AI is vital to enhance a culture of safety and trust for healthcare professionals and patients alike. Educational and training provisions on AI for medical imaging professionals are central to the understanding of basic AI principles and applications, and there are many offerings currently in Europe. Education can facilitate the transparency of AI tools, but more formalised, university-led training is needed to ensure academic scrutiny, appropriate pedagogy, multidisciplinarity and customisation to learners' unique needs. As radiographers and radiologists work together and with other professionals to understand and harness the benefits of AI in medical imaging, it becomes clear that they are faced with the same challenges and have the same needs. The digital future belongs to multidisciplinary teams that work seamlessly together, learn together, manage risk collectively and collaborate for the benefit of the patients they serve.
Collapse
Affiliation(s)
- Gemma Walsh
- Division of Midwifery & Radiography, City University of London, London, United Kingdom
- Clare Rainey
- School of Health Sciences, Ulster University, Derry~Londonderry, Northern Ireland
- Winnie Tam
- Division of Midwifery & Radiography, City University of London, London, United Kingdom
- Sonyia McFadden
- School of Health Sciences, Ulster University, Coleraine, United Kingdom
- Nejc Mekis
- Medical Imaging and Radiotherapy Department, University of Ljubljana, Faculty of Health Sciences, Ljubljana, Slovenia
- Sarah Lewis
- Discipline of Medical Imaging Science, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Tracy O'Regan
- The Society and College of Radiographers, London, United Kingdom
- Amrita Kumar
- Frimley Health NHS Foundation Trust, Frimley, United Kingdom
- Merel Huisman
- Department of Radiology, University Medical Center Utrecht, Utrecht, Netherlands
- Cláudia Sá dos Reis
- School of Health Sciences (HESAV), University of Applied Sciences and Arts Western Switzerland (HES-SO), Lausanne, Switzerland
23
Khosravi P, Schweitzer M. Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. Frontiers in Radiology 2023; 3:1149461. [PMID: 37492387 PMCID: PMC10365008 DOI: 10.3389/fradi.2023.1149461] [Received: 01/22/2023] [Accepted: 04/27/2023] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, for developing models to guide treatment decisions, and for improving current prognostication and diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, and questions of responsibility and liability that might arise. In this manuscript, we first provide a brief overview of AI methods used in neuroradiology and then segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and the provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain aligned with ethics in research and healthcare in the future.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, United States
- Mark Schweitzer
- Office of the Vice President for Health Affairs, Wayne State University, Detroit, MI, United States
24
Ursin F, Lindner F, Ropinski T, Salloch S, Timmermann C. Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen? [Levels of explicability for medical artificial intelligence: What do we need normatively, and what can we achieve technically?]. Ethik Med 2023. [DOI: 10.1007/s00481-023-00761-x] [Indexed: 04/05/2023]
Abstract
Definition of the problem
The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?
Arguments
We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability with respect to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can be met technically from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.
Conclusion
We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.
25
Haneberg AG, Pierre K, Winter-Reinhold E, Hochhegger B, Peters KR, Grajo J, Arreola M, Asadizanjani N, Bian J, Mancuso A, Forghani R. Introduction to Radiomics and Artificial Intelligence: A Primer for Radiologists. Semin Roentgenol 2023; 58:152-157. [PMID: 37087135 DOI: 10.1053/j.ro.2023.02.002] [Received: 02/06/2023] [Accepted: 02/06/2023] [Indexed: 04/03/2023]
Abstract
Health informatics and artificial intelligence (AI) are expected to transform the healthcare enterprise and the future practice of radiology. There is an increasing body of literature on radiomics and deep learning/AI applications in medical imaging, along with a steadily increasing number of FDA-cleared AI applications in radiology. It is therefore essential for radiologists, whether in academia or private practice, to have a basic understanding of these approaches. In this article, we provide an overview of the field and familiarize readers with the fundamental concepts behind these approaches.
26
Sinha RK, Deb Roy A, Kumar N, Mondal H. Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology. Cureus 2023; 15:e35237. [PMID: 36968864 PMCID: PMC10033699 DOI: 10.7759/cureus.35237] [Accepted: 02/20/2023] [Indexed: 02/23/2023]
Abstract
Background: Artificial intelligence (AI) is evolving for healthcare services. Higher cognitive thinking in AI refers to the ability of a system to perform advanced cognitive processes, such as problem-solving, decision-making, reasoning, and perception. This type of thinking goes beyond simple data processing and involves the ability to understand and manipulate abstract concepts, interpret and use information in a contextually relevant way, and generate new insights from past experiences and accumulated knowledge. Natural language processing models like ChatGPT are conversational programs that can interact with humans to answer queries. Objective: We aimed to ascertain the capability of ChatGPT to solve higher-order reasoning problems in the subject of pathology. Methods: This cross-sectional study was conducted on the internet using an AI-based chat program that provides free service for research purposes. The current version of ChatGPT (January 30 version) was presented with a total of 100 higher-order reasoning questions. These questions were randomly selected from the institution's question bank and categorized by organ system. The response to each question was collected and stored for further analysis. The responses were evaluated by three expert pathologists on a zero-to-five scale and categorized according to the structure of the observed learning outcome (SOLO) taxonomy. The scores were compared with hypothetical values by a one-sample median test to assess accuracy. Results: The program answered all 100 higher-order reasoning questions, taking an average of 45.31±7.14 seconds per answer. The overall median score was 4.08 (Q1-Q3: 4-4.33), which was below the hypothetical maximum of five (one-sample median test p < 0.0001) and similar to four (one-sample median test p = 0.14). The majority (86%) of the responses fell into the "relational" category of the SOLO taxonomy. There was no difference in scores across questions from different organ systems in pathology (Kruskal-Wallis p = 0.55). The scores assigned by the three pathologists had an excellent level of inter-rater reliability (ICC = 0.975 [95% CI: 0.965-0.983]; F = 40.26; p < 0.0001). Conclusion: ChatGPT solved higher-order reasoning questions in pathology with a relational level of accuracy; that is, the text output connected its parts to provide a meaningful response. The program's answers scored approximately 80%. Hence, academicians and students can also get help from the program for solving reasoning-type questions. As the program is evolving, further studies are needed to determine the accuracy of future versions.