1
Bettinger H, Lenczner G, Guigui J, Rotenberg L, Zerbib E, Attia A, Vidal J, Beaumel P. Evaluation of the Performance of an Artificial Intelligence (AI) Algorithm in Detecting Thoracic Pathologies on Chest Radiographs. Diagnostics (Basel) 2024; 14:1183. [PMID: 38893709] [PMCID: PMC11171491] [DOI: 10.3390/diagnostics14111183]
Abstract
The purpose of this study was to assess the performance of readers diagnosing thoracic anomalies on standard chest radiographs (CXRs) with and without a deep-learning-based AI tool (Rayvolve), and to evaluate the standalone performance of Rayvolve in detecting thoracic pathologies on CXRs. This retrospective multicentric study was conducted in two phases. In phase 1, nine readers independently reviewed 900 CXRs from imaging group A and identified thoracic abnormalities with and without AI assistance. A consensus of three radiologists served as the ground truth. In phase 2, the standalone performance of Rayvolve was evaluated on 1500 CXRs from imaging group B. The average AUC across readers increased significantly, by 15.94%, with AI-assisted reading compared to unaided reading (0.88 ± 0.01 vs. 0.759 ± 0.07, p < 0.001), and the time taken to read the CXRs decreased significantly, by 35.81%, with AI assistance. The average sensitivity and specificity across readers increased significantly, by 11.44% and 2.95%, with AI-assisted reading compared to unaided reading (0.857 ± 0.02 vs. 0.769 ± 0.02 and 0.974 ± 0.01 vs. 0.946 ± 0.01, p < 0.001). As a standalone tool, the AI model achieved an average sensitivity, specificity, PPV, and NPV of 0.964, 0.844, 0.757, and 0.9798, respectively. Both the speed and the performance of the readers improved significantly with AI assistance.
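The four reported metrics (sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix. A minimal sketch, using invented counts rather than the study's data:

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix.
# The counts below are invented for illustration; they are not the study's data.
def confusion_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)  # true positive rate (recall)
    specificity = tn / (tn + fp)  # true negative rate
    ppv = tp / (tp + fp)          # positive predictive value (precision)
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = confusion_metrics(tp=85, fp=10, tn=190, fn=15)
```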
Affiliation(s)
- Jean Guigui
- Radiologie Paris Ouest, 47 Rue du Rocher, 75008 Paris, France
- Luc Rotenberg
- Radiologie Paris Ouest, 47 Rue du Rocher, 75008 Paris, France
2
Kosar A, Asif M, Ahmad MB, Akram W, Mahmood K, Kumari S. Towards classification and comprehensive analysis of AI-based COVID-19 diagnostic techniques: A survey. Artif Intell Med 2024; 151:102858. [PMID: 38583369] [DOI: 10.1016/j.artmed.2024.102858]
Abstract
The novel coronavirus disease, termed COVID-19 and identified by the World Health Organization (WHO), came to light at the end of December 2019. The virus first originated in Wuhan (China) and rapidly affected most of the world's population. The outbreak's impact has been felt worldwide because it brought a high mortality risk, a large number of cases, and economic losses. Around the globe, the total numbers of cases and deaths reported up to November 12, 2022, were >600 million and 6.6 million, respectively. During the COVID-19 period, several diverse diagnostic techniques were proposed. This work presents a systematic review of these COVID-19 diagnostic techniques. The techniques are first classified into categories based on their working principle and detection modality, i.e., chest X-ray imaging, cough sound or respiratory patterns, RT-PCR, antigen testing, and antibody testing. A comparative analysis is then performed to evaluate the techniques' efficacy, which may help determine the optimum solution for a particular scenario. The findings show that Artificial Intelligence plays a vital role in developing COVID-19 diagnostic techniques that support the healthcare system. The related work gathers the literature for researchers under a single umbrella. Additionally, the reviewed techniques are long-lasting and can be applied to future pandemics.
Affiliation(s)
- Amna Kosar
- Department of Computer Science, Lahore Garrison University, Lahore, Pakistan
- Muhammad Asif
- Department of Computer Science, Lahore Garrison University, Lahore, Pakistan
- Maaz Bin Ahmad
- College of Computing and Information Sciences, Karachi Institute of Economics and Technology (KIET), Karachi, Pakistan
- Waseem Akram
- Graduate School of Engineering Science and Technology, National Yunlin University of Science and Technology, Douliu, Taiwan, ROC
- Khalid Mahmood
- Graduate School of Intelligent Data Science, National Yunlin University of Science and Technology, Douliu, Taiwan, ROC
- Saru Kumari
- Department of Mathematics, Chaudhary Charan Singh University, Meerut, India
3
Fanni SC, Volpi F, Colligiani L, Chimera D, Tonerini M, Pistelli F, Pancani R, Airoldi C, Bartholmai BJ, Cioni D, Carrozzi L, Neri E, De Liperi A, Romei C. Quantitative CT Texture Analysis of COVID-19 Hospitalized Patients during 3-24-Month Follow-Up and Correlation with Functional Parameters. Diagnostics (Basel) 2024; 14:550. [PMID: 38473022] [DOI: 10.3390/diagnostics14050550]
Abstract
BACKGROUND To quantitatively evaluate CT lung abnormalities in COVID-19 survivors from the acute phase to 24-month follow-up, and to investigate quantitative CT features as predictors of the abnormalities' persistence. METHODS Patients who survived COVID-19 were retrospectively enrolled and underwent a chest CT at baseline (T0) and 3 months (T3) after discharge, with pulmonary function tests (PFTs). Patients with residual CT abnormalities repeated the CT at 12 (T12) and 24 (T24) months after discharge. A machine-learning-based software, CALIPER, calculated the whole-lung CT percentages of normal parenchyma, ground glass (GG), reticulation (Ret), and vascular-related structures (VRSs). Differences (Δ) were calculated between time points. Receiver operating characteristic (ROC) curve analyses tested the baseline parameters as predictors of functional impairment at T3 and of the persistence of CT abnormalities at T12. RESULTS The cohort included 128 patients at T0, 133 at T3, 61 at T12, and 34 at T24. The GG medians were 8.44%, 0.14%, 0.13% and 0.12% at T0, T3, T12 and T24, respectively. The Ret medians were 2.79% at T0 and 0.14% at the following time points. All Δ differed significantly from 0, except between T12 and T24. The GG and VRSs at T0 achieved areas under the curve (AUCs) of 0.73 as predictors of functional impairment, and AUCs of 0.71 and 0.72 for the persistence of CT abnormalities at T12. CONCLUSIONS CALIPER accurately quantified the CT changes up to the 24-month follow-up. Resolution mostly occurred at T3, and the Ret persisting at T12 was almost unchanged at T24. The baseline parameters were good predictors of functional impairment at T3 and of the abnormalities' persistence at T12.
Affiliation(s)
- Salvatore Claudio Fanni
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Federica Volpi
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Leonardo Colligiani
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Davide Chimera
- Pneumology Unit, Pisa University Hospital, 56124 Pisa, Italy
- Michele Tonerini
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, 56124 Pisa, Italy
- Roberta Pancani
- Pneumology Unit, Pisa University Hospital, 56124 Pisa, Italy
- Chiara Airoldi
- Department of Translational Medicine, University of Eastern Piemonte, 28100 Novara, Italy
- Dania Cioni
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Laura Carrozzi
- Pneumology Unit, Pisa University Hospital, 56124 Pisa, Italy
- Emanuele Neri
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Annalisa De Liperi
- 2nd Radiology Unit, Department of Diagnostic Imaging, Pisa University Hospital, Via Paradisa 2, 56124 Pisa, Italy
- Chiara Romei
- 2nd Radiology Unit, Department of Diagnostic Imaging, Pisa University Hospital, Via Paradisa 2, 56124 Pisa, Italy
4
Sobiecki A, Hadjiiski LM, Chan HP, Samala RK, Zhou C, Stojanovska J, Agarwal PP. Detection of Severe Lung Infection on Chest Radiographs of COVID-19 Patients: Robustness of AI Models across Multi-Institutional Data. Diagnostics (Basel) 2024; 14:341. [PMID: 38337857] [PMCID: PMC10855789] [DOI: 10.3390/diagnostics14030341]
Abstract
The diagnosis of severe COVID-19 lung infection is important because it carries a higher risk for the patient and requires prompt treatment with oxygen therapy and hospitalization, while those with less severe lung infection often stay under observation. In addition, severe infections are more likely to leave long-standing residual changes in the lungs and may need follow-up imaging. We developed deep learning neural network models for classifying severe vs. non-severe lung infection in COVID-19 patients on chest radiographs (CXRs). A deep learning U-Net model was developed to segment the lungs. Inception-v1 and Inception-v4 models were trained to classify severe vs. non-severe COVID-19 infection. Four CXR datasets from multi-country and multi-institutional sources were used to develop and evaluate the models. The combined dataset consisted of 5748 cases and 6193 CXR images with physicians' severity ratings as the reference standard. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. We studied the reproducibility of classification performance using different combinations of training and validation data sets, and evaluated the generalizability of the trained models on both independent internal and external test sets. On the independent test sets, the Inception-v1-based models achieved AUCs ranging from 0.81 ± 0.02 to 0.84 ± 0.0, while the Inception-v4 models achieved AUCs ranging from 0.85 ± 0.06 to 0.89 ± 0.01. These results demonstrate the promise of deep learning models in differentiating COVID-19 patients with severe from non-severe lung infection on chest radiographs.
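The AUC used throughout these evaluations can be computed without any curve plotting, via its rank-statistic (Mann-Whitney) formulation; a minimal sketch with invented scores, not the study's code:

```python
# Rank-based (Mann-Whitney) formulation of the AUC: the probability that a
# randomly chosen positive case is scored higher than a randomly chosen
# negative case, counting ties as half. Scores and labels here are invented.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])  # positives outrank negatives
```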
Affiliation(s)
- André Sobiecki
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir M. Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi K. Samala
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Prachi P. Agarwal
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
5
Kumar S, Kumar H, Kumar G, Singh SP, Bijalwan A, Diwakar M. A methodical exploration of imaging modalities from dataset to detection through machine learning paradigms in prominent lung disease diagnosis: a review. BMC Med Imaging 2024; 24:30. [PMID: 38302883] [PMCID: PMC10832080] [DOI: 10.1186/s12880-024-01192-w]
Abstract
BACKGROUND Lung diseases, both infectious and non-infectious, are the most prevalent cause of mortality overall in the world. Medical research has identified pneumonia, lung cancer, and Corona Virus Disease 2019 (COVID-19) as prominent lung diseases prioritized over others. Imaging modalities, including X-rays, computed tomography (CT) scans, magnetic resonance imaging (MRI), positron emission tomography (PET) scans, and others, are primarily employed in medical assessments because they provide computed data that can be utilized as input datasets for computer-assisted diagnostic systems. Imaging datasets are used to develop and evaluate machine learning (ML) methods to analyze and predict prominent lung diseases. OBJECTIVE This review analyzes ML paradigms, the utilization of imaging modalities, and recent developments for prominent lung diseases. Furthermore, it explores the various publicly available datasets being used for prominent lung diseases. METHODS Well-known databases of peer-reviewed academic studies, namely ScienceDirect, arXiv, IEEE Xplore, MDPI, and many more, were searched for relevant articles. Keywords and combinations applied in the search procedure included pneumonia, lung cancer, COVID-19, various imaging modalities, ML, convolutional neural networks (CNNs), transfer learning, and ensemble learning. RESULTS The findings indicate that X-ray datasets are preferred for detecting pneumonia, while CT scan datasets are predominantly favored for detecting lung cancer. Furthermore, in COVID-19 detection, X-ray datasets are prioritized over CT scan datasets. The analysis reveals that X-rays and CT scans have surpassed all other imaging techniques. It has been observed that using CNNs yields a high degree of accuracy and practicability in identifying prominent lung diseases. Transfer learning and ensemble learning are complementary techniques to CNNs that facilitate analysis. Furthermore, accuracy is the most favored assessment metric.
Affiliation(s)
- Sunil Kumar
- Department of Computer Engineering, J. C. Bose University of Science and Technology, YMCA, Faridabad, India
- Department of Information Technology, School of Engineering and Technology (UIET), CSJM University, Kanpur, India
- Harish Kumar
- Department of Computer Engineering, J. C. Bose University of Science and Technology, YMCA, Faridabad, India
- Gyanendra Kumar
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India
- Anchit Bijalwan
- Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia
- Manoj Diwakar
- Department of Computer Science and Engineering, Graphic Era Deemed to Be University, Dehradun, India
6
Park S, Park YH, Huh J, Baik SM, Park DJ. Deep learning model for differentiating acute myeloid and lymphoblastic leukemia in peripheral blood cell images via myeloblast and lymphoblast classification. Digit Health 2024; 10:20552076241258079. [PMID: 38812848] [PMCID: PMC11135107] [DOI: 10.1177/20552076241258079]
Abstract
Objective Acute leukemia (AL) is a life-threatening malignant disease that arises in the bone marrow and blood and is classified as either acute myeloid leukemia (AML) or acute lymphoblastic leukemia (ALL). Diagnosing AL requires testing methods, such as flow cytometry, that demand trained professionals, time, and money. We aimed to develop a model that uses artificial intelligence to classify peripheral blood images into 12 cell types, including the pathological cells associated with AL. Methods We acquired 42,386 single-cell images from peripheral blood slides of 282 patients (82 with AML, 40 with ALL, and 160 with immature granulocytes). Results EfficientNet-V2 (B2) using the original image size exhibited the greatest accuracy (accuracy, 0.8779; precision, 0.7221; recall, 0.7225; F1 score, 0.7210). The next-best accuracy was achieved by EfficientNet-V1 (B1) with 256 × 256 pixel images, and the F1 score was greatest for EfficientNet-V1 (B1) with the original image size. EfficientNet-V1 (B1) and EfficientNet-V2 (B2) were therefore used to develop an ensemble model, which improved the accuracy (0.8858) and F1 score (0.7361). The classification performance of the developed ensemble model for the 12 cell types was good, with an area under the receiver operating characteristic curve above 0.9 and F1 scores for myeloblasts and lymphoblasts of 0.8873 and 0.8006, respectively. Conclusions The performance of the developed ensemble model for the 12-cell-type classification was satisfactory, particularly for myeloblasts and lymphoblasts. We believe that applying our model will benefit healthcare settings where the rapid and accurate diagnosis of AL is difficult.
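A common way to build such a two-model ensemble is to average the per-class probabilities and take the argmax; the abstract does not state the combination rule, so the probability averaging below is an assumption, and the probability vectors are invented:

```python
# One common way to ensemble two classifiers: average their per-class
# probabilities and take the argmax. The combination rule and the numbers
# below are illustrative assumptions, not the authors' implementation.
def ensemble_predict(probs_a, probs_b):
    avg = [(a + b) / 2 for a, b in zip(probs_a, probs_b)]
    return avg.index(max(avg)), avg

# Model A favors class 1, model B favors class 0; the average settles on class 1.
predicted_class, avg_probs = ensemble_predict([0.2, 0.8], [0.6, 0.4])
```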
Affiliation(s)
- Sholhui Park
- Department of Laboratory Medicine, Ewha Womans University College of Medicine, Seoul, Korea
- Young Hoon Park
- Division of Hematology-Oncology, Department of Internal Medicine, Ewha Womans University Mokdong Hospital, Seoul, Korea
- Jungwon Huh
- Department of Laboratory Medicine, Ewha Womans University College of Medicine, Seoul, Korea
- Seung Min Baik
- Division of Critical Care Medicine, Department of Surgery, College of Medicine, Ewha Womans University, Seoul, Korea
- Dong Jin Park
- Department of Laboratory Medicine, Eunpyeong St Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
7
Buongiorno R, Del Corso G, Germanese D, Colligiani L, Python L, Romei C, Colantonio S. Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models. J Imaging 2023; 9:283. [PMID: 38132701] [PMCID: PMC10744014] [DOI: 10.3390/jimaging9120283]
Abstract
Imaging plays a key role in the clinical management of Coronavirus disease 2019 (COVID-19), as the imaging findings reflect the pathological process in the lungs. Visual analysis of High-Resolution Computed Tomography (HRCT) of the chest allows for the differentiation of the parenchymal abnormalities of COVID-19, which must be detected and quantified in order to obtain an accurate disease stratification and prognosis. However, visual assessment and quantification are time-consuming tasks for radiologists. In this regard, tools for semi-automatic segmentation, such as those based on Convolutional Neural Networks, can facilitate the detection of pathological lesions by delineating their contours. In this work, we compared four state-of-the-art Convolutional Neural Networks based on the encoder-decoder paradigm for the binary segmentation of COVID-19 infections, training and testing them on 90 HRCT volumetric scans of patients diagnosed with COVID-19, collected from the database of the Pisa University Hospital. More precisely, we started from a basic model, the well-known UNet; we then added an attention mechanism to obtain an Attention-UNet, and finally employed a recurrence paradigm to create a Recurrent-Residual UNet (R2-UNet). In the latter case, we also added attention gates to the decoding path of the R2-UNet, thus designing an R2-Attention UNet, so as to make the feature representation and accumulation more effective. We compared the models to understand both the mechanism that can lead a neural model to the best performance on this task and the best compromise among the amount of data, time, and computational resources required. We set up a five-fold cross-validation and assessed the strengths and limitations of these models by evaluating their performance in terms of Dice score, Precision, and Recall, defined both on 2D images and on the entire 3D volume. The analysis shows that Attention-UNet outperforms the other models, achieving the best performance of 81.93% in terms of 2D Dice score on the test set. Additionally, we conducted a statistical analysis to assess the performance differences among the models. Our findings suggest that integrating the recurrence mechanism within the UNet architecture leads to a decline in the model's effectiveness for this particular application.
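The Dice score used to rank these models measures the overlap between the predicted and reference masks; a minimal sketch on toy flattened binary masks (invented, not the study's data):

```python
# Dice score for binary segmentation masks: twice the intersection divided by
# the sum of the two mask sizes. Masks are flattened 0/1 lists; a small eps
# keeps the empty-vs-empty case well defined. The masks here are invented.
def dice(pred, target, eps=1e-7):
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2 * intersection + eps) / (sum(pred) + sum(target) + eps)
```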
Affiliation(s)
- Rossana Buongiorno
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, Italy
- Giulio Del Corso
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, Italy
- Danila Germanese
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, Italy
- Leonardo Colligiani
- Department of Translational Research, Academic Radiology, University of Pisa, 56124 Pisa, Italy
- Lorenzo Python
- 2nd Radiology Unit, Pisa University Hospital, 56124 Pisa, Italy
- Chiara Romei
- 2nd Radiology Unit, Pisa University Hospital, 56124 Pisa, Italy
- Sara Colantonio
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, Italy
8
Neri E, Aghakhanyan G, Zerunian M, Gandolfo N, Grassi R, Miele V, Giovagnoni A, Laghi A. Explainable AI in radiology: a white paper of the Italian Society of Medical and Interventional Radiology. Radiol Med 2023. [PMID: 37155000] [DOI: 10.1007/s11547-023-01634-5]
Abstract
The term Explainable Artificial Intelligence (xAI) groups together the body of scientific knowledge developed in the search for methods that explain the inner logic behind an AI algorithm and its model inference, based on knowledge-based interpretability. xAI is now generally recognized as a core area of AI. A variety of xAI methods are currently available to researchers; nonetheless, a comprehensive classification of xAI methods is still lacking. In addition, there is no consensus among researchers as to what exactly an explanation is and which salient properties must be considered to make it understandable for every end-user. The Italian Society of Medical and Interventional Radiology (SIRM) introduces this xAI white paper, which is intended to aid radiologists, medical practitioners, and scientists in understanding the emerging field of xAI: the black-box problem behind the success of AI, the xAI methods that turn a black box into a glass box, and the role and responsibilities of radiologists in the appropriate use of AI technology. Given the rapid change and evolution of AI, a definitive conclusion or solution is far from being defined. However, one of our greatest responsibilities is to keep up with the change in a critical manner. In fact, ignoring and discrediting the advent of AI a priori will not curb its use but could result in its application without awareness. Learning about and increasing our knowledge of this very important technological change will therefore allow us to put AI at our service and at the service of patients in a conscious way, pushing this paradigm shift as far as it will benefit us.
Affiliation(s)
- Emanuele Neri
- Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
- Gayane Aghakhanyan
- Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
- Marta Zerunian
- Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy
- Nicoletta Gandolfo
- Diagnostic Imaging Department, Villa Scassi Hospital-ASL 3, Corso Scassi 1, Genoa, Italy
- Roberto Grassi
- Radiology Unit, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Vittorio Miele
- Department of Radiology, Careggi University Hospital, Florence, Italy
- Andrea Giovagnoni
- Department of Radiological Sciences, Radiology Clinic, Azienda Ospedaliera Universitaria, Ospedali Riuniti Di Ancona, Ancona, Italy
- Andrea Laghi
- Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy
9
Implementation of artificial intelligence in thoracic imaging-a what, how, and why guide from the European Society of Thoracic Imaging (ESTI). Eur Radiol 2023. [PMID: 36729173] [PMCID: PMC9892666] [DOI: 10.1007/s00330-023-09409-2]
Abstract
This statement from the European Society of Thoracic Imaging (ESTI) explains and summarises the essentials for understanding and implementing Artificial Intelligence (AI) in clinical practice in thoracic radiology departments. The document discusses the current AI scientific evidence in thoracic imaging, its potential clinical utility, implementation and costs, training requirements and validation, its effect on the training of new radiologists, post-implementation issues, and medico-legal and ethical issues. All these issues have to be addressed and overcome for AI to become clinically implemented in thoracic radiology. KEY POINTS: • Assessing the datasets used for training and validation of the AI system is essential. • A departmental strategy and business plan that includes continuing quality assurance of the AI system and a sustainable financial plan is important for successful implementation. • Awareness of the negative effect on the training of new radiologists is vital.
10
Furtado A, da Purificação CAC, Badaró R, Nascimento EGS. A Light Deep Learning Algorithm for CT Diagnosis of COVID-19 Pneumonia. Diagnostics (Basel) 2022; 12:1527. [PMID: 35885433] [PMCID: PMC9319098] [DOI: 10.3390/diagnostics12071527]
Abstract
A large number of reports present artificial intelligence (AI) algorithms that support the detection of pneumonia caused by COVID-19 from chest CT (computed tomography) scans. Only a few studies have provided access to their source code, which limits analysis of out-of-distribution generalization ability. This study presents Cimatec-CovNet-19, a new lightweight 3D convolutional neural network, inspired by the VGG16 architecture, that supports COVID-19 identification from chest CT scans. We trained the algorithm on a dataset of 3000 CT scans (1500 COVID-19-positive) with images from different parts of the world, enhanced with 3000 images obtained through data augmentation techniques. We introduced a novel pre-processing approach that performs a slice-wise selection based solely on the lung CT masks and an empirically chosen threshold for the very first slice. The model required only 16 slices from a CT examination to identify COVID-19. It achieved a recall of 0.88, specificity of 0.88, ROC-AUC of 0.95, PR-AUC of 0.95, and F1-score of 0.88 on a test set of 414 samples (207 COVID-19). These results support Cimatec-CovNet-19 as a good, lightweight screening tool for COVID-19 patients. The whole code is freely available to the scientific community.
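The fixed 16-slice input can be assembled in several ways; the authors describe a mask- and threshold-based slice-wise selection, so the evenly spaced rule below is only a simplified, hypothetical stand-in to illustrate reducing a volume to a fixed number of slices:

```python
# Reduce a CT volume to a fixed number of slice indices. The even spacing used
# here is NOT the paper's threshold-based method; it is a simplified stand-in
# to show how a fixed-size network input can be drawn from a variable volume.
def select_slices(num_slices, k=16):
    if num_slices <= k:
        return list(range(num_slices))    # volume already small enough
    step = (num_slices - 1) / (k - 1)     # evenly spaced, endpoints included
    return [round(i * step) for i in range(k)]
```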
Affiliation(s)
- Adhvan Furtado
- Supercomputing Center SENAI CIMATEC, Av. Orlando Gomes, 1845, Piatã, Salvador 41560-010, Brazil
- Roberto Badaró
- Instituto SENAI de Inovação em Saúde, Av. Orlando Gomes, 1845, Piatã, Salvador 41560-010, Brazil
11
Development of Machine-Learning Model to Predict COVID-19 Mortality: Application of Ensemble Model and Regarding Feature Impacts. Diagnostics (Basel) 2022; 12:1464. [PMID: 35741274] [PMCID: PMC9221552] [DOI: 10.3390/diagnostics12061464]
Abstract
This study was designed to develop machine-learning models that predict COVID-19 mortality and to identify its key features based on clinical characteristics and laboratory tests. For this, deep-learning (DL) and machine-learning (ML) models were developed using receiver operating characteristic (ROC) area under the curve (AUC) and F1 score optimization of 87 parameters. Of the two, the DL model exhibited better performance (AUC 0.8721, accuracy 0.84, and F1 score 0.76). However, we also blended DL with ML, and the ensemble model performed best (AUC 0.8811, accuracy 0.85, and F1 score 0.77). DL models are generally unable to expose feature importance; here, we succeeded by applying the Shapley Additive exPlanations (SHAP) method to each model. This study demonstrated both the applicability of DL and ML models for classifying COVID-19 mortality using structured hospital data and that the ensemble model had the best predictive ability.
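The F1 score that the study optimizes is the harmonic mean of precision and recall; a minimal sketch with invented labels, not the study's pipeline:

```python
# F1 score for binary labels: harmonic mean of precision and recall, with the
# usual 0.0 convention for degenerate cases. The labels below are invented.
def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```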
12
Mortality Prediction of COVID-19 Patients Using Radiomic and Neural Network Features Extracted from a Wide Chest X-ray Sample Size: A Robust Approach for Different Medical Imbalanced Scenarios. Appl Sci (Basel) 2022. [DOI: 10.3390/app12083903]
Abstract
Aim: The aim of this study was to develop robust prognostic models for the mortality prediction of COVID-19 patients, applicable to different sets of real scenarios, using radiomic and neural network features extracted from chest X-rays (CXRs) with certified, commercially available software. Methods: 1816 patients from 5 different hospitals in the Province of Reggio Emilia were included in the study. Overall, 201 radiomic features and 16 neural network features were extracted from each COVID-19 patient's radiograph. The initial dataset was balanced so that the classifiers were trained with the same number of deceased and surviving patients, randomly selected. The pipeline had three main parts: a balancing procedure; a three-step feature selection; and mortality prediction with radiomic features through three machine learning (ML) classification models: AdaBoost (ADA), Quadratic Discriminant Analysis (QDA), and Random Forest (RF). Five evaluation metrics were computed on the test samples. The performance for death prediction was validated on both a balanced dataset (Case 1) and an imbalanced dataset (Case 2). Results: Accuracy (ACC), area under the ROC curve (AUC), and sensitivity (SENS) for the best classifier were, respectively, 0.72 ± 0.01, 0.82 ± 0.02, and 0.84 ± 0.04 for Case 1 and 0.70 ± 0.04, 0.79 ± 0.03, and 0.76 ± 0.06 for Case 2. These results show that the prediction of COVID-19 mortality is robust across different sets of scenarios. Conclusions: Our large and varied dataset made it possible to train ML algorithms to predict COVID-19 mortality using radiomic and neural network features of CXRs.
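The balancing step described above (equal numbers of deceased and surviving patients, randomly selected) amounts to random undersampling of the majority class; a minimal sketch, with function and variable names of our own invention:

```python
import random

# Random undersampling: keep an equal-sized random subset of each class.
# Names and the toy data are our own illustration, not the study's code.
def balance_by_undersampling(samples, labels, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    positives = [s for s, y in zip(samples, labels) if y == 1]
    negatives = [s for s, y in zip(samples, labels) if y == 0]
    n = min(len(positives), len(negatives))
    return rng.sample(positives, n), rng.sample(negatives, n)

# 3 "deceased" (label 1) vs. 7 "survived" (label 0) toy samples.
deceased, survived = balance_by_undersampling(
    list(range(10)), [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
)
```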
|
13
|
Laino ME, Ammirabile A, Lofino L, Lundon DJ, Chiti A, Francone M, Savevski V. Prognostic findings for ICU admission in patients with COVID-19 pneumonia: baseline and follow-up chest CT and the added value of artificial intelligence. Emerg Radiol 2022; 29:243-262. [PMID: 35048222 PMCID: PMC8769787 DOI: 10.1007/s10140-021-02008-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Accepted: 12/03/2021] [Indexed: 01/08/2023]
Abstract
Infection with SARS-CoV-2 has dominated discussion and caused a global healthcare and economic crisis over the past 18 months. Coronavirus disease 2019 (COVID-19) causes mild-to-moderate symptoms in most individuals. However, in a proportion of patients, rapid deterioration to severe disease, with or without acute respiratory distress syndrome (ARDS), can occur within 1-2 weeks of symptom onset. Early identification, through risk stratification, of patients at risk of severe complications of COVID-19 is of great clinical importance. Computed tomography (CT) is widely available and offers the potential for fast triage and robust, rapid, minimally invasive diagnosis: ground-glass opacities (GGO), crazy-paving pattern (GGO with superimposed septal thickening), and consolidation are the most common chest CT findings in COVID-19 pneumonia. There is growing interest in the prognostic value of baseline chest CT, since early risk stratification of patients with COVID-19 would allow for better resource allocation and could help improve outcomes. Recent studies have demonstrated the utility of baseline chest CT for predicting intensive care unit (ICU) admission in patients with COVID-19. Furthermore, progress in integrating artificial intelligence (AI) with computer-aided diagnosis (CAD) software for diagnostic imaging allows for objective, unbiased, and rapid assessment of CT images.
Affiliation(s)
- Maria Elena Laino: Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Angela Ammirabile: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Radiology, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Ludovica Lofino: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Radiology, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Dara Joseph Lundon: Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Arturo Chiti: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Humanitas Clinical and Research Center—IRCCS, Via Manzoni 56, 20089 Rozzano, Italy
- Marco Francone: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Radiology, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Victor Savevski: Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
|
14
|
Li D, Pehrson LM, Lauridsen CA, Tøttrup L, Fraccaro M, Elliott D, Zając HD, Darkner S, Carlsen JF, Nielsen MB. The Added Effect of Artificial Intelligence on Physicians' Performance in Detecting Thoracic Pathologies on CT and Chest X-ray: A Systematic Review. Diagnostics (Basel) 2021; 11:2206. [PMID: 34943442 PMCID: PMC8700414 DOI: 10.3390/diagnostics11122206] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 11/18/2021] [Accepted: 11/23/2021] [Indexed: 12/20/2022] Open
Abstract
Our systematic review investigated the added effect of artificial-intelligence-based devices on human observers diagnosing and/or detecting thoracic pathologies with different diagnostic imaging modalities, such as chest X-ray and CT. Peer-reviewed, original research articles were retrieved from EMBASE, PubMed, the Cochrane Library, SCOPUS, and Web of Science. Included articles had been published within the last 20 years and used a device based on artificial intelligence (AI) technology to detect or diagnose pulmonary findings. The AI-based device had to be used in an observer test in which the performance of human observers, with and without the device, was measured as sensitivity, specificity, accuracy, AUC, or time spent on image reading. A total of 38 studies were included in the final assessment, and the quality assessment tool for diagnostic accuracy studies (QUADAS-2) was used for bias assessment. With AI assistance, average sensitivity increased from 67.8% to 74.6%, specificity from 82.2% to 85.4%, accuracy from 75.4% to 81.7%, and area under the ROC curve (AUC) from 0.75 to 0.80. A faster reading time was also generally reported when radiologists were aided by AI-based devices. Our systematic review showed that physician performance generally improved with AI assistance compared to unaided interpretation.
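The reader-performance metrics pooled by the review derive directly from confusion-matrix counts. The sketch below shows the arithmetic; the counts are hypothetical, chosen only so the unaided values roughly mirror the magnitudes reported, and do not come from any included study.

```python
# Sensitivity, specificity, and accuracy from raw confusion-matrix counts,
# compared for a hypothetical reader with and without AI assistance.
def reader_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc

unaided = reader_metrics(tp=68, fp=18, tn=82, fn=32)  # hypothetical unaided read
aided = reader_metrics(tp=75, fp=15, tn=85, fn=25)    # same reader, AI-assisted

for label, (sens, spec, acc) in [("unaided", unaided), ("AI-assisted", aided)]:
    print(f"{label}: sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```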
Affiliation(s)
- Dana Li (corresponding author): Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Lea Marie Pehrson: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Carsten Ammitzbøl Lauridsen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Technology, Faculty of Health and Technology, University College Copenhagen, 2200 Copenhagen, Denmark
- Lea Tøttrup: Unumed Aps, 1055 Copenhagen, Denmark
- Desmond Elliott: Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Hubert Dariusz Zając: Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Sune Darkner: Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Jonathan Frederik Carlsen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Michael Bachmann Nielsen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
|