1
Lesaunier A, Khlaut J, Dancette C, Tselikas L, Bonnet B, Boeken T. Artificial intelligence in interventional radiology: Current concepts and future trends. Diagn Interv Imaging 2025; 106:5-10. [PMID: 39261225] [DOI: 10.1016/j.diii.2024.08.004]
Abstract
While artificial intelligence (AI) is already well established in diagnostic radiology, it is beginning to make its mark in interventional radiology. AI has the potential to dramatically change the daily practice of interventional radiology at several levels. In the preoperative setting, recent advances in deep learning models, particularly foundation models, enable effective management of multimodality and increased autonomy through their ability to function with minimal supervision. Multimodality is at the heart of patient-tailored management; in interventional radiology, this translates into the development of innovative models for patient selection and outcome prediction. In the perioperative setting, AI is manifesting itself in applications that assist radiologists in image analysis and real-time decision making, thereby improving the efficiency, accuracy, and safety of interventions. In synergy with advances in robotic technologies, AI is laying the groundwork for increased autonomy. From a research perspective, the development of artificial health data, such as AI-based data augmentation, offers an innovative solution to the central issue of data scarcity and promises to stimulate research in this area. This review aims to provide the medical community with the most important current and future applications of AI in interventional radiology.
Affiliation(s)
- Armelle Lesaunier
- Department of Vascular and Oncological Interventional Radiology, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France; Université Paris Cité, Faculté de Médecine, 75006 Paris, France.
- Lambros Tselikas
- Gustave Roussy, Département d'Anesthésie, Chirurgie et Interventionnel (DACI), 94805 Villejuif, France; Faculté de Médecine, Paris-Saclay University, 94276 Le Kremlin Bicêtre, France
- Baptiste Bonnet
- Gustave Roussy, Département d'Anesthésie, Chirurgie et Interventionnel (DACI), 94805 Villejuif, France; Faculté de Médecine, Paris-Saclay University, 94276 Le Kremlin Bicêtre, France
- Tom Boeken
- Department of Vascular and Oncological Interventional Radiology, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France; Université Paris Cité, Faculté de Médecine, 75006 Paris, France; HEKA INRIA, INSERM PARCC U 970, 75015 Paris, France
2
Izadi MA, Alemohammad N, Geramifar P, Salimi A, Paymani Z, Eisazadeh R, Samimi R, Nikkholgh B, Sabouri Z. Automatic detection and segmentation of lesions in 18F-FDG PET/CT imaging of patients with Hodgkin lymphoma using 3D dense U-Net. Nucl Med Commun 2024; 45:963-973. [PMID: 39224914] [DOI: 10.1097/mnm.0000000000001892]
Abstract
OBJECTIVE The accuracy of automatic tumor segmentation in PET/computed tomography (PET/CT) images is crucial for the effective treatment and monitoring of Hodgkin lymphoma. This study aims to address the challenges faced by certain segmentation algorithms in accurately differentiating lymphoma from normal organ uptake due to PET image resolution and tumor heterogeneity. MATERIALS AND METHODS Variants of the encoder-decoder architecture are state-of-the-art models for image segmentation; among them, U-Net is one of the most widely used for medical image segmentation. In this study, we propose a fully automatic approach for Hodgkin lymphoma segmentation that combines U-Net and DenseNet architectures to reduce network loss for very small lesions, trained using the Tversky loss function. The hypothesis is that the fusion of these two deep learning models can improve the accuracy and robustness of Hodgkin lymphoma segmentation. A dataset of 141 samples was used to train the proposed network, and two separate datasets of 20 samples were allocated for testing and evaluation. RESULTS We achieved a mean Dice similarity coefficient of 0.759, with a median of 0.767 and an interquartile range of 0.647-0.837. Good agreement was observed between the ground truth of the test images and the predicted volumes, with precision and recall scores of 0.798 and 0.763, respectively. CONCLUSION This study demonstrates that the integration of U-Net and DenseNet architectures, along with the Tversky loss function, can significantly enhance the accuracy of Hodgkin lymphoma segmentation in PET/CT images compared to similar studies.
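The Tversky loss used above generalizes the Dice loss by weighting false positives and false negatives separately, which is why it is often preferred for very small lesions. A minimal PyTorch sketch follows; the alpha/beta weights are illustrative assumptions, not the paper's settings.

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary segmentation.

    pred:   (N, ...) foreground probabilities in [0, 1]
    target: (N, ...) binary ground truth
    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers the Dice loss. Raising beta penalizes
    missed lesion voxels more, which helps with tiny lesions.
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1).float()
    tp = (pred * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    return (1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)).mean()

# Toy usage on a random 3D batch.
pred = torch.rand(2, 1, 32, 32, 32)
target = (torch.rand(2, 1, 32, 32, 32) > 0.95).float()
print(tversky_loss(pred, target).item())
```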
Affiliation(s)
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences
- Ali Salimi
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences
- Zeinab Paymani
- Nuclear Medicine Department, Children Medical Center Hospital, Tehran University of Medical Sciences
- Research Center for Nuclear Medicine, Shariati Hospital
- Roya Eisazadeh
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences
- Rezvan Samimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Khatam PET/CT Center, Specialty and Subspecialty Hospital of Khatam-ol-Anbia, Tehran, Iran
- Babak Nikkholgh
- Khatam PET/CT Center, Specialty and Subspecialty Hospital of Khatam-ol-Anbia, Tehran, Iran
- Zaynab Sabouri
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences
- The Centre for Computational Biology, University of Birmingham, Birmingham, United Kingdom
3
Kahraman AT, Fröding T, Toumpanakis D, Gustafsson CJ, Sjöblom T. Enhanced classification performance using deep learning based segmentation for pulmonary embolism detection in CT angiography. Heliyon 2024; 10:e38118. [PMID: 39398015] [PMCID: PMC11471166] [DOI: 10.1016/j.heliyon.2024.e38118]
Abstract
Purpose To develop a deep learning-based algorithm that automatically and accurately classifies patients as either having pulmonary emboli (PE) or not in CT pulmonary angiography (CTPA) examinations. Materials and methods For model development, 700 CTPA examinations from 652 patients performed at a single institution were used, of which 149 examinations contained 1497 PE traced by radiologists. The nnU-Net deep learning-based segmentation framework was trained using 5-fold cross-validation. To enhance classification, we applied logical rules based on PE volume and probability thresholds. External model evaluation was performed in 770 and 34 CTPAs from two independent datasets. Results A total of 1483 CTPA examinations were evaluated. In the internal cross-validation and test set, the trained model correctly classified 123 of 128 examinations as positive for PE (sensitivity 96.1%; 95% CI: 91-98%; P < .05) and 521 of 551 as negative (specificity 94.6%; 95% CI: 92-96%; P < .05), achieving an area under the receiver operating characteristic curve (AUROC) of 96.4% (95% CI: 79-99%; P < .05). In the first external test dataset, the trained model correctly classified 31 of 32 examinations as positive (sensitivity 96.9%; 95% CI: 84-99%; P < .05) and 2 of 2 as negative (specificity 100%; 95% CI: 34-100%; P < .05), achieving an AUROC of 98.6% (95% CI: 83-100%; P < .05). In the second external test dataset, the trained model correctly classified 379 of 385 examinations as positive (sensitivity 98.4%; 95% CI: 97-99%; P < .05) and 346 of 385 as negative (specificity 89.9%; 95% CI: 86-93%; P < .05), achieving an AUROC of 98.5% (95% CI: 83-100%; P < .05). Conclusion Our automatic pipeline achieved beyond state-of-the-art diagnostic performance of PE in CTPA using nnU-Net for segmentation and volume- and probability-based post-processing for classification.
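The classification step described here applies "logical rules based on PE volume and probability thresholds" to the nnU-Net segmentation output. A sketch of that kind of post-processing is shown below; the specific threshold values are placeholders, not the published ones.

```python
import numpy as np

def classify_pe(prob_map, voxel_volume_mm3, prob_thresh=0.5, min_volume_mm3=20.0):
    """Classify a CTPA exam as PE-positive from a voxelwise probability map.

    prob_map:         3D array of per-voxel PE probabilities from the
                      segmentation network
    voxel_volume_mm3: physical volume of one voxel
    The exam is positive if the thresholded candidate volume exceeds a
    minimum-volume rule; both thresholds here are illustrative only.
    """
    candidate = prob_map >= prob_thresh
    volume = candidate.sum() * voxel_volume_mm3
    return volume >= min_volume_mm3

# Example with a synthetic probability map.
rng = np.random.default_rng(0)
prob_map = rng.random((64, 64, 64)) * 0.4   # background noise below threshold
prob_map[30:34, 30:34, 30:34] = 0.9         # a small high-probability clot
print(classify_pe(prob_map, voxel_volume_mm3=0.5))  # True
```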
Affiliation(s)
- Ali Teymur Kahraman
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Tomas Fröding
- Department of Radiology, Nyköping Hospital, Nyköping, Sweden
- Dimitris Toumpanakis
- Karolinska University Hospital, Stockholm, Sweden
- Department of Surgical Sciences, Uppsala University, Sweden
- Christian Jamtheim Gustafsson
- Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
- Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
- Tobias Sjöblom
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
4
Hemalakshmi GR, Murugappan M, Sikkandar MY, Santhi D, Prakash NB, Mohanarathinam A. PE-Ynet: a novel attention-based multi-task model for pulmonary embolism detection using CT pulmonary angiography (CTPA) scan images. Phys Eng Sci Med 2024; 47:863-880. [PMID: 38546819] [DOI: 10.1007/s13246-024-01410-3]
Abstract
Pulmonary Embolism (PE) has diverse manifestations with different etiologies such as venous thromboembolism, septic embolism, and paradoxical embolism. In this study, a novel attention-based multi-task model is proposed for PE segmentation and detection from Computed Tomography Pulmonary Angiography (CTPA) images. A Y-Net architecture is used to implement this model, which performs segmentation and classification jointly, improving performance and efficiency. It leverages Multi-Head Attention (MHA), which allows the model to focus on important regions of the image while suppressing irrelevant information, improving the accuracy of the segmentation and detection tasks. The proposed PE-YNet model is tested with two public datasets, achieving a maximum mean detection and segmentation accuracy of 99.89% and 99.83%, respectively, on the CAD-PE challenge dataset. Similarly, it also achieves a detection accuracy of 99.75% and a segmentation accuracy of 99.81% on the FUMPE dataset. Additionally, sensitivity analysis shows a high sensitivity of 0.9885 for a localization error of ɛ = 0 on the CAD-PE dataset, demonstrating the model's robustness against false predictions compared to state-of-the-art models. Further, this model also exhibits lower inference time, size, and memory usage compared to representative models. An automated PE-YNet tool can assist physicians with PE diagnosis, treatment, and prognosis monitoring in the clinical management of COVID-19.
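Multi-head attention of the kind PE-YNet embeds can be added to a CNN by treating the spatial positions of a feature map as tokens. The sketch below uses PyTorch's built-in nn.MultiheadAttention; where exactly the paper inserts MHA into its Y-Net is not specified in the abstract, so this is a generic illustration.

```python
import torch
import torch.nn as nn

class FeatureSelfAttention(nn.Module):
    """Multi-head self-attention over the spatial positions of a CNN
    feature map, with a residual connection and layer norm."""

    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # x: (N, C, H, W) -> sequence of H*W tokens of dimension C
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # (N, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)           # residual + norm
        return tokens.transpose(1, 2).reshape(n, c, h, w)

# Example: attend over an 8x8 feature map with 64 channels.
x = torch.randn(2, 64, 8, 8)
print(FeatureSelfAttention(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```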
Affiliation(s)
- G R Hemalakshmi
- School of Computing Science and Engineering, Vellore Institute of Technology, Bhopal, Madhya Pradesh, India
- M Murugappan
- Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, 13133, Doha, Kuwait
- Department of Electronics and Communication Engineering, School of Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai, Tamil Nadu, India
- Center of Excellence for Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, 02600, Arau, Perlis, Malaysia
- Mohamed Yacin Sikkandar
- Biomedical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majma'ah, Saudi Arabia
- D Santhi
- Department of Biomedical Engineering, Mepco Schlenk Engineering College, Sivakasi, India
- N B Prakash
- Department of Electrical and Electronics Engineering, National Engineering College, Kovilpatti, India
- A Mohanarathinam
- Department of Electronics and Communication Engineering, Karpagam Academy of Higher Education, Coimbatore, Tamil Nadu, 641021, India
5
Amadou AA, Peralta L, Dryburgh P, Klein P, Petkov K, Housden RJ, Singh V, Liao R, Kim YH, Ghesu FC, Mansi T, Rajani R, Young A, Rhode K. Cardiac ultrasound simulation for autonomous ultrasound navigation. Front Cardiovasc Med 2024; 11:1384421. [PMID: 39193499] [PMCID: PMC11347295] [DOI: 10.3389/fcvm.2024.1384421]
Abstract
Introduction Ultrasound is well established as an imaging modality for diagnostic and interventional purposes. However, image quality varies with operator skill, as acquiring and interpreting ultrasound images requires extensive training due to imaging artefacts, the range of acquisition parameters, and the variability of patient anatomies. Automating the image acquisition task could improve acquisition reproducibility and quality, but training such an algorithm requires large amounts of navigation data, which are not saved in routine examinations. Methods We propose a method to generate large amounts of ultrasound images from other modalities and from arbitrary positions, such that this pipeline can later be used by learning algorithms for navigation. We present a novel simulation pipeline which uses segmentations from other modalities, an optimized volumetric data representation, and GPU-accelerated Monte Carlo path tracing to generate view-dependent and patient-specific ultrasound images. Results We extensively validate the correctness of our pipeline with a phantom experiment, in which structures' sizes, contrast, and speckle noise properties are assessed. Furthermore, we demonstrate its usability for training neural networks for navigation in an echocardiography view classification experiment by generating synthetic images from more than 1,000 patients. Networks pre-trained with our simulations achieve significantly superior performance in settings where large real datasets are not available, especially for under-represented classes. Discussion The proposed approach allows for fast and accurate patient-specific ultrasound image generation, and its usability for training networks for navigation-related tasks is demonstrated.
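The paper's simulator uses GPU-accelerated Monte Carlo path tracing over segmentations, which is far beyond a short snippet. As a toy illustration of how view-dependent speckle texture can be synthesized at all, the classical convolution model below (random scatterers weighted by tissue echogenicity, convolved with a point-spread function, then envelope-detected) is often used; it is explicitly not the paper's method.

```python
import numpy as np
from scipy.signal import fftconvolve, hilbert

def toy_speckle(tissue_map, f0_cycles=3.0, sigma_ax=3.0, sigma_lat=2.0, seed=0):
    """Toy ultrasound speckle: echogenicity-weighted random scatterers
    convolved with an axially oscillating Gaussian PSF, then
    envelope-detected. A classical approximation, not path tracing.
    """
    rng = np.random.default_rng(seed)
    scatterers = tissue_map * rng.normal(size=tissue_map.shape)
    ax = np.arange(-8, 9)[:, None]
    lat = np.arange(-8, 9)[None, :]
    psf = (np.cos(2 * np.pi * f0_cycles * ax / 17)
           * np.exp(-ax**2 / (2 * sigma_ax**2))
           * np.exp(-lat**2 / (2 * sigma_lat**2)))
    rf = fftconvolve(scatterers, psf, mode="same")  # simulated RF data
    return np.abs(hilbert(rf, axis=0))              # envelope detection

# A bright circular structure in darker background tissue.
yy, xx = np.mgrid[:128, :128]
tissue = 0.2 + 0.8 * (((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2)
image = toy_speckle(tissue)
print(image.shape)  # (128, 128)
```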
Affiliation(s)
- Abdoul Aziz Amadou
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
- Digital Technology and Innovation, Siemens Healthcare Limited, Camberley, United Kingdom
- Laura Peralta
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
- Paul Dryburgh
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
- Paul Klein
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, United States
- Kaloian Petkov
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, United States
- R. James Housden
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
- Vivek Singh
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, United States
- Rui Liao
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, United States
- Young-Ho Kim
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, United States
- Florin C. Ghesu
- Siemens Healthineers AG, Digital Technology and Innovation, Erlangen, Germany
- Tommaso Mansi
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, United States
- Ronak Rajani
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
- Alistair Young
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
- Kawal Rhode
- Department of Surgical & Interventional Engineering, King’s College London, School of Biomedical Engineering & Imaging Sciences, London, United Kingdom
6
Djahnine A, Lazarus C, Lederlin M, Mulé S, Wiemker R, Si-Mohamed S, Jupin-Delevaux E, Nempont O, Skandarani Y, De Craene M, Goubalan S, Raynaud C, Belkouchi Y, Afia AB, Fabre C, Ferretti G, De Margerie C, Berge P, Liberge R, Elbaz N, Blain M, Brillet PY, Chassagnon G, Cadour F, Caramella C, Hajjam ME, Boussouar S, Hadchiti J, Fablet X, Khalil A, Talbot H, Luciani A, Lassau N, Boussel L. Detection and severity quantification of pulmonary embolism with 3D CT data using an automated deep learning-based artificial solution. Diagn Interv Imaging 2024; 105:97-103. [PMID: 38261553] [DOI: 10.1016/j.diii.2023.09.006]
Abstract
PURPOSE The purpose of this study was to propose a deep learning-based approach to detect pulmonary embolism and quantify its severity using the Qanadli score and the right-to-left ventricle (RV/LV) diameter ratio on three-dimensional (3D) computed tomography pulmonary angiography (CTPA) examinations with limited annotations. MATERIALS AND METHODS Using a database of 3D CTPA examinations of 1268 patients with image-level annotations, and two other public datasets of CTPA examinations from 91 (CAD-PE) and 35 (FUMPE) patients with pixel-level annotations, a pipeline was followed consisting of: (i) detecting blood clots; (ii) performing PE-positive versus negative classification; (iii) estimating the Qanadli score; and (iv) predicting the RV/LV diameter ratio. The method was evaluated on a test set including 378 patients. The performance of PE classification and severity quantification was quantitatively assessed using an area under the curve (AUC) analysis for PE classification and a coefficient of determination (R²) for the Qanadli score and the RV/LV diameter ratio. RESULTS Quantitative evaluation led to an overall AUC of 0.870 (95% confidence interval [CI]: 0.850-0.900) for the PE classification task on the training set and an AUC of 0.852 (95% CI: 0.810-0.890) on the test set. Regression analysis yielded R² values of 0.717 (95% CI: 0.668-0.760) and 0.723 (95% CI: 0.668-0.766) for the Qanadli score and the RV/LV diameter ratio estimation, respectively, on the test set. CONCLUSION This study shows the feasibility of utilizing AI-based assistance tools in detecting blood clots and estimating PE severity scores with 3D CTPA examinations. This is achieved by leveraging blood clot and cardiac segmentations. Further studies are needed to assess the effectiveness of these tools in clinical practice.
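The Qanadli score estimated here is, in its usual definition, computed over the 20 segmental pulmonary arteries (10 per lung), each weighted 1 for partial and 2 for complete occlusion, for a maximum score of 40; a proximal clot scores as all segmental arteries arising distal to it. A sketch under that definition (the input encoding is an assumption, and the paper regresses the score from segmentations rather than computing it from per-artery annotations):

```python
def qanadli_index(segmental_occlusions):
    """Qanadli vascular obstruction index as a percentage.

    segmental_occlusions: mapping from each of the 20 segmental arteries
    (10 per lung) to 0 = no embolus, 1 = partial occlusion, 2 = total
    occlusion. A clot in a more proximal artery is represented by scoring
    every segmental artery that arises distal to it.
    """
    if len(segmental_occlusions) != 20:
        raise ValueError("expected scores for all 20 segmental arteries")
    score = sum(segmental_occlusions.values())
    return 100.0 * score / 40.0  # 40 is the maximum possible score

# Example: three partially and two totally occluded segmental arteries.
arteries = {f"seg_{i}": 0 for i in range(20)}
arteries.update(seg_0=1, seg_1=1, seg_2=1, seg_3=2, seg_4=2)
print(qanadli_index(arteries))  # 17.5
```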
Affiliation(s)
- Aissam Djahnine
- Philips Research France, 92150 Suresnes, France; CREATIS, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, France.
- Sébastien Mulé
- Medical Imaging Department, Henri Mondor University Hospital, AP-HP, Créteil, France; Inserm, U955, Team 18, 94000 Créteil, France
- Salim Si-Mohamed
- Department of Radiology, Hospices Civils de Lyon, 69500 Lyon, France
- Younes Belkouchi
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; OPIS - Optimisation Imagerie et Santé, Université Paris-Saclay, Inria, CentraleSupélec, CVN - Centre de vision numérique, 91190 Gif-Sur-Yvette, France
- Amira Ben Afia
- Department of Radiology, APHP Nord, Hôpital Bichat, 75018 Paris, France
- Clement Fabre
- Department of Radiology, Centre Hospitalier de Laval, 53000 Laval, France
- Gilbert Ferretti
- Université Grenoble Alpes, Service de Radiologie et Imagerie Médicale, CHU Grenoble-Alpes, 38000 Grenoble, France
- Constance De Margerie
- Université Paris Cité, 75006 Paris, France; Department of Radiology, Hôpital Saint-Louis, Assistance Publique-Hôpitaux de Paris, 75010 Paris, France
- Pierre Berge
- Department of Radiology, CHU Angers, 49000 Angers, France
- Renan Liberge
- Department of Radiology, CHU Nantes, 44000 Nantes, France
- Nicolas Elbaz
- Department of Radiology, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Maxime Blain
- Department of Radiology, Hôpital Henri Mondor, AP-HP, 94000 Créteil, France
- Pierre-Yves Brillet
- Department of Radiology, Hôpital Avicenne, Paris 13 University, 93000 Bobigny, France
- Guillaume Chassagnon
- Department of Radiology, Hôpital Cochin, APHP, 75014 Paris, France; Université Paris Cité, 75006 Paris, France
- Farah Cadour
- APHM, Hôpital Universitaire Timone, CEMEREM, 13005 Marseille, France
- Caroline Caramella
- Department of Radiology, Groupe Hospitalier Paris Saint-Joseph, 75015 Paris, France
- Mostafa El Hajjam
- Department of Radiology, Hôpital Ambroise Paré, UMR 1179 INSERM/UVSQ, Team 3, 92100 Boulogne-Billancourt, France
- Samia Boussouar
- Sorbonne Université, Hôpital La Pitié-Salpêtrière, APHP, Unité d'Imagerie Cardiovasculaire et Thoracique (ICT), 75013 Paris, France
- Joya Hadchiti
- Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, 94800 Villejuif, France
- Xavier Fablet
- Department of Radiology, CHU Rennes, 35000 Rennes, France
- Antoine Khalil
- Department of Radiology, APHP Nord, Hôpital Bichat, 75018 Paris, France
- Hugues Talbot
- OPIS - Optimisation Imagerie et Santé, Université Paris-Saclay, Inria, CentraleSupélec, CVN - Centre de vision numérique, 91190 Gif-Sur-Yvette, France
- Alain Luciani
- Medical Imaging Department, Henri Mondor University Hospital, AP-HP, Créteil, France; Inserm, U955, Team 18, 94000 Créteil, France
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, 94800 Villejuif, France
- Loic Boussel
- CREATIS, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, France; Department of Radiology, Hospices Civils de Lyon, 69500 Lyon, France
7
Islam NU, Zhou Z, Gehlot S, Gotway MB, Liang J. Seeking an optimal approach for Computer-aided Diagnosis of Pulmonary Embolism. Med Image Anal 2024; 91:102988. [PMID: 37924750] [PMCID: PMC11039560] [DOI: 10.1016/j.media.2023.102988]
Abstract
Pulmonary Embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using Computed Tomography Pulmonary Angiography (CTPA). Deep learning holds great promise for the Computer-aided Diagnosis (CAD) of PE. However, numerous deep learning methods, such as Convolutional Neural Networks (CNN) and Transformer-based models, exist for a given task, causing great confusion regarding the development of CAD systems for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis based on four datasets. First, we use the RSNA PE dataset, which includes (weak) slice-level and exam-level labels, for PE classification and diagnosis, respectively. At the slice level, we compare CNNs with the Vision Transformer (ViT) and the Swin Transformer. We also investigate the impact of self-supervised versus (fully) supervised ImageNet pre-training, and transfer learning over training models from scratch. Additionally, at the exam level, we compare sequence model learning with our proposed transformer-based architecture, Embedding-based ViT (E-ViT). For the second and third datasets, we utilize the CAD-PE Challenge Dataset and Ferdowsi University of Mashhad's PE dataset (FUMPE), where we convert (strong) clot-level masks into slice-level annotations to evaluate the optimal CNN model for slice-level PE classification. Finally, we use our in-house PE-CAD dataset, which contains (strong) clot-level masks. Here, we investigate the impact of our vessel-oriented image representations and self-supervised pre-training on PE false positive reduction at the clot level across image dimensions (2D, 2.5D, and 3D). Our experiments show that (1) transfer learning boosts performance despite differences between photographic images and CTPA scans; (2) self-supervised pre-training can surpass (fully) supervised pre-training; (3) transformer-based models demonstrate comparable performance but slower convergence compared with CNNs for slice-level PE classification; (4) a model trained on the RSNA PE dataset demonstrates promising performance when tested on unseen datasets for slice-level PE classification; (5) our E-ViT framework excels in handling variable numbers of slices and outperforms sequence model learning for exam-level diagnosis; and (6) vessel-oriented image representation and self-supervised pre-training both enhance performance for PE false positive reduction across image dimensions. Our optimal approach surpasses state-of-the-art results on the RSNA PE dataset, enhancing AUC by 0.62% (slice-level) and 2.22% (exam-level). On our in-house PE-CAD dataset, 3D vessel-oriented images improve performance from 80.07% to 91.35%, a remarkable 11% gain. Codes are available at GitHub.com/JLiangLab/CAD_PE.
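The conversion the authors describe, from strong clot-level masks to slice-level annotations, amounts to marking any slice that intersects a clot mask as positive. A sketch, with the one-voxel threshold as an assumption:

```python
import numpy as np

def masks_to_slice_labels(mask_volume, min_voxels=1):
    """Convert a 3D clot-level mask (D, H, W) into per-slice binary labels.

    A slice is labeled PE-positive if it contains at least `min_voxels`
    annotated clot voxels; the default threshold of 1 voxel is an assumption.
    """
    per_slice = mask_volume.reshape(mask_volume.shape[0], -1).sum(axis=1)
    return (per_slice >= min_voxels).astype(np.uint8)

# Example: a mask with clot voxels on slices 10-12 only.
mask = np.zeros((40, 128, 128), dtype=np.uint8)
mask[10:13, 60:64, 60:64] = 1
labels = masks_to_slice_labels(mask)
print(labels.nonzero()[0])  # [10 11 12]
```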
Affiliation(s)
- Nahid Ul Islam
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Zongwei Zhou
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Shiv Gehlot
- Biomedical Informatics Program, Arizona State University, Phoenix, AZ 85054, USA
- Jianming Liang
- Biomedical Informatics Program, Arizona State University, Phoenix, AZ 85054, USA
8
Kahraman AT, Fröding T, Toumpanakis D, Sladoje N, Sjöblom T. Automated detection, segmentation and measurement of major vessels and the trachea in CT pulmonary angiography. Sci Rep 2023; 13:18407. [PMID: 37891213] [PMCID: PMC10611811] [DOI: 10.1038/s41598-023-45509-1]
Abstract
Mediastinal structure measurements are important for the radiologist's review of computed tomography pulmonary angiography (CTPA) examinations. In the reporting process, radiologists make measurements of diameters, volumes, and organ densities for image quality assessment and risk stratification. However, manual measurement of these features is time consuming. Here, we sought to develop a time-saving automated algorithm that can accurately detect, segment and measure mediastinal structures in routine clinical CTPA examinations. In this study, 700 CTPA examinations were collected and annotated. Of these, a training set of 180 examinations was used to develop a fully automated deterministic algorithm. On the test set of 520 examinations, two radiologists validated the detection and segmentation performance quantitatively, and ground truth was annotated to validate the measurement performance. External validation was performed in 47 CTPAs from two independent datasets. The system had 86-100% detection and segmentation accuracy in the different tasks. The automatic measurements correlated well with those of the radiologist (Pearson's r = 0.68-0.99). Taken together, the fully automated algorithm accurately detected, segmented, and measured mediastinal structures in routine CTPA examinations having an adequate representation of common artifacts and medical conditions.
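Agreement between automatic and radiologist measurements of the kind reported here (Pearson's r = 0.68-0.99) can be checked in a few lines; the values below are synthetic, for illustration only.

```python
import numpy as np
from scipy import stats

# Automated vs. radiologist measurements for one structure (e.g. a vessel
# diameter in mm); these numbers are made up for the example.
radiologist = np.array([28.1, 30.4, 25.9, 33.2, 27.5, 31.0])
automated = np.array([27.8, 30.9, 26.3, 32.7, 28.0, 30.6])

r, p = stats.pearsonr(radiologist, automated)
bias = np.mean(automated - radiologist)  # mean difference (Bland-Altman bias)
print(f"Pearson r = {r:.3f} (p = {p:.3g}), bias = {bias:.2f} mm")
```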
Affiliation(s)
- Ali T Kahraman
- Department of Immunology, Genetics and Pathology, Uppsala University, 751 85, Uppsala, Sweden
- Tomas Fröding
- Department of Radiology, Nyköping Hospital, 611 39, Nyköping, Sweden
- Dimitrios Toumpanakis
- Department of Radiology, Uppsala University Hospital, 751 85, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Nataša Sladoje
- Centre for Image Analysis, Department of Information Technology, Uppsala University, 751 05, Uppsala, Sweden
- Tobias Sjöblom
- Department of Immunology, Genetics and Pathology, Uppsala University, 751 85, Uppsala, Sweden
9
de Andrade JMC, Olescki G, Escuissato DL, Oliveira LF, Basso ACN, Salvador GL. Pixel-level annotated dataset of computed tomography angiography images of acute pulmonary embolism. Sci Data 2023; 10:518. [PMID: 37542053] [PMCID: PMC10403591] [DOI: 10.1038/s41597-023-02374-x]
Abstract
Pulmonary embolism (PE) has a high incidence and mortality, especially if undiagnosed. The examination of choice for diagnosing the disease is computed tomography pulmonary angiography. As many factors can lead to misinterpretations and diagnostic errors, different groups are utilizing deep learning methods to help improve this process. The diagnostic accuracy of these methods tends to increase with augmentation of the training dataset. Deep learning methods can potentially benefit from the use of images acquired with devices from different vendors. To the best of our knowledge, we have developed the first public dataset annotated at both the pixel and image levels, and the first pixel-level annotated dataset to contain examinations performed with equipment from both Toshiba and GE. This dataset includes 40 examinations, half performed with each piece of equipment, representing samples from two medical services. We also included measurements related to the cardiac and circulatory consequences of pulmonary embolism. We encourage the use of this dataset to develop, evaluate and compare the performance of new AI algorithms designed to diagnose PE.
Affiliation(s)
- Gabriel Olescki
- Department of Informatics, Federal University of Paraná, Curitiba, Brazil
- Dante Luiz Escuissato
- Department of Radiology and Image Diagnosis, Hospital de Clínicas, Federal University of Paraná, Curitiba, Brazil
- Ana Carolina Nicolleti Basso
- Department of Radiology and Image Diagnosis, Hospital de Clínicas, Federal University of Paraná, Curitiba, Brazil
- Gabriel Lucca Salvador
- Department of Radiology and Image Diagnosis, Hospital de Clínicas, Federal University of Paraná, Curitiba, Brazil
10
Cheng TW, Chua YW, Huang CC, Chang J, Kuo C, Cheng YC. Feature-enhanced adversarial semi-supervised semantic segmentation network for pulmonary embolism annotation. Heliyon 2023; 9:e16060. [PMID: 37215788] [PMCID: PMC10196850] [DOI: 10.1016/j.heliyon.2023.e16060]
Abstract
This study established a feature-enhanced adversarial semi-supervised semantic segmentation model to automatically annotate pulmonary embolism (PE) lesion areas in computed tomography pulmonary angiogram (CTPA) images. To date, PE CTPA image segmentation methods have been trained by supervised learning. However, when CTPA images come from different hospitals, supervised learning models need to be retrained and the images need to be relabeled. Therefore, this study proposed a semi-supervised learning method to make the model applicable to different datasets through the addition of a small number of unlabeled images. By training the model with both labeled and unlabeled images, the accuracy on unlabeled images was improved and the labeling cost was reduced. Our proposed semi-supervised segmentation model included a segmentation network and a discriminator network. We added feature information generated from the encoder of the segmentation network to the discriminator so that it could learn the similarities between the predicted label and the ground truth label. The HRNet-based architecture was modified and used as the segmentation network; this architecture maintains a higher resolution for convolutional operations to improve the prediction of small PE lesion areas. We used a labeled open-source dataset and an unlabeled National Cheng Kung University Hospital (NCKUH) (IRB number: B-ER-108-380) dataset to train the semi-supervised learning model, and the resulting mean intersection over union (mIoU), Dice score, and sensitivity reached 0.3510, 0.4854, and 0.4253, respectively, on the NCKUH dataset. We then fine-tuned and tested the model with a small number of unlabeled PE CTPA images from a China Medical University Hospital (CMUH) (IRB number: CMUH110-REC3-173) dataset. Comparing the results of our semi-supervised model with those of the supervised model, the mIoU, Dice score, and sensitivity improved from 0.2344, 0.3325, and 0.3151 to 0.3721, 0.5113, and 0.4967, respectively. In conclusion, our semi-supervised model can improve accuracy on other datasets and reduce the labor cost of labeling with only a small number of unlabeled images for fine-tuning.
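The training scheme described, a segmentation network plus a discriminator that scores how ground-truth-like a predicted label map is, can be sketched as below. The paper's specific twist, feeding encoder features of the segmentation network into the discriminator, is not shown here, and the loss weights and confidence threshold are assumptions.

```python
import torch
import torch.nn.functional as F

def segmentation_step(seg_net, disc, labeled, masks, unlabeled,
                      lambda_adv=0.01, lambda_semi=0.1, conf_thresh=0.2):
    """One update of the segmentation network in an adversarial
    semi-supervised scheme (a sketch in the spirit of the paper; the exact
    formulation is an assumption).

    seg_net(x) -> per-pixel class logits (N, C, H, W)
    disc(p)    -> per-pixel realness logits (N, 1, H, W) for a label map p
    """
    # 1) Supervised cross-entropy on labeled data.
    logits = seg_net(labeled)
    loss = F.cross_entropy(logits, masks)

    # 2) Adversarial term: predicted maps should fool the discriminator.
    d_out = disc(torch.softmax(logits, dim=1))
    loss = loss + lambda_adv * F.binary_cross_entropy_with_logits(
        d_out, torch.ones_like(d_out))

    # 3) Semi-supervised term: self-train on unlabeled pixels where the
    #    discriminator is confident the prediction looks ground-truth-like.
    u_logits = seg_net(unlabeled)
    u_probs = torch.softmax(u_logits, dim=1)
    conf = torch.sigmoid(disc(u_probs)).squeeze(1)   # (N, H, W)
    pseudo = u_probs.argmax(dim=1).detach()          # pseudo-labels
    trusted = conf > conf_thresh
    if trusted.any():
        pix_loss = F.cross_entropy(u_logits, pseudo, reduction="none")
        loss = loss + lambda_semi * pix_loss[trusted].mean()
    return loss
```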
Affiliation(s)
- Ting-Wei Cheng
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin-Chu, Taiwan
- Yi Wei Chua
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin-Chu, Taiwan
- Ching-Chun Huang
- Department of Computer Science, College of Computer Science, National Yang Ming Chiao Tung University, Hsin-Chu, Taiwan
- Jerry Chang
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin-Chu, Taiwan
- Chin Kuo
- Department of Oncology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- College of Artificial Intelligence, National Yang Ming Chiao Tung University, Hsin-Chu, Taiwan
- Yun-Chien Cheng
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin-Chu, Taiwan
11
Xu H, Li H, Xu Q, Zhang Z, Wang P, Li D, Guo L. Automatic detection of pulmonary embolism in computed tomography pulmonary angiography using Scaled-YOLOv4. Med Phys 2023. [PMID: 36633186] [DOI: 10.1002/mp.16218]
Abstract
BACKGROUND Pulmonary embolism (PE) is a common but potentially fatal clinical condition, and the gold standard of diagnosis is computed tomography pulmonary angiography (CTPA). Prompt diagnosis and rapid treatment can dramatically reduce mortality in patients. However, the diagnosis of PE is often delayed or missed. METHODS In this study, we applied the deep learning model Scaled-YOLOv4 to enable end-to-end automated detection of PE to help solve these problems. A total of 307 CTPA examinations (Tianjin 142 cases, Linyi 133 cases, and FUMPE 32 cases) were included in this study. The Tianjin dataset was split 10 times into training, validation, and test sets in a 7:2:1 ratio for model tuning, and both the Linyi and FUMPE datasets were used as independent external test sets to evaluate the generalization of the model. RESULTS Scaled-YOLOv4 processed one patient in an average of 3.55 s [95% CI: 3.51-3.59 s]. It achieved an average precision (AP) of 83.04 [95% CI: 79.36-86.72] for PE detection on the Tianjin test set, and 75.86 [95% CI: 75.48-76.24] and 72.74 [95% CI: 72.10-73.38] on Linyi and FUMPE, respectively. CONCLUSIONS This deep learning algorithm helps detect PE in real time, providing radiologists with aided diagnostic evidence without increasing their workload, and can effectively reduce the probability of delayed patient diagnosis.
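The average precision reported for this detector is computed by greedily matching predicted boxes to ground truth at an IoU threshold and integrating precision over recall. A self-contained sketch of that evaluation:

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def average_precision(preds, gts, iou_thresh=0.5):
    """AP for one class. preds: list of (score, box); gts: list of boxes."""
    preds = sorted(preds, key=lambda p: -p[0])   # by descending confidence
    matched = set()
    tp, fp = np.zeros(len(preds)), np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        ious = [(iou(box, g), j) for j, g in enumerate(gts) if j not in matched]
        best, best_j = max(ious, default=(0.0, -1))
        if best >= iou_thresh:
            tp[i] = 1
            matched.add(best_j)
        else:
            fp[i] = 1
    tp_c, fp_c = np.cumsum(tp), np.cumsum(fp)
    recall = tp_c / max(len(gts), 1)
    precision = tp_c / np.maximum(tp_c + fp_c, 1e-9)
    # All-point interpolation: integrate precision over recall increments.
    p_interp = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for p, r in zip(p_interp, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

# Example: two predictions, one ground-truth embolus.
print(average_precision([(0.9, (10, 10, 20, 20)), (0.4, (50, 50, 60, 60))],
                        [(11, 11, 21, 21)]))  # 1.0
```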
Affiliation(s)
- Haijun Xu
- School of Medical Imaging, Tianjin Medical University, Tianjin, China
- Huiyao Li
- Department of MR, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Qifei Xu
- Department of Radiology, Linyi People's Hospital, Linyi, Shandong, China
- Zewei Zhang
- Department of Nuclear Medicine, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ping Wang
- School of Medical Imaging, Tianjin Medical University, Tianjin, China
- Dong Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, China
- Li Guo
- School of Medical Imaging, Tianjin Medical University, Tianjin, China
12
Cahan N, Marom EM, Soffer S, Barash Y, Konen E, Klang E, Greenspan H. Weakly supervised attention model for RV strain classification from volumetric CTPA scans. Comput Methods Programs Biomed 2022; 220:106815. [PMID: 35461128] [DOI: 10.1016/j.cmpb.2022.106815]
Abstract
BACKGROUND AND OBJECTIVE Evaluation of the right ventricle (RV) is a key component of the clinical assessment of many cardiovascular and pulmonary disorders. In this work, we focus on RV strain classification in patients who were diagnosed with pulmonary embolism (PE) on computed tomography pulmonary angiography (CTPA) scans. PE is a life-threatening condition, often without warning signs or symptoms. Early diagnosis and accurate risk stratification are critical for decreasing mortality rates. High-risk PE relies on the presence of RV dysfunction resulting from acute pressure overload. PE severity classification and, specifically, high-risk PE diagnosis are crucial for appropriate therapy. CTPA is the gold standard in the diagnostic workup of suspected PE; therefore, it can link diagnosis and risk stratification strategies. METHODS We retrieved data of consecutive patients who underwent CTPA and were diagnosed with PE, and extracted a single binary label of "RV strain biomarker" from the CTPA scan report. This label was used as a weak label for classification. Our solution applies a 3D DenseNet network architecture, further improved by integrating residual attention blocks into the network's layers. RESULTS This model achieved an area under the receiver operating characteristic curve (AUC) of 0.88 for classifying RV strain. At Youden's index, the model showed a sensitivity of 87% and specificity of 83.7%. Our solution outperforms state-of-the-art 3D CNN networks. The proposed design allows for a fully automated network that can be trained easily in an end-to-end manner without requiring computationally intensive and time-consuming preprocessing or strenuous labeling of the data. CONCLUSIONS This solution demonstrates that a small dataset of readily available unmarked CTPAs can be used for effective RV strain classification. To our knowledge, this is the first work that attempts to solve the problem of RV strain classification from CTPA scans, and the first in which medical images are used in such an architecture. Our generalized self-attention blocks can be incorporated into various existing classification architectures, making this a general methodology that can be applied to 3D medical datasets.
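Youden's index, used above to pick the operating point, selects the ROC threshold maximizing sensitivity + specificity - 1. A sketch with scikit-learn on synthetic scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic model outputs vs. weak RV-strain labels, for illustration only.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, 200), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                      # Youden's J statistic at each threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
print(f"threshold = {thresholds[best]:.3f}: "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```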
Affiliation(s)
- Noa Cahan
- Faculty of Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Edith M Marom
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Shelly Soffer
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Eli Konen
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Hayit Greenspan
- Faculty of Engineering, Tel-Aviv University, Tel-Aviv, Israel
13
Egger J, Wild D, Weber M, Bedoya CAR, Karner F, Prutsch A, Schmied M, Dionysio C, Krobath D, Jin Y, Gsaxner C, Li J, Pepe A. Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform. J Digit Imaging 2022; 35:340-355. [PMID: 35064372] [PMCID: PMC8782222] [DOI: 10.1007/s10278-021-00574-8]
Abstract
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all anatomical structure and pathology types. In this context, we introduce Studierfenster (www.studierfenster.at): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the already existing functionalities or future additional implementations of further image processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming increasingly common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
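The two metrics Studierfenster computes, the Dice score and the Hausdorff distance, can be reproduced for binary masks in a few lines with NumPy and SciPy:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two binary masks (in voxels)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Example: two overlapping square masks.
m1 = np.zeros((64, 64), bool); m1[10:30, 10:30] = True
m2 = np.zeros((64, 64), bool); m2[12:32, 12:32] = True
print(dice(m1, m2), hausdorff(m1, m2))
```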
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Daniel Wild
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Maximilian Weber
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christopher A Ramirez Bedoya
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Florian Karner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Alexander Prutsch
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Michael Schmied
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Dionysio
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Dominik Krobath
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, 311121, Hangzhou, Zhejiang, China
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
14
Müller-Peltzer K, Kretzschmar L, Negrão de Figueiredo G, Crispin A, Stahl R, Bamberg F, Trumm CG. Present Limitations of Artificial Intelligence in the Emergency Setting - Performance Study of a Commercial, Computer-Aided Detection Algorithm for Pulmonary Embolism. Rofo 2021; 193:1436-1444. [PMID: 34352914] [DOI: 10.1055/a-1515-2923]
Abstract
PURPOSE Since artificial intelligence is transitioning from an experimental stage to clinical implementation, the aim of our study was to evaluate the performance of a commercial computer-aided detection algorithm for pulmonary embolism on computed tomography pulmonary angiograms in the emergency room. MATERIALS AND METHODS This retrospective study includes all pulmonary computed tomography angiogram studies performed in a large emergency department over a period of 36 months, analyzed by two radiologists experienced in emergency radiology to set a reference standard. Original reports and computer-aided detection results were compared regarding the detection of lobar, segmental, and subsegmental pulmonary embolism. All computer-aided detection findings were analyzed concerning the underlying pathology. False-positive findings were correlated to the contrast-to-noise ratio. RESULTS Expert reading revealed pulmonary embolism in 182 of 1229 patients (49% men, 10-97 years) with a total of 504 emboli. The computer-aided detection algorithm reported 3331 findings, including 258 (8%) true-positive and 3073 (92%) false-positive findings. Computer-aided detection analysis showed a sensitivity of 47% (95% CI: 33-61%) at the lobar level and 50% (95% CI: 43-56%) at the subsegmental level. On average, there were 2.25 false-positive findings per study (median 2, range 0-25). There was no significant correlation between the number of false-positive findings and the contrast-to-noise ratio (Spearman's rank correlation coefficient = 0.09). Soft tissue (61.0%) and pulmonary veins (24.1%) were the most common underlying reasons for false-positive findings. CONCLUSION Applied to a population in a large emergency room, the tested commercial computer-aided detection algorithm faced relevant performance challenges that need to be addressed in future development projects. KEY POINTS
- Computed tomography pulmonary angiograms are frequently acquired in emergency radiology.
- Computer-aided detection algorithms (CADs) can support image analysis.
- CADs face challenges regarding false-positive and false-negative findings.
- Radiologists using CADs need to be aware of these limitations.
- Further software improvements are necessary ahead of implementation in the daily routine.
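The reported check for a relationship between false-positive count and image quality used Spearman's rank correlation (coefficient 0.09, i.e., no meaningful correlation). A sketch on synthetic per-study data:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic per-study values for illustration: number of false-positive CAD
# findings and contrast-to-noise ratio (CNR) of the study. Only the mean FP
# rate matches the paper; the CNR values are made up.
rng = np.random.default_rng(2)
false_positives = rng.poisson(2.25, 100)
cnr = rng.normal(15, 4, 100)

rho, p = spearmanr(false_positives, cnr)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```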
Affiliation(s)
- Katharina Müller-Peltzer
- Klinik für Diagnostische und Interventionelle Radiologie, Albert-Ludwigs-Universität Freiburg Medizinische Fakultät, Freiburg, Deutschland
- Lena Kretzschmar
- Klinik und Poliklinik für Radiologie, Ludwig-Maximilians-Universität, München, Deutschland
- Alexander Crispin
- Institut für Medizinische Informationsverarbeitung, Biometrie und Epidemiologie, Klinikum der Universität München-Großhadern, München, Deutschland
- Robert Stahl
- Institut für Diagnostische und Interventionelle Neuroradiologie, Klinikum der Universität München-Großhadern, München, Deutschland
- Fabian Bamberg
- Klinik für Diagnostische und Interventionelle Radiologie, Albert-Ludwigs-Universität Freiburg Medizinische Fakultät, Freiburg, Deutschland
- Christoph Gregor Trumm
- Institut für Diagnostische und Interventionelle Neuroradiologie, Klinikum der Universität München-Großhadern, München, Deutschland
15
Leuschner J, Schmidt M, Baguer DO, Maass P. LoDoPaB-CT, a benchmark dataset for low-dose computed tomography reconstruction. Sci Data 2021; 8:109. [PMID: 33863917] [PMCID: PMC8052416] [DOI: 10.1038/s41597-021-00893-z]
Abstract
Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low photon count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. Furthermore, the dataset can also be used for transfer learning as well as sparse and limited-angle reconstruction scenarios.
Measurement(s): Low Dose Computed Tomography of the Chest; feature extraction objective
Technology Type(s): digital curation; image processing technique
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.13526360
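The dataset is distributed as HDF5 files of paired low-photon-count measurements (sinograms) and ground-truth reconstructions, alongside the authors' Python access library. A minimal raw-HDF5 loading sketch; the file and key names below are assumptions, so check the dataset documentation for the actual layout.

```python
import h5py
import numpy as np

# Hypothetical file and key names; the real layout may differ.
with h5py.File("observation_train_000.hdf5", "r") as f_obs, \
     h5py.File("ground_truth_train_000.hdf5", "r") as f_gt:
    sinograms = np.array(f_obs["data"][:8])  # a small batch of measurements
    images = np.array(f_gt["data"][:8])      # matching reconstructions
    print(sinograms.shape, images.shape)
```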
Affiliation(s)
- Johannes Leuschner
- Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359, Bremen, Germany
- Maximilian Schmidt
- Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359, Bremen, Germany
- Daniel Otero Baguer
- Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359, Bremen, Germany
- Peter Maass
- Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359, Bremen, Germany
17
Sun ZT, Hao FE, Guo YM, Liu AS, Zhao L. Assessment of Acute Pulmonary Embolism by Computer-Aided Technique: A Reliability Study. Med Sci Monit 2020; 26:e920239. [PMID: 32111815] [PMCID: PMC7063852] [DOI: 10.12659/msm.920239]
Abstract
Background Acute pulmonary embolism is one of the most common cardiovascular diseases. Computer-aided techniques are widely used in chest imaging, especially for assessing pulmonary embolism, and reliability and quantitative analyses of these techniques are necessary. This study aimed to evaluate the reliability of geometry-based computer-aided detection and quantification of emboli morphology and severity in acute pulmonary embolism. Material/Methods Thirty patients suspected of acute pulmonary embolism were analyzed by both manual and computer-aided interpretation of the vascular obstruction index and by computer-aided measurements of quantitative emboli parameters. The reliability of the Qanadli and Mastora scores was analyzed using computer-aided and manual interpretation. Results The time costs of manual and computer-aided interpretation were statistically different (374.90±150.16 versus 121.07±51.76, P<0.001). The difference between the computer-aided and manual interpretation of the Qanadli score was 1.83±2.19, and 96.7% (29 of 30) of the measurements were within the 95% confidence interval (intraclass correlation coefficient, ICC=0.998). The difference between the computer-aided and manual interpretation of the Mastora score was 1.46±1.62, and 96.7% (29 of 30) of the measurements were within the 95% confidence interval (ICC=0.997). The quantitative emboli parameters were moderately correlated with the Qanadli and Mastora scores (all P<0.001). Conclusions The computer-aided technique could reduce time costs, improve the reliability of the vascular obstruction index, and provide additional quantitative parameters for disease assessment.
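The agreement statistic used here, the intraclass correlation coefficient, can be computed for a subjects-by-raters table without extra dependencies. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measures), one common variant; whether the study used exactly this variant is not stated in the abstract, and the example scores are synthetic.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, k_raters) array, e.g. obstruction scores from
    manual and computer-aided interpretation.
    """
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Example: manual vs. computer-aided scores for 6 patients (synthetic).
scores = np.array([[12, 13], [20, 21], [5, 5], [30, 28], [16, 17], [9, 10]],
                  dtype=float)
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```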
Affiliation(s)
- Zhen-Ting Sun
- Department of Radiology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China (mainland)
- Fen-E Hao
- Department of Radiology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China (mainland)
- You-Min Guo
- Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China (mainland)
- Ai-Shi Liu
- Department of Radiology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China (mainland)
- Lei Zhao
- Department of Radiology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China (mainland)
18
Tajbakhsh N, Shin JY, Gotway MB, Liang J. Computer-aided detection and visualization of pulmonary embolism using a novel, compact, and discriminative image representation. Med Image Anal 2019; 58:101541. [PMID: 31416007] [DOI: 10.1016/j.media.2019.101541]
Abstract
Diagnosing pulmonary embolism (PE) and excluding disorders that may clinically and radiologically simulate PE poses a challenging task for both human and machine perception. In this paper, we propose a novel vessel-oriented image representation (VOIR) that can improve the machine perception of PE through a consistent, compact, and discriminative image representation, and can also improve radiologists' diagnostic capabilities for PE assessment by serving as the backbone of an effective PE visualization system. Specifically, our image representation can be used to train more effective convolutional neural networks for distinguishing PE from PE mimics, and also allows radiologists to inspect the vessel lumen from multiple perspectives, so that they can report filling defects (PE), if any, with confidence. Our image representation offers four advantages: (1) efficiency and compactness: concisely summarizing the 3D contextual information around an embolus in only three image channels; (2) consistency: automatically aligning the embolus in the 3-channel images according to the orientation of the affected vessel; (3) expandability: naturally supporting data augmentation for training CNNs; and (4) multi-view visualization: maximally revealing filling defects. To evaluate the effectiveness of VOIR for PE diagnosis, we use 121 CTPA datasets with a total of 326 emboli. We first compare VOIR with two other compact alternatives using six CNN architectures of varying depths and under varying amounts of labeled training data. Our experiments demonstrate that VOIR enables faster training of a higher-performing model compared to the other compact representations, even in the absence of deep architectures and large labeled training sets. Our experiments comparing VOIR with the 3D image representation further demonstrate that the 2D CNN trained with VOIR achieves a significant performance gain over the 3D CNNs. Our robustness analyses also show that the suggested PE CAD is robust to the choice of CT scanner machines and the physical size of crops used for training. Finally, our PE CAD is ranked second in the PE challenge in the category of 0 mm localization error.
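VOIR packs the 3D context around an embolus candidate into three image channels aligned with the affected vessel. The sketch below is a much simplified, axis-aligned stand-in (three orthogonal planes through the candidate); it deliberately omits the vessel-orientation alignment that is the core of VOIR.

```python
import numpy as np

def three_channel_crop(volume, center, half=16):
    """Axis-aligned stand-in for a vessel-oriented representation: stack the
    axial, coronal, and sagittal planes through a candidate embolus as three
    channels. VOIR itself aligns these planes to the affected vessel's
    orientation, which this sketch does not do.
    """
    z, y, x = center
    axial = volume[z, y - half:y + half, x - half:x + half]
    coronal = volume[z - half:z + half, y, x - half:x + half]
    sagittal = volume[z - half:z + half, y - half:y + half, x]
    return np.stack([axial, coronal, sagittal], axis=0)  # (3, 2*half, 2*half)

# Example on a synthetic CT-like volume.
vol = np.random.default_rng(3).normal(size=(96, 96, 96))
print(three_channel_crop(vol, (48, 48, 48)).shape)  # (3, 32, 32)
```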
Affiliation(s)
- Nima Tajbakhsh
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
- Jae Y Shin
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
- Jianming Liang
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA