1
Salimi Y, Shiri I, Mansouri Z, Sanaat A, Hajianfar G, Hervier E, Bitarafan A, Caobelli F, Hundertmark M, Mainta I, Gräni C, Nkoulou R, Zaidi H. Artificial intelligence-based cardiac transthyretin amyloidosis detection and scoring in scintigraphy imaging: multi-tracer, multi-scanner, and multi-center development and evaluation study. Eur J Nucl Med Mol Imaging 2025. PMID: 39907796. DOI: 10.1007/s00259-025-07117-1.
Abstract
INTRODUCTION Providing tools for comprehensively evaluating scintigraphy images could enhance transthyretin amyloid cardiomyopathy (ATTR-CM) diagnosis. This study aims to automatically detect and score ATTR-CM in total body scintigraphy images using deep learning on multi-tracer, multi-scanner, and multi-center datasets. METHODS In the current study, we employed six datasets (from 12 cameras) for various tasks and purposes. Dataset #1 (93 patients, 99mTc-MDP) was used to develop the 2D-planar segmentation and localization models. Dataset #2 (216 patients, 99mTc-DPD) was used for the detection (grade 0 vs. grades 1, 2, and 3) and scoring (grades 0 and 1 vs. grades 2 and 3) of ATTR-CM. Datasets #3 (41 patients, 99mTc-HDP), #4 (53 patients, 99mTc-PYP), and #5 (129 patients, 99mTc-DPD) served as external validation datasets. ATTR-CM detection and scoring were performed by two physicians in each center. Moreover, Dataset #6, consisting of 3215 patients without labels, was employed for retrospective model performance evaluation. Different regions of interest were cropped and fed into the classification model for the detection and scoring of ATTR-CM. Ensembling was performed on the outputs of the different models to improve their performance. Model performance was measured by classification accuracy, sensitivity, specificity, and AUC. Grad-CAM and saliency maps were generated to explain the models' decision-making process. RESULTS In the internal test set, all models for detection and scoring achieved an AUC of more than 0.95 and an F1 score of more than 0.90. For detection in the external datasets, AUCs of 0.93, 0.95, and 1.00 were achieved for datasets #3, #4, and #5, respectively. For the scoring task, AUCs of 0.95, 0.83, and 0.96 were achieved for these datasets, respectively. In dataset #6, the network flagged ten cases as ATTR-CM; of these, four were confirmed by a nuclear medicine specialist as possibly having ATTR-CM.
Grad-CAM and saliency maps showed that the deep-learning models focused on clinically relevant cardiac areas. CONCLUSION In the current study, we developed and evaluated a fully automated pipeline to detect and score ATTR-CM using large multi-tracer, multi-scanner, and multi-center datasets, achieving high performance on total body images. This fully automated pipeline could lead to more timely and accurate diagnoses, ultimately improving patient outcomes.
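As a minimal illustration of the ensembling step described in this abstract, averaging the class probabilities emitted by several classifiers is one common form; the function name and toy numbers below are ours, not the paper's actual models:

```python
import numpy as np

def ensemble_probabilities(model_probs):
    """Average the softmax outputs of several classifiers
    (one simple form of ensembling; name is illustrative)."""
    stacked = np.stack(model_probs)   # (n_models, n_samples, n_classes)
    return stacked.mean(axis=0)       # (n_samples, n_classes)

# Toy outputs of three hypothetical models for one scan
# (columns: P(no ATTR-CM), P(ATTR-CM))
p1 = np.array([[0.2, 0.8]])
p2 = np.array([[0.4, 0.6]])
p3 = np.array([[0.3, 0.7]])
avg = ensemble_probabilities([p1, p2, p3])
predicted_class = int(avg.argmax(axis=1)[0])  # class 1 -> flagged as ATTR-CM
```

Averaging tends to reduce the variance of any single model's errors, which is why ensembles of this kind often outperform their individual members.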
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Isaac Shiri
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Elsa Hervier
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Ahmad Bitarafan
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Federico Caobelli
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Moritz Hundertmark
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Christoph Gräni
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- René Nkoulou
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
- University Research and Innovation Center, Óbuda University, Budapest, Hungary.
2
Salimi Y, Shiri I, Mansouri Z, Zaidi H. Development and validation of fully automated robust deep learning models for multi-organ segmentation from whole-body CT images. Phys Med 2025;130:104911. PMID: 39899952. DOI: 10.1016/j.ejmp.2025.104911.
Abstract
PURPOSE This study aimed to develop a deep-learning framework to generate multi-organ masks from CT images in adult and pediatric patients. METHODS A dataset consisting of 4082 CT images with ground-truth manual segmentations from various databases, including 300 pediatric cases, was collected. In strategy #1, the manual segmentation masks provided by the public databases were split into training (90%) and test (10% of each database, named subset #1) cohorts. The training set was used to train multiple nnU-Net networks in five-fold cross-validation (CV) for 26 separate organs. In the next step, the trained models from strategy #1 were used to generate the missing organs for the entire dataset. This generated data was then used to train a multi-organ nnU-Net segmentation model in a five-fold CV (strategy #2). Model performance was evaluated in terms of the Dice coefficient (DSC) and other well-established image segmentation metrics. RESULTS The lowest CV DSC for strategy #1 was 0.804 ± 0.094 for the adrenal glands, while an average DSC > 0.90 was achieved for 17/26 organs. The lowest DSC for strategy #2 (0.833 ± 0.177) was obtained for the pancreas, whereas DSC > 0.90 was achieved for 13/19 organs. For all mutual organs included in subset #1 and subset #2, our model outperformed the TotalSegmentator models in both strategies. In addition, our models outperformed the TotalSegmentator models on subset #3. CONCLUSIONS Our model was trained on images with significant variability from different databases, producing acceptable results on both pediatric and adult cases, making it well-suited for implementation in clinical settings.
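The Dice coefficient (DSC) used as the primary metric in this abstract has a standard definition, 2|A∩B|/(|A|+|B|); a minimal sketch with toy masks (not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks:
    2*|intersection| / (|pred| + |gt|). Returns 1.0 for two empty masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D masks: prediction and ground truth overlap on one pixel
pred = np.array([[1, 1, 0],
                 [0, 0, 0]])
gt = np.array([[0, 1, 1],
               [0, 0, 0]])
dsc = dice_coefficient(pred, gt)  # 2*1 / (2+2) = 0.5
```

The same formula applies voxel-wise to 3D organ masks; a DSC above 0.90, as reported for most organs, indicates very high overlap with the manual segmentation.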
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital CH-1211 Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital CH-1211 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary.
3
Damilakis J, Stratakis J. Descriptive overview of AI applications in x-ray imaging and radiotherapy. J Radiol Prot 2024;44:041001. PMID: 39681008. DOI: 10.1088/1361-6498/ad9f71.
Abstract
Artificial intelligence (AI) is transforming medical radiation applications by handling complex data, learning patterns, and making accurate predictions, leading to improved patient outcomes. This article examines the use of AI in optimising radiation doses for x-ray imaging, improving radiotherapy outcomes, and briefly addresses the benefits, challenges, and limitations of AI integration into clinical workflows. In diagnostic radiology, AI plays a pivotal role in optimising radiation exposure, reducing noise, enhancing image contrast, and lowering radiation doses, especially in high-dose procedures like computed tomography (CT). Deep learning (DL)-powered CT reconstruction methods have already been incorporated into clinical routine. Moreover, AI-powered methodologies have been developed to provide real-time, patient-specific radiation dose estimates. These AI-driven tools have the potential to streamline workflows and potentially become integral parts of imaging practices. In radiotherapy, AI's ability to automate and enhance the precision of treatment planning is emphasised. Traditional methods, such as manual contouring, are time-consuming and prone to variability. AI-driven techniques, particularly DL models, are automating the segmentation of organs and tumours, improving the accuracy of radiation delivery, and minimising damage to healthy tissues. Moreover, AI supports adaptive radiotherapy, allowing continuous optimisation of treatment plans based on changes in a patient's anatomy over time, ensuring the highest accuracy in radiation delivery and better therapeutic outcomes. Some of these methods have been validated and integrated into radiation treatment systems, while others are not yet ready for routine clinical use mainly due to challenges in validation, particularly ensuring reliability across diverse patient populations and clinical settings. 
Despite the potential of AI, there are challenges in fully integrating these technologies into clinical practice. Issues such as data protection, privacy, data quality, model validation, and the need for large and diverse datasets are crucial to ensuring the reliability of AI systems.
Affiliation(s)
- John Damilakis
- School of Medicine, University of Crete, Heraklion, Greece
- University Hospital of Heraklion, Crete, Greece
4
Golbus AE, Schuzer JL, Steveson C, Rollison SF, Matthews J, Henry-Ellis J, Razeto M, Chen MY. Reduced dose helical CT scout imaging on next generation wide volume CT system decreases scan length and overall radiation exposure. Eur J Radiol Open 2024;13:100578. PMID: 38993285. PMCID: PMC11237680. DOI: 10.1016/j.ejro.2024.100578.
Abstract
Purpose Traditional CT acquisition planning is based on scout projection images from planar anterior-posterior and lateral projections, where the radiographer estimates organ locations. Alternatively, a new scout method utilizing ultra-low dose helical CT (3D Landmark Scan) offers cross-sectional imaging to identify anatomic structures in conjunction with artificial intelligence-based Anatomic Landmark Detection (ALD) for automatic CT acquisition planning. The purpose of this study is to quantify changes in scan length and radiation dose of CT examinations planned using 3D Landmark Scan and ALD and performed on a next generation wide volume CT system versus examinations planned using traditional scout methods. We additionally aim to quantify the radiation dose reduction of scans planned with 3D Landmark Scan and performed on next generation wide volume CT. Methods Single-center retrospective analysis of consecutive patients with a prior CT scan of the same organ who underwent clinical CT using 3D Landmark Scan and automatic scan planning. Acquisition length and dose-length product (DLP) were collected. Data were analyzed by paired t-tests. Results 104 total CT examinations (48.1% chest, 15.4% abdomen, 36.5% chest/abdomen/pelvis) on 61 individual consecutive patients at a single center were retrospectively analyzed. 79.8% of scans using 3D Landmark Scan had a reduction in acquisition length compared to the respective prior acquisition. Median acquisition length using 3D Landmark Scan was 26.7 mm shorter than that using traditional scout methods (p < 0.001), with a 23.3% median total radiation dose reduction (245.6 (IQR 150.0-400.8) mGy·cm vs 320.3 (IQR 184.1-547.9) mGy·cm). CT dose index was similarly decreased overall for scans planned with 3D Landmark Scan and ALD and performed on next generation CT versus traditional methods (4.85 (IQR 3.8-7) mGy vs. 6.70 (IQR 4.43-9.18) mGy, respectively, p < 0.001).
Conclusion Scout imaging using reduced dose 3D Landmark Scan images and Anatomic Landmark Detection reduces acquisition range in chest, abdomen, and chest/abdomen/pelvis CT scans. This technology, in combination with next generation wide volume CT, reduces total radiation dose.
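The reported 23.3% median dose reduction follows directly from the two median DLP values quoted in the abstract, as a quick check shows:

```python
# Median DLP values reported in the abstract (mGy·cm)
dlp_landmark = 245.6     # planning with 3D Landmark Scan + ALD
dlp_traditional = 320.3  # planning with traditional scout methods

# Percentage reduction relative to the traditional-scout median
reduction_pct = 100 * (dlp_traditional - dlp_landmark) / dlp_traditional
print(f"median dose reduction: {reduction_pct:.1f}%")
```

Note that this is a ratio of medians, not the median of per-patient ratios, so it is a summary-level consistency check rather than a patient-level statistic.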
Affiliation(s)
- Alexa E Golbus
- Cardiovascular Branch, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
- Shirley F Rollison
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA
- Marco Razeto
- Canon Medical Research Europe, Edinburgh, Scotland, UK
- Marcus Y Chen
- Cardiovascular Branch, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
5
Jorg T, Halfmann MC, Müller L, Stoehr F, Mildenberger P, Hierath M, Paulo G, Santos J, Damilakis J, Kralik I, Brkljacic B, Cvetko D, Kuhleji D, Bosmans H, Petrov D, Foley S, Toroi P, McNulty JP, Hoeschen C. Implementing verifiable oncological imaging by quality assurance and optimization (i-Violin): Protocol for a European multicenter study. Radiologie (Heidelb) 2024;64:160-165. PMID: 39477833. PMCID: PMC11602843. DOI: 10.1007/s00117-024-01389-8.
Abstract
BACKGROUND Advanced imaging techniques play a pivotal role in oncology. A large variety of computed tomography (CT) scanners, scan protocols, and acquisition techniques has led to a wide range in image quality and radiation exposure. This study aims to implement verifiable oncological imaging by quality assurance and optimization (i-Violin) by harmonizing image quality and radiation dose across Europe. METHODS The 2-year multicenter implementation study outlined here will focus on CT imaging of lung, stomach, and colorectal cancer and include imaging for four radiological indications: diagnosis, radiation therapy planning, staging, and follow-up. To this end, 480 anonymized patient CT datasets will be collected by the associated university hospitals and uploaded to a repository. Radiologists will determine key abdominopelvic structures for image quality assessment by consensus and subsequently adapt a previously developed lung CT tool for the objective evaluation of image quality. The quality metrics will be evaluated for their correlation with perceived image quality, and the standardized optimization strategy will be disseminated across Europe. RESULTS The results of the outlined study will be used to obtain European reference data, to build teaching programs for the developed tools, and to create a culture of optimization in oncological CT imaging. CONCLUSION The study protocol and rationale for i-Violin, a European approach to the standardization and harmonization of image quality and the optimization of CT procedures in oncological imaging, are presented. Future results will be disseminated across all EU member states, and i-Violin is thus expected to have a sustained impact on CT imaging for cancer patients across Europe.
Affiliation(s)
- Tobias Jorg
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckst. 1, 55131, Mainz, Germany
- Moritz C Halfmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckst. 1, 55131, Mainz, Germany.
- Lukas Müller
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckst. 1, 55131, Mainz, Germany
- Fabian Stoehr
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckst. 1, 55131, Mainz, Germany
- Peter Mildenberger
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckst. 1, 55131, Mainz, Germany
- Monika Hierath
- European Institute for Biomedical Imaging Research (EIBIR), Am Gestade 1, 1010, Vienna, Austria
- Graciano Paulo
- Coimbra Health School, Polytechnic Institute of Coimbra, Rua 5 de Outubro-S. Martinho do Bispo, Apartado 7006, 3046-854, Coimbra, Portugal
- Joana Santos
- Coimbra Health School, Polytechnic Institute of Coimbra, Rua 5 de Outubro-S. Martinho do Bispo, Apartado 7006, 3046-854, Coimbra, Portugal
- John Damilakis
- School of Medicine, University of Crete, 2208, 71003, Iraklion, Crete, Greece
- Ivana Kralik
- Dubrava University Hospital, Avenija Gojka Suska 6, 10000, Zagreb, Croatia
- Boris Brkljacic
- Dubrava University Hospital, Avenija Gojka Suska 6, 10000, Zagreb, Croatia
- Danijel Cvetko
- Dubrava University Hospital, Avenija Gojka Suska 6, 10000, Zagreb, Croatia
- Hilde Bosmans
- Medical Imaging Research Center, Department of Radiology, University Hospitals Leuven, Herestraat 49, 3000, Leuven, Belgium
- Dimitar Petrov
- Medical Imaging Research Center, Department of Radiology, University Hospitals Leuven, Herestraat 49, 3000, Leuven, Belgium
- Shane Foley
- Radiography & Diagnostic Imaging, School of Medicine University College Dublin, Dublin, Ireland
- Paula Toroi
- STUK-Radiation and Nuclear Safety Authority, Jokiniemenkuja 1, 01370, Vantaa, Finland
- Jonathan P McNulty
- Radiography & Diagnostic Imaging, School of Medicine University College Dublin, Dublin, Ireland
- Christoph Hoeschen
- Institute of Medical Technology, Faculty for Electrical Engineering and Information Technology, Otto-von-Guericke-University Magdeburg, Otto-Hahn-Str. 2, 39106, Magdeburg, Germany
- Institute of Medical Technology, Faculty for electrical engineering and Information Technology, Otto-von-Guericke-University Magdeburg, Otto-Hahn-Str. 2, 39106, Magdeburg, Germany
6
Sun C, Salimi Y, Angeliki N, Boudabbous S, Zaidi H. An efficient dual-domain deep learning network for sparse-view CT reconstruction. Comput Methods Programs Biomed 2024;256:108376. PMID: 39173481. DOI: 10.1016/j.cmpb.2024.108376.
Abstract
BACKGROUND AND OBJECTIVE We developed an efficient deep learning-based dual-domain reconstruction method for sparse-view CT with few training parameters and comparable running time. We aimed to investigate the model's capability and clinical value by performing objective and subjective quality assessments using clinical CT projection data acquired on commercial scanners. METHODS We designed two lightweight networks, namely Sino-Net and Img-Net, to restore the projection and image signal from the DD-Net reconstructed images in the projection and image domains, respectively. The proposed network has few training parameters and comparable running time among dual-domain reconstruction networks and is easy to train (end-to-end). We prospectively collected clinical thoraco-abdominal CT projection data acquired on a Siemens Biograph 128 Edge CT scanner to train and validate the proposed network. Further, we quantitatively evaluated the CT Hounsfield unit (HU) values in 21 organs and anatomic structures, such as the liver, aorta, and ribcage. We also analyzed the noise properties and compared the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the reconstructed images. In addition, two radiologists conducted a subjective qualitative evaluation of the confidence and conspicuity of anatomic structures and the overall image quality using a 1-5 Likert scoring system. RESULTS Objective and subjective evaluations showed that the proposed algorithm achieves competitive results in eliminating noise and artifacts, restoring fine structural details, and recovering the edges and contours of anatomic structures using 384 views (1/6 sparse rate). The proposed method exhibited good computational cost performance on clinical projection data. CONCLUSION This work presents an efficient dual-domain learning network for sparse-view CT reconstruction on raw projection data from a commercial scanner.
The study also provides insights for designing an organ-based image quality assessment pipeline for sparse-view reconstruction tasks, potentially benefiting organ-specific dose reduction by sparse-view imaging.
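The SNR and CNR figures of merit compared in this abstract have standard ROI-based definitions; a minimal sketch with toy Hounsfield unit samples follows (the paper's exact ROI placement is not specified here):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean divided by standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean difference between an ROI
    and a background region, divided by the background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# Toy HU samples for a liver ROI and nearby background (illustrative values)
liver = np.array([60.0, 62.0, 58.0, 60.0])
bg = np.array([10.0, 12.0, 8.0, 10.0])
```

Higher SNR indicates less noise relative to signal in a uniform region, while CNR captures how well a structure can be distinguished from its surroundings, which is why both are reported for reconstruction quality.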
Affiliation(s)
- Chang Sun
- Beijing University of Posts and Telecommunications, School of Information and Communication Engineering, 100876 Beijing, China; Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
- Yazdan Salimi
- Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
- Neroladaki Angeliki
- Geneva University Hospital, Division of Radiology, CH-1211, Geneva, Switzerland
- Sana Boudabbous
- Geneva University Hospital, Division of Radiology, CH-1211, Geneva, Switzerland
- Habib Zaidi
- Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary.
7
Fernández-Fabeiro J, Carballido Á, Fernández-Fernández ÁM, Moldes MR, Villar D, Mouriño JC. The SINFONIA project repository for AI-based algorithms and health data. Front Public Health 2024;12:1448988. PMID: 39507665. PMCID: PMC11539176. DOI: 10.3389/fpubh.2024.1448988.
Abstract
The SINFONIA project's main objective is to develop novel methodologies and tools that will provide a comprehensive risk appraisal for detrimental effects of radiation exposure on patients, workers, caretakers, and comforters, the public, and the environment during the management of patients suspected or diagnosed with lymphoma, brain tumors, and breast cancers. The project plan defines a series of key objectives to be achieved on the way to the main objective. One of these objectives is to develop and operate a repository to collect, pool, and share data from imaging and non-imaging examinations and radiation therapy sessions, histological results, and demographic information related to individual patients with lymphoma, brain tumors, and breast cancers. This paper presents the final version of that repository, a cloud-based platform for imaging and non-imaging data. It results from the implementation and integration of several software tools and programming frameworks under an evolutive architecture according to the project partners' needs and the constraints of the General Data Protection Regulation. It provides, among other services, data uploading and downloading, data sharing, file decompression, data searching, DICOM previsualization, and an infrastructure for submitting and running Artificial Intelligence models.
Affiliation(s)
- Jose C. Mouriño
- Galicia Supercomputing Center (CESGA), Santiago de Compostela, Galicia, Spain
8
Garajová L, Garbe S, Sprinkart AM. [Artificial intelligence in diagnostic radiology for dose management: Advances and perspectives using the example of computed tomography]. Radiologie (Heidelb) 2024;64:787-792. PMID: 38877140. DOI: 10.1007/s00117-024-01330-z.
Abstract
CLINICAL-METHODOLOGICAL PROBLEM Imaging procedures employing ionizing radiation require compliance with European directives and national regulations to protect patients. Each exposure must be indicated, individually adapted, and documented. Unacceptable dose exceedances must be detected and reported. These tasks are time-consuming and require meticulous diligence. STANDARD RADIOLOGICAL METHODS Computed tomography (CT) is the most important contributor to medical radiation exposure. Optimizing the patient dose is therefore mandatory. The use of modern technology and reconstruction algorithms already reduces exposure. Checking the indication, planning, and performing the examination are further important process steps with regard to radiation protection. Patient exposure is usually monitored by dose management systems (DMS). In special cases, a risk assessment is required by calculating organ doses. METHODOLOGICAL INNOVATIONS Artificial intelligence (AI)-assisted techniques are increasingly used in various steps of this process: they support examination planning, improve patient positioning, and enable automated scan length adjustment. They also provide real-time estimates of individual organ doses. EVALUATION The integration of AI into medical imaging is proving successful for dose optimization in various areas of the radiological workflow, from reconstruction to examination planning and execution. However, the use of AI in conjunction with DMS has not yet been considered on a large scale. PRACTICAL RECOMMENDATION AI methods offer promising tools to support dose management. However, their implementation in the clinical setting requires further research, extensive validation, and continuous monitoring.
Affiliation(s)
- Laura Garajová
- Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
- Stephan Garbe
- Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
- Klinik für Strahlentherapie und Radioonkologie, Universitätsklinikum Bonn, Bonn, Germany
- Alois M Sprinkart
- Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127, Bonn, Germany.
9
Salimi Y, Mansouri Z, Hajianfar G, Sanaat A, Shiri I, Zaidi H. Fully automated explainable abdominal CT contrast media phase classification using organ segmentation and machine learning. Med Phys 2024;51:4095-4104. PMID: 38629779. DOI: 10.1002/mp.17076.
Abstract
BACKGROUND Contrast-enhanced computed tomography (CECT) provides much more information than non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. Contrast media injection phase information is usually missing from public datasets and is not standardized in the clinic, even within the same region and language. This is a barrier to the effective use of available CECT images in clinical research. PURPOSE The aim of this study is to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms. METHODS A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) after contrast media injection, were collected from two CT scanners. Masks of seven organs, including the liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta, along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features, including the average, standard deviation, and 10th, 50th, and 90th percentiles extracted from the above-mentioned masks, were fed to machine learning models after feature selection and reduction to classify the CT images into one of the four above-mentioned classes. A 10-fold data split strategy was followed. The performance of our methodology was evaluated in terms of classification accuracy metrics. RESULTS The best performance was achieved by Boruta feature selection and a random forest (RF) model, with an average area under the curve of more than 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest classification accuracy was observed for class #2 (0.9888), which is still an excellent result. In the 10-fold strategy, only 33 of 2509 cases (∼1.4%) were misclassified. The performance was consistent across all folds.
CONCLUSIONS We developed a fast, accurate, reliable, and explainable methodology to classify contrast media phases, which may be useful for data curation and annotation in large online datasets or in local datasets with non-standard or missing series descriptions. Our model, comprising two steps of deep learning and machine learning, may help to exploit available datasets more effectively.
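The five first-order features described in this abstract (mean, standard deviation, and the 10th/50th/90th percentiles over each organ mask) can be sketched as follows; the function name and toy values are illustrative, not from the paper:

```python
import numpy as np

def first_order_features(image, mask):
    """Compute five first-order statistics from voxels inside an organ
    mask: mean, standard deviation, and 10th/50th/90th percentiles."""
    voxels = image[mask.astype(bool)]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "p10": float(np.percentile(voxels, 10)),
        "p50": float(np.percentile(voxels, 50)),
        "p90": float(np.percentile(voxels, 90)),
    }

# Toy HU values inside a hypothetical aorta mask
image = np.array([[100.0, 300.0, 50.0],
                  [200.0, 400.0, 0.0]])
mask = np.array([[1, 1, 0],
                 [1, 1, 0]])
feats = first_order_features(image, mask)
```

Repeating this over the seven organ masks and the body contour yields a compact feature vector per scan, which is what the downstream feature selection and classification models consume.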
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
| |
10
Hajianfar G, Sabouri M, Salimi Y, Amini M, Bagheri S, Jenabi E, Hekmat S, Maghsudi M, Mansouri Z, Khateri M, Hosein Jamshidi M, Jafari E, Bitarafan Rajabi A, Assadi M, Oveisi M, Shiri I, Zaidi H. Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance. Z Med Phys 2024; 34:242-257. [PMID: 36932023 PMCID: PMC11156776 DOI: 10.1016/j.zemedi.2023.01.008] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 12/22/2022] [Accepted: 01/18/2023] [Indexed: 03/17/2023]
Abstract
PURPOSE Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone diseases in their early stages. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of these disorders can be challenging because the patterns often resemble a normal appearance and are prone to subjective interpretation. To simplify the gruelling, subjective, and error-prone task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with that of human observers. MATERIALS AND METHODS After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and test sets, with a fraction of the training data set aside for validation. Ten different CNN models were applied to single- and dual-view inputs (posterior and anterior views) to find the optimal model for each analysis. In addition, three different methods, namely squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features for the dual-view input models. Model performance was reported through the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity, and was compared using the DeLong test applied to the ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers.
RESULTS DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. On average, in the first analysis, the InceptionV3 and InceptionResNetV2 CNN models and dual-view input with the AA aggregation method had superior performance. In the second analysis, DenseNet121 and InceptionResNetV2 as CNN backbones and dual-view input with the AA aggregation method achieved the best results. The performance of the AI models was significantly higher than that of the human observers for the first analysis, whereas their performance was comparable in the second analysis, although the AI models assessed the scans in drastically less time. CONCLUSION Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.
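To make the dual-view aggregation concrete, here is a minimal numpy sketch of squeeze-and-excitation (SE) channel reweighting applied to anterior/posterior feature maps concatenated along the channel axis. This illustrates only the generic SE mechanism, one of the three aggregation schemes named above; the weights are random stand-ins for what a trained network would learn, and none of the names come from the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Channel reweighting on an (H, W, C) feature map: global average
    pooling ("squeeze"), a small two-layer bottleneck, and sigmoid gates
    that rescale each channel ("excitation")."""
    squeezed = feat.mean(axis=(0, 1))                     # (C,)
    gates = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))  # FC -> ReLU -> FC -> sigmoid
    return feat * gates                                   # broadcast over H, W

# Dual-view use: concatenate anterior/posterior maps along channels,
# then let SE decide which view's channels to emphasize.
rng = np.random.default_rng(1)
anterior = rng.normal(size=(8, 8, 4))
posterior = rng.normal(size=(8, 8, 4))
fused = np.concatenate([anterior, posterior], axis=-1)  # (8, 8, 8)
c = fused.shape[-1]
w1 = rng.normal(size=(c // 2, c))  # bottleneck weights (random here,
w2 = rng.normal(size=(c, c // 2))  # learned in a real network)
out = squeeze_excite(fused, w1, w2)
```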
Affiliation(s)
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Maziar Sabouri
- Department of Medical Physics, School of Medicine, Iran University of Medical Science, Tehran, Iran; Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Soroush Bagheri
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sepideh Hekmat
- Hasheminejad Hospital, Iran University of Medical Sciences, Tehran, Iran
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mohammad Hosein Jamshidi
- Department of Medical Imaging and Radiation Sciences, School of Allied Medical Sciences, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Ahmad Bitarafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Mehrdad Oveisi
- Department of Computer Science, University of British Columbia, Vancouver, BC, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
11
Salimi Y, Akhavanallaf A, Mansouri Z, Shiri I, Zaidi H. Real-time, acquisition parameter-free voxel-wise patient-specific Monte Carlo dose reconstruction in whole-body CT scanning using deep neural networks. Eur Radiol 2023; 33:9411-9424. [PMID: 37368113 PMCID: PMC10667156 DOI: 10.1007/s00330-023-09839-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 03/28/2023] [Accepted: 04/14/2023] [Indexed: 06/28/2023]
Abstract
OBJECTIVE We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. METHODS The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations considering patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and MC were compared in 11 test cases scanned with two tube voltages, through transfer learning, with and without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations were performed in terms of mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %), and relative absolute error (RAE, %). RESULTS The model performance for the 120 kVp and TCM test set in terms of the voxel-wise ME, MAE, RE, and RAE was -0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, -1.13 ± 1.41%, and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, in terms of ME, MAE, RE, and RAE were -0.144 ± 0.342 mGy, 0.23 ± 0.28 mGy, -1.11 ± 2.90%, and 2.34 ± 2.03%, respectively. CONCLUSION Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. CLINICAL RELEVANCE STATEMENT We proposed a novel method for voxel dose map calculation using deep neural networks. This work is clinically relevant since accurate dose calculation for patients can be carried out within an acceptable computational time, in contrast to lengthy Monte Carlo calculations. KEY POINTS • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation. • Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation. • By generating the dose distribution from a single source position, our model can produce accurate and personalized dose maps for a wide range of acquisition parameters.
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
12
Potočnik J, Foley S, Thomas E. Current and potential applications of artificial intelligence in medical imaging practice: A narrative review. J Med Imaging Radiat Sci 2023; 54:376-385. [PMID: 37062603 DOI: 10.1016/j.jmir.2023.03.033] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 03/25/2023] [Accepted: 03/28/2023] [Indexed: 04/18/2023]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence (AI) is present in many areas of our lives. Much of the digital data generated in health care can be used to build automated systems that improve existing workflows and create a more personalised healthcare experience for patients. This review outlines selected current and potential AI applications in medical imaging practice and offers a view of how diagnostic imaging suites will operate in the future. Challenges associated with potential applications are discussed, and the healthcare staff considerations necessary to benefit from AI-enabled solutions are outlined. METHODS Several electronic databases, including PubMed, ScienceDirect, Google Scholar, and the University College Dublin Library Database, were searched for relevant articles using a Boolean search strategy. Textbooks, government sources, and vendor websites were also considered. RESULTS/DISCUSSION Many AI-enabled solutions are already available in radiographic practice, with more automation on the horizon. Traditional workflows will become faster, more effective, and more user-friendly. AI can handle administrative as well as technical work, making it applicable across all aspects of medical imaging practice. CONCLUSION AI offers significant potential to automate many manual tasks, ensure service consistency, and improve patient care. Radiographers, radiation therapists, and clinicians should ensure they have an adequate understanding of the technology to enable ethical oversight of its implementation.
Affiliation(s)
- Jaka Potočnik
- University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
- Shane Foley
- University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
- Edel Thomas
- University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
13
Salehi M, Vafaei Sadr A, Mahdavi SR, Arabi H, Shiri I, Reiazi R. Deep Learning-based Non-rigid Image Registration for High-dose Rate Brachytherapy in Inter-fraction Cervical Cancer. J Digit Imaging 2023; 36:574-587. [PMID: 36417026 PMCID: PMC10039214 DOI: 10.1007/s10278-022-00732-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 07/04/2022] [Accepted: 07/18/2022] [Indexed: 11/25/2022] Open
Abstract
In this study, we propose an inter-fraction organ deformation simulation framework for locally advanced cervical cancer (LACC) that accounts for anatomical flexibility, rigidity, and motion within an image deformation. Data included 57 CT scans (7202 2D slices) of patients with LACC, randomly divided into training (n = 42) and test (n = 15) datasets. In addition to the CT images and the corresponding RT structures (bladder, cervix, and rectum), the bone was segmented and the couches were removed. A correlated stochastic field, simulated at the same size as the target image (used for deformation), produced the general random deformation. The deformation field was optimized to have maximum amplitude in the rectum region, moderate amplitude in the bladder region, and as small an amplitude as possible within bony structures. DIRNet is a convolutional neural network consisting of convolutional regressor, spatial transformation, and resampling blocks; it was implemented with different parameter settings. Mean Dice indices of 0.89 ± 0.02, 0.96 ± 0.01, and 0.93 ± 0.02 were obtained for the cervix, bladder, and rectum (defined as organs at risk), respectively. Furthermore, mean average symmetric surface distances of 1.61 ± 0.46 mm for the cervix, 1.17 ± 0.15 mm for the bladder, and 1.06 ± 0.42 mm for the rectum were achieved. In addition, mean Jaccard indices of 0.86 ± 0.04 for the cervix, 0.93 ± 0.01 for the bladder, and 0.88 ± 0.04 for the rectum were observed on the test dataset (15 subjects). Deep learning-based non-rigid image registration is therefore proposed for high-dose-rate brachytherapy in inter-fraction cervical cancer, since it outperformed conventional algorithms.
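The overlap metrics this abstract reports are standard and easy to reproduce. Below is a small numpy sketch of the Dice and Jaccard indices for binary masks, including the identity J = D / (2 - D) that links the two; the masks here are synthetic, purely for illustration.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index; related to Dice by J = D / (2 - D)."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Two overlapping 16x16 square masks on a 32x32 slice.
m1 = np.zeros((32, 32))
m1[8:24, 8:24] = 1        # 256 px
m2 = np.zeros((32, 32))
m2[12:28, 12:28] = 1      # 256 px, 12x12 = 144 px overlap
d = dice(m1, m2)          # 2*144/512 = 0.5625
j = jaccard(m1, m2)       # 144/368
```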
Affiliation(s)
- Mohammad Salehi
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Alireza Vafaei Sadr
- Department of Theoretical Physics and Center for Astroparticle Physics, University of Geneva, Geneva, Switzerland
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Seied Rabi Mahdavi
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Reza Reiazi
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Division of Radiation Oncology, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, USA
14
Shiri I, Vafaei Sadr A, Akhavan A, Salimi Y, Sanaat A, Amini M, Razeghi B, Saberi A, Arabi H, Ferdowsi S, Voloshynovskiy S, Gündüz D, Rahmim A, Zaidi H. Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning. Eur J Nucl Med Mol Imaging 2023; 50:1034-1050. [PMID: 36508026 PMCID: PMC9742659 DOI: 10.1007/s00259-022-06053-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Accepted: 11/18/2022] [Indexed: 12/15/2022]
Abstract
PURPOSE Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, and they remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large, centralized collective dataset poses significant challenges. In this work, we aimed to develop a DL-based model for AC/SC of PET images in a multicenter setting, without direct sharing of data, using federated learning (FL). METHODS Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset came from 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, in which the data were pooled on one server, as well as with center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 from each center).
RESULTS In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while the FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between the different strategies revealed no significant differences between the CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between the CZ and FL-based methods with respect to the reference CT-ASC, a slight underestimation of the predicted voxel values was observed. CONCLUSION Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models, comparable with the centralized models. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
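The parallel FL strategy described above follows the familiar federated-averaging pattern: each center trains locally, and a server merges the parameters without ever seeing the images. The sketch below shows one such aggregation round in numpy, with dataset-size weighting; it is a generic FedAvg-style illustration under my own naming, not the authors' implementation, and the sequential (FL-SQ) variant would instead pass the model from center to center.

```python
import numpy as np

def fedavg(center_weights, center_sizes):
    """One round of parallel federated averaging: combine each center's
    locally trained parameter tensors, weighted by local dataset size.
    Only parameters travel to the server; the raw data stay local."""
    total = float(sum(center_sizes))
    avg = [np.zeros_like(w, dtype=float) for w in center_weights[0]]
    for weights, n in zip(center_weights, center_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Two equal-sized centers, each contributing one weight tensor.
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 5.0])]
merged = fedavg([w_a, w_b], center_sizes=[50, 50])
```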
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Alireza Vafaei Sadr
- Department of Theoretical Physics and Center for Astroparticle Physics, University of Geneva, Geneva, Switzerland
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Azadeh Akhavan
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
- Abdollah Saberi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Deniz Gündüz
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
15
Salimi Y, Shiri I, Akavanallaf A, Mansouri Z, Arabi H, Zaidi H. Fully automated accurate patient positioning in computed tomography using anterior-posterior localizer images and a deep neural network: a dual-center study. Eur Radiol 2023; 33:3243-3252. [PMID: 36703015 PMCID: PMC9879741 DOI: 10.1007/s00330-023-09424-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 11/29/2022] [Accepted: 01/02/2023] [Indexed: 01/28/2023]
Abstract
OBJECTIVES This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. METHODS We included 5754 chest CT axial and anterior-posterior (AP) localizer images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerline of the patient's body was determined by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of our model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which showed errors of 9.35 ± 14.94 and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 and -0.27 ± 16.29 mm for C1 and C2, respectively. The errors in terms of BCAP and LCAP were higher for larger patients (p value < 0.01). CONCLUSION The accuracy of the proposed method was comparable to that of available alternative methods, with the advantage of being free from errors caused by objects blocking the camera's visibility. KEY POINTS • Patient mis-centering in the anterior-posterior (AP) direction is a common problem in clinical practice that can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT localizer image, achieving performance comparable to alternative techniques such as an external 3D visual camera.
• The advantage of the proposed method is that it is free from errors caused by objects blocking the camera's visibility and that it could be implemented on imaging consoles as a patient positioning support tool.
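The bounding-box centerline step in this abstract reduces to simple geometry once a body mask exists on an axial slice. The following numpy sketch computes the lateral offset between the mask's bounding-box midpoint and the isocenter column, which is the kind of quantity BCAP/BCMP compare; the function and argument names are mine, purely illustrative.

```python
import numpy as np

def centerline_offset(body_mask, isocenter_col):
    """Lateral offset (in pixels) between the patient's geometric
    centerline, taken as the midpoint of the body-mask bounding box on an
    axial slice, and the isocenter column. Positive means the patient
    sits to the right of the isocenter in array coordinates."""
    cols = np.where(body_mask.any(axis=0))[0]  # columns containing body
    center = 0.5 * (cols[0] + cols[-1])        # bounding-box midpoint
    return center - isocenter_col

# A body occupying columns 10..49 of a 64-column slice.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 10:50] = 1
offset = centerline_offset(mask, isocenter_col=32.0)  # midpoint 29.5
```

Multiplying the pixel offset by the in-plane pixel spacing converts it to the millimetre errors reported above.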
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Azadeh Akavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
16
Cerdá-Alberich L, Solana J, Mallol P, Ribas G, García-Junco M, Alberich-Bayarri A, Marti-Bonmati L. MAIC-10 brief quality checklist for publications using artificial intelligence and medical images. Insights Imaging 2023; 14:11. [PMID: 36645542 PMCID: PMC9842808 DOI: 10.1186/s13244-022-01355-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 12/20/2022] [Indexed: 01/17/2023] Open
Abstract
The use of artificial intelligence (AI) with medical images to solve clinical problems is becoming increasingly common, and the development of new AI solutions is leading to more studies and publications using this computational technology. In this novel research area, common standards to aid AI developers and reviewers as quality control criteria will improve the peer review process. Although some guidelines do exist, their heterogeneity and length suggest that more explicit and simpler schemes should be applied in publication practice. Based on a review of existing AI guidelines, a proposal that collects, unifies, and simplifies the most relevant criteria was developed. The MAIC-10 (Must AI Criteria-10) checklist, with 10 items, was implemented as a guide for designing studies and evaluating publications related to AI in the field of medical imaging. Articles published in Insights into Imaging in 2021 were selected to calculate their corresponding MAIC-10 quality scores. The mean score was 5.6 ± 1.6, with critical items such as "Clinical need", "Data annotation", "Robustness", and "Transparency" present in more than 80% of papers, while room for improvement was identified in other areas. MAIC-10 also achieved the highest intra-observer reproducibility when compared to other existing checklists, with an overall reduction in checklist length and complexity. In summary, MAIC-10 is a short and simple quality assessment tool that is objective, robust, and widely applicable to AI studies in medical imaging.
Affiliation(s)
- Leonor Cerdá-Alberich
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Jimena Solana
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Pedro Mallol
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Gloria Ribas
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Miguel García-Junco
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Angel Alberich-Bayarri
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain; Quantitative Imaging Biomarkers in Medicine, Quibim SL, Valencia, Spain
- Luis Marti-Bonmati
- Clinical Medical Imaging Area and Biomedical Imaging Research Group (GIBI230-PREBI), Hospital Universitario y Politécnico La Fe – Instituto de Investigación Sanitaria La Fe, Valencia, Spain
17
Salimi Y, Shiri I, Akhavanallaf A, Mansouri Z, Sanaat A, Pakbin M, Ghasemian M, Arabi H, Zaidi H. Deep Learning-based Calculation of Patient Size and Attenuation Surrogates from Localizer Image: Toward Personalized Chest CT Protocol Optimization. Eur J Radiol 2022; 157:110602. [DOI: 10.1016/j.ejrad.2022.110602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 11/02/2022] [Accepted: 11/06/2022] [Indexed: 11/13/2022]
18
Kim S, Jeong WK, Choi JH, Kim JH, Chun M. Development of deep learning-assisted overscan decision algorithm in low-dose chest CT: Application to lung cancer screening in Korean National CT accreditation program. PLoS One 2022; 17:e0275531. [PMID: 36174098 PMCID: PMC9522252 DOI: 10.1371/journal.pone.0275531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 09/19/2022] [Indexed: 12/01/2022] Open
Abstract
We propose a deep learning-assisted overscan decision algorithm for chest low-dose computed tomography (LDCT) applicable to lung cancer screening. The algorithm reflects the radiologists' subjective evaluation criteria according to the Korea Institute for Accreditation of Medical Imaging (KIAMI) guidelines, judging whether the scan range extends beyond the landmark criteria. The algorithm consists of three stages: deep learning-based landmark segmentation, rule-based logical operations, and overscan determination. A total of 210 cases from a single institution (internal data) and 50 cases from 47 institutions (external data) were utilized for performance evaluation. Area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and Cohen's kappa were used as evaluation metrics. Fisher's exact test was performed to assess the statistical significance of overscan detectability, and univariate logistic regression analyses were performed for validation. Furthermore, the excess effective dose was estimated from the amount of overscan and the absorbed-dose-to-effective-dose conversion factor. The algorithm yielded AUROC values of 0.976 (95% confidence interval [CI]: 0.925–0.987) and 0.997 (95% CI: 0.800–0.999) for the internal and external datasets, respectively. All metrics showed average performance scores greater than 90% in each evaluation dataset. Agreement between the AI-assisted overscan decision and the radiologist's manual evaluation was statistically significant (p < 0.001, Fisher's exact test). In the logistic regression analysis, demographics (age and sex), data source, CT vendor, and slice thickness had no statistically significant effect on the algorithm's performance (each p-value > 0.05). Furthermore, the estimated excess effective doses were 0.02 ± 0.01 mSv and 0.03 ± 0.05 mSv for the two datasets, indicating that slight deviations from an acceptable scan range are not a dose concern.
We hope that the proposed overscan decision algorithm enables retrospective scan-range monitoring in LDCT for lung cancer screening programs, in keeping with the as low as reasonably achievable (ALARA) principle.
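After the segmentation stage, the rule-based and dose-estimation stages described above reduce to a geometric comparison plus a standard dose conversion. A minimal sketch, assuming hypothetical function names, a 10 mm margin, and the commonly used chest conversion factor k ≈ 0.014 mSv/(mGy·cm); this is illustrative, not the authors' implementation:

```python
# Hypothetical rule-based overscan check: landmark z-extents (mm) come from
# the segmentation stage; any scan range beyond the landmark criterion plus
# an allowed margin counts as overscan.
K_CHEST = 0.014  # mSv/(mGy*cm); widely used chest DLP-to-effective-dose factor


def overscan_mm(scan_lo, scan_hi, lm_lo, lm_hi, margin=10.0):
    """Overscanned length (mm): how far the scan range [scan_lo, scan_hi]
    extends past the landmark criterion [lm_lo, lm_hi] beyond the margin."""
    below = max(0.0, (lm_lo - margin) - scan_lo)
    above = max(0.0, scan_hi - (lm_hi + margin))
    return below + above


def excess_effective_dose_msv(overscan_len_mm, ctdi_vol_mgy, k=K_CHEST):
    """Excess DLP (CTDIvol x overscanned length in cm) converted to mSv."""
    return ctdi_vol_mgy * (overscan_len_mm / 10.0) * k


print(overscan_mm(0.0, 400.0, 20.0, 380.0))      # 20.0 mm of overscan
print(excess_effective_dose_msv(20.0, 1.0))      # 0.028 mSv
```

At a CTDIvol of roughly 1 mGy, a 20 mm overscan maps to about 0.03 mSv, on the same order as the negligible excess doses reported above.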
Affiliation(s)
- Sihwan Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- ClariPi Research, Seoul, Republic of Korea
- Woo Kyoung Jeong
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jin Hwa Choi
- Department of Radiation Oncology, Chung-Ang University College of Medicine, Seoul, Republic of Korea
- Jong Hyo Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- ClariPi Research, Seoul, Republic of Korea
- Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Suwon, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Minsoo Chun
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Department of Radiation Oncology, Chung-Ang University Gwang Myeong Hospital, Gyeonggi-do, Republic of Korea
19
Shiri I, Mostafaei S, Haddadi Avval A, Salimi Y, Sanaat A, Akhavanallaf A, Arabi H, Rahmim A, Zaidi H. High-dimensional multinomial multiclass severity scoring of COVID-19 pneumonia using CT radiomics features and machine learning algorithms. Sci Rep 2022; 12:14817. [PMID: 36050434 PMCID: PMC9437017 DOI: 10.1038/s41598-022-18994-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/23/2022] [Indexed: 12/11/2022] Open
Abstract
We aimed to construct a prediction model based on computed tomography (CT) radiomics features to classify COVID-19 patients as severe, moderate, mild, or non-pneumonic. A total of 1110 patients were studied from a publicly available dataset with four-class severity scoring performed by a radiologist (based on CT images and clinical features). The entire lungs were segmented, followed by resizing, bin discretization, and radiomic feature extraction. We utilized two feature selection algorithms, namely bagging random forest (BRF) and multivariate adaptive regression splines (MARS), each coupled to a classifier, namely multinomial logistic regression (MLR), to construct multiclass classification models. The dataset was divided into 50% (555 samples) for training, 20% (223 samples) for validation, and 30% (332 samples) as an untouched test set. Subsequently, nested cross-validation was performed on the training/validation sets to select the features and tune the models. All predictive power indices were reported on the test set. The performance of the multiclass models was assessed using precision, recall, F1-score, and accuracy based on the 4 × 4 confusion matrices. In addition, the areas under the receiver operating characteristic curves (AUCs) for multiclass classification were calculated and compared for both models. Using BRF, 23 radiomic features were selected: 11 first-order, 9 GLCM, 1 GLRLM, 1 GLDM, and 1 shape feature. Ten features were selected by the MARS algorithm: 3 first-order, 1 GLDM, 1 GLRLM, 1 GLSZM, 1 shape, and 3 GLCM features. Mean absolute deviation, skewness, and variance (first-order), flatness (shape), cluster prominence (GLCM), and gray-level non-uniformity normalized (GLRLM) were selected by both the BRF and MARS algorithms.
All features selected by BRF or MARS were significantly associated with the four-class outcome as assessed within MLR (all p-values < 0.05). BRF + MLR and MARS + MLR yielded pseudo-R2 prediction performances of 0.305 and 0.253, respectively, and a likelihood ratio test showed a significant difference between the feature selection models (p-value = 0.046). Based on the confusion matrices for the BRF + MLR and MARS + MLR algorithms, precision was 0.856 and 0.728, recall was 0.852 and 0.722, and accuracy was 0.921 and 0.861, respectively. AUCs (95% CI) for multiclass classification were 0.846 (0.805–0.887) and 0.807 (0.752–0.861) for BRF + MLR and MARS + MLR, respectively. Our models, based on radiomic features coupled with machine learning, accurately classified patients according to the severity of pneumonia, highlighting the potential of this emerging paradigm in the prognostication and management of COVID-19 patients.
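The selection-plus-classification design with nested cross-validation can be sketched with scikit-learn. Here a random-forest importance selector stands in for BRF (MARS has no scikit-learn implementation), the features are synthetic stand-ins for radiomics, and all hyperparameters are illustrative assumptions rather than the study's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics table: 4-class outcome, 100 features.
X, y = make_classification(n_samples=400, n_features=100, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Feature selection (forest importances as a BRF stand-in) feeding a
# multinomial logistic regression classifier, all inside one pipeline so the
# selection is refit within every cross-validation fold (no leakage).
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=100, random_state=0))),
    ("clf", LogisticRegression(max_iter=2000)),
])

# Nested CV: the inner loop tunes hyperparameters, the outer loop gives an
# unbiased estimate of generalization performance.
inner = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
scores = cross_val_score(inner, X, y, cv=5)
print(f"nested-CV accuracy: {scores.mean():.3f}")
```

Keeping the selector inside the pipeline is the design point: selecting features on the full dataset before cross-validating would leak test information into the model.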
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva, Switzerland
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva, Switzerland
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
20
Sartoretti T, Racine D, Mergen V, Jungblut L, Monnin P, Flohr TG, Martini K, Frauenfelder T, Alkadhi H, Euler A. Quantum Iterative Reconstruction for Low-Dose Ultra-High-Resolution Photon-Counting Detector CT of the Lung. Diagnostics (Basel) 2022; 12:522. [PMID: 35204611 PMCID: PMC8871296 DOI: 10.3390/diagnostics12020522] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 02/10/2022] [Accepted: 02/14/2022] [Indexed: 02/06/2023] Open
Abstract
The aim of this study was to characterize image quality and to determine the optimal strength level of a novel iterative reconstruction algorithm (quantum iterative reconstruction, QIR) for low-dose, ultra-high-resolution (UHR) photon-counting detector CT (PCD-CT) of the lung. Images were acquired on a clinical dual-source PCD-CT in the UHR mode and reconstructed with a sharp lung kernel at different strength levels of QIR (QIR-1 to QIR-4) and without QIR (QIR-off). The noise power spectrum (NPS) and target transfer function (TTF) were analyzed in a cylindrical phantom. Fifty-two consecutive patients referred for low-dose UHR chest PCD-CT were included (CTDIvol: 1 ± 0.6 mGy). Quantitative image quality analysis was performed computationally, including calculation of the global noise index (GNI) and the global signal-to-noise ratio index (GSNRI), and the mean attenuation of the lung parenchyma was measured. Two readers graded images qualitatively for overall image quality, image sharpness, and subjective image noise using 5-point Likert scales. In the phantom, increasing the QIR level slightly decreased spatial resolution and considerably decreased the noise amplitude without affecting its frequency content. In patients, GNI decreased by 48%, from 202 ± 34 HU with QIR-off to 106 ± 18 HU with QIR-4 (p < 0.001), while GSNRI increased by 87%, from 4.4 ± 0.8 to 8.2 ± 1.6 (p < 0.001). Attenuation of the lung parenchyma was highly comparable among reconstructions (QIR-off: -849 ± 53 HU to QIR-4: -853 ± 52 HU, p < 0.001). Subjective noise was rated best with QIR-4 (p < 0.001), whereas QIR-3 was rated best for sharpness and overall image quality (p < 0.001). Thus, our phantom and patient study indicates that QIR-3 is the optimal iterative reconstruction level for low-dose, UHR PCD-CT of the lungs.
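The GNI used above as the quantitative noise metric is conventionally computed as the mode of a histogram of local standard deviations restricted to soft-tissue voxels. A simplified single-slice sketch; the HU window, kernel size, and binning are assumptions for illustration, not the study's exact implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view


def global_noise_index(img_hu, lo=-300.0, hi=150.0, kernel=5):
    """Mode of the local-SD histogram over voxels in the HU range [lo, hi].

    Local noise is the standard deviation inside a kernel x kernel sliding
    window; only windows centered on soft-tissue voxels contribute.
    """
    win = sliding_window_view(np.asarray(img_hu, dtype=float),
                              (kernel, kernel))
    local_sd = win.std(axis=(-1, -2))               # local noise map
    half = kernel // 2
    centers = np.asarray(img_hu)[half:-half, half:-half]
    mask = (centers >= lo) & (centers <= hi)        # soft-tissue voxels only
    hist, edges = np.histogram(local_sd[mask], bins=100)
    peak = np.argmax(hist)                          # mode of the histogram
    return 0.5 * (edges[peak] + edges[peak + 1])


# Demo on a flat synthetic "slice" with known Gaussian noise (sigma = 10 HU);
# the returned index should land near that true value.
rng = np.random.default_rng(0)
slice_hu = rng.normal(-50.0, 10.0, (64, 64))
print(round(global_noise_index(slice_hu), 1))
```

A GSNRI can then be formed analogously by dividing a mean soft-tissue signal by this noise index.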
Affiliation(s)
- Thomas Sartoretti
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)
- Damien Racine
- Institute of Radiation Physics (IRA), Lausanne University Hospital (CHUV), University of Lausanne (UNIL), CH-1010 Lausanne, Switzerland; (D.R.); (P.M.)
- Victor Mergen
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)
- Lisa Jungblut
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)
- Pascal Monnin
- Institute of Radiation Physics (IRA), Lausanne University Hospital (CHUV), University of Lausanne (UNIL), CH-1010 Lausanne, Switzerland; (D.R.); (P.M.)
- Katharina Martini
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)
- Thomas Frauenfelder
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)
- Hatem Alkadhi
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)
- André Euler
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland; (T.S.); (V.M.); (L.J.); (K.M.); (T.F.); (H.A.)