1
Strittmatter A, Schad LR, Zöllner FG. Deep learning-based affine medical image registration for multimodal minimal-invasive image-guided interventions - A comparative study on generalizability. Z Med Phys 2024; 34:291-317. [PMID: 37355435] [PMCID: PMC11156775] [DOI: 10.1016/j.zemedi.2023.05.003]
Abstract
Multimodal image registration is applied in medical image analysis as it allows the integration of complementary data from multiple imaging modalities. In recent years, various neural network-based approaches for medical image registration have been presented, but because they were evaluated on different datasets, a fair comparison is not possible. In this study, 20 different neural networks for affine registration of medical images were implemented. The networks' performance and generalizability to new datasets were evaluated using two multimodal datasets - a synthetic and a real patient dataset - of three-dimensional CT and MR images of the liver. The networks were first trained semi-supervised on the synthetic dataset and then evaluated on both the synthetic dataset and the unseen patient dataset. Afterwards, the networks were finetuned on the patient dataset and subsequently evaluated on it. The networks were compared using our own CNN as a benchmark and a conventional affine registration with SimpleElastix as a baseline. Six networks significantly improved the pre-registration Dice coefficient on the synthetic dataset (p-value < 0.05), and nine networks significantly improved it on the patient dataset; these networks are therefore able to generalize to the new datasets used in our experiments. Although many machine learning-based methods have been proposed for affine multimodal medical image registration, few generalize to new data and applications. Further research is therefore necessary to develop medical image registration techniques that can be applied more widely.
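The Dice coefficient used as the evaluation metric in this comparison measures the voxel overlap between two segmentation masks. A minimal NumPy sketch (not the authors' code; the toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 2D "liver" masks of 16 voxels each
fixed = np.zeros((8, 8), dtype=bool); fixed[2:6, 2:6] = True
moving = np.zeros((8, 8), dtype=bool); moving[3:7, 3:7] = True
print(dice_coefficient(fixed, moving))  # 2*9/(16+16) = 0.5625
```

A registration improves the metric when the Dice of the warped moving mask against the fixed mask exceeds this pre-registration value.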
Affiliation(s)
- Anika Strittmatter
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany.
- Lothar R Schad
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Frank G Zöllner
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
2
Zecevic M, Hasenstab KA, Wang K, Dhyani M, Cunha GM. Signal Intensity Trajectories Clustering for Liver Vasculature Segmentation and Labeling (LiVaS) on Contrast-Enhanced MR Images: A Feasibility Pilot Study. J Imaging Inform Med 2024; 37:873-883. [PMID: 38319438] [PMCID: PMC11031533] [DOI: 10.1007/s10278-024-00970-w]
Abstract
This study aims to develop a semiautomated pipeline and user interface (LiVaS) for rapid segmentation and labeling of MRI liver vasculature and to evaluate its time efficiency and accuracy against a manual reference standard. This was a retrospective feasibility pilot study. Liver MR images from different scanners from 36 patients were included, and 4/36 patients were randomly selected for manual segmentation as the reference standard. The liver was segmented in each contrast phase and the masks were registered to the pre-contrast segmentation. Voxel-wise signal trajectories were clustered using the k-means algorithm. The voxel clusters that best segment the liver vessels were selected and labeled by three independent radiologists and a research scientist using LiVaS. Segmentation times were compared using a paired-sample t-test on log-transformed data. Agreement was analyzed qualitatively and quantitatively using the Dice similarity coefficient (DSC) for hepatic and portal vein segmentations. The mean segmentation time among the four readers was significantly shorter than manual segmentation (3.6 ± 1.4 vs. 70.0 ± 29.2 min; p < 0.001), even when using a higher number of clusters to enhance accuracy. The DSC for portal and hepatic veins reached up to 0.69 and 0.70, respectively. LiVaS segmentations were overall of good quality, with variations in performance related to the presence/severity of liver disease, acquisition timing, and image quality. Our semi-automated pipeline was robust to different MRI vendors, producing segmentations and labels of the liver vasculature in agreement with expert manual annotations at significantly higher time efficiency. LiVaS could facilitate the creation of large, annotated datasets for training and validation of neural networks for automated MRI liver vasculature segmentation.
HIGHLIGHTS: Key Finding: In this pilot feasibility study, our semiautomated pipeline for segmentation of liver vasculature (LiVaS) on MR images produced segmentations with simultaneous labeling of portal and hepatic veins in good agreement with the manual reference standard but at significantly shorter times (mean LiVaS 3.6 ± 1.4 vs. mean manual 70.0 ± 29.2 min; p < 0.001). Importance: LiVaS was robust in producing liver MRI vascular segmentations across images from different scanners in agreement with expert manual annotations, with significantly higher time efficiency, and therefore potential scalability.
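The core idea, clustering voxel-wise signal-intensity trajectories across contrast phases with k-means, can be sketched in plain NumPy (an illustration with synthetic trajectories, not the LiVaS implementation):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means on the rows of X (n_voxels x n_phases trajectories)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each trajectory to its nearest center, then update centers
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic trajectories over 4 contrast phases (pre, arterial, portal venous, delayed):
# arterial-enhancing voxels peak early, portal-venous voxels peak later.
rng = np.random.default_rng(1)
arterial = np.array([100.0, 300.0, 180.0, 140.0]) + rng.normal(0, 5, (50, 4))
portal = np.array([100.0, 140.0, 320.0, 200.0]) + rng.normal(0, 5, (50, 4))
X = np.vstack([arterial, portal])
labels, centers = kmeans(X, k=2)
# Voxels with similar enhancement timing end up in the same cluster;
# a reader then labels the vessel-like clusters as portal or hepatic veins.
```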
Affiliation(s)
- Mladen Zecevic
- Department of Radiology, University of Washington, 1705 NE Pacific St, BB308, Seattle, WA, 98195, USA
- Kyle A Hasenstab
- Department of Mathematics and Statistics, San Diego State University, San Diego, CA, USA
- Kang Wang
- Department of Radiology, Stanford University, Stanford, CA, USA
- Manish Dhyani
- Department of Radiology, University of Washington, 1705 NE Pacific St, BB308, Seattle, WA, 98195, USA
- Guilherme Moura Cunha
- Department of Radiology, University of Washington, 1705 NE Pacific St, BB308, Seattle, WA, 98195, USA.
3
Abdusalomov AB, Nasimov R, Nasimova N, Muminov B, Whangbo TK. Evaluating Synthetic Medical Images Using Artificial Intelligence with the GAN Algorithm. Sensors (Basel) 2023; 23:3440. [PMID: 37050503] [PMCID: PMC10098960] [DOI: 10.3390/s23073440]
Abstract
In recent years, considerable work has been conducted on the generation of synthetic medical images, but there are no satisfactory methods for evaluating their medical suitability. Existing methods mainly evaluate the noise quality of the images and their similarity to the real images used to generate them. For this purpose, they use feature maps extracted from the images in different ways, or the distribution of the image set. The proximity of the synthetic images to the real set is then evaluated using different distance metrics. However, these methods cannot determine whether a single synthetic image was generated repeatedly, or whether the synthetic set exactly reproduces the training set. In addition, most evaluation metrics take a long time to calculate. Taking these issues into account, we propose a method that can evaluate synthetic images both quantitatively and qualitatively. The method is a combination of two approaches, namely, FMD- and CNN-based evaluation. These methods were compared with the FID method: the FMD method has a clear advantage in terms of speed, while the CNN method estimates more accurately. To evaluate the reliability of the methods, a dataset of different real images was checked.
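Fréchet-style metrics such as FID (and, by analogy, FMD) compare the Gaussian statistics of feature embeddings of the real and synthetic sets. A sketch of the diagonal-covariance special case (the full FID formula requires a matrix square root of the covariance matrices; the feature statistics below are hypothetical numbers):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    d^2 = ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
    Special case of the FID formula, which needs a matrix sqrt in general."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return float(((mu1 - mu2) ** 2).sum()
                 + (var1 + var2 - 2.0 * np.sqrt(var1 * var2)).sum())

# Feature statistics of a "real" vs. a "synthetic" image set (illustrative)
real_mu, real_var = np.array([0.0, 1.0]), np.array([1.0, 1.0])
synt_mu, synt_var = np.array([0.5, 1.0]), np.array([4.0, 1.0])
print(frechet_distance_diag(real_mu, real_var, synt_mu, synt_var))  # 0.25 + 1.0 = 1.25
```

A distance of zero means the two feature distributions are statistically identical, which, as the abstract notes, still cannot reveal mode collapse or memorization of the training set.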
Affiliation(s)
- Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Nigorakhon Nasimova
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Bahodir Muminov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
4
Khoshboresh-Masouleh M, Shah-Hosseini R. Real-time multiple target segmentation with multimodal few-shot learning. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2022.1062792]
Abstract
Deep learning-based target segmentation requires a large training dataset to achieve good results. In this regard, few-shot learning, in which a model quickly adapts to new targets from a few labeled support samples, has been proposed to tackle this issue. In this study, we introduce a new multimodal few-shot learning approach [e.g., red-green-blue (RGB), thermal, and depth] for real-time multiple target segmentation in a real-world application with few examples, based on a new squeeze-and-attention mechanism for multiscale and multiple target segmentation. Compared to state-of-the-art methods (HSNet, CANet, and PFENet), the proposed method demonstrates significantly better performance on the PST900 dataset with 32 time-series sets in both the Hand-Drill and Survivor classes.
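Squeeze-and-attention builds on the squeeze-and-excitation family of channel attention. As a rough sketch of that family only (not the authors' module; the weights w1/w2 are random stand-ins for learned parameters):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) feature map:
    global-average-pool each channel ("squeeze"), pass through a two-layer
    bottleneck, and rescale channels by the resulting sigmoid weights."""
    squeeze = feat.mean(axis=(1, 2))                # (C,) per-channel pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)          # (C//r,) ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # (C,) sigmoid "excitation"
    return feat * scale[:, None, None]              # reweighted channels

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
# Each channel is scaled by a weight in (0, 1) derived from its global statistics.
```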
5
Güllmar D, Jacobsen N, Deistung A, Timmann D, Ropele S, Reichenbach JR. Investigation of biases in convolutional neural networks for semantic segmentation using performance sensitivity analysis. Z Med Phys 2022; 32:346-360. [PMID: 35016819] [PMCID: PMC9948839] [DOI: 10.1016/j.zemedi.2021.11.004]
Abstract
The application of deep neural networks for segmentation in medical imaging has gained substantial interest in recent years. In many cases, this variant of machine learning has been shown to outperform conventional segmentation approaches. However, little is known about its general applicability. In particular, robustness against image modifications (e.g., intensity variations, contrast variations, spatial alignment) has hardly been investigated. Data augmentation is often used to compensate for sensitivity to such changes, although its effectiveness in this respect has not yet been studied. Therefore, the goal of this study was to systematically investigate the sensitivity of deep learning-based medical image segmentation to variations in the input data. This approach was tested with two publicly available segmentation frameworks (DeepMedic and TractSeg). In the case of DeepMedic, performance was tested using ground truth data, while for TractSeg, the STAPLE technique was employed. In both cases, the sensitivity analysis revealed significant dependence of segmentation performance on input variations. The effects of different data augmentation strategies were also shown, making this type of analysis a useful tool for selecting the right augmentation parameters. The proposed analysis should be applied to any deep learning image segmentation approach, unless the sensitivity to input variations can be derived directly from the network.
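The recipe of such a sensitivity analysis, applying controlled input modifications and recording the change in segmentation performance, can be sketched as follows (with a stand-in thresholding "model" instead of a trained CNN; all values are illustrative):

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * (a & b).sum() / s

def segment(image, threshold=0.5):
    """Stand-in for a trained segmentation model."""
    return image > threshold

# Reference image with a bright square "lesion" and its ground-truth mask
rng = np.random.default_rng(0)
image = np.clip(rng.normal(0.3, 0.05, (32, 32)), 0, 1)
image[8:24, 8:24] = np.clip(rng.normal(0.7, 0.05, (16, 16)), 0, 1)
truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True

# Sensitivity curve: apply controlled intensity shifts and record performance
shifts = np.linspace(-0.3, 0.3, 7)
scores = [dice(segment(np.clip(image + s, 0, 1)), truth) for s in shifts]
# Performance at shift 0 is near-optimal; large shifts degrade it, and the
# shape of this curve quantifies the model's sensitivity to intensity changes.
```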
Affiliation(s)
- Daniel Güllmar
- Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena University Hospital - Friedrich Schiller University Jena, Germany.
- Nina Jacobsen
- Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena University Hospital - Friedrich Schiller University Jena, Germany
- Andreas Deistung
- University Clinic and Outpatient Clinic for Radiology, Department for Radiation Medicine, University Hospital Halle (Saale), Germany
- Dagmar Timmann
- Department of Neurology, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Stefan Ropele
- Department of Neurology, Karl-Franzens University of Graz, Austria
- Jürgen R Reichenbach
- Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena University Hospital - Friedrich Schiller University Jena, Germany; Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich-Schiller-University Jena, Jena, Germany
6
Heilemann G, Matthewman M, Kuess P, Goldner G, Widder J, Georg D, Zimmermann L. Can Generative Adversarial Networks help to overcome the limited data problem in segmentation? Z Med Phys 2022; 32:361-368. [PMID: 34930685] [PMCID: PMC9948880] [DOI: 10.1016/j.zemedi.2021.11.006]
Abstract
PURPOSE For image translation tasks, the application of deep learning methods has shown that Generative Adversarial Network (GAN) architectures outperform traditional U-Net networks when using the same training data size. This study investigates whether this performance boost can also be expected for segmentation tasks with small training datasets. MATERIALS/METHODS Two models were trained on training dataset sizes ranging from 1-100 patients: a) a U-Net and b) a U-Net with patch discriminator (conditional GAN, cGAN). The performance of both models in segmenting the male pelvis on CT data was evaluated (Dice similarity coefficient, Hausdorff distance) with respect to training data size. RESULTS No significant differences were observed between the U-Net and the cGAN when the models were trained with the same training sizes up to 100 patients. The training dataset size had a significant impact on the models' performance, with vast improvements when increasing dataset sizes from 1 to 20 patients. CONCLUSION Introducing GANs for the segmentation task yielded no significant performance boost in our experiments, even for segmentation models developed on small datasets.
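The two reported metrics differ in character: Dice measures volume overlap, while the Hausdorff distance measures the worst-case surface deviation. A minimal sketch of the symmetric Hausdorff distance between two point sets (not the authors' code; a brute-force version for small sets):

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (e.g. mask surfaces):
    the largest distance from any point in one set to its nearest point in the other."""
    # pairwise distance matrix, shape (len(a), len(b))
    d = np.sqrt(((points_a[:, None, :] - points_b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(a, b))  # 3.0: point (4,0) is 3 away from its nearest neighbour (1,0)
```

Unlike Dice, a single outlier point dominates this metric, which is why both are usually reported together.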
Affiliation(s)
- Gerd Heilemann
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria.
- Peter Kuess
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Gregor Goldner
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Joachim Widder
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Dietmar Georg
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Lukas Zimmermann
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Competence Center for Preclinical Imaging and Biomedical Engineering, University of Applied Sciences Wiener Neustadt, Austria; Faculty of Engineering, University of Applied Sciences Wiener Neustadt, Austria
7
Brumer I, Bauer DF, Schad LR, Zöllner FG. Synthetic Arterial Spin Labeling MRI of the Kidneys for Evaluation of Data Processing Pipeline. Diagnostics (Basel) 2022; 12:1854. [PMID: 36010205] [PMCID: PMC9406826] [DOI: 10.3390/diagnostics12081854]
Abstract
Accurate quantification of perfusion is crucial for the diagnosis and monitoring of kidney function. Arterial spin labeling (ASL), a completely non-invasive magnetic resonance imaging technique, is a promising method for this application. However, differences in acquisition (e.g., ASL parameters, readout) and processing (e.g., registration, segmentation) between studies impede the comparison of results. To alleviate challenges arising solely from differences in processing pipelines, synthetic data are of great value. In this work, synthetic renal ASL data were generated using body models from the XCAT phantom, and perfusion was added using the general kinetic model. Our in-house processing pipeline was then evaluated in terms of registration, quantification, and segmentation using the synthetic data. Registration performance was evaluated qualitatively with line profiles and quantitatively with mean structural similarity index measures (MSSIMs). Perfusion values obtained from the pipeline were compared to the values assumed when generating the synthetic data. Segmentation masks obtained by the semi-automated procedure of the processing pipeline were compared to the original XCAT organ masks using the Dice index. Overall, the pipeline evaluation yielded good results. After registration, line profiles were smoother and, on average, MSSIMs increased by 25%. Mean perfusion values for cortex and medulla were close to the assumed perfusion of 250 mL/100 g/min and 50 mL/100 g/min, respectively. Dice indices ranged from 0.80 to 0.93, 0.78 to 0.89, and 0.64 to 0.84 for the whole kidney, cortex, and medulla, respectively. The generation of synthetic ASL data allows a flexible choice of parameters, and the generated data are well suited for the evaluation of processing pipelines.
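The general kinetic model used to add perfusion can be sketched in its simplified pulsed-ASL form (the Buxton model with the correction factor q(t) ≈ 1; the parameter values below are textbook-style illustrations, not necessarily the paper's exact settings):

```python
import numpy as np

def gkm_pasl(t, f_ml_100g_min, m0b=1.0, alpha=0.98, t1b=1.65, delta_t=0.7, tau=0.8):
    """Simplified Buxton general kinetic model for pulsed ASL (q(t) ~ 1).
    Returns the label-control difference signal dM(t)."""
    f = f_ml_100g_min / 6000.0          # mL/100 g/min -> 1/s
    t = np.asarray(t, dtype=float)
    dm = np.zeros_like(t)               # no signal before the bolus arrives
    rising = (t >= delta_t) & (t < delta_t + tau)
    dm[rising] = 2 * m0b * f * alpha * (t[rising] - delta_t) * np.exp(-t[rising] / t1b)
    done = t >= delta_t + tau           # full bolus has arrived, T1 decay remains
    dm[done] = 2 * m0b * f * alpha * tau * np.exp(-t[done] / t1b)
    return dm

t = np.linspace(0, 4, 200)
cortex = gkm_pasl(t, 250.0)   # cortical perfusion assumed in the study
medulla = gkm_pasl(t, 50.0)   # medullary perfusion assumed in the study
# The difference signal scales linearly with perfusion f, so the synthetic
# cortex signal is five times the medullary signal at every time point.
```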
Affiliation(s)
- Irène Brumer
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany; (D.F.B.); (L.R.S.); (F.G.Z.)
8
Deep Learning-Based Total Kidney Volume Segmentation in Autosomal Dominant Polycystic Kidney Disease Using Attention, Cosine Loss, and Sharpness Aware Minimization. Diagnostics (Basel) 2022; 12:1159. [PMID: 35626314] [PMCID: PMC9139731] [DOI: 10.3390/diagnostics12051159]
Abstract
Early detection of autosomal dominant polycystic kidney disease (ADPKD) is crucial, as it is one of the most common causes of end-stage renal disease (ESRD) and kidney failure. The total kidney volume (TKV) can be used as a biomarker to quantify disease progression. The TKV calculation requires accurate delineation of kidney volumes, which is usually performed manually by an expert physician. However, this is time-consuming, and automated segmentation is warranted. Furthermore, the scarcity of large annotated datasets hinders the development of deep learning solutions. In this work, we address this problem by implementing three attention mechanisms into the U-Net to improve TKV estimation. Additionally, we implement a cosine loss function that works well on image classification tasks with small datasets. Lastly, we apply a technique called sharpness aware minimization (SAM) that helps improve the generalizability of networks. Our results show significant improvements (p-value < 0.05) over the reference kidney segmentation U-Net. We show that the attention mechanisms and/or the cosine loss with SAM can achieve a Dice score (DSC) of 0.918 and a mean symmetric surface distance (MSSD) of 1.20 mm, with a mean TKV difference of −1.72% and an R2 of 0.96, while using only 100 MRI datasets for training and testing. Furthermore, we tested four ensembles and obtained improvements over the best individual network, achieving a DSC and MSSD of 0.922 and 1.09 mm, respectively.
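The cosine loss replaces cross entropy with one minus the cosine similarity between the prediction and the one-hot target. A minimal sketch (the loss used in the paper may differ in details such as normalization or per-voxel averaging):

```python
import numpy as np

def cosine_loss(pred, target_onehot, eps=1e-8):
    """1 - cosine similarity between predicted class scores and one-hot targets,
    averaged over the batch. Proposed for training on small datasets."""
    num = (pred * target_onehot).sum(axis=1)
    den = np.linalg.norm(pred, axis=1) * np.linalg.norm(target_onehot, axis=1) + eps
    return float((1.0 - num / den).mean())

pred = np.array([[0.9, 0.1], [0.2, 0.8]])   # softmax-like outputs
target = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = cosine_loss(pred, target)            # small, since predictions align with targets
perfect = cosine_loss(target, target)       # ~0 for perfect predictions
```

Because only the direction of the prediction vector matters, the loss is bounded and less sensitive to confidently wrong magnitudes, which is one argument for its use on small datasets.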
9
Bauer DF, Rosenkranz J, Golla AK, Tönnes C, Hermann I, Russ T, Kabelitz G, Rothfuss AJ, Schad LR, Stallkamp JL, Zöllner FG. Development of an abdominal phantom for the validation of an oligometastatic disease diagnosis workflow. Med Phys 2022; 49:4445-4454. [PMID: 35510908] [DOI: 10.1002/mp.15701]
Abstract
PURPOSE The liver is a common site for metastatic disease, which is a challenging and life-threatening condition with a grim prognosis. We propose a standardized workflow for the diagnosis of oligometastatic disease (OMD), as a gold standard workflow has not been established yet. The envisioned workflow comprises the acquisition of a multimodal image dataset, novel image processing techniques, and cone beam computed tomography (CBCT)-guided biopsy for subsequent molecular subtyping. By combining morphological, molecular, and functional information about the tumor, patient-specific treatment planning becomes possible. We designed and manufactured an abdominal liver phantom that we used to demonstrate the multimodal image acquisition, image processing, and biopsy steps of the OMD diagnosis workflow. METHODS The anthropomorphic abdominal phantom contains a rib cage, a portal vein, lungs, a liver with six lesions, and a hepatic vessel tree. The phantom incorporates three different lesion types with varying visibility under computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography CT (PET-CT), which reflects clinical reality. The phantom is puncturable, and the size of the corpus and the organs is comparable to those of a real human abdomen. Through the use of several modern additive manufacturing techniques, the manufacturing process is reproducible and allows patient-specific anatomies to be incorporated. As a first step of the OMD diagnosis workflow, pre-interventional CT, MRI, and PET-CT datasets of the phantom were acquired. The image information was fused using image registration, and organ information was extracted via image segmentation. A CBCT-guided needle puncture experiment was performed, in which all six liver lesions were punctured with coaxial biopsy needles. RESULTS Qualitative observation of the image data and quantitative evaluation using the contrast-to-noise ratio (CNR) confirm that one lesion type is visible only in MRI and not in CT. The other two lesion types are visible in both CT and MRI. The CBCT-guided needle placement was performed for all six lesions, including those visible only in MRI and not in CBCT. This was possible by successfully merging the multimodal pre-interventional image data. Lungs, bones, and liver vessels serve as realistic obstacles during needle path planning. CONCLUSIONS We have developed a reusable abdominal phantom that was used to validate a standardized OMD diagnosis workflow. Using the phantom, we showed that a multimodal imaging pipeline is advantageous for the comprehensive detection of liver lesions. In a CBCT-guided needle placement experiment, we punctured lesions that are invisible in CBCT by using registered pre-interventional MRI scans for needle path planning.
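The contrast-to-noise ratio used to quantify lesion visibility compares the lesion-to-background intensity difference with the background noise. A minimal sketch with hypothetical image values (not the authors' measurements):

```python
import numpy as np

def cnr(img, lesion_mask, background_mask):
    """Contrast-to-noise ratio: |mean lesion - mean background| / background std."""
    lesion = img[lesion_mask]
    background = img[background_mask]
    return float(abs(lesion.mean() - background.mean()) / background.std())

# Hypothetical slices: the lesion enhances in "MRI" but is isointense in "CT"
rng = np.random.default_rng(0)
mri = rng.normal(100, 10, (64, 64))
ct = rng.normal(100, 10, (64, 64))
lesion = np.zeros((64, 64), dtype=bool); lesion[20:30, 20:30] = True
liver = ~lesion
mri[lesion] += 50          # enhancing lesion on MRI only
print(cnr(mri, lesion, liver) > cnr(ct, lesion, liver))  # True
```

A CNR near zero, as for the "CT" image here, corresponds to a lesion that cannot be distinguished from the surrounding tissue in that modality.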
Affiliation(s)
- Dominik F Bauer
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Julian Rosenkranz
- Fraunhofer Institute for Manufacturing Engineering and Automation, Department of Clinical Health Technologies, Mannheim, Germany
- Alena-Kathrin Golla
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Christian Tönnes
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Ingo Hermann
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Tom Russ
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Gordian Kabelitz
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Lothar R Schad
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jan L Stallkamp
- Automation in Medicine and Biotechnology, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Frank G Zöllner
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
10
Akhavanallaf A, Fayad H, Salimi Y, Aly A, Kharita H, Al Naemi H, Zaidi H. An update on computational anthropomorphic anatomical models. Digit Health 2022; 8:20552076221111941. [PMID: 35847523] [PMCID: PMC9277432] [DOI: 10.1177/20552076221111941]
Abstract
The widespread availability of high-performance computing, coupled with validated computerized simulation platforms released as open-source packages, has motivated progress in the development of realistic anthropomorphic computational models of the human anatomy. The main applications of these advanced tools are in imaging physics and computational internal/external radiation dosimetry research. This paper provides an updated review of state-of-the-art developments and recent advances in the design of sophisticated computational models of the human anatomy, with a particular focus on their use in radiation dosimetry calculations. The combination of flexible and realistic computational models with biological data and accurate radiation transport modeling tools makes it possible to produce dosimetric data that reflect the actual setup in a clinical setting. These simulation methodologies and results are helpful resources for the medical physics and medical imaging communities and are expected to profoundly impact the fields of medical imaging and dosimetry calculations.
Affiliation(s)
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hadi Fayad
- Hamad Medical Corporation, Doha, Qatar
- Weill Cornell Medicine, Doha, Qatar
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Antar Aly
- Hamad Medical Corporation, Doha, Qatar
- Weill Cornell Medicine, Doha, Qatar
- Huda Al Naemi
- Hamad Medical Corporation, Doha, Qatar
- Weill Cornell Medicine, Doha, Qatar
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
11
End-to-End Deep Learning CT Image Reconstruction for Metal Artifact Reduction. Appl Sci (Basel) 2021. [DOI: 10.3390/app12010404]
Abstract
Metal artifacts are common in CT-guided interventions due to the presence of metallic instruments. These artifacts often obscure clinically relevant structures, which can complicate the intervention. In this work, we present a deep learning CT reconstruction network called iCTU-Net for the reduction of metal artifacts. The network emulates the filtering and back projection steps of classical filtered back projection (FBP). A U-Net is used as post-processing to refine the back projected image. The reconstruction is trained end-to-end, i.e., the inputs of the iCTU-Net are sinograms and the outputs are reconstructed images. The network requires neither a predefined back projection operator nor the exact X-ray beam geometry. Supervised training was performed on simulated interventional data of the abdomen. For projection data exhibiting severe artifacts, the iCTU-Net achieved reconstructions with SSIM = 0.970±0.009 and PSNR = 40.7±1.6. The best reference method, an image-based post-processing network, only achieved SSIM = 0.944±0.024 and PSNR = 39.8±1.9. Since the whole reconstruction process is learned, the network was able to fully exploit the raw data, which benefited the removal of metal artifacts. The proposed method was the only studied method that could eliminate the metal streak artifacts.
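The filtering step that iCTU-Net learns to emulate can be illustrated with the classical ramp filter applied in the Fourier domain (a textbook FBP ingredient, not the network itself; the toy sinogram is illustrative):

```python
import numpy as np

def ramp_filter(sinogram):
    """Ramp-filter each projection of a (n_angles, n_bins) sinogram:
    multiply its Fourier transform by |frequency|, the filtering step of
    classical filtered back projection."""
    n = sinogram.shape[1]
    freqs = np.fft.fftfreq(n)  # digital frequencies in cycles/sample
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1)
    return filtered.real

# Toy sinogram: 4 projection angles, 64 detector bins, one dense object
sino = np.ones((4, 64))
sino[:, 24:40] = 2.0
filt = ramp_filter(sino)
# The ramp filter zeroes the DC component, so each filtered projection sums to ~0
# and edges are sharpened before back projection.
```

In iCTU-Net this fixed filter (and the subsequent back projection) is replaced by learned operators, which is what lets the network compensate for corrupted, metal-affected projection data.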