1
Guha I, Nadeem SA, Zhang X, DiCamillo PA, Levy SM, Wang G, Saha PK. Deep learning-based harmonization of trabecular bone microstructures between high- and low-resolution CT imaging. Med Phys 2024; 51:4258-4270. [PMID: 38415781] [PMCID: PMC11147700] [DOI: 10.1002/mp.17003]
Abstract
BACKGROUND Osteoporosis is a bone disease characterized by increased bone loss and fracture risk. The variability in bone strength is partially explained by bone mineral density (BMD), and the remainder is contributed by bone microstructure. Recently, clinical CT has emerged as a viable option for in vivo bone microstructural imaging. Wide variations in spatial resolution and other imaging features among different CT scanners add inconsistency to derived bone microstructural metrics, urging the need for harmonization of image data from different scanners. PURPOSE This paper presents a new deep learning (DL) method for the harmonization of bone microstructural images derived from low- and high-resolution CT scanners and evaluates the method's performance at the levels of image data as well as derived microstructural metrics. METHODS We generalized a three-dimensional (3D) version of GAN-CIRCLE that applies two generative adversarial networks (GANs) constrained by the identical, residual, and cycle learning ensemble (CIRCLE). Two GAN modules simultaneously learn to map low-resolution CT (LRCT) to high-resolution CT (HRCT) and vice versa. Twenty volunteers were recruited, and LRCT and HRCT scans of the distal tibia of their left legs were acquired. Five hundred pairs of LRCT and HRCT image blocks of 64 × 64 × 64 voxels were sampled for each of twelve volunteers and used for training in supervised as well as unsupervised setups. LRCT and HRCT images of the remaining eight volunteers were used for evaluation. LRCT blocks were sampled at 32-voxel intervals in each coordinate direction, and predicted HRCT blocks were stitched to generate a predicted HRCT image.
RESULTS Mean ± standard deviation of structural similarity (SSIM) values between predicted and true HRCT using both 3DGAN-CIRCLE-based supervised (0.84 ± 0.03) and unsupervised (0.83 ± 0.04) methods were significantly (p < 0.001) higher than the mean SSIM value between LRCT and true HRCT (0.75 ± 0.03). All Tb measures derived from predicted HRCT by the supervised 3DGAN-CIRCLE showed higher agreement (CCC ∈ [0.956, 0.991]) with the reference values from true HRCT as compared to LRCT-derived values (CCC ∈ [0.732, 0.989]). For all Tb measures, except Tb plate-width (CCC = 0.866), the unsupervised 3DGAN-CIRCLE showed high agreement (CCC ∈ [0.920, 0.964]) with the true HRCT-derived reference measures. Moreover, Bland-Altman plots showed that supervised 3DGAN-CIRCLE-predicted HRCT reduces bias and variability in residual values of different Tb measures as compared to LRCT and unsupervised 3DGAN-CIRCLE-predicted HRCT. The supervised 3DGAN-CIRCLE method produced significantly improved performance (p < 0.001) for all Tb measures as compared to the two DL-based supervised methods available in the literature. CONCLUSIONS 3DGAN-CIRCLE, trained in either unsupervised or supervised fashion, generates HRCT images with high structural similarity to the reference true HRCT images. The supervised 3DGAN-CIRCLE improves agreement of computed Tb microstructural measures with their reference values and outperforms the unsupervised 3DGAN-CIRCLE. 3DGAN-CIRCLE offers a viable DL solution to retrospectively improve image resolution, which may aid data harmonization in multi-site longitudinal studies where scanner mismatch is unavoidable.
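The agreement statistic reported above, the concordance correlation coefficient (CCC, Lin's formulation), penalizes both loss of correlation and systematic bias, which is why it suits harmonization studies better than Pearson's r alone. A minimal plain-Python sketch; the Tb-measure values below are invented for illustration, not study data:

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    mx, my = mean(x), mean(y)
    n = len(x)
    vx = sum((a - mx) ** 2 for a in x) / n          # population variance of x
    vy = sum((b - my) ** 2 for b in y) / n          # population variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical series give CCC = 1; a constant offset keeps Pearson r = 1
# but drags CCC below 1, because the bias term enters the denominator.
tb_true = [1.2, 1.5, 1.1, 1.8, 1.4]
print(ccc(tb_true, tb_true))
print(ccc(tb_true, [v + 0.3 for v in tb_true]))
```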
Affiliation(s)
- Indranil Guha
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, Iowa, USA
- Syed Ahmed Nadeem
- Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, Iowa, USA
- Xiaoliu Zhang
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, Iowa, USA
- Paul A DiCamillo
- Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, Iowa, USA
- Steven M Levy
- Department of Preventive and Community Dentistry, University of Iowa, Iowa City, Iowa, USA
- Department of Epidemiology, University of Iowa, Iowa City, Iowa, USA
- Ge Wang
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, New York, USA
- Punam K Saha
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, Iowa, USA
2
Fok WYR, Fieselmann A, Herbst M, Ritschl L, Kappler S, Saalfeld S. Deep learning in computed tomography super resolution using multi-modality data training. Med Phys 2024; 51:2846-2860. [PMID: 37972365] [DOI: 10.1002/mp.16825]
Abstract
BACKGROUND One of the limitations in leveraging the potential of artificial intelligence in X-ray imaging is the limited availability of annotated training data. As X-ray and CT share similar imaging physics, one could achieve cross-domain data sharing by generating labeled synthetic X-ray images from annotated CT volumes as digitally reconstructed radiographs (DRRs). To account for the lower resolution of CT and CT-generated DRRs compared to real X-ray images, we propose the use of super-resolution (SR) techniques to enhance the CT resolution before DRR generation. PURPOSE As spatial resolution can be defined by the modulation transfer function kernel in CT physics, we propose to train an SR network using paired low-resolution (LR) and high-resolution (HR) images generated by varying the kernel's shape and cutoff frequency. This differs from previous deep learning-based SR techniques on RGB and medical images, which focused on refining the sampling grid. Instead of generating LR images by bicubic interpolation, we aim to create realistic multi-detector CT (MDCT)-like LR images from HR cone-beam CT (CBCT) scans. METHODS We propose and evaluate the use of an SR U-Net for the mapping between LR and HR CBCT image slices. We reconstructed paired LR and HR training volumes from the same CT scans with a small in-plane sampling grid size of 0.20 × 0.20 mm². We used the residual U-Net architecture to train two models: SRUN_Res^K, trained with kernel-based LR images, and SRUN_Res^I, trained with bicubic downsampled data as a baseline. Both models were trained on one CBCT dataset (n = 13,391). The performance of both models was then evaluated on unseen kernel-based and interpolation-based LR CBCT images (n = 10,950) and on MDCT images (n = 1,392). RESULTS Five-fold cross-validation and an ablation study were performed to find the optimal hyperparameters.
Both the SRUN_Res^K and SRUN_Res^I models show significant improvements (p < 0.05) in mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) on unseen CBCT images. The percentage improvements in MAE, PSNR, and SSIM by SRUN_Res^K are larger than those by SRUN_Res^I. For SRUN_Res^K, MAE is reduced by 14%, and PSNR and SSIM increased by 6% and 8%, respectively. To conclude, SRUN_Res^K outperforms SRUN_Res^I, with the former generating sharper images when tested with kernel-based LR CBCT images as well as cross-modality LR MDCT data. CONCLUSIONS Our proposed method showed better performance than the baseline interpolation approach on unseen LR CBCT. We showed that the frequency behavior of the data used is important for learning the SR features. Additionally, we showed cross-modality resolution improvements to LR MDCT images. Our approach is, therefore, a first and essential step in enabling realistic high-spatial-resolution CT-generated DRRs for deep learning training.
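The MAE and PSNR figures quoted above follow the standard definitions. A minimal sketch in plain Python; the six-pixel "slices" are invented for illustration, not data from the paper:

```python
import math

def mae(a, b):
    """Mean absolute error between two equally sized images (flattened)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Toy values chosen so the "SR output" tracks the HR reference more
# closely than the "LR input" does: lower MAE and higher PSNR.
ref = [50, 62, 56, 72, 64, 74]   # high-resolution reference
lr  = [52, 60, 58, 70, 66, 75]   # low-resolution input
sr  = [50, 61, 57, 71, 65, 74]   # super-resolved output

print(mae(sr, ref), psnr(sr, ref))
print(mae(lr, ref), psnr(lr, ref))
```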
Affiliation(s)
- Wai Yan Ryana Fok
- X-ray Products, Siemens Healthcare GmbH, Forchheim, Germany
- Faculty of Computer Science, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany
- Ludwig Ritschl
- X-ray Products, Siemens Healthcare GmbH, Forchheim, Germany
- Sylvia Saalfeld
- Computational Medicine Group, Ilmenau University of Technology, Ilmenau, Germany
- Research Campus STIMULATE, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany
3
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2023; 13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies and early diagnosis and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
4
Li G, Chen X, You C, Huang X, Deng Z, Luo S. A nonconvex model-based combined geometric calibration scheme for micro cone-beam CT with irregular trajectories. Med Phys 2023; 50:2759-2774. [PMID: 36718546] [DOI: 10.1002/mp.16257]
Abstract
BACKGROUND Many dedicated cone-beam CT (CBCT) systems have irregular scanning trajectories. Compared with the standard CBCT calibration, accurate calibration for CBCT systems with irregular trajectories is a more complex task, since the geometric parameters for each scanning view are variable. Most of the existing calibration methods assume that the intrinsic geometric relationship of the fiducials in the phantom is precisely known, and rarely delve deeper into the issue of whether the phantom accuracy is adapted to the calibration model. PURPOSE A high-precision phantom and a highly robust calibration model are interdependent and mutually supportive, and they are both important for calibration accuracy, especially for the high-resolution CBCT. Therefore, we propose a calibration scheme that considers both accurate phantom measurement and robust geometric calibration. METHODS Our proposed scheme consists of two parts: (1) introducing a measurement model to acquire the accurate intrinsic geometric relationship of the fiducials in the phantom; (2) developing a highly noise-robust nonconvex model-based calibration method. The measurement model in the first part is achieved by extending our previous high-precision geometric calibration model suitable for CBCT with circular trajectories. In the second part, a novel iterative method with optimization constraints based on a back-projection model is developed to solve the geometric parameters of each view. RESULTS The simulations and real experiments show that the measurement errors of the fiducial ball bearings (BBs) are within the subpixel level. With the help of the geometric relationship of the BBs obtained by our measurement method, the classic calibration method can achieve good calibration based on far fewer BBs. 
All metrics obtained in simulated experiments as well as in real experiments on Micro CT systems with resolutions of 9 and 4.5 μm show that the proposed calibration method has higher calibration accuracy than the competing classic method. It is particularly worth noting that although our measurement model proves to be very accurate, the classic calibration method based on this measurement model can only achieve good calibration results when the resolution of the measurement system is close to that of the system to be calibrated, but our calibration scheme enables high-accuracy calibration even when the resolution of the system to be calibrated is twice that of the measurement system. CONCLUSIONS The proposed combined geometrical calibration scheme does not rely on a phantom with an intricate pattern of fiducials, so it is applicable in Micro CT with high resolution. The two parts of the scheme, the "measurement model" and the "calibration model," prove to be of high accuracy. The combination of these two models can effectively improve the calibration accuracy, especially in some extreme cases.
Affiliation(s)
- Guang Li
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Xue Chen
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Chenyu You
- Image Processing and Analysis Group (IPAG), Yale University, New Haven, Connecticut, USA
- Xinhai Huang
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Zhenhao Deng
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Shouhua Luo
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
5
Yang Q, Lizotte DL, Cong W, Wang G. Preliminary landscape analysis of deep tomographic imaging patents. Vis Comput Ind Biomed Art 2023; 6:3. [PMID: 36683096] [PMCID: PMC9868030] [DOI: 10.1186/s42492-023-00130-x]
Abstract
Over recent years, the importance of the patent literature has become increasingly recognized in the academic setting. In the context of artificial intelligence, deep learning, and data sciences, patents are relevant not only to industry but also to academe and other communities. In this article, we focus on deep tomographic imaging and perform a preliminary landscape analysis of the related patent literature. Our search tool is PatSeer. Our patent bibliometric data are summarized in various figures and tables. In particular, we qualitatively analyze key deep tomographic patent literature.
Affiliation(s)
- Qingsong Yang
- Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Wenxiang Cong
- Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Ge Wang
- Rensselaer Polytechnic Institute, Troy, NY 12180, USA
6
You C, Zhao R, Liu F, Dong S, Chinchali S, Topcu U, Staib L, Duncan JS. Class-Aware Adversarial Transformers for Medical Image Segmentation. Advances in Neural Information Processing Systems 2022; 35:29582-29596. [PMID: 37533756] [PMCID: PMC10395073]
Abstract
Transformers have made remarkable progress towards modeling long-range dependencies within the medical image analysis domain. However, current transformer-based models suffer from several disadvantages: (1) existing methods fail to capture the important features of the images due to the naive tokenization scheme; (2) the models suffer from information loss because they only consider single-scale feature representations; and (3) the segmentation label maps generated by the models are not accurate enough without considering rich semantic contexts and anatomical textures. In this work, we present CASTformer, a novel type of adversarial transformers, for 2D medical image segmentation. First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures. Lastly, we utilize an adversarial training strategy that boosts segmentation accuracy and correspondingly allows a transformer-based discriminator to capture high-level semantically correlated contents and low-level anatomical features. Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model's inner workings, shed light on the challenges in improved transparency, and demonstrate that transfer learning can greatly improve performance and reduce the size of medical image datasets in training, making CASTformer a strong starting point for downstream medical image analysis tasks.
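The Dice improvements cited above refer to the standard overlap score between a predicted and a ground-truth segmentation mask. A minimal sketch; the eight-pixel binary masks are invented for illustration:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flattened 0/1)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Toy masks: 3 of the 4 foreground pixels overlap, so Dice = 2*3/(4+4).
truth = [0, 1, 1, 1, 0, 0, 1, 0]
pred  = [0, 1, 1, 0, 0, 0, 1, 1]
print(dice(pred, truth))  # 0.75
```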
7
Zhao R, Sui X, Qin R, Du H, Song L, Tian D, Wang J, Lu X, Wang Y, Song W, Jin Z. Can deep learning improve image quality of low-dose CT: a prospective study in interstitial lung disease. Eur Radiol 2022; 32:8140-8151. [PMID: 35748899] [DOI: 10.1007/s00330-022-08870-9]
Abstract
OBJECTIVES To investigate whether deep learning reconstruction (DLR) could maintain image quality and reduce radiation dose in interstitial lung disease (ILD) patients compared with HRCT reconstructed with hybrid iterative reconstruction (hybrid-IR). METHODS Seventy ILD patients were prospectively enrolled and underwent HRCT (120 kVp, automatic tube current) and LDCT (120 kVp, 30 mAs) scans. HRCT images were reconstructed with hybrid-IR (Adaptive Iterative Dose Reduction 3-Dimensional [AIDR3D], standard setting); LDCT images were reconstructed with DLR (Advanced Intelligence Clear-IQ Engine [AiCE], lung/bone, mild/standard/strong setting). Image noise, streak artifact, overall image quality, and visualization of normal and abnormal features of ILD were evaluated. RESULTS The mean radiation dose of LDCT was 38% of that of HRCT. Objective image noise of reconstructed LDCT images was 33.6 to 111.3% of HRCT, and signal-to-noise ratio (SNR) was 0.9 to 3.1 times that of the latter (p < 0.001). LDCT-AiCE was not significantly different from, or even better than, HRCT in overall image quality and visualization of normal lung structures. LDCT-AiCE (lung, mild/standard/strong) showed progressively better recognition of ground glass opacity than HRCT-AIDR3D (p < 0.05, p < 0.01, p < 0.001), and LDCT-AiCE (lung, mild/standard/strong; bone, mild) was superior to HRCT-AIDR3D in visualization of architectural distortion (p < 0.01, p < 0.01, p < 0.01; p < 0.05). LDCT-AiCE (bone, strong) was better than HRCT-AIDR3D in the assessment of bronchiectasis and/or bronchiolectasis (p < 0.05). LDCT-AiCE (bone, mild/standard/strong) was significantly better at the visualization of honeycombing than HRCT-AIDR3D (p < 0.05, p < 0.05, p < 0.01). CONCLUSION Deep learning reconstruction could effectively reduce radiation dose and maintain image quality in ILD patients compared to HRCT with hybrid-IR.
KEY POINTS • Deep learning reconstruction is a novel image reconstruction algorithm based on deep convolutional neural networks. It has been applied in chest CT studies with promising results. • HRCT plays an essential role in the whole process of diagnosis, treatment efficacy evaluation, and follow-up for interstitial lung disease patients. However, cumulative radiation exposure could increase the risk of cancer. • The deep learning reconstruction method could effectively reduce the radiation dose while maintaining image quality compared with HRCT reconstructed with hybrid iterative reconstruction in patients with interstitial lung disease.
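Objective image noise and SNR of the kind reported above are commonly measured on a uniform region of interest (ROI), with noise taken as the standard deviation of the attenuation values. A sketch with hypothetical HU samples (invented for illustration, not measurements from the study): lower noise at the same mean signal, as with DLR, raises SNR.

```python
from statistics import mean, pstdev

def roi_snr(roi_hu):
    """SNR of a uniform ROI: mean attenuation over image noise,
    where noise is the standard deviation of the HU samples."""
    noise = pstdev(roi_hu)
    return mean(roi_hu) / noise if noise else float("inf")

# Hypothetical ROI samples with the same mean HU but different noise.
hybrid_ir_roi = [40, 46, 34, 43, 37, 40]   # noisier (hybrid-IR-like)
dlr_roi       = [40, 42, 38, 41, 39, 40]   # smoother (DLR-like)

print(roi_snr(hybrid_ir_roi))   # lower SNR
print(roi_snr(dlr_roi))         # higher SNR at the same mean HU
```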
Affiliation(s)
- Ruijie Zhao
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Xin Sui
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Ruiyao Qin
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Huayang Du
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Lan Song
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Duxue Tian
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Jinhua Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Xiaoping Lu
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Yun Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Wei Song
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
- Zhengyu Jin
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, No. 1 Shuaifuyuan Wangfujing Dongcheng District, Beijing, 100730, China
8
The use of deep learning methods in low-dose computed tomography image reconstruction: a systematic review. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00724-7]
Abstract
Conventional reconstruction techniques, such as filtered back projection (FBP) and iterative reconstruction (IR), which have been utilised widely in the image reconstruction process of computed tomography (CT), are not suitable for low-dose CT applications because of the unsatisfactory quality of the reconstructed images and long reconstruction times. Therefore, as the demand for CT radiation dose reduction continues to increase, the use of artificial intelligence (AI) in image reconstruction has become a trend that attracts more and more attention. This systematic review examined various deep learning methods to determine their characteristics, availability, intended use and expected outputs concerning low-dose CT image reconstruction. Utilising the methodology of Kitchenham and Charters, we performed a systematic search of the literature from 2016 to 2021 in Springer, Science Direct, arXiv, PubMed, ACM, IEEE, and Scopus. This review showed that algorithms using deep learning technology are superior to traditional IR methods in noise suppression, artifact reduction and structure preservation, in terms of improving the image quality of low-dose reconstructed images. In conclusion, we provide an overview of the use of deep learning approaches in low-dose CT image reconstruction together with their benefits, limitations, and opportunities for improvement.
9
Zhou J, Xin H. Emerging artificial intelligence methods for fighting lung cancer: a survey. Clinical eHealth 2022. [DOI: 10.1016/j.ceh.2022.04.001]
10
Zheng T, Oda H, Hayashi Y, Moriya T, Nakamura S, Mori M, Takabatake H, Natori H, Oda M, Mori K. SR-CycleGAN: super-resolution of clinical CT to micro-CT level with multi-modality super-resolution loss. J Med Imaging (Bellingham) 2022; 9:024003. [PMID: 35399301] [PMCID: PMC8983071] [DOI: 10.1117/1.jmi.9.2.024003]
Abstract
Purpose: We propose a super-resolution (SR) method, named SR-CycleGAN, for SR of clinical computed tomography (CT) images to the micro-focus x-ray CT (μCT) level. Due to the resolution limitations of clinical CT (about 500 × 500 × 500 μm³/voxel), it is challenging to obtain enough pathological information. On the other hand, μCT scanning allows the imaging of lung specimens with significantly higher resolution (about 50 × 50 × 50 μm³/voxel or higher), which allows us to obtain and analyze detailed anatomical information. As a way to obtain detailed information such as cancer invasion and bronchioles from preoperative clinical CT images of lung cancer patients, SR of clinical CT images to the μCT level is desired. Approach: Typical SR methods require aligned pairs of low-resolution (LR) and high-resolution images for training, but it is infeasible to obtain precisely aligned paired clinical CT and μCT images. To solve this problem, we propose an unpaired SR approach that can perform SR on clinical CT to the μCT level. We modify a conventional image-to-image translation network named CycleGAN into an inter-modality translation network named SR-CycleGAN. The modifications consist of three parts: (1) an innovative loss function named multi-modality super-resolution loss, (2) optimized SR network structures for enlarging the input LR image 2^k times in width and height to obtain the SR output, and (3) sub-pixel shuffling layers for reducing computing time. Results: Experimental results demonstrated that our method successfully performed SR of lung clinical CT images. SSIM and PSNR scores of our method were 0.54 and 17.71, higher than the conventional CycleGAN's scores of 0.05 and 13.64, respectively. Conclusions: The proposed SR-CycleGAN is usable for SR of lung clinical CT to the μCT scale, while conventional CycleGAN outputs images with low qualitative and quantitative values. More lung micro-anatomy information, such as the shape of bronchiole walls, could be observed to aid diagnosis.
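The cycle-consistency idea that CycleGAN-style unpaired translation rests on, translating to the other domain and back should reproduce the input, can be illustrated with stand-in scalar "generators". The affine maps below are hypothetical placeholders for the paper's CNNs, chosen only to make the loss arithmetic visible:

```python
def l1(a, b):
    """Mean L1 distance between two intensity vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical generators: G maps clinical-CT intensities toward the
# μCT domain, F maps back. These are illustrative, not the real networks.
G = lambda xs: [2.0 * x + 10.0 for x in xs]
F = lambda ys: [(y - 10.0) / 2.0 for y in ys]

clinical = [30.0, 55.0, 80.0]
micro    = [70.0, 120.0, 170.0]

# Cycle-consistency loss over both directions.
cycle_loss = l1(F(G(clinical)), clinical) + l1(G(F(micro)), micro)
print(cycle_loss)  # 0.0 for perfectly inverse generators

F_bad = lambda ys: [y / 2.0 for y in ys]      # an imperfect inverse
print(l1(F_bad(G(clinical)), clinical))       # 5.0: the mismatch is penalized
```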
Affiliation(s)
- Tong Zheng
- Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Hirohisa Oda
- Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Yuichiro Hayashi
- Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Takayasu Moriya
- Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Shota Nakamura
- Nagoya University, Graduate School of Medicine, Nagoya, Japan
- Masaki Mori
- Sapporo-Kosei General Hospital, Sapporo, Japan
- Masahiro Oda
- Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Nagoya University, Information Strategy Office, Information and Communications, Nagoya, Japan
- Kensaku Mori
- Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Nagoya University, Information Technology Center, Nagoya, Japan
- National Institute of Informatics, Research Center of Medical BigData, Tokyo, Japan
11
Li C, Ma C, Zhuo X, Li L, Li B, Li S, Lu WW. Focal osteoporosis defect is associated with vertebral compression fracture prevalence in a bone mineral density-independent manner. JOR Spine 2022; 5:e1195. [PMID: 35386753] [PMCID: PMC8966878] [DOI: 10.1002/jsp2.1195]
Abstract
Introduction Focal osteoporosis defect has shown a high association with bone fragility and osteoporotic fracture prevalence. However, no routine computed tomography (CT)-based vertebral focal osteoporosis defect measurement, or its association with vertebral compression fracture (VCF), has yet been reported. This study aimed to develop a routine CT-based measurement method for focal osteoporosis defect quantification and to assess its association with VCF prevalence. Materials and Methods A total of 205 cases who underwent routine CT scanning were retrospectively reviewed and enrolled into either the VCF or the control group. The focal bone mineral content loss (focal BMC loss), measured as the cumulated demineralization within bone void space, was proposed for focal osteoporosis defect quantification. Its scan-rescan reproducibility and its correlation with trabecular bone mineral density (BMD) and apparent microarchitecture parameters were evaluated. The association between focal BMC loss and the prevalence of VCF was studied by logistic regression. Results The measurement of focal BMC loss showed high reproducibility (RMSSD = 0.011 mm, LSC = 0.030 mm, ICC = 0.97) and good correlation with focal bone volume fraction (r = 0.79, P < 0.001) and trabecular bone separation (r = 0.76, P < 0.001), but poor correlation with trabecular BMD (r = 0.37, P < 0.001). Focal BMC loss was significantly higher in the fracture group than in the control group (1.03 ± 0.13 vs. 0.93 ± 0.11 mm; P < 0.001) and was associated with prevalent VCF (1.87, 95% CI = 1.31-2.65, P < 0.001) independent of trabecular BMD level. Discussion As a surrogate measure of focal osteoporosis defect, focal BMC loss was independently associated with VCF prevalence. This suggests that focal osteoporosis defect is a common manifestation that positively contributes to compression fracture risk and can be quantified with routine CT using focal BMC loss.
Affiliation(s)
- Chentian Li
- Department of Orthopedics and Traumatology, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Department of Orthopaedics & Traumatology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Chi Ma
- Department of Orthopaedics & Traumatology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Xianglong Zhuo
- Department of Orthopaedics, Liuzhou Worker's Hospital, Guangxi Medical University, Liuzhou, Guangxi, China
- Li Li
- Department of Orthopaedics & Traumatology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Department of Orthopaedics, Liuzhou Worker's Hospital, Guangxi Medical University, Liuzhou, Guangxi, China
- Bing Li
- Department of Orthopaedics, Liuzhou Worker's Hospital, Guangxi Medical University, Liuzhou, Guangxi, China
- Songjian Li
- Department of Orthopedics and Traumatology, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- William W. Lu
- Department of Orthopaedics & Traumatology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- SIAT & Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, China
12
Wu X, Zhang Y, Zhang P, Hui H, Jing J, Tian F, Jiang J, Yang X, Chen Y, Tian J. Structure attention co-training neural network for neovascularization segmentation in intravascular optical coherence tomography. Med Phys 2022; 49:1723-1738. [PMID: 35061247 DOI: 10.1002/mp.15477] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2021] [Revised: 01/09/2022] [Accepted: 01/10/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE To develop and validate a neovascularization (NV) segmentation model for intravascular optical coherence tomography (IVOCT) using deep learning methods. METHODS AND MATERIALS A total of 1950 2D slices from 70 IVOCT pullbacks were used in our study. We randomly selected 1273 2D slices from 44 patients as the training set, 379 2D slices from 11 patients as the validation set, and 298 2D slices from the remaining 15 patients as the testing set. Automatic NV segmentation is quite challenging, as it must address speckle noise, shadow artifacts, high distribution variation, etc. To meet these challenges, a new deep learning-based segmentation method was developed based on a co-training architecture with an integrated structural attention mechanism. Co-training is used to exploit the features of three consecutive slices. The structural attention mechanism comprises spatial and channel attention modules and is integrated into the co-training architecture at each up-sampling step. A cascaded fixed network is further incorporated to achieve segmentation at the image level in a coarse-to-fine manner. RESULTS Extensive experiments were performed, including a comparison with several state-of-the-art deep learning-based segmentation methods. The consistency of the results with manual segmentation was also investigated. Our proposed automatic NV segmentation method achieved the highest correlation with manual delineation by interventional cardiologists (Pearson correlation coefficient, 0.825). CONCLUSION In this work, we proposed a co-training architecture with an integrated structural attention mechanism to segment NV in IVOCT images. The good agreement between our segmentation results and manual segmentation indicates that the proposed method has great potential for application in the clinical investigation of NV-related plaque diagnosis and treatment.
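The agreement statistic quoted above (Pearson r = 0.825 against manual delineation) can be computed directly from paired measurements. A minimal sketch; the `automatic` and `manual` values below are hypothetical NV areas, not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical NV areas (mm^2): automatic segmentation vs. manual delineation.
automatic = [0.12, 0.30, 0.45, 0.52, 0.70]
manual = [0.10, 0.28, 0.50, 0.55, 0.66]
r = pearson_r(automatic, manual)
```

An r close to 1 indicates near-linear agreement between the two raters; the study reports such a comparison per delineated NV region.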
Affiliation(s)
- Xiangjun Wu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100083, China
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- Yingqian Zhang
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Peng Zhang
- Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
- Hui Hui
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100190, China
- Jing Jing
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Feng Tian
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Jingying Jiang
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100083, China
- Xin Yang
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- Yundai Chen
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Southern Medical University, Guangzhou, 510515, China
- Jie Tian
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100083, China
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- Zhuhai Precision Medical Center, Zhuhai People's Hospital, affiliated with Jinan University, Zhuhai, 519000, China
13
Bori E, Pancani S, Vigliotta S, Innocenti B. Validation and accuracy evaluation of automatic segmentation for knee joint pre-planning. Knee 2021; 33:275-281. [PMID: 34739958 DOI: 10.1016/j.knee.2021.10.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Revised: 09/28/2021] [Accepted: 10/12/2021] [Indexed: 02/02/2023]
Abstract
BACKGROUND Proper use of three-dimensional (3D) models generated from medical imaging data in clinical preoperative planning, training, and consultation depends on first proving that the patient anatomy is replicated accurately. This study therefore investigated the dimensional accuracy of 3D reconstructions of the knee joint generated from computed tomography scans via automatic segmentation, comparing them with 3D models generated through manual segmentation. METHODS Three unpaired, fresh-frozen right legs were investigated. Three-dimensional models of the femur and tibia of each leg were manually segmented using commercial software and compared in terms of geometrical accuracy with the 3D models automatically segmented using proprietary software. Bony landmarks were identified and used to calculate clinically relevant distances: femoral epicondylar distance; posterior femoral epicondylar distance; femoral trochlear groove length; tibial knee center tubercle distance (TKCTD). Pearson's correlation coefficient and Bland-Altman plots were used to evaluate the level of agreement between measured distances. RESULTS Differences between parameters measured on manually and automatically segmented 3D models were below 1 mm (range: -0.06 to 0.72 mm), except for TKCTD (between 1.00 and 1.40 mm in two specimens). In addition, there was a significant strong correlation between measurements. CONCLUSIONS The results obtained are comparable to those reported in previous studies investigating the accuracy of bone 3D reconstruction. Automatic segmentation techniques can be used to quickly reconstruct reliable 3D models of bone anatomy, and these results may help this technology spread in preoperative and operative settings, where it has shown considerable potential.
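The Bland-Altman agreement analysis used above reduces to a bias and 95% limits of agreement over paired differences. A minimal sketch with hypothetical landmark distances (the `manual_mm`/`auto_mm` values are illustrative, not the cadaveric measurements):

```python
import statistics

def bland_altman(manual, automatic):
    """Bland-Altman bias and 95% limits of agreement (bias +/- 1.96 SD)
    between paired measurements."""
    diffs = [a - m for m, a in zip(manual, automatic)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical landmark distances (mm): manual vs. automatic 3D models.
manual_mm = [50.0, 60.0, 55.0, 48.0]
auto_mm = [50.5, 60.2, 55.4, 48.1]
bias, loa_low, loa_high = bland_altman(manual_mm, auto_mm)
```

Points falling inside the limits of agreement on the plot correspond to paired differences within `loa_low`..`loa_high` here; a sub-millimetre bias is what the study reports for most parameters.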
Affiliation(s)
- Edoardo Bori
- BEAMS Department, Université Libre de Bruxelles, Bruxelles, Belgium.
14
Abstract
PURPOSE OF REVIEW In this paper, we discuss how recent advancements in image processing and machine learning (ML) are shaping a new and exciting era for the osteoporosis imaging field. We aim to give the reader basic exposure to the ML concepts needed to build effective solutions for image processing and interpretation, while presenting an overview of the state of the art in the application of ML techniques to the assessment of bone structure, osteoporosis diagnosis, fracture detection, and risk prediction. RECENT FINDINGS ML effort in the osteoporosis imaging field is largely characterized by "low-cost" bone quality estimation and osteoporosis diagnosis, fracture detection, and risk prediction, but also by automated, standardized large-scale data analysis and data-driven imaging biomarker discovery. Our effort is not intended to be a systematic review, but an opportunity to review key studies in the recent osteoporosis imaging research landscape, with the ultimate goal of discussing specific design choices, giving the reader pointers to possible solutions for regression, segmentation, and classification tasks, and discussing common mistakes.
Affiliation(s)
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 1700 Fourth Street, Suite 201, QB3 Building, San Francisco, CA, 94158, USA
- Francesco Caliva
- Department of Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 1700 Fourth Street, Suite 201, QB3 Building, San Francisco, CA, 94158, USA
- Galateia Kazakia
- Department of Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 1700 Fourth Street, Suite 201, QB3 Building, San Francisco, CA, 94158, USA
- Andrew J Burghardt
- Department of Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 1700 Fourth Street, Suite 201, QB3 Building, San Francisco, CA, 94158, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 1700 Fourth Street, Suite 201, QB3 Building, San Francisco, CA, 94158, USA
15
de Farias EC, di Noia C, Han C, Sala E, Castelli M, Rundo L. Impact of GAN-based lesion-focused medical image super-resolution on the robustness of radiomic features. Sci Rep 2021; 11:21361. [PMID: 34725417 PMCID: PMC8560955 DOI: 10.1038/s41598-021-00898-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 10/13/2021] [Indexed: 12/25/2022] Open
Abstract
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At [Formula: see text] SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at [Formula: see text] SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were also the most robust features extracted from the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques in radiomics studies for robust biomarker discovery.
Affiliation(s)
- Erick Costa de Farias
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312, Lisbon, Portugal
- Christian di Noia
- Department of Physics, University of Milano-Bicocca, 20126, Milan, Italy
- Changhee Han
- Saitama Prefectural University, Saitama, 343-8540, Japan
- Evis Sala
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312, Lisbon, Portugal
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
16
Mali SA, Ibrahim A, Woodruff HC, Andrearczyk V, Müller H, Primakov S, Salahuddin Z, Chatterjee A, Lambin P. Making Radiomics More Reproducible across Scanner and Imaging Protocol Variations: A Review of Harmonization Methods. J Pers Med 2021; 11:842. [PMID: 34575619 PMCID: PMC8472571 DOI: 10.3390/jpm11090842] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/21/2021] [Accepted: 08/24/2021] [Indexed: 12/13/2022] Open
Abstract
Radiomics converts medical images into mineable data via high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validation of radiomic features across these discrepancies. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss harmonization solutions that make radiomic features more reproducible across scanners and protocol settings. The harmonization solutions discussed are divided into two main categories: image domain and feature domain. The image-domain category comprises methods such as the standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation techniques, and style transfer. The feature-domain category consists of methods such as the identification of reproducible features and normalization techniques such as statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect on the importance of deep learning solutions for addressing variability across multi-centric radiomic studies, especially those using generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both, and we cover a broader range of methods, especially GANs and NST, in more detail than previous reviews.
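The feature-domain normalization this review surveys can be illustrated with a simplified location-scale alignment. A minimal sketch, assuming per-scanner samples of a single feature; the function and scanner names are hypothetical, and full ComBat additionally pools batch estimates with empirical Bayes, which this sketch deliberately omits:

```python
import statistics

def align_to_reference(features_by_scanner, reference):
    """Location-scale harmonization: map each scanner's feature
    distribution onto the reference scanner's mean and SD. This is
    the core idea behind ComBat-style batch correction, minus the
    empirical-Bayes pooling of batch estimates."""
    ref_mu = statistics.mean(features_by_scanner[reference])
    ref_sd = statistics.stdev(features_by_scanner[reference])
    harmonized = {}
    for scanner, values in features_by_scanner.items():
        mu = statistics.mean(values)
        sd = statistics.stdev(values)
        harmonized[scanner] = [ref_mu + (v - mu) / sd * ref_sd for v in values]
    return harmonized

# Illustrative single-feature values from two hypothetical scanners.
data = {"scannerA": [1.0, 2.0, 3.0], "scannerB": [10.0, 12.0, 14.0]}
harmonized = align_to_reference(data, "scannerA")
```

After alignment, both scanners' feature distributions share the reference mean and spread, so downstream models no longer see the scanner offset as signal.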
Affiliation(s)
- Shruti Atul Mali
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Abdalla Ibrahim
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
- Department of Medical Physics, Division of Nuclear Medicine and Oncological Imaging, Hospital Center Universitaire de Liege, 4000 Liege, Belgium
- Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen University, 52074 Aachen, Germany
- Henry C. Woodruff
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
- Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences and Arts Western Switzerland (HES-SO), rue du Technopole 3, 3960 Sierre, Switzerland
- Henning Müller
- Institute of Information Systems, University of Applied Sciences and Arts Western Switzerland (HES-SO), rue du Technopole 3, 3960 Sierre, Switzerland
- Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Zohaib Salahuddin
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Avishek Chatterjee
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
17
Podgorsak AR, Shiraz Bhurwani MM, Ionita CN. CT artifact correction for sparse and truncated projection data using generative adversarial networks. Med Phys 2020; 48:615-626. [PMID: 32996149 DOI: 10.1002/mp.14504] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 09/17/2020] [Accepted: 09/18/2020] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Computed tomography image reconstruction using truncated or sparsely acquired projection data to reduce radiation dose, iodine volume, and patient motion artifacts has been widely investigated. To continue these efforts, we investigated machine learning-based reconstruction using deep convolutional generative adversarial networks (DCGANs) and evaluated its effect using standard imaging metrics. METHODS Ten thousand head computed tomography (CT) scans were collected from the 2019 RSNA Intracranial Hemorrhage Detection and Classification Challenge dataset. Sinograms were simulated and then resampled in both a one-third truncated and one-third sparse manner. DCGANs were tasked with correcting the incomplete projection data, either in the sinogram domain, where the full sinogram was recovered by the DCGAN and then reconstructed, or in the reconstruction domain, where the incomplete data were first reconstructed and the sparse or truncation artifacts were corrected by the DCGAN. Seventy-five hundred images were used for network training and 2500 were withheld for network assessment using mean absolute error (MAE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) between results of the different correction techniques. Image data from a quality-assurance phantom were also resampled in the two manners, corrected, and reconstructed for network performance assessment using line profiles across high-contrast features, the modulation transfer function (MTF), noise power spectrum (NPS), and Hounsfield unit (HU) linearity analysis. RESULTS Better agreement with the fully sampled reconstructions was achieved by the sparse acquisition corrected in the sinogram domain and the truncated acquisition corrected in the reconstruction domain. MAE, SSIM, and PSNR showed quantitative improvement from the DCGAN correction techniques. HU linearity of the reconstructions was maintained by the correction techniques for the sparse and truncated acquisitions. MTF curves reached the 10% modulation cutoff frequency at 5.86 lp/cm for the truncated corrected reconstruction, compared with 2.98 lp/cm for the truncated uncorrected reconstruction, and at 5.36 lp/cm for the sparse corrected reconstruction, compared with around 2.91 lp/cm for the sparse uncorrected reconstruction. NPS analyses yielded better agreement across a range of frequencies between the resampled corrected phantom and truth reconstructions. CONCLUSIONS We demonstrated the use of DCGANs for CT image correction from sparse and truncated simulated projection data while preserving the imaging quality of the fully sampled projection data.
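The 10% MTF cutoff frequencies compared above are read off a sampled MTF curve. A minimal sketch of that readout by linear interpolation; the `freqs`/`mtf` samples below are illustrative, not the phantom data:

```python
def mtf_cutoff(freqs, mtf, level=0.10):
    """Frequency at which the MTF first drops to `level`,
    found by linear interpolation between sampled points."""
    for i in range(len(mtf) - 1):
        f0, m0 = freqs[i], mtf[i]
        f1, m1 = freqs[i + 1], mtf[i + 1]
        if m0 >= level > m1:
            return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)
    return None  # MTF never falls below `level` in the sampled range

# Illustrative MTF samples (frequency in lp/cm vs. modulation).
freqs = [0.0, 2.0, 4.0, 6.0]
mtf = [1.0, 0.5, 0.2, 0.05]
f10 = mtf_cutoff(freqs, mtf)
```

A higher 10% cutoff means finer detail is preserved, which is why the corrected reconstructions' 5.86 and 5.36 lp/cm figures indicate better resolution than the uncorrected 2.98 and 2.91 lp/cm.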
Affiliation(s)
- Alexander R Podgorsak
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA
- Medical Physics Program, State University of New York at Buffalo, 955 Main Street, Buffalo, NY, 14203, USA
- Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
- Mohammad Mahdi Shiraz Bhurwani
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA
- Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
- Ciprian N Ionita
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA
- Medical Physics Program, State University of New York at Buffalo, 955 Main Street, Buffalo, NY, 14203, USA
- Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
18
Edge-guided second-order total generalized variation for Gaussian noise removal from depth map. Sci Rep 2020; 10:16329. [PMID: 33004951 PMCID: PMC7530766 DOI: 10.1038/s41598-020-73342-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 09/15/2020] [Indexed: 11/24/2022] Open
Abstract
Total generalized variation models have recently demonstrated high-quality denoising capacity for single images. In this paper, we present an accurate denoising method for depth maps. Our method uses a weighted second-order total generalized variation model for Gaussian noise removal. By fusing an edge indicator function into the regularization term of the second-order total generalized variation model to guide the diffusion of gradients, our method uses the first or second derivative to enhance the intensity of the diffusion tensor. We use the first-order primal–dual algorithm to minimize the proposed energy function and achieve high-quality denoising and edge-preserving results for depth maps with high-intensity noise. Extensive quantitative and qualitative evaluations against benchmark datasets show that the proposed method provides significantly higher accuracy and visual improvement than many state-of-the-art denoising algorithms.
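The variational idea behind such models can be seen in a much simpler first-order relative. A minimal sketch, assuming a 1D signal and a smoothed total-variation (ROF-type) energy minimized by plain gradient descent; the paper itself minimizes a second-order TGV energy with an edge-guided primal–dual solver, which this sketch does not reproduce:

```python
import math

def tv_denoise_1d(noisy, lam=0.2, step=0.1, iters=300, eps=1e-2):
    """Gradient descent on the smoothed ROF-type energy
    E(u) = 0.5 * sum (u_i - f_i)^2 + lam * sum sqrt((u_{i+1} - u_i)^2 + eps).
    The TV term penalizes jumps between neighbors, so noise is flattened
    while large steps (edges) are comparatively preserved."""
    u = list(noisy)
    for _ in range(iters):
        # gradient of the data-fidelity term
        grad = [u_i - f_i for u_i, f_i in zip(u, noisy)]
        # gradient of the smoothed TV term
        for i in range(len(u) - 1):
            d = u[i + 1] - u[i]
            g = lam * d / math.sqrt(d * d + eps)
            grad[i] -= g
            grad[i + 1] += g
        u = [u_i - step * g_i for u_i, g_i in zip(u, grad)]
    return u

# A noisy step signal: small fluctuations plus one genuine edge.
noisy_step = [0.0, 0.2, -0.1, 0.1, 1.0, 0.8, 1.1, 0.9]
denoised = tv_denoise_1d(noisy_step)
```

The denoised signal has lower total variation than the input while the jump in the middle survives, which is the qualitative behavior (smoothing with edge preservation) that the second-order TGV model refines for depth maps.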
19
Richert C, Huber N. A Review of Experimentally Informed Micromechanical Modeling of Nanoporous Metals: From Structural Descriptors to Predictive Structure-Property Relationships. MATERIALS (BASEL, SWITZERLAND) 2020; 13:E3307. [PMID: 32722289 PMCID: PMC7435653 DOI: 10.3390/ma13153307] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 07/17/2020] [Accepted: 07/20/2020] [Indexed: 11/16/2022]
Abstract
Nanoporous metals made by dealloying take the form of macroscopic (mm- or cm-sized) porous bodies with a solid fraction of around 30%. The material exhibits a network structure of "ligaments" with an average ligament diameter that can be adjusted between 5 and 500 nm. Current research explores the use of nanoporous metals as functional materials with respect to electrochemical conversion and storage, bioanalytical and biomedical applications, and actuation and sensing. The mechanical behavior of the network structure provides scope for fundamental research, particularly because of the high complexity originating from the randomness of the structure and the challenges arising from the nanosized ligaments, which can be accessed experimentally only indirectly via the testing of macroscopic properties. The strength of nanoscale ligaments increases systematically with decreasing size, and owing to the high surface-to-volume ratio their elastic and plastic properties can be additionally tuned by applying an electric potential. Therefore, nanoporous metals offer themselves as suitable model systems for exploring the structure-property relationships of complex interconnected microstructures as well as the basic mechanisms of chemo-electro-mechanical coupling at interfaces. The micromechanical modeling of nanoporous metals is a rapidly growing field that strongly benefits from developments in computational methods, high-performance computing, and visualization techniques, and, at the same time, from advances in characterization techniques, including nanotomography, 3D image processing, and algorithms for geometrical and topological analysis. This review article collects articles on the structural characterization and micromechanical modeling of nanoporous metals and discusses the acquired understanding in the context of advancements in the experimental discipline. The concluding remarks are given in the form of a summary and an outline of future perspectives.
Affiliation(s)
- Claudia Richert
- Institute of Materials Research, Materials Mechanics, Helmholtz-Zentrum Geesthacht, 21502 Geesthacht, Germany
- Norbert Huber
- Institute of Materials Research, Materials Mechanics, Helmholtz-Zentrum Geesthacht, 21502 Geesthacht, Germany
- Institute of Materials Physics and Technology, Hamburg University of Technology, 21073 Hamburg, Germany