1.
Zhou Y, Chen T, Hou J, Xie H, Dvornek NC, Zhou SK, Wilson DL, Duncan JS, Liu C, Zhou B. Cascaded Multi-path Shortcut Diffusion Model for Medical Image Translation. Med Image Anal 2024; 98:103300. PMID: 39226710. DOI: 10.1016/j.media.2024.103300.
Abstract
Image-to-image translation is a vital component of medical image processing, with many uses across a wide range of imaging modalities and clinical scenarios. Previous methods include Generative Adversarial Networks (GANs) and Diffusion Models (DMs), which offer realism but suffer from instability and lack uncertainty estimation. Even though both GAN and DM methods have individually exhibited their capability in medical image translation tasks, the potential of combining a GAN and DM to further improve translation performance and to enable uncertainty estimation remains largely unexplored. In this work, we address these challenges by proposing a Cascade Multi-path Shortcut Diffusion Model (CMDM) for high-quality medical image translation and uncertainty estimation. To reduce the required number of iterations and ensure robust performance, our method first obtains a conditional GAN-generated prior image that is used for efficient reverse translation with a DM in the subsequent step. Additionally, a multi-path shortcut diffusion strategy is employed to refine translation results and estimate uncertainty. A cascaded pipeline further enhances translation quality, incorporating residual averaging between cascades. We collected three different medical image datasets with two sub-tasks for each dataset to test the generalizability of our approach. Our experimental results show that CMDM can produce high-quality translations comparable to state-of-the-art methods while providing reasonable uncertainty estimations that correlate well with the translation error.
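The multi-path averaging and pixelwise uncertainty idea described in this abstract can be sketched in a few lines: run several stochastic translation paths from the GAN prior, average the outputs, and take the pixelwise standard deviation as an uncertainty map. This is a generic sketch, not the paper's implementation; `translate_one_path` and the `toy_path` stand-in are hypothetical.

```python
import numpy as np

def multi_path_estimate(prior_img, translate_one_path, n_paths=4, seed=0):
    """Average several stochastic translation paths started from a GAN prior.

    prior_img          : 2-D array, the conditional-GAN prior image
    translate_one_path : callable(prior, rng) -> 2-D array (one diffusion path)
    Returns (mean translation, pixelwise-std uncertainty map).
    """
    rng = np.random.default_rng(seed)
    outs = np.stack([translate_one_path(prior_img, rng) for _ in range(n_paths)])
    return outs.mean(axis=0), outs.std(axis=0)

# Toy stand-in for one reverse-diffusion path: the prior plus small noise.
def toy_path(prior, rng):
    return prior + 0.01 * rng.standard_normal(prior.shape)

mean_img, unc = multi_path_estimate(np.zeros((8, 8)), toy_path)
```

Where the paths disagree most, the standard deviation map is largest, which is what lets such an ensemble flag regions of likely translation error.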
Affiliation(s)
- Yinchi Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Tianqi Chen
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jun Hou
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Nicha C Dvornek
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Bo Zhou
- Department of Radiology, Northwestern University, Chicago, IL, USA
2.
Chen X, Zhou B, Guo X, Xie H, Liu Q, Duncan JS, Sinusas AJ, Liu C. DuDoCFNet: Dual-Domain Coarse-to-Fine Progressive Network for Simultaneous Denoising, Limited-View Reconstruction, and Attenuation Correction of Cardiac SPECT. IEEE Trans Med Imaging 2024; 43:3110-3125. PMID: 38578853. DOI: 10.1109/tmi.2024.3385650.
Abstract
Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of coronary artery diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-view (LV) SPECT, such as the latest GE MyoSPECT ES system, enables accelerated scanning and reduces hardware expenses but degrades reconstruction accuracy. Additionally, Computed Tomography (CT) is commonly used to derive attenuation maps (μ-maps) for attenuation correction (AC) of cardiac SPECT, but it introduces additional radiation exposure and SPECT-CT misalignments. Although various methods have been developed to solely focus on LD denoising, LV reconstruction, or CT-free AC in SPECT, a solution that simultaneously addresses these tasks remains challenging and under-explored. Furthermore, it is essential to explore the potential of fusing cross-domain and cross-modality information across these interrelated tasks to further enhance the accuracy of each task. Thus, we propose a Dual-Domain Coarse-to-Fine Progressive Network (DuDoCFNet), a multi-task learning method for simultaneous LD denoising, LV reconstruction, and CT-free μ-map generation of cardiac SPECT. Paired dual-domain networks in DuDoCFNet are cascaded using a multi-layer fusion mechanism for cross-domain and cross-modality feature fusion. Two-stage progressive learning strategies are applied in both the projection and image domains to achieve coarse-to-fine estimations of SPECT projections and CT-derived μ-maps. Our experiments demonstrate DuDoCFNet's superior accuracy in estimating projections, generating μ-maps, and AC reconstructions compared to existing single- or multi-task learning methods, under various iterations and LD levels. The source code of this work is available at https://github.com/XiongchaoChen/DuDoCFNet-MultiTask.
3.
Peng S, Wang Y, Bian Z, Ma J, Huang J. [A dual-domain cone beam computed tomography reconstruction framework with improved differentiable domain transform for cone-angle artifact correction]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2024; 44:1188-1197. Article in Chinese. PMID: 38977350. PMCID: PMC11237300. DOI: 10.12122/j.issn.1673-4254.2024.06.21.
Abstract
OBJECTIVE We propose a dual-domain cone beam computed tomography (CBCT) reconstruction framework, DualCBR-Net, based on an improved differentiable domain transform for cone-angle artifact correction. METHODS The proposed framework consists of three individual modules: projection preprocessing, differentiable domain transform, and image post-processing. The projection preprocessing module first extends the original projection data in the row direction to ensure full X-ray coverage of the scanned object. The differentiable domain transform introduces the FDK reconstruction and forward projection operators to complete the forward and gradient backpropagation passes, where the geometric parameters correspond to the extended data dimension to provide crucial prior information in the forward pass of the network and to ensure accuracy in gradient backpropagation, thus enabling precise learning of cone-beam region data. The image post-processing module further fine-tunes the domain-transformed image to remove residual artifacts and noise. RESULTS Validation experiments conducted on Mayo's public chest dataset showed that the proposed DualCBR-Net framework was superior to the comparison methods in terms of artifact removal and structural detail preservation. Compared with the latest methods, DualCBR-Net improved PSNR and SSIM by 0.6479 and 0.0074, respectively. CONCLUSION The proposed DualCBR-Net framework for cone-angle artifact correction enables effective joint training of the CBCT dual-domain network and is especially effective in large cone-angle regions.
4.
Qiao Z, Liu P, Fang C, Redler G, Epel B, Halpern H. Directional TV algorithm for image reconstruction from sparse-view projections in EPR imaging. Phys Med Biol 2024; 69:115051. PMID: 38729205. DOI: 10.1088/1361-6560/ad4a1b.
Abstract
Objective. Electron paramagnetic resonance (EPR) imaging is an advanced in vivo oxygen imaging modality. Its main drawback is the long scanning time. Collecting sparse-view projections is an effective fast scanning pattern, but the commonly used filtered back projection (FBP) algorithm cannot accurately reconstruct images from sparse-view projections because of severe streak artifacts. The aim of this work is to develop an advanced algorithm for sparse reconstruction in 3D EPR imaging. Methods. Optimization-based algorithms, including the total variation (TV) algorithm, have proven effective for sparse reconstruction in EPR imaging. To further improve reconstruction accuracy, we propose the directional TV (DTV) model and derive its Chambolle-Pock solving algorithm. Results. After validating the correctness of the algorithm on simulated data, we explore the sparse reconstruction capability of the DTV algorithm via a simulated six-sphere phantom and two real bottle phantoms filled with OX063 trityl solution and scanned by an EPR imager with a magnetic field strength of 250 G. Conclusion. Both the simulated and real data experiments show that the DTV algorithm is superior to the existing FBP and TV-type algorithms and to a deep learning-based method, according to visual inspection and quantitative evaluation, for sparse reconstruction in EPR imaging. Significance. These insights may be used in the development of fast EPR imaging workflows of practical significance.
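As background for the Chambolle-Pock solver mentioned in this abstract, the scheme can be illustrated on the plain (non-directional) ROF/TV denoising model, min_x 0.5·||x − f||² + λ·TV(x). This is a generic textbook sketch of the primal-dual iteration, not the paper's DTV algorithm, and the step sizes follow the standard τσ·L² ≤ 1 rule.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2-D image."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad (backward differences)."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_cp(f, lam=0.2, n_iter=100):
    """Chambolle-Pock primal-dual iterations for the ROF model."""
    L2 = 8.0                           # upper bound on ||grad||^2 on a 2-D grid
    tau = sigma = 1.0 / np.sqrt(L2)    # tau * sigma * L2 = 1
    x = f.copy(); x_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(x_bar)
        px, py = px + sigma * gx, py + sigma * gy
        nrm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)  # project |p| <= lam
        px, py = px / nrm, py / nrm
        x_old = x
        x = (x + tau * div(px, py) + tau * f) / (1.0 + tau)  # prox of data term
        x_bar = 2 * x - x_old
    return x
```

In the paper's setting the data term couples the image to projections through the EPR forward operator and the regularizer is directional, but the projection-then-prox structure of the iteration is the same.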
Affiliation(s)
- Zhiwei Qiao
- School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, People's Republic of China
- Peng Liu
- School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, People's Republic of China
- Department of Big Data and Intelligent Engineering, Shanxi Institute of Technology, Yangquan, Shanxi, People's Republic of China
- Chenyun Fang
- School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, People's Republic of China
- Gage Redler
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, FL, United States of America
- Boris Epel
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, IL, United States of America
- Howard Halpern
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, IL, United States of America
5.
Li X, Jing K, Yang Y, Wang Y, Ma J, Zheng H, Xu Z. Noise-Generating and Imaging Mechanism Inspired Implicit Regularization Learning Network for Low Dose CT Reconstruction. IEEE Trans Med Imaging 2024; 43:1677-1689. PMID: 38145543. DOI: 10.1109/tmi.2023.3347258.
Abstract
Low-dose computed tomography (LDCT) helps to reduce radiation risks in CT scanning while maintaining image quality, which involves a consistent pursuit of lower incident rays and higher reconstruction performance. Although deep learning approaches have achieved encouraging success in LDCT reconstruction, most of them treat the task as a general inverse problem in either the image domain or the dual (sinogram and image) domains. Such frameworks have not considered the original noise generation of the projection data and suffer from limited performance improvement for the LDCT task. In this paper, we propose a novel reconstruction model based on the noise-generating and imaging mechanism in the full domain, which fully considers the statistical properties of intrinsic noise in LDCT and prior information in the sinogram and image domains. To solve the model, we propose an optimization algorithm based on the proximal gradient technique. Specifically, we derive the approximate solutions of the integer programming problem on the projection data theoretically. Instead of hand-crafting the sinogram and image regularizers, we propose to unroll the optimization algorithm into a deep network. The network implicitly learns the proximal operators of the sinogram and image regularizers with two deep neural networks, providing a more interpretable and effective reconstruction procedure. Numerical results demonstrate that our proposed method improves on current state-of-the-art LDCT methods by more than 2.9 dB in peak signal-to-noise ratio, more than 1.4% in the structural similarity metric, and more than 9 HU in root mean square error.
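The unrolling idea in this abstract, proximal-gradient iterations whose proximal operators are replaced by learned networks, can be illustrated with classical ISTA on a sparse linear inverse problem. The soft-threshold below is a hand-crafted stand-in for the paper's learned regularizers, and the small Gaussian system is toy data, not a CT model.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the hand-crafted stand-in that an
    unrolled network would replace with a learned CNN)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, y, lam=0.1, n_iter=200):
    """Proximal-gradient (ISTA) iterations for min_x 0.5*||Ax-y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then the regularizer's prox
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true
x_hat = prox_gradient(A, y, lam=0.05, n_iter=500)
```

Unrolling fixes a small number of such iterations, makes the step sizes and the prox learnable, and trains the whole chain end to end.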
6.
Selles M, Wellenberg RHH, Slotman DJ, Nijholt IM, van Osch JAC, van Dijke KF, Maas M, Boomsma MF. Image quality and metal artifact reduction in total hip arthroplasty CT: deep learning-based algorithm versus virtual monoenergetic imaging and orthopedic metal artifact reduction. Eur Radiol Exp 2024; 8:31. PMID: 38480603. PMCID: PMC10937891. DOI: 10.1186/s41747-024-00427-3.
Abstract
BACKGROUND To compare image quality, metal artifacts, and diagnostic confidence of conventional computed tomography (CT) images of patients with unilateral total hip arthroplasty (THA) reconstructed with deep learning-based metal artifact reduction (DL-MAR) against conventional CT and 130-keV monoenergetic images with and without orthopedic metal artifact reduction (O-MAR). METHODS Conventional CT, 130-keV monoenergetic images with and without O-MAR, and DL-MAR images of 28 unilateral THA patients were reconstructed. Image quality, metal artifacts, and diagnostic confidence in bone, pelvic organs, and soft tissue adjacent to the prosthesis were jointly scored by two experienced musculoskeletal radiologists. Contrast-to-noise ratios (CNR) between bladder and fat and between muscle and fat were measured. Wilcoxon signed-rank tests with Holm-Bonferroni correction were used. RESULTS Significantly higher image quality, higher diagnostic confidence, and less severe metal artifacts were observed on DL-MAR and O-MAR images compared to images without O-MAR (p < 0.001 for all comparisons). Higher image quality, higher diagnostic confidence for bone and soft tissue adjacent to the prosthesis, and less severe metal artifacts were observed on DL-MAR compared to conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.014). CNRs were higher for DL-MAR and O-MAR images compared to images without O-MAR (p < 0.001). Higher CNRs were observed on DL-MAR images compared to conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.010). CONCLUSIONS DL-MAR showed higher image quality, diagnostic confidence, and superior metal artifact reduction compared to conventional CT and 130-keV monoenergetic images with and without O-MAR in unilateral THA patients.
RELEVANCE STATEMENT DL-MAR resulted in improved image quality, stronger reduction of metal artifacts, and improved diagnostic confidence compared to conventional and virtual monoenergetic images with and without metal artifact reduction, bringing DL-based metal artifact reduction closer to clinical application.
KEY POINTS
- Metal artifacts introduced by total hip arthroplasty hamper radiologic assessment on CT.
- A deep-learning algorithm (DL-MAR) was compared to dual-layer CT images with O-MAR.
- DL-MAR showed the best image quality and diagnostic confidence.
- The highest contrast-to-noise ratios were observed on the DL-MAR images.
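The contrast-to-noise ratio used in this evaluation admits a simple textbook form: the absolute difference of two ROI means divided by the standard deviation of a homogeneous noise ROI. The exact ROI placement is study-specific, so the HU values below are toy data, not the study's measurements.

```python
import numpy as np

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs.

    A common definition: |mean(A) - mean(B)| divided by the standard
    deviation of a homogeneous noise/background ROI.
    """
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

# Toy HU samples: muscle around 60 HU, fat around -100 HU, both with sigma ~5.
rng = np.random.default_rng(0)
muscle = 60 + 5 * rng.standard_normal(500)
fat = -100 + 5 * rng.standard_normal(500)
value = cnr(muscle, fat, fat)
```

Stronger artifact reduction lowers the noise term and restores the tissue means, which is why the CNR rankings in the abstract track the qualitative scores.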
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Ruud H H Wellenberg
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Derk J Slotman
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Ingrid M Nijholt
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Kees F van Dijke
- Department of Radiology & Nuclear Medicine, Noordwest Ziekenhuisgroep, 1815 JD, Alkmaar, the Netherlands
- Mario Maas
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
7.
Li S, Chen K, Ma X, Liang Z. Semi-supervised low-dose SPECT restoration using sinogram inner-structure aware graph neural network. Phys Med Biol 2024; 69:055016. PMID: 38324896. DOI: 10.1088/1361-6560/ad2716.
Abstract
Objective. To mitigate the potential radiation risk, low-dose single photon emission computed tomography (SPECT) is of increasing interest. Numerous deep learning-based methods have been developed to perform low-dose imaging while maintaining image quality. However, most existing methods seldom explore the unique inner structure inherent in sinograms. In addition, traditional supervised learning methods require large-scale labeled data, where the normal-dose data serve as annotation and are intractable to acquire in low-dose imaging. In this study, we aim to develop a novel sinogram inner-structure-aware semi-supervised framework for low-dose SPECT sinogram restoration. Approach. The proposed framework retains the strengths of UNet, while introducing a sinogram-structure-based non-local neighbors graph neural network (SSN-GNN) module and a window-based K-nearest-neighbors GNN (W-KNN-GNN) module to effectively exploit the inherent inner structure of SPECT sinograms. Moreover, the framework employs the mean-teacher semi-supervised learning approach to leverage the information available in abundant unlabeled low-dose sinograms. Main results. The datasets used in this study were acquired from the XCAT (Extended Cardiac-Torso) anthropomorphic digital phantoms, which provide realistic images for imaging research across various modalities. Quantitative as well as qualitative results demonstrate that the proposed framework achieves superior performance compared to several state-of-the-art reconstruction methods. To further validate its effectiveness, ablation and robustness experiments were also performed. The experimental results show that each component of the framework effectively improves model performance, and the framework exhibits superior robustness across various noise levels. Moreover, the proposed semi-supervised paradigm showcases the efficacy of incorporating supplementary unlabeled low-dose sinograms. Significance. The proposed framework improves the quality of low-dose SPECT reconstructed images by utilizing sinogram inner structure and incorporating supplementary unlabeled data, providing an important tool for dose reduction without sacrificing image quality.
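The mean-teacher update at the heart of the semi-supervised scheme mentioned above is an exponential moving average (EMA) of the student's weights; the teacher's predictions on unlabeled sinograms then supply the consistency target. The dict-of-arrays "weights" below are a deliberate simplification of real network parameters.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher step: teacher weights become an exponential moving
    average of student weights (alpha close to 1 = slow-moving teacher)."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

# Toy parameters: the teacher drifts toward a (here, fixed) student.
teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(10):
    teacher = ema_update(teacher, student, alpha=0.9)
```

After n steps against a fixed student the teacher sits at 1 − αⁿ of the way there, which is why the teacher is a smoothed, more stable copy of the student rather than a second trained network.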
Affiliation(s)
- Si Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, People's Republic of China
- Keming Chen
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, People's Republic of China
- Xiangyuan Ma
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, People's Republic of China
- Zengguo Liang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, People's Republic of China
8.
Lin J, Li J, Dou J, Zhong L, Di J, Qin Y. Dual-Domain Reconstruction Network Incorporating Multi-Level Wavelet Transform and Recurrent Convolution for Sparse View Computed Tomography Imaging. Tomography 2024; 10:133-158. PMID: 38250957. PMCID: PMC11154272. DOI: 10.3390/tomography10010011.
Abstract
Sparse view computed tomography (SVCT) aims to reduce the number of X-ray projection views required for reconstructing the cross-sectional image of an object. While SVCT significantly reduces X-ray radiation dose and speeds up scanning, insufficient projection data give rise to issues such as severe streak artifacts and blurring in reconstructed images, thereby impacting the diagnostic accuracy of CT detection. To address this challenge, a dual-domain reconstruction network incorporating multi-level wavelet transform and recurrent convolution is proposed in this paper. The dual-domain network is composed of a sinogram domain network (SDN) and an image domain network (IDN). Multi-level wavelet transform is employed in both IDN and SDN to decompose sinograms and CT images into distinct frequency components, which are then processed through separate network branches to recover detailed information within their respective frequency bands. To capture global textures, artifacts, and shallow features in sinograms and CT images, a recurrent convolution unit (RCU) based on convolutional long short-term memory (Conv-LSTM) is designed, which can model their long-range dependencies through recurrent calculation. Additionally, a self-attention-based multi-level frequency feature normalization fusion (MFNF) block is proposed to assist in recovering high-frequency components by aggregating low-frequency components. Finally, an edge loss function based on the Laplacian of Gaussian (LoG) is designed as the regularization term for enhancing the recovery of high-frequency edge structures. The experimental results demonstrate the effectiveness of our approach in reducing artifacts and enhancing the reconstruction of intricate structural details across various sparse views and noise levels. Our method excels in both performance and robustness, as evidenced by its superior outcomes in numerous qualitative and quantitative assessments, surpassing contemporary state-of-the-art CNN- or Transformer-based reconstruction methods.
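The frequency split underlying such wavelet-based branches can be shown with a single level of the 2-D Haar transform, which separates an image into one low-frequency (LL) and three high-frequency (LH, HL, HH) sub-bands with perfect reconstruction. The paper uses multi-level transforms inside a learned network; this sketch is only the underlying transform, for even side lengths.

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar transform: image -> (LL, LH, HL, HH)."""
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)      # split rows
    def cols(u):
        c, d = u[:, 0::2], u[:, 1::2]
        return (c + d) / np.sqrt(2), (c - d) / np.sqrt(2)    # split columns
    ll, lh = cols(lo)
    hl, hh = cols(hi)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    def icols(c, d):
        u = np.empty((c.shape[0], 2 * c.shape[1]))
        u[:, 0::2], u[:, 1::2] = (c + d) / np.sqrt(2), (c - d) / np.sqrt(2)
        return u
    lo, hi = icols(ll, lh), icols(hl, hh)
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2, :], x[1::2, :] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x
```

Recursing haar2d on the LL band gives the multi-level decomposition; in the network, each band feeds its own branch and the inverse transform merges the processed bands back into a sinogram or image.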
Affiliation(s)
- Juncheng Lin
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Guangdong University of Technology, Guangzhou 510006, China
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Jialin Li
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Guangdong University of Technology, Guangzhou 510006, China
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Jiazhen Dou
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Guangdong University of Technology, Guangzhou 510006, China
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Liyun Zhong
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Guangdong University of Technology, Guangzhou 510006, China
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Jianglei Di
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Guangdong University of Technology, Guangzhou 510006, China
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Yuwen Qin
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Guangdong University of Technology, Guangzhou 510006, China
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
9.
Selles M, van Osch JAC, Maas M, Boomsma MF, Wellenberg RHH. Advances in metal artifact reduction in CT images: A review of traditional and novel metal artifact reduction techniques. Eur J Radiol 2024; 170:111276. PMID: 38142571. DOI: 10.1016/j.ejrad.2023.111276.
Abstract
Metal artifacts degrade CT image quality, hampering clinical assessment. Numerous metal artifact reduction methods are available to improve the image quality of CT images with metal implants. This review provides an overview of traditional methods, including modification of acquisition and reconstruction parameters, projection-based metal artifact reduction (MAR) techniques, dual-energy CT (DECT), and combinations of these techniques. Furthermore, the additional value and challenges of novel metal artifact reduction techniques introduced over the past years, such as photon-counting CT (PCCT) and deep learning-based metal artifact reduction, are discussed.
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Mario Maas
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Ruud H H Wellenberg
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
10.
Zhou B, Xie H, Liu Q, Chen X, Guo X, Feng Z, Hou J, Zhou SK, Li B, Rominger A, Shi K, Duncan JS, Liu C. FedFTN: Personalized federated learning with deep feature transformation network for multi-institutional low-count PET denoising. Med Image Anal 2023; 90:102993. PMID: 37827110. PMCID: PMC10611438. DOI: 10.1016/j.media.2023.102993.
Abstract
Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential for improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns around patient data. Moreover, low-count PET data at different institutions may have different data distributions, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without aggregating local data, addressing the large domain shift in multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.
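The communication pattern described here, aggregate only the shared denoiser while each site keeps its feature-transformation module local, can be sketched as one federated round of weight averaging. The dict-of-arrays states are a stand-in for real model parameters, not the paper's code.

```python
import numpy as np

def federated_round(shared_states, local_states):
    """One communication round in the FedFTN spirit: average only the shared
    denoiser weights across sites; each site's local (FTN-like) weights are
    never communicated or aggregated.

    shared_states : list of dicts, one per site (shared denoiser weights)
    local_states  : list of dicts, one per site (site-private weights)
    Returns a list of (averaged shared weights, untouched local weights).
    """
    keys = shared_states[0].keys()
    global_shared = {k: np.mean([s[k] for s in shared_states], axis=0) for k in keys}
    # every site receives a copy of the averaged shared weights
    return [(dict(global_shared), local) for local in local_states]

sites_shared = [{"w": np.full(2, 1.0)}, {"w": np.full(2, 3.0)}]
sites_local = [{"ftn": 0.1}, {"ftn": 0.2}]
updated = federated_round(sites_shared, sites_local)
```

Because the local part never leaves a site, each institution keeps a personalized model while still benefiting from the jointly trained shared denoiser.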
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Zhicheng Feng
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Jun Hou
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
11
Shrestha P, LaManna JM, Fahy KF, Kim P, Lee C, Lee JK, Baltic E, Jacobson DL, Hussey DS, Bazylak A. Simultaneous multimaterial operando tomography of electrochemical devices. Sci Adv 2023; 9:eadg8634. [PMID: 37939178 PMCID: PMC10631724 DOI: 10.1126/sciadv.adg8634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 10/06/2023] [Indexed: 11/10/2023]
Abstract
The performance of electrochemical energy devices, such as fuel cells and batteries, is dictated by intricate physiochemical processes within. To better understand and rationally engineer these processes, we need robust operando characterization tools that detect and distinguish multiple interacting components/interfaces in high contrast. Here, we uniquely combine dual-modality tomography (simultaneous neutron and x-ray tomography) and advanced image processing (iterative reconstruction and metal artifact reduction) for high-contrast multimaterial imaging, with signal and contrast enhancements of up to 10 and 48 times, respectively, compared to conventional single-modality imaging. Targeted development and application of these methods to electrochemical devices allow us to resolve operando distributions of six interacting fuel cell components (including void space) with the highest reported pairwise contrast for simultaneous yet decoupled spatiotemporal characterization of component morphology and hydration. Such high-contrast tomography ushers in key gold standards for operando electrochemical characterization, with broader applicability to numerous multimaterial systems.
Affiliation(s)
- Pranay Shrestha
- Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Jacob M. LaManna
- Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Kieran F. Fahy
- Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Pascal Kim
- Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- ChungHyuk Lee
- Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Department of Chemical Engineering, Toronto Metropolitan University, Toronto, Ontario, Canada
- Jason K. Lee
- Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Elias Baltic
- Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- David L. Jacobson
- Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Daniel S. Hussey
- Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Aimy Bazylak
- Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
12
Li G, Ji L, You C, Gao S, Zhou L, Bai K, Luo S, Gu N. MARGANVAC: metal artifact reduction method based on generative adversarial network with variable constraints. Phys Med Biol 2023; 68:205005. [PMID: 37696272 DOI: 10.1088/1361-6560/acf8ac] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 09/11/2023] [Indexed: 09/13/2023]
Abstract
Objective. Metal artifact reduction (MAR) has been a key issue in CT imaging. Recently, MAR methods based on deep learning have achieved promising results. However, when deploying deep learning-based MAR in real-world clinical scenarios, two prominent challenges arise. One limitation is the lack of paired training data in real applications, which limits the practicality of supervised methods. The other is that image-domain methods, though applicable to a wider range of scenarios, fall short in performance, while better-performing end-to-end approaches are only applicable to fan-beam CT due to their large memory consumption. Approach. We propose a novel image-domain MAR method based on a generative adversarial network with variable constraints (MARGANVAC) to improve MAR performance. The proposed variable constraint is a time-varying cost function that relaxes the fidelity constraint at the beginning of training and gradually strengthens it as training progresses. To better deploy our image-domain supervised method in practical scenarios, we develop a transfer method that mimics real metal artifacts by first extracting the real metal traces and then adding them to artifact-free images to generate paired training data. Main results. The effectiveness of the proposed method is validated in simulated fan-beam experiments and real cone-beam experiments. All quantitative and qualitative results demonstrate that the proposed method achieves superior performance compared with the competing methods. Significance. The MARGANVAC model proposed in this paper is an image-domain model that can be conveniently applied to various scenarios such as fan-beam and cone-beam CT. At the same time, its performance is on par with cutting-edge dual-domain MAR approaches. In addition, the proposed metal artifact transfer method can easily generate paired data with real artifact features, which can be better used for model training in real scenarios.
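The "variable constraint" idea — a fidelity term whose weight starts small and grows as training progresses — can be sketched as a simple loss-weight schedule. This is an illustrative reconstruction, not the paper's actual loss: the function names and the linear ramp are assumptions.

```python
def fidelity_weight(epoch, total_epochs, w_min=0.1, w_max=1.0):
    """Linearly ramp the fidelity weight from w_min up to w_max
    over the course of training (an assumed schedule shape)."""
    t = epoch / max(total_epochs - 1, 1)
    return w_min + (w_max - w_min) * t

def generator_loss(adv_loss, l1_loss, epoch, total_epochs):
    """Adversarial term plus a time-varying fidelity (L1) term:
    loose fidelity early, strict fidelity late in training."""
    return adv_loss + fidelity_weight(epoch, total_epochs) * l1_loss
```

Early on, the generator is mostly driven by the adversarial term and can explore realistic textures; late in training, the growing fidelity weight pins the output to the reference image.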
Affiliation(s)
- Guang Li
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Longyin Ji
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Chenyu You
- Image Processing and Analysis Group (IPAG), Yale University, New Haven 06510, United States of America
- Shuai Gao
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Langrui Zhou
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Keshu Bai
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Shouhua Luo
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Ning Gu
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
13
Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023; 30:1859-1878. [PMID: 35680755 DOI: 10.1007/s12350-022-03007-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 05/02/2022] [Indexed: 10/18/2022]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA
14
Tang H, Jiang S, Lin Y, Li Y, Bao X. An improved dual-domain network for metal artifact reduction in CT images using aggregated contextual transformations. Phys Med Biol 2023; 68:175021. [PMID: 37541223 DOI: 10.1088/1361-6560/aced78] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2023] [Accepted: 08/04/2023] [Indexed: 08/06/2023]
Abstract
Objective. Metal artifact reduction (MAR) remains a challenging task due to the difficulty of removing artifacts while preserving the anatomical details of the tissue. Although current dual-domain networks have shown promising performance in MAR, they heavily rely on the image domain, which can over-smooth and lose important information in the metal-affected area. To address this problem, we propose an improved dual-domain network framework. Approach. We enhance sinogram completion performance by utilizing an aggregated contextual transformations network in the sinogram domain. Furthermore, we utilize a prior-projection-based linearized correction method to obtain images with beam-hardening artifacts removed, which are incorporated into the input of the image post-processing network to assist in training the image-domain network. Finally, we train the sinogram-domain network and the image-domain network separately to their respective convergences. Main results. In experiments conducted on a simulated dataset, our method achieves the best average RMSE of 25.1, SSIM of 0.973, and PSNR of 42.1, respectively. Significance. The proposed method is capable of preserving tissue structures near metallic objects while eliminating metal artifacts from the reconstructed images. Related code will be released at https://github.com/Corinna-China/AOTDudoNet.
Affiliation(s)
- Hui Tang
- School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, People's Republic of China
- Sudong Jiang
- School of Software Engineering, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Yubing Lin
- School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Yu Li
- School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Xudong Bao
- School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
15
Tang H, Lin YB, Jiang SD, Li Y, Li T, Bao XD. A new dental CBCT metal artifact reduction method based on a dual-domain processing framework. Phys Med Biol 2023; 68:175016. [PMID: 37524084 DOI: 10.1088/1361-6560/acec29] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 07/31/2023] [Indexed: 08/02/2023]
Abstract
Objective. Cone-beam computed tomography (CBCT) has been widely used in the clinical treatment of dental diseases. However, patients often have metallic implants in the mouth, which lead to severe metal artifacts in the reconstructed images. To reduce metal artifacts in dental CBCT images, which have a larger amount of data and a limited field of view compared to computed tomography images, this paper proposes a new dental CBCT metal artifact reduction method based on projection correction and a convolutional neural network (CNN) based image post-processing model. Approach. The proposed method consists of four stages: (1) volume reconstruction and metal segmentation in the image domain, using the forward projection to get the metal masks in the projection domain; (2) linear interpolation in the projection domain and reconstruction to build a linear interpolation (LI) corrected volume; (3) taking the LI corrected volume as a prior and performing prior-based beam-hardening correction in the projection domain; and (4) combining the projection-corrected volume and the LI volume slice-by-slice in the image domain with two concatenated U-Net based models (CNN1 and CNN2). Simulated and clinical dental CBCT cases are used to evaluate the proposed method. The normalized root mean square difference (NRMSD) and the structural similarity index (SSIM) are used for quantitative evaluation. Main results. The proposed method outperforms the frequency-domain fusion method (FS-MAR) and a state-of-the-art CNN based method on the simulated dataset and yields the best NRMSD and SSIM of 4.0196 and 0.9924, respectively. Visual results on both simulated and clinical images also illustrate that the proposed method can effectively reduce metal artifacts. Significance. This study demonstrates that the proposed dual-domain processing framework is suitable for metal artifact reduction in dental CBCT images.
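Stage (2), linear interpolation across the metal trace in the projection domain, is a standard MAR building block that can be sketched as follows. The function name and array layout (rows = projection views, columns = detector bins) are illustrative, not taken from the paper.

```python
import numpy as np

def li_complete(sinogram, metal_trace):
    """Replace metal-trace bins in each projection row by linear
    interpolation from the nearest unaffected detector bins."""
    out = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        m = metal_trace[i].astype(bool)
        # Interpolate only if the row has both corrupted and clean bins.
        if m.any() and (~m).any():
            out[i, m] = np.interp(cols[m], cols[~m], out[i, ~m])
    return out
```

Reconstructing the completed sinogram yields the LI-corrected volume that later stages use as a prior.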
Affiliation(s)
- Hui Tang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, People's Republic of China
- Yu Bing Lin
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Su Dong Jiang
- School of Software Engineering, Southeast University, Nanjing, People's Republic of China
- Yu Li
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Tian Li
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Xu Dong Bao
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
16
Li Y, Sun X, Wang S, Li X, Qin Y, Pan J, Chen P. MDST: multi-domain sparse-view CT reconstruction based on convolution and swin transformer. Phys Med Biol 2023; 68:095019. [PMID: 36889004 DOI: 10.1088/1361-6560/acc2ab] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 03/08/2023] [Indexed: 03/10/2023]
Abstract
Objective. Sparse-view computed tomography (SVCT), which can reduce the radiation doses administered to patients and hasten data acquisition, has become an area of particular interest to researchers. Most existing deep learning-based image reconstruction methods are based on convolutional neural networks (CNNs). Due to the locality of convolution and continuous sampling operations, existing approaches cannot fully model global context feature dependencies, which makes CNN-based approaches less efficient at modeling computed tomography (CT) images with varied structural information. Approach. To overcome these challenges, this paper develops a novel multi-domain optimization network based on convolution and the Swin transformer (MDST). MDST uses Swin transformer blocks as the main building blocks in both the projection (residual) domain and image (residual) domain sub-networks, modeling global and local features of the projections and reconstructed images. MDST consists of two modules for initial reconstruction and residual-assisted reconstruction, respectively. The sparse sinogram is first expanded in the initial reconstruction module with a projection-domain sub-network. Then, the sparse-view artifacts are effectively suppressed by an image-domain sub-network. Finally, the residual-assisted reconstruction module corrects the inconsistency of the initial reconstruction, further preserving image details. Main results. Extensive experiments on CT lymph node datasets and real walnut datasets show that MDST can effectively alleviate the loss of fine details caused by information attenuation and improve the reconstruction quality of medical images. Significance. The MDST network is robust and can effectively reconstruct images from projections with different noise levels. Unlike the currently prevalent CNN-based networks, MDST uses a transformer as its main backbone, which demonstrates the potential of transformers in SVCT reconstruction.
Affiliation(s)
- Yu Li
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- XueQin Sun
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- SuKai Wang
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- XuRu Li
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- YingWei Qin
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- JinXiao Pan
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- Ping Chen
- Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China
- The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
17
Selles M, Slotman DJ, van Osch JAC, Nijholt IM, Wellenberg RHH, Maas M, Boomsma MF. Is AI the way forward for reducing metal artifacts in CT? development of a generic deep learning-based method and initial evaluation in patients with sacroiliac joint implants. Eur J Radiol 2023; 163:110844. [PMID: 37119708 DOI: 10.1016/j.ejrad.2023.110844] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/01/2023]
Abstract
PURPOSE To develop a deep learning-based metal artifact reduction technique (dl-MAR) and quantitatively compare metal artifacts on dl-MAR-corrected CT-images, orthopedic metal artifact reduction (O-MAR)-corrected CT-images and uncorrected CT-images after sacroiliac (SI) joint fusion. METHODS dl-MAR was trained on CT-images with simulated metal artifacts. Pre-surgery CT-images and uncorrected, O-MAR-corrected and dl-MAR-corrected post-surgery CT-images of twenty-five patients undergoing SI joint fusion were retrospectively obtained. Image registration was applied to align pre-surgery with post-surgery CT-images within each patient, allowing placement of regions of interest (ROIs) on the same anatomical locations. Six ROIs were placed on the metal implant and the contralateral side in bone lateral of the SI joint, the gluteus medius muscle and the iliacus muscle. Metal artifacts were quantified as the difference in Hounsfield units (HU) between pre- and post-surgery CT-values within the ROIs on the uncorrected, O-MAR-corrected and dl-MAR-corrected images. Noise was quantified as standard deviation in HU within the ROIs. Metal artifacts and noise in the post-surgery CT-images were compared using linear multilevel regression models. RESULTS Metal artifacts were significantly reduced by O-MAR and dl-MAR in bone (p < 0.001), contralateral bone (O-MAR: p = 0.009; dl-MAR: p < 0.001), gluteus medius (p < 0.001), contralateral gluteus medius (p < 0.001), iliacus (p < 0.001) and contralateral iliacus (O-MAR: p = 0.024; dl-MAR: p < 0.001) compared to uncorrected images. Images corrected with dl-MAR resulted in stronger artifact reduction than images corrected with O-MAR in contralateral bone (p < 0.001), gluteus medius (p = 0.006), contralateral gluteus medius (p < 0.001), iliacus (p = 0.017), and contralateral iliacus (p < 0.001). Noise was reduced by O-MAR in bone (p = 0.009) and gluteus medius (p < 0.001) while noise was reduced by dl-MAR in all ROIs (p < 0.001) in comparison to uncorrected images. CONCLUSION dl-MAR showed superior metal artifact reduction compared to O-MAR in CT-images with SI joint fusion implants.
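The metrics defined in the methods — artifact strength as the mean HU difference between registered pre- and post-surgery images within an ROI, and noise as the standard deviation within the ROI — can be sketched as below. This is an illustrative reconstruction of the metric definitions, not the study's code; the function name is an assumption.

```python
import numpy as np

def roi_metrics(pre_ct, post_ct, roi_mask):
    """Artifact = mean HU difference (post - pre) inside the ROI;
    noise = standard deviation of post-surgery HU inside the ROI."""
    roi = roi_mask.astype(bool)
    artifact = float(post_ct[roi].mean() - pre_ct[roi].mean())
    noise = float(post_ct[roi].std())
    return artifact, noise
```

Registration of pre- and post-surgery scans (done beforehand in the study) is what makes the voxel-wise ROI comparison meaningful.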
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands; Department of Radiology & Nuclear medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Derk J Slotman
- Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands
- Ruud H H Wellenberg
- Department of Radiology & Nuclear medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Mario Maas
- Department of Radiology & Nuclear medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
18
Wang H, Li Y, Zhang H, Meng D, Zheng Y. InDuDoNet+: A deep unfolding dual domain network for metal artifact reduction in CT images. Med Image Anal 2023; 85:102729. [PMID: 36623381 DOI: 10.1016/j.media.2022.102729] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Revised: 11/27/2022] [Accepted: 12/09/2022] [Indexed: 12/25/2022]
Abstract
During the computed tomography (CT) imaging process, metallic implants within patients often cause harmful artifacts, which degrade the visual quality of reconstructed CT images and negatively affect subsequent clinical diagnosis. For the metal artifact reduction (MAR) task, current deep learning based methods have achieved promising performance. However, most of them share two main limitations: (1) the CT physical imaging geometry constraint is not comprehensively incorporated into the deep network structures; (2) the entire framework has weak interpretability for the specific MAR task, so the role of each network module is difficult to evaluate. To alleviate these issues, in this paper we construct a novel deep unfolding dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded. Concretely, we derive a joint spatial and Radon domain reconstruction model and propose an optimization algorithm with only simple operators for solving it. By unfolding the iterative steps of the proposed algorithm into the corresponding network modules, we easily build InDuDoNet+ with clear interpretability. Furthermore, we analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance. Comprehensive experiments on synthesized data and clinical data substantiate the superiority of the proposed method and its generalization performance beyond current state-of-the-art (SOTA) MAR methods. Code is available at https://github.com/hongwang01/InDuDoNet_plus.
Affiliation(s)
- Haimiao Zhang
- Beijing Information Science and Technology University, Beijing, China
- Deyu Meng
- Xi'an Jiaotong University, Xi'an, China; Peng Cheng Laboratory, Shenzhen, China; Macau University of Science and Technology, Taipa, Macao
19
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650 PMCID: PMC10010286 DOI: 10.1016/j.media.2023.102762] [Citation(s) in RCA: 30] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 02/01/2023]
Abstract
Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask the question: can Transformer models transform medical imaging? In this paper, we attempt to respond to this inquiry. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and exhibit the current research progress made in the areas of medical image segmentation, recognition, detection, registration, reconstruction, enhancement, etc. In particular, what distinguishes our review is its organization based on the Transformer's key defining properties, which are mostly derived from comparing the Transformer and CNN, and its type of architecture, which specifies the manner in which the Transformer and CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
20
Li S, Peng L, Li F, Liang Z. Low-dose sinogram restoration enabled by conditional GAN with cross-domain regularization in SPECT imaging. Math Biosci Eng 2023; 20:9728-9758. [PMID: 37322909 DOI: 10.3934/mbe.2023427] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
In order to generate high-quality single-photon emission computed tomography (SPECT) images under low-dose acquisition mode, a sinogram denoising method was studied for suppressing random oscillation and enhancing contrast in the projection domain. A conditional generative adversarial network with cross-domain regularization (CGAN-CDR) is proposed for low-dose SPECT sinogram restoration. The generator stepwise extracts multiscale sinusoidal features from a low-dose sinogram, which are then rebuilt into a restored sinogram. Long skip connections are introduced into the generator, so that the low-level features can be better shared and reused, and the spatial and angular sinogram information can be better recovered. A patch discriminator is employed to capture detailed sinusoidal features within sinogram patches; thereby, detailed features in local receptive fields can be effectively characterized. Meanwhile, a cross-domain regularization is developed in both the projection and image domains. Projection-domain regularization directly constrains the generator via penalizing the difference between generated and label sinograms. Image-domain regularization imposes a similarity constraint on the reconstructed images, which can ameliorate the issue of ill-posedness and serves as an indirect constraint on the generator. By adversarial learning, the CGAN-CDR model can achieve high-quality sinogram restoration. Finally, the preconditioned alternating projection algorithm with total variation regularization is adopted for image reconstruction. Extensive numerical experiments show that the proposed model exhibits good performance in low-dose sinogram restoration. From visual analysis, CGAN-CDR performs well in terms of noise and artifact suppression, contrast enhancement and structure preservation, particularly in low-contrast regions. From quantitative analysis, CGAN-CDR has obtained superior results in both global and local image quality metrics. 
From robustness analysis, CGAN-CDR can better recover the detailed bone structure of the reconstructed image from a higher-noise sinogram. This work demonstrates the feasibility and effectiveness of CGAN-CDR in low-dose SPECT sinogram restoration. CGAN-CDR can yield significant quality improvement in both the projection and image domains, which enables potential applications of the proposed method in real low-dose studies.
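As a rough illustration of how the cross-domain regularization described above might be assembled into one generator objective, the sketch below combines an adversarial term with projection-domain and image-domain penalties. This is a hypothetical simplification, not the authors' implementation: the loss weights, the patch-discriminator scores, and the reconstruction inputs are all stand-ins.

```python
import numpy as np

def cross_domain_generator_loss(restored_sino, label_sino,
                                recon, label_recon,
                                disc_score,
                                lam_proj=10.0, lam_img=1.0):
    """Hypothetical sketch of a CGAN-CDR-style generator objective:
    adversarial term + projection-domain L1 penalty + image-domain
    similarity constraint on the reconstructed images."""
    adv = -np.log(disc_score + 1e-8).mean()           # try to fool the patch discriminator
    proj = np.abs(restored_sino - label_sino).mean()  # projection-domain regularization
    img = ((recon - label_recon) ** 2).mean()         # image-domain regularization
    return adv + lam_proj * proj + lam_img * img
```

A worse sinogram restoration raises the projection-domain term, so the total loss grows, which is the direct constraint on the generator the abstract describes.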
Affiliation(s)
- Si Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Limei Peng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Fenghuan Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Zengguo Liang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
21
Zhu M, Zhu Q, Song Y, Guo Y, Zeng D, Bian Z, Wang Y, Ma J. Physics-informed sinogram completion for metal artifact reduction in CT imaging. Phys Med Biol 2023; 68. [PMID: 36808913 DOI: 10.1088/1361-6560/acbddf] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 02/21/2023] [Indexed: 02/23/2023]
Abstract
Objective. Metal artifacts in computed tomography (CT) imaging adversely affect clinical diagnosis and treatment outcomes. Most metal artifact reduction (MAR) methods easily lead to over-smoothing and loss of structural detail near the metal implants, especially for implants with irregular, elongated shapes. To address this problem, we present the physics-informed sinogram completion (PISC) method for MAR in CT imaging, which reduces metal artifacts and recovers more structural texture. Approach. Specifically, the original uncorrected sinogram is first completed by a normalized linear interpolation algorithm to reduce metal artifacts. Simultaneously, the uncorrected sinogram is also corrected based on a beam-hardening-correction physical model, to recover the latent structure information in the metal trajectory region by leveraging the attenuation characteristics of different materials. Both corrected sinograms are fused with pixel-wise adaptive weights, which are manually designed according to the shape and material information of the metal implants. To further reduce artifacts and improve CT image quality, a post-processing frequency-split algorithm is adopted to yield the final corrected CT image after reconstructing the fused sinogram. Main results. We qualitatively and quantitatively evaluated the presented PISC method on two simulated datasets and three real datasets. All results demonstrate that the presented PISC method can effectively correct metal implants of various shapes and materials, in terms of artifact suppression and structure preservation. Significance. We proposed a sinogram-domain MAR method that compensates for the over-smoothing problem of most MAR methods by taking advantage of physical prior knowledge, and it has the potential to improve the performance of deep-learning-based MAR approaches.
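The first step of such a pipeline, completing the metal-trace region of the sinogram by interpolation along the detector axis, can be sketched as follows. This is a plain linear interpolation only; the normalization against a prior sinogram, the beam-hardening branch, and the adaptive fusion described in the abstract are omitted.

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Fill the metal-trace region of a sinogram, row by row, with
    linear interpolation along the detector axis. A simplified stand-in
    for the interpolation-based completion step; metal_mask is a
    boolean array of the same shape marking the metal trajectory."""
    completed = sinogram.astype(float).copy()
    n_det = sinogram.shape[1]
    cols = np.arange(n_det)
    for a in range(sinogram.shape[0]):  # loop over projection angles
        bad = metal_mask[a]
        if bad.any() and not bad.all():
            # interpolate corrupted detector bins from the clean ones
            completed[a, bad] = np.interp(cols[bad], cols[~bad], sinogram[a, ~bad])
    return completed
```

In the full method this interpolated sinogram would be fused, pixel-wise, with a physics-corrected sinogram before reconstruction.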
Affiliation(s)
- Manman Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Qisen Zhu
- Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yuyan Song
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yi Guo
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Dong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Zhaoying Bian
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yongbo Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
22
Zhou B, Miao T, Mirian N, Chen X, Xie H, Feng Z, Guo X, Li X, Zhou SK, Duncan JS, Liu C. Federated Transfer Learning for Low-dose PET Denoising: A Pilot Study with Simulated Heterogeneous Data. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2023; 7:284-295. [PMID: 37789946 PMCID: PMC10544830 DOI: 10.1109/trpms.2022.3194408] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep-learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstructions. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may follow different low-dose protocols, leading to non-identical data distributions. While federated learning (FL) algorithms enable multi-institution collaborative training without aggregating local data, it is challenging for previous methods to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET remains under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Our experimental results on simulated multi-institutional data demonstrate that our method can efficiently utilize heterogeneous low-dose data, without compromising data privacy, to achieve superior low-dose PET denoising performance for institutions with different low-dose settings, compared with previous FL methods.
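The federated part of such a framework can be sketched in a few lines. This is a generic FedAvg-style round plus a local fine-tuning step, not the paper's FTL algorithm: equal-weight averaging and the parameter/gradient dictionary interface are assumptions made for illustration.

```python
import numpy as np

def fedavg(local_weights):
    """Server step: average model parameters across institutions
    (equal weighting for simplicity). Only weights are shared;
    raw patient data never leaves a site."""
    return {k: np.mean([w[k] for w in local_weights], axis=0)
            for k in local_weights[0]}

def local_adapt(global_weights, grads, lr=0.1):
    """Hypothetical transfer step: each site fine-tunes the shared
    model on its own low-dose protocol. `grads` stands in for gradients
    computed on local data."""
    return {k: global_weights[k] - lr * grads[k] for k in global_weights}
```

The domain-shift problem the abstract raises is exactly why a plain average like this can be insufficient when sites use very different low-dose settings, motivating the transfer-learning addition.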
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Tianshun Miao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Zhicheng Feng
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90007, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Xiaoxiao Li
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China and the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- James S Duncan
- Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Chi Liu
- Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
23
Wu X, Gao P, Zhang P, Shang Y, He B, Zhang L, Jiang J, Hui H, Tian J. Cross-domain knowledge transfer based parallel-cascaded multi-scale attention network for limited view reconstruction in projection magnetic particle imaging. Comput Biol Med 2023; 158:106809. [PMID: 37004433 DOI: 10.1016/j.compbiomed.2023.106809] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 02/20/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023]
Abstract
Projection magnetic particle imaging (MPI) can significantly improve the temporal resolution of three-dimensional (3D) imaging compared with traditional point-by-point scanning. However, the dense set of projection views required for tomographic reconstruction limits the scope for optimizing temporal resolution. In computed tomography (CT), this problem is addressed by reconstructing from limited-view projections (sparse-view or limited-angle), with methods falling into two categories: completing the limited-view sinogram, and post-processing images to remove the streaking artifacts caused by insufficient projections. Benefiting from large-scale CT datasets, both categories of deep-learning-based methods have achieved tremendous progress; yet, MPI suffers from data scarcity. We propose a cross-domain knowledge transfer learning strategy that transfers the prior knowledge of the limited-view problem, learned by a model on CT, to MPI, which can help reduce the network's requirement for real MPI data. In addition, the size of the imaging target affects the scale of the streaking artifacts caused by insufficient projections. Therefore, we propose a parallel-cascaded multi-scale attention module that allows the network to adaptively identify streaking artifacts at different scales. The proposed method was evaluated on real phantom and in vivo mouse data, and it significantly outperformed several advanced limited-view methods. The streaking artifacts caused by an insufficient number of projections can be overcome using the proposed method.
Affiliation(s)
- Xiangjun Wu
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Pengli Gao
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Peng Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Yaxin Shang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Bingxi He
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Liwen Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jingying Jiang
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Hui Hui
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jie Tian
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital, Jinan University, Zhuhai, China
24
Chen X, Zhou B, Xie H, Miao T, Liu H, Holler W, Lin M, Miller EJ, Carson RE, Sinusas AJ, Liu C. DuDoSS: Deep-learning-based dual-domain sinogram synthesis from sparsely sampled projections of cardiac SPECT. Med Phys 2023; 50:89-103. [PMID: 36048541 PMCID: PMC9868054 DOI: 10.1002/mp.15958] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 08/04/2022] [Accepted: 08/19/2022] [Indexed: 01/26/2023] Open
Abstract
PURPOSE Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. In clinical practice, the long scanning procedures and acquisition time might induce patient anxiety and discomfort, motion artifacts, and misalignments between SPECT and computed tomography (CT). Reducing the number of projection angles provides a solution that results in a shorter scanning time. However, fewer projection angles might cause lower reconstruction accuracy, higher noise levels, and reconstruction artifacts due to reduced angular sampling. We developed a deep-learning-based approach for high-quality SPECT image reconstruction using sparsely sampled projections. METHODS We proposed a novel deep-learning-based dual-domain sinogram synthesis (DuDoSS) method to recover full-view projections from sparsely sampled projections of cardiac SPECT. DuDoSS utilized the SPECT images predicted in the image domain as guidance to generate synthetic full-view projections in the sinogram domain. The synthetic projections were then reconstructed into non-attenuation-corrected and attenuation-corrected (AC) SPECT images for voxel-wise and segment-wise quantitative evaluations in terms of normalized mean square error (NMSE) and absolute percent error (APE). Previous deep-learning-based approaches, including direct sinogram generation (Direct Sino2Sino) and direct image prediction (Direct Img2Img), were tested in this study for comparison. The dataset used in this study included a total of 500 anonymized clinical stress-state MPI studies acquired on a GE NM/CT 850 scanner with 60 projection angles following the injection of 99mTc-tetrofosmin. RESULTS Our proposed DuDoSS generated synthetic projections and SPECT images that were more consistent with the ground truth than those of other approaches.
The average voxel-wise NMSE between the synthetic projections by DuDoSS and the ground-truth full-view projections was 2.08% ± 0.81%, as compared to 2.21% ± 0.86% (p < 0.001) by Direct Sino2Sino. The average voxel-wise NMSE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 1.63% ± 0.72%, as compared to 1.84% ± 0.79% (p < 0.001) by Direct Sino2Sino and 1.90% ± 0.66% (p < 0.001) by Direct Img2Img. The average segment-wise APE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 3.87% ± 3.23%, as compared to 3.95% ± 3.21% (p = 0.023) by Direct Img2Img and 4.46% ± 3.58% (p < 0.001) by Direct Sino2Sino. CONCLUSIONS Our proposed DuDoSS can generate accurate synthetic full-view projections from sparsely sampled projections for cardiac SPECT. The synthetic projections and reconstructed SPECT images generated from DuDoSS are more consistent with the ground-truth full-view projections and SPECT images than those of other approaches. DuDoSS can potentially enable fast data acquisition of cardiac SPECT.
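The core dual-domain idea, keeping the angles that were actually acquired and filling the rest from an image-domain prediction, reduces to a simple fusion once a guidance sinogram exists. The sketch below is an illustrative reduction, not the DuDoSS network: in the paper the guidance comes from forward-projecting the image-domain prediction, whereas here it is simply passed in.

```python
import numpy as np

def synthesize_full_view(sparse_sino, guidance_sino, measured):
    """Sketch of dual-domain sinogram synthesis: keep the rows acquired
    at the sparse projection angles, and fill the missing angles from a
    guidance sinogram derived from an image-domain prediction.
    `measured` is a boolean mask over projection angles (rows)."""
    full = guidance_sino.astype(float).copy()
    full[measured] = sparse_sino[measured]  # trust real measurements where available
    return full
```

The measured rows act as a hard data-consistency constraint, so the network only has to be right where no data were acquired.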
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Tianshun Miao
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Hui Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Visage Imaging, Inc., San Diego, California, United States, 92130
- Edward J. Miller
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, Connecticut, United States, 06511
- Richard E. Carson
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Albert J. Sinusas
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, Connecticut, United States, 06511
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
25
Zhou B, Chen X, Xie H, Zhou SK, Duncan JS, Liu C. DuDoUFNet: Dual-Domain Under-to-Fully-Complete Progressive Restoration Network for Simultaneous Metal Artifact Reduction and Low-Dose CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3587-3599. [PMID: 35816532 PMCID: PMC9812027 DOI: 10.1109/tmi.2022.3189759] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
To reduce the potential risk of radiation to the patient, low-dose computed tomography (LDCT) has been widely adopted in clinical practice for reconstructing cross-sectional images from sinograms acquired with reduced x-ray flux. LDCT image quality is often degraded by different levels of noise depending on the low-dose protocol. Image quality degrades further when the patient has metallic implants: the image suffers from additional streak artifacts along with further amplified noise, affecting medical diagnosis and other CT-related applications. Previous studies mainly focused either on LDCT denoising without considering metallic implants or on full-dose CT metal artifact reduction (MAR). Directly applying previous LDCT or MAR approaches to simultaneous metal artifact reduction and low-dose CT reconstruction (MARLD) may yield sub-optimal results. In this work, we develop a dual-domain under-to-fully-complete progressive restoration network, called DuDoUFNet, for MARLD. DuDoUFNet aims to reconstruct images with substantially reduced noise and artifacts via progressive sinogram-to-image-domain restoration with a two-stage network design. Our experimental results demonstrate that our method provides high-quality reconstructions, superior to previous LDCT and MAR methods under various low-dose and metal settings.
26
Li Y, Han S, Zhao Y, Li F, Ji D, Zhao X, Liu D, Jian J, Hu C. Synchrotron microtomography image restoration via regularization representation and deep CNN prior. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107181. [PMID: 36257200 DOI: 10.1016/j.cmpb.2022.107181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Revised: 09/29/2022] [Accepted: 10/08/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Synchrotron-based X-ray microtomography (S-µCT) is a promising imaging technique that plays an important role in modern medical science. S-µCT systems often introduce various artifacts and noise into the reconstructed CT images, such as ring artifacts, quantum noise, and electronic noise. In most situations, such noise and artifacts occur simultaneously, which degrades image quality and affects subsequent research. Due to the complexity of the distribution of these mixed artifacts and noise, it is difficult to restore the corrupted images. To address this issue, we propose a novel algorithm to remove mixed artifacts and noise from S-µCT images simultaneously. METHODS There are two important aspects of our method. Regarding ring artifacts, because of their specific structural characteristics, regularization-based methods are more suitable; thus, low-rank tensor decomposition and total variation are utilized to represent their directional and locally piecewise-smooth properties. Moreover, to capture the implicit prior of the random noise, a convolutional neural network (CNN)-based method is used. The advantages of traditional regularization and the deep CNN are then combined and embedded in a plug-and-play framework. Hence, an efficient image restoration algorithm is proposed to address the problem of mixed artifacts and noise in S-µCT images. RESULTS Our proposed method was assessed with both simulations and real-data experiments. The qualitative results showed that the proposed method could effectively remove ring artifacts as well as random noise. The quantitative results demonstrated that the proposed method achieved the best or near-best results in terms of PSNR, SSIM, and MAE among the compared methods. CONCLUSIONS The proposed method can serve as an effective tool for restoring corrupted S-µCT images, and it has the potential to promote the application of S-µCT.
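The plug-and-play structure mentioned above alternates a data-fidelity update with a plugged-in denoiser. The minimal loop below is a sketch under stated assumptions: a 3x3 box filter stands in for the trained CNN prior, the data term is a simple ||x - y||^2, and the low-rank/TV terms the paper uses for ring artifacts are omitted.

```python
import numpy as np

def box_denoise(x):
    """Stand-in for the CNN prior: a 3x3 box filter with reflect padding."""
    p = np.pad(x, 1, mode="reflect")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def pnp_restore(y, n_iter=20, step=0.5):
    """Minimal plug-and-play loop: gradient step on the data-fidelity
    term ||x - y||^2, followed by the plugged-in denoiser."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        x = x - step * (x - y)  # data-fidelity gradient step
        x = box_denoise(x)      # prior step via the denoiser
    return x
```

Swapping `box_denoise` for a learned denoiser is exactly the "plug-and-play" design choice: the optimization scaffolding stays fixed while the prior improves.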
Affiliation(s)
- Yimin Li
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Shuo Han
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Yuqing Zhao
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Fangzhi Li
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Dongjiang Ji
- School of Science, Tianjin University of Technology and Education, Tianjin 300222, China
- Xinyan Zhao
- Liver Research Center, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China; Beijing Key Laboratory of Translational Medicine in Liver Cirrhosis and National Clinical Research Center of Digestive Disease, Beijing 100050, China
- Dayong Liu
- Tianjin Medical University School of Stomatology, Tianjin 300070, China
- Jianbo Jian
- Department of Radiation Oncology, Tianjin Medical University General Hospital, Tianjin 300070, China
- Chunhong Hu
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
27
Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022; 49:6253-6277. [DOI: 10.1002/mp.15884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 07/02/2022] [Accepted: 07/18/2022] [Indexed: 11/08/2022] Open
Affiliation(s)
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Junhyun Ahn
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, and Medical Device Innovation Center, Pohang University of Science and Technology, Pohang 37673, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
28
Hu D, Zhang Y, Liu J, Luo S, Chen Y. DIOR: Deep Iterative Optimization-Based Residual-Learning for Limited-Angle CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1778-1790. [PMID: 35100109 DOI: 10.1109/tmi.2022.3148110] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Limited-angle CT is a challenging problem in real applications. Incomplete projection data lead to severe artifacts and distortions in reconstructed images. To tackle this problem, we propose a novel reconstruction framework termed Deep Iterative Optimization-based Residual-learning (DIOR) for limited-angle CT. Instead of directly deploying the regularization term on the image space, DIOR combines iterative optimization and deep learning in the residual domain, significantly improving the convergence property and generalization ability. Specifically, asymmetric convolutional modules are adopted to strengthen the feature extraction capacity in smooth regions for deep priors. Besides, in our DIOR method, the information contained in low-frequency and high-frequency components is also evaluated by a perceptual loss to improve performance in tissue preservation. Experiments on both simulated and clinical datasets validate the performance of DIOR. Compared with existing competitive algorithms, quantitative and qualitative results show that the proposed method brings a promising improvement in artifact removal, detail restoration and edge preservation.
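The combination of classical iteration with a learned residual correction can be written generically. The sketch below is a hypothetical simplification of this family of methods, not DIOR itself: `forward`/`back` stand in for the projection operator and its adjoint, and `residual_net` is a placeholder for the trained network.

```python
import numpy as np

def iterative_residual_recon(x, y, forward, back, residual_net,
                             step=0.1, n_iter=10):
    """Sketch of deep iterative optimization with residual learning:
    each iteration takes a gradient step on the data term
    ||A x - y||^2, then applies a learned correction in the
    residual domain."""
    for _ in range(n_iter):
        grad = back(forward(x) - y)         # data-fidelity gradient A^T(Ax - y)
        x_half = x - step * grad            # classical optimization step
        x = x_half + residual_net(x_half)   # learned residual refinement
    return x
```

With a zero residual network this reduces to plain gradient descent on the data term; the learned term is what compensates for the missing angular range.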