1. Ji X, Zhuo X, Lu Y, Mao W, Zhu S, Quan G, Xi Y, Lyu T, Chen Y. Image Domain Multi-Material Decomposition Noise Suppression Through Basis Transformation and Selective Filtering. IEEE J Biomed Health Inform 2024;28:2891-2903. PMID: 38363665. DOI: 10.1109/jbhi.2023.3348135.
Abstract
Spectral CT provides material characterization capability, offering more precise material information for diagnostic purposes. However, the material decomposition process generally amplifies noise, which significantly limits the utility of the material-basis images. To mitigate this problem, an image-domain noise suppression method was proposed in this work. The method performs a basis transformation of the material-basis images based on a singular value decomposition. The noise variances of the original spectral CT images were incorporated in the matrix to be decomposed to ensure that the transformed basis images are statistically uncorrelated. Because the noise amplitudes of the transformed basis images differ, a selective filtering method was proposed that uses the low-noise transformed basis image as guidance. The method was evaluated using both numerical simulations and real clinical dual-energy CT data. Results demonstrated that, compared with existing methods, the proposed method better preserves spatial resolution and soft-tissue contrast while suppressing image noise. The proposed method is also computationally efficient and can achieve real-time noise suppression for clinical spectral CT images.
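The decorrelation step described in this abstract can be sketched numerically. This is an illustrative sketch, not the authors' implementation: the covariance values, pixel count, and use of plain NumPy are all assumptions for demonstration.

```python
import numpy as np

# Sketch: material decomposition leaves strongly (anti-)correlated noise in the
# two basis images; an SVD of the noise covariance yields a basis in which the
# noise components are uncorrelated, with very different variances.
rng = np.random.default_rng(0)
n = 10_000                                   # number of pixels (flattened)

cov = np.array([[4.0, -3.5],                 # assumed per-pixel noise covariance
                [-3.5, 4.0]])
noise = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # shape (n, 2)

U, s, _ = np.linalg.svd(cov)                 # eigenbasis of the symmetric covariance
transformed = noise @ U                      # noise expressed in the new basis

# Empirically, the transformed components are uncorrelated and their variances
# split into the singular values (7.5 and 0.5): the low-noise component can
# then guide selective filtering of the high-noise one.
emp_cov = np.cov(transformed.T)
```

The point of the sketch is only that the transform concentrates noise into one component while leaving a clean companion image usable as filtering guidance.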
2. de Vries L, van Herten RLM, Hoving JW, Išgum I, Emmer BJ, Majoie CBLM, Marquering HA, Gavves E. Spatio-temporal physics-informed learning: A novel approach to CT perfusion analysis in acute ischemic stroke. Med Image Anal 2023;90:102971. PMID: 37778103. DOI: 10.1016/j.media.2023.102971.
Abstract
CT perfusion imaging is important in the imaging workup of acute ischemic stroke for evaluating affected cerebral tissue. CT perfusion analysis software produces cerebral perfusion maps from commonly noisy spatio-temporal CT perfusion data. High levels of noise can influence the results of CT perfusion analysis, necessitating software tuning. This work proposes a novel approach for CT perfusion analysis that uses physics-informed learning, an optimization framework that is robust to noise. In particular, we propose SPPINN: Spatio-temporal Perfusion Physics-Informed Neural Network, and investigate spatio-temporal physics-informed learning. SPPINN learns implicit neural representations of contrast attenuation in CT perfusion scans from the spatio-temporal coordinates of the data and employs these representations to estimate a continuous representation of the cerebral perfusion parameters. We validate the approach on simulated data to quantify perfusion parameter estimation performance. Furthermore, we apply the method to in-house patient data and the public Ischemic Stroke Lesion Segmentation 2018 benchmark data to assess the correspondence between the perfusion maps and reference standard infarct core segmentations. Our method achieves accurate perfusion parameter estimates even at high noise levels and differentiates healthy tissue from infarcted tissue. Moreover, SPPINN perfusion maps correspond accurately with reference standard infarct core segmentations. Hence, we show that spatio-temporal physics-informed learning yields accurate cerebral perfusion estimation, even in noisy CT perfusion data. The code for this work is available at https://github.com/lucasdevries/SPPINN.
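The perfusion model that such analysis rests on can be illustrated with a deliberately simplified sketch (not SPPINN itself, which learns neural representations): the tissue attenuation curve is the arterial input function convolved with a scaled residue function, and the scale factor is the perfusion parameter. The gamma-variate AIF, the exponential residue function, and all parameter values below are synthetic assumptions.

```python
import numpy as np

dt = 1.0
t = np.arange(0.0, 40.0, dt)
aif = (t / 5.0) ** 2 * np.exp(-t / 5.0)          # gamma-variate AIF (assumed shape)
mtt = 4.0                                        # assumed mean transit time [s]
residue = np.exp(-t / mtt)                       # exponential residue function R(t)
cbf_true = 0.6                                   # assumed perfusion scale factor

# Forward model: tissue curve c(t) = CBF * (AIF convolved with R)(t)
conv = np.convolve(aif, residue)[: len(t)] * dt
tac = cbf_true * conv

# With R known, CBF is recovered as the least-squares scale between conv and c(t).
cbf_hat = (conv @ tac) / (conv @ conv)           # recovers 0.6 in this noiseless sketch
```

In real CTP data the residue function is unknown and the curves are noisy, which is exactly the ill-posedness that motivates robust, physics-informed estimators such as SPPINN.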
Affiliation(s)
- Lucas de Vries
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC location University of Amsterdam, Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands; Amsterdam Neuroscience, Amsterdam, The Netherlands.
- Rudolf L M van Herten
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands.
- Jan W Hoving
- Amsterdam UMC location University of Amsterdam, Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam Neuroscience, Amsterdam, The Netherlands.
- Ivana Išgum
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC location University of Amsterdam, Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands; Amsterdam Neuroscience, Amsterdam, The Netherlands.
- Bart J Emmer
- Amsterdam UMC location University of Amsterdam, Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam Neuroscience, Amsterdam, The Netherlands.
- Charles B L M Majoie
- Amsterdam UMC location University of Amsterdam, Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam Neuroscience, Amsterdam, The Netherlands.
- Henk A Marquering
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC location University of Amsterdam, Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands; Amsterdam Neuroscience, Amsterdam, The Netherlands.
- Efstratios Gavves
- Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands.
3. Li Z, Li H, Ralescu AL, Dillman JR, Parikh NA, He L. A novel collaborative self-supervised learning method for radiomic data. Neuroimage 2023;277:120229. PMID: 37321358. PMCID: PMC10440826. DOI: 10.1016/j.neuroimage.2023.120229.
Abstract
Computer-aided disease diagnosis from radiomic data is important in many medical applications. However, developing such a technique relies on labeling radiological images, which is a time-consuming, labor-intensive, and expensive process. In this work, we present a collaborative self-supervised learning method, the first designed to address the challenge of insufficient labeled radiomic data, whose characteristics differ from those of text and image data. To achieve this, we present two collaborative pretext tasks that explore the latent pathological or biological relationships between regions of interest and the similarity and dissimilarity of information between subjects. Our method collaboratively learns robust latent feature representations from radiomic data in a self-supervised manner, reducing human annotation effort and benefiting disease diagnosis. We compared our proposed method with other state-of-the-art self-supervised learning methods in a simulation study and on two independent datasets. Extensive experimental results demonstrated that our method outperforms other self-supervised learning methods on both classification and regression tasks. With further refinement, our method has the potential to support automatic disease diagnosis when large-scale unlabeled data are available.
Affiliation(s)
- Zhiyuan Li
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA.
- Hailong Li
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Anca L Ralescu
- Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA.
- Jonathan R Dillman
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Nehal A Parikh
- Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Lili He
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
4. Farea Shaaf Z, Mahadi Abdul Jamil M, Ambar R, Abd Wahab MH. Convolutional Neural Network for Denoising Left Ventricle Magnetic Resonance Images. Computational Intelligence and Machine Learning Approaches in Biomedical Engineering and Health Care Systems 2022:1-14. DOI: 10.2174/9781681089553122010004.
Abstract
Medical image processing is critical in disease detection and prediction; for example, it is used to locate lesions and measure an organ's morphological structures. Currently, cardiac magnetic resonance imaging (CMRI) plays an essential role in cardiac motion tracking and in analyzing regional and global heart function with high accuracy and reproducibility. Cardiac MRI datasets consist of images taken during the heart's cardiac cycle. These datasets require expert labeling to accurately recognize features and train neural networks to predict cardiac disease. Any erroneous prediction caused by image impairment will affect patients' diagnostic decisions. As a result, image preprocessing is used, including enhancement tools such as filtering and denoising. This paper introduces a denoising algorithm that uses a convolutional neural network (CNN) to delineate left ventricle (LV) contours (endocardium and epicardium borders) from MRI images. With only a small amount of training data from the EMIDEC database, this network performs well for MRI image denoising.
Affiliation(s)
- Zakarya Farea Shaaf
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia.
- Muhammad Mahadi Abdul Jamil
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia.
- Radzi Ambar
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia.
- Mohd Helmy Abd Wahab
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Johor, Malaysia.
5. Zeng D, Zeng C, Zeng Z, Li S, Deng Z, Chen S, Bian Z, Ma J. Basis and current state of computed tomography perfusion imaging: a review. Phys Med Biol 2022;67. PMID: 35926503. DOI: 10.1088/1361-6560/ac8717.
Abstract
Computed tomography perfusion (CTP) is a functional imaging technique that provides capillary-level hemodynamic information about tissue of interest in the clinic. In this paper, we aim to offer insight into CTP imaging, covering its basics and current state, and then summarize technical applications of CTP imaging as well as its future technological potential. We first focus on the fundamentals of CTP imaging, systematically summarizing CTP image acquisition and hemodynamic parameter map estimation techniques. A short assessment outlines the clinical applications of CTP imaging, followed by a review of the radiation dose effects of CTP imaging across these applications. We then present a categorized methodological review of known and potentially solvable challenges in radiation dose reduction for CTP imaging. To evaluate the quality of CTP images, we list various standardized performance metrics. Moreover, we review the determination of infarct and penumbra. Finally, we discuss the popularity and future trends of CTP imaging.
Affiliation(s)
- Dong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangdong 510515, People's Republic of China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China.
- Cuidie Zeng
- School of Biomedical Engineering, Southern Medical University, Guangdong 510515, People's Republic of China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China.
- Zhixiong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangdong 510515, People's Republic of China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China.
- Sui Li
- School of Biomedical Engineering, Southern Medical University, Guangdong 510515, People's Republic of China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China.
- Zhen Deng
- Department of Neurology, Nanfang Hospital, Southern Medical University, Guangdong 510515, People's Republic of China.
- Sijin Chen
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangdong 510515, People's Republic of China.
- Zhaoying Bian
- School of Biomedical Engineering, Southern Medical University, Guangdong 510515, People's Republic of China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China.
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangdong 510515, People's Republic of China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China.
6. A Review of Deep Learning CT Reconstruction: Concepts, Limitations, and Promise in Clinical Practice. Current Radiology Reports 2022. DOI: 10.1007/s40134-022-00399-5.
Abstract
Purpose of Review
Deep Learning reconstruction (DLR) is the current state-of-the-art method for CT image formation. Comparisons to existing filtered back-projection, iterative, and model-based reconstructions are now available in the literature. This review summarizes the prior reconstruction methods, introduces DLR, and then reviews recent DLR findings from physics and clinical perspectives.
Recent Findings
DLR has been shown to allow for noise magnitude reductions relative to filtered back-projection without suffering from “plastic” or “blotchy” noise texture that was found objectionable with most iterative and model-based solutions. Clinically, early reader studies have reported increases in subjective quality scores and studies have successfully implemented DLR-enabled dose reductions.
Summary
The future of CT image reconstruction is bright; deep learning methods have only started to tackle problems in this space by addressing noise reduction. Artifact mitigation and spectral applications will likely be future candidates for DLR applications.
7. Wu D, Kim K, Li Q. Low-dose CT reconstruction with Noise2Noise network and testing-time fine-tuning. Med Phys 2021;48:7657-7672. PMID: 34791655. PMCID: PMC11216369. DOI: 10.1002/mp.15101.
Abstract
PURPOSE: Deep learning-based image denoising and reconstruction methods have demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Such clean images are sometimes unavailable, for example in dynamic CT imaging or for very large patients. The purpose of this work is to develop a deep learning-based low-dose CT image reconstruction algorithm that does not need clean images for training.
METHODS: We propose a novel reconstruction algorithm in which the image prior is expressed via a Noise2Noise network whose weights are fine-tuned along with the image during iterative reconstruction. The Noise2Noise network builds a self-consistent loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. The network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training, and the testing-time fine-tuning yields an optimization tailored to each reconstruction.
RESULTS: We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local means, convolutional sparse coding, and Noise2Noise denoising. The proposed Noise2Noise reconstruction achieved better RMSE, SSIM, and texture preservation than the other methods. The performance is also robust against the different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, the proposed method achieved competitive results without any pre-training of the network, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also converges empirically with and without network pre-training.
CONCLUSIONS: The proposed Noise2Noise reconstruction method achieves promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
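The Noise2Noise principle that the self-consistent loss relies on can be demonstrated in one dimension. This toy sketch is not the paper's algorithm: it only shows that, with two independent zero-mean noise realizations of the same signal, selecting a denoiser by its loss against a noisy target also improves agreement with the clean signal. The signal, noise level, and Gaussian-smoothing "denoiser" are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
y1 = clean + rng.normal(0, 0.3, clean.shape)     # first noisy realization
y2 = clean + rng.normal(0, 0.3, clean.shape)     # independent second realization

def smooth(x, width):
    """Gaussian smoothing 'denoiser' with a single width parameter."""
    k = np.exp(-0.5 * (np.arange(-15, 16) / width) ** 2)
    return np.convolve(x, k / k.sum(), mode="same")

# Pick the width that minimizes the Noise2Noise loss ||f_w(y1) - y2||^2;
# no clean image is consulted during this selection.
widths = [0.5, 1, 2, 4, 8, 16]
best = min(widths, key=lambda w: np.mean((smooth(y1, w) - y2) ** 2))
denoised = smooth(y1, best)

# The width chosen against the *noisy* target still denoises w.r.t. the clean
# signal, because the independent noise only shifts the loss by a constant.
assert np.mean((denoised - clean) ** 2) < np.mean((y1 - clean) ** 2)
```

In the paper this idea is applied to the two FBP images obtained from split projection data, with a deep network in place of the smoothing filter.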
Affiliation(s)
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
8. Qiu B, Zeng S, Meng X, Jiang Z, You Y, Geng M, Li Z, Hu Y, Huang Z, Zhou C, Ren Q, Lu Y. Comparative study of deep neural networks with unsupervised Noise2Noise strategy for noise reduction of optical coherence tomography images. J Biophotonics 2021;14:e202100151. PMID: 34383390. DOI: 10.1002/jbio.202100151.
Abstract
As a powerful diagnostic tool, optical coherence tomography (OCT) has been widely used in various clinical settings. However, OCT images are susceptible to inherent speckle noise that may obscure subtle structural information, owing to the low-coherence interferometric imaging procedure. Many supervised learning-based models have achieved impressive speckle-noise reduction in OCT images when trained with large numbers of noisy-clean paired OCT images, which are not commonly available in clinical practice. In this article, we conducted a comparative study of the denoising performance of different deep neural networks on OCT images using an unsupervised Noise2Noise (N2N) strategy, trained only with noisy OCT samples. Four representative network architectures, including a U-shaped model, a multi-information-stream model, a straight-information-stream model, and a GAN-based model, were investigated on an OCT image dataset acquired from healthy human eyes. The results demonstrated that all four unsupervised N2N models produced denoised OCT images with performance comparable to that of supervised learning models, illustrating the effectiveness of unsupervised N2N models in denoising OCT images. Furthermore, U-shaped models and GAN-based models using a UNet generator are the two preferred architectures for reducing speckle noise in OCT images while preserving fine structural information of the retinal layers under unsupervised N2N conditions.
Affiliation(s)
- Bin Qiu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Shuang Zeng
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China.
- Xiangxi Meng
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Products Administration), Department of Nuclear Medicine, Peking University Cancer Hospital & Institute, Beijing, China.
- Zhe Jiang
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Yunfei You
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Mufeng Geng
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Ziyuan Li
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Yicheng Hu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Zhiyu Huang
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Chuanqing Zhou
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China.
- Qiushi Ren
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China.
9. Bai T, Wang B, Nguyen D, Wang B, Dong B, Cong W, Kalra MK, Jiang S. Deep Interactive Denoiser (DID) for X-Ray Computed Tomography. IEEE Trans Med Imaging 2021;40:2965-2975. PMID: 34329156. DOI: 10.1109/tmi.2021.3101241.
Abstract
Low-dose computed tomography (LDCT) is desirable for both diagnostic imaging and image-guided interventions. Denoisers are widely used to improve the quality of LDCT. Deep learning (DL)-based denoisers have shown state-of-the-art performance and are becoming mainstream methods. However, there are two challenges to using DL-based denoisers: 1) a trained model typically does not generate different image candidates with different noise-resolution tradeoffs, which are sometimes needed for different clinical tasks; and 2) the model's generalizability might be an issue when the noise level in the testing images differs from that in the training dataset. To address these two challenges, in this work, we introduce a lightweight optimization process that can run on top of any existing DL-based denoiser during the testing phase to generate multiple image candidates with different noise-resolution tradeoffs suitable for different clinical tasks in real time. Consequently, our method allows users to interact with the denoiser to efficiently review various image candidates and quickly pick the desired one; thus, we termed this method deep interactive denoiser (DID). Experimental results demonstrated that DID can deliver multiple image candidates with different noise-resolution tradeoffs and shows great generalizability across various network architectures, as well as training and testing datasets with various noise levels.
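The noise-resolution tradeoff idea can be caricatured as follows. This is a drastically simplified, assumption-level stand-in, not DID itself (which runs a lightweight optimization on top of a trained network): here a crude box filter plays the role of the denoiser, and candidate images are generated by blending its output back toward the noisy input.

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.full((64, 64), 0.5)                   # synthetic flat "image" (assumed)
noisy = clean + rng.normal(0, 0.2, clean.shape)

def box_denoise(img, r=2):
    """Crude (2r+1) x (2r+1) box filter standing in for a trained DL denoiser."""
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

denoised = box_denoise(noisy)

# A family of candidates spanning the tradeoff: alpha = 0 is the smoothest
# (lowest noise), alpha = 1 keeps full resolution (and full noise).
alphas = (0.0, 0.25, 0.5, 1.0)
candidates = [(1 - a) * denoised + a * noisy for a in alphas]
stds = [float(np.std(c - clean)) for c in candidates]   # residual noise per candidate
```

A reviewer could then flip through such candidates in real time and pick the preferred operating point, which is the interaction DID enables with far more principled candidate generation.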
10. Fang W, Wu D, Kim K, Kalra MK, Singh R, Li L, Li Q. Iterative material decomposition for spectral CT using self-supervised Noise2Noise prior. Phys Med Biol 2021;66. PMID: 34126602. DOI: 10.1088/1361-6560/ac0afd.
Abstract
Compared with conventional computed tomography (CT), spectral CT provides material decomposition capability, which is useful in many clinical diagnostic applications. However, the decomposed images can be very noisy because of the dose limits of CT scanning and the noise magnification of the material decomposition process. To alleviate this, we propose an iterative one-step inversion material decomposition algorithm with a Noise2Noise prior. The algorithm estimates material images directly from projection data and uses the Noise2Noise prior for denoising. In contrast to supervised deep learning methods, the designed Noise2Noise prior is built on self-supervised learning and needs no external training data. In our method, the data consistency term and the Noise2Noise network are alternately optimized in the iterative framework, using separable quadratic surrogates (SQS) and the Adam algorithm, respectively. The proposed iterative algorithm was validated and compared with other methods on simulated spectral CT data, preclinical photon-counting CT data, and clinical dual-energy CT data. Quantitative analysis showed that the proposed method performs promisingly in noise suppression and structural detail recovery.
Affiliation(s)
- Wei Fang
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, People's Republic of China; Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.
- Liang Li
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, People's Republic of China.
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.
11. Zhang Z, Liang X, Zhao W, Xing L. Noise2Context: Context-assisted learning 3D thin-layer for low-dose CT. Med Phys 2021;48:5794-5803. PMID: 34287948. DOI: 10.1002/mp.15119.
Abstract
PURPOSE: Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concern about increased x-ray radiation exposure has attracted growing attention. To lower the radiation dose, low-dose CT (LDCT) has been widely adopted in certain scenarios, although it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. METHODS: For 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss with paired noisy and clean samples when the noise in different slices of a single scan is uncorrelated and zero-mean. We then train the denoising neural network to map one noisy LDCT image to its two adjacent LDCT images from the same 3D thin-layer LDCT scan. In essence, under these assumptions, we propose an unsupervised loss function that exploits the similarity between adjacent CT slices in 3D thin-layer LDCT to train the denoising network in an unsupervised manner. RESULTS: Experiments on the Mayo LDCT dataset and a realistic pig head were carried out. On the Mayo LDCT dataset, our unsupervised method obtained performance comparable to that of the supervised baseline. On the realistic pig head, our method achieved the best performance at different noise levels compared with all other methods, demonstrating the superiority and robustness of the proposed Noise2Context. CONCLUSIONS: We present a generalizable LDCT image denoising method that requires no clean data. As a result, our method dispenses with both complex hand-crafted image priors and large paired high-quality training datasets.
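The loss equivalence underlying this unsupervised training can be checked numerically. The sketch below uses 1-D synthetic signals and a fixed box-filter "denoiser" as assumptions; it verifies that when the target is an independent noisy copy of the same anatomy (standing in for an adjacent thin slice) rather than the clean signal, the expected loss is shifted by exactly the noise variance, so both losses share the same minimizer.

```python
import numpy as np

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 2 * np.pi, 512))   # shared anatomy of two thin slices
sigma = 0.25
trials = 2000

gap = []
for _ in range(trials):
    y_i = clean + rng.normal(0, sigma, clean.shape)      # noisy slice i
    y_next = clean + rng.normal(0, sigma, clean.shape)   # independent adjacent slice
    f = np.convolve(y_i, np.ones(5) / 5, mode="same")    # any fixed "denoiser" of y_i
    # loss vs noisy neighbour minus loss vs the (normally unavailable) clean slice
    gap.append(np.mean((f - y_next) ** 2) - np.mean((f - clean) ** 2))

# The measured gap stays close to sigma**2 = 0.0625 regardless of the denoiser,
# i.e. the noisy-neighbour loss and the clean-target loss differ by a constant.
mean_gap = float(np.mean(gap))
```

Because the cross term between the denoiser output and the neighbour's independent zero-mean noise vanishes in expectation, minimizing the noisy-neighbour loss is equivalent to minimizing the supervised loss, which is what lets Noise2Context train without clean images.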
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA.
- Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA.
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA.
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA.
12. Hasan AM, Mohebbian MR, Wahid KA, Babyn P. Hybrid-Collaborative Noise2Noise Denoiser for Low-Dose CT Images. IEEE Trans Radiat Plasma Med Sci 2021. DOI: 10.1109/trpms.2020.3002178.