1. Eulig E, Ommer B, Kachelrieß M. Benchmarking deep learning-based low-dose CT image denoising algorithms. Med Phys 2024. PMID: 39287517. DOI: 10.1002/mp.17379.
Abstract
BACKGROUND Long-standing efforts have been made to reduce the radiation dose, and thus the potential radiation risk to the patient, of computed tomography (CT) acquisitions without severe deterioration of image quality. To this end, various techniques have been employed over the years, including iterative reconstruction methods and noise reduction algorithms. PURPOSE Recently, deep learning-based methods for noise reduction became increasingly popular, and a multitude of papers claim ever-improving performance both quantitatively and qualitatively. However, the lack of a standardized benchmark setup and inconsistencies in experimental design across studies hinder the verifiability and reproducibility of reported results. METHODS In this study, we propose a benchmark setup to overcome those flaws and improve the reproducibility and verifiability of experimental results in the field. We perform a comprehensive and fair evaluation of several state-of-the-art methods using this standardized setup. RESULTS Our evaluation reveals that most deep learning-based methods show statistically similar performance, and improvements over the past years have been marginal at best. CONCLUSIONS This study highlights the need for a more rigorous and fair evaluation of novel deep learning-based methods for low-dose CT image denoising. Our benchmark setup is a first and important step in this direction and can be used by future researchers to evaluate their algorithms.
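As an illustration of the kind of paired, standardized scoring such a benchmark relies on, the sketch below computes PSNR and SSIM for a denoised slice against its normal-dose reference using scikit-image. The function name and the HU display window are assumptions, not part of the paper's setup.

```python
# Illustrative only: paired-metric scoring of a denoised CT slice against its
# normal-dose reference, as a standardized benchmark would report.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_denoising(denoised_hu, reference_hu, window=(-160.0, 240.0)):
    """Clip both images to a display window, then compute PSNR and SSIM."""
    lo, hi = window
    den = np.clip(denoised_hu, lo, hi)
    ref = np.clip(reference_hu, lo, hi)
    psnr = peak_signal_noise_ratio(ref, den, data_range=hi - lo)
    ssim = structural_similarity(ref, den, data_range=hi - lo)
    return psnr, ssim
```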
Affiliation(s)
- Elias Eulig
- Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Marc Kachelrieß
- Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
2. Yin Z, Wu P, Manohar A, McVeigh ER, Pack JD. Protocol optimization for functional cardiac CT imaging using noise emulation in the raw data domain. Med Phys 2024; 51:4622-4634. PMID: 38753583. DOI: 10.1002/mp.17088.
Abstract
BACKGROUND Four-dimensional (4D) wide-coverage computed tomography (CT) is an effective imaging modality for measuring the mechanical function of the myocardium. However, the radiation dose from repeated CT measurements across a number of heartbeats is still a concern. PURPOSE A projection-domain noise emulation method is presented to generate accurate low-dose (mA-modulated) 4D cardiac CT scans from high-dose scans, enabling protocol optimization to deliver sufficient image quality for functional cardiac analysis at a dose level that is as low as reasonably achievable (ALARA). METHODS Given a targeted low-dose mA modulation curve, the proposed noise emulation method injects both quantum and electronic noise of proper magnitude and correlation into the high-dose data in the projection domain. A spatially varying (i.e., channel-dependent) detector gain term, as well as its calibration method, was proposed to further improve the noise emulation accuracy. To determine the ALARA dose threshold, a straightforward projection-domain image quality (IQ) metric was proposed, based on the number of projection rays that do not fall within the non-linear region of the detector response. Experiments were performed to validate the noise emulation method with both phantom and clinical data in terms of visual similarity, contrast-to-noise ratio (CNR), and noise power spectrum (NPS). RESULTS For both phantom and clinical data, the low-dose emulated images exhibited noise magnitude (CNR difference within 2%), artifacts, and texture similar to those of the real low-dose images. The proposed channel-dependent detector gain term resulted in an additional increase in emulation accuracy. Using the proposed IQ metric, recommended kVp and mA settings were calculated for low-dose 4D cardiac CT acquisitions for patients of different sizes. CONCLUSIONS A detailed method to estimate system-dependent parameters for a raw-data-based low-dose emulation framework was described. The method produced realistic noise levels, artifacts, and texture in phantom and clinical studies. The proposed low-dose emulation method can be used to prospectively select patient-specific minimal-dose protocols for functional cardiac CT.
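As a rough sketch of the underlying idea (not GE HealthCare's implementation), the snippet below injects the extra quantum and electronic noise needed to turn high-dose log-normalized projections into statistically plausible low-dose ones. It ignores the detector correlation and the channel-dependent gain term the paper calibrates; n0, sigma_e, and dose_fraction are assumed values.

```python
# Minimal raw-data-domain noise emulation sketch: add zero-mean Gaussian noise
# whose variance equals the gap between low-dose and high-dose log-data variance.
import numpy as np

def emulate_low_dose(p_high, n0=2.0e5, sigma_e=10.0, dose_fraction=0.25,
                     rng=np.random.default_rng(0)):
    """p_high: log-normalized line integrals from the high-dose scan."""
    t = np.exp(-p_high)                     # transmitted fraction per ray
    # Variance of log data under a Poisson (quantum) + Gaussian (electronic) model
    var_high = 1.0 / (n0 * t) + sigma_e**2 / (n0 * t) ** 2
    n0_low = dose_fraction * n0             # mA-scaled incident photon count
    var_low = 1.0 / (n0_low * t) + sigma_e**2 / (n0_low * t) ** 2
    extra = np.sqrt(np.maximum(var_low - var_high, 0.0))
    return p_high + extra * rng.standard_normal(p_high.shape)
```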
Affiliation(s)
- Zhye Yin
- GE HealthCare, Waukesha, Wisconsin, USA
- Pengwei Wu
- GE HealthCare Technology & Innovation Center, Niskayuna, New York, USA
- Ashish Manohar
- Department of Medicine, Stanford University, Palo Alto, California, USA
- Elliot R McVeigh
- Department of Bioengineering, Medicine, Radiology at University of California San Diego, San Diego, California, USA
- Jed D Pack
- GE HealthCare Technology & Innovation Center, Niskayuna, New York, USA
3. Winfree T, McCollough C, Yu L. Development and validation of a noise insertion algorithm for photon-counting-detector CT. Med Phys 2024. PMID: 38923526. DOI: 10.1002/mp.17263.
Abstract
BACKGROUND Inserting noise into existing patient projection data to simulate lower-radiation-dose exams has frequently been used in traditional energy-integrating-detector (EID) CT to optimize radiation dose in clinical protocols and to generate paired images for training deep-learning-based reconstruction and noise reduction methods. The recent introduction of photon-counting-detector CT (PCD-CT) also requires such a method to accomplish these tasks. However, clinical PCD-CT scanners often restrict the user's access to the raw count data, exporting only the preprocessed, log-normalized sinogram. Therefore, it remains a challenge to employ projection-domain noise insertion algorithms on PCD-CT. PURPOSE To develop and validate a projection-domain noise insertion algorithm for PCD-CT that does not require access to the raw count data. MATERIALS AND METHODS A projection-domain noise model developed originally for EID-CT was adapted for PCD-CT. This model requires, as input, a map of the incident number of photons at each detector pixel when no object is in the beam. To obtain this map, air scans were acquired on a PCD-CT scanner, and the noise-equivalent photon number (NEPN) was calculated from the variance in the log-normalized projection data of each scan. Additional air scans were acquired at various mA settings to investigate the impact of pulse pileup on the linearity of the NEPN measurement. To validate the noise insertion algorithm, noise power spectra (NPS) were generated from a 30 cm water tank scan and used to compare the noise texture and noise level of measured and simulated half-dose and quarter-dose images. An anthropomorphic thorax phantom was scanned with automatic exposure control, and noise levels at different slice locations were compared between simulated and measured half-dose and quarter-dose images. Spectral correlation between energy thresholds T1 and T2, and energy bins B1 and B2, was compared between simulated and measured data across a wide range of tube currents. Additionally, noise insertion was performed on a clinical patient case for qualitative assessment. RESULTS The NPS generated from simulated low-dose water tank images showed shape and amplitude similar to those generated from the measured low-dose images, differing by a maximum of 5.0% for half-dose (HD) T1 images, 6.3% for HD T2 images, 4.1% for quarter-dose (QD) T1 images, and 6.1% for QD T2 images. Noise-versus-slice measurements of the lung phantom showed comparable results between measured and simulated low-dose images, with root-mean-square percent errors of 5.9%, 5.4%, 5.0%, and 4.6% for QD T1, HD T1, QD T2, and HD T2, respectively. NEPN measurements in air were linear up to 112 mA, after which pulse pileup effects significantly distorted the air-scan NEPN profile. Spectral correlation between T1 and T2 in simulation agreed well with that in the measured data in typical dose ranges. CONCLUSIONS A projection-domain noise insertion algorithm was developed and validated for PCD-CT to synthesize low-dose images from existing scans. It can be used for optimizing scanning protocols and generating paired images for training deep-learning-based methods.
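The air-scan calibration step admits a very compact reading: for log-normalized data dominated by counting statistics, the per-pixel variance is approximately the reciprocal of the noise-equivalent photon number. A minimal sketch under that assumption (array shape and names are invented, not the paper's code):

```python
# NEPN map from repeated air scans: variance of log-normalized data per
# detector pixel is ~1/NEPN, so the incident-photon map is its reciprocal.
import numpy as np

def nepn_map_from_air_scans(log_air):
    """log_air: (views, rows, channels) log-normalized air-scan projections."""
    var = np.var(log_air, axis=0)           # per-pixel variance over views
    return 1.0 / np.maximum(var, 1e-12)     # noise-equivalent photon number
```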
Affiliation(s)
- Timothy Winfree
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
4. Han M, Baek J. Direct estimation of the noise power spectrum from patient data to generate synthesized CT noise for denoising network training. Med Phys 2024; 51:1637-1652. PMID: 38289987. DOI: 10.1002/mp.16963.
Abstract
BACKGROUND Developing a deep-learning network for denoising low-dose CT (LDCT) images necessitates paired computed tomography (CT) images acquired at different dose levels. However, it is challenging to obtain these images from the same patient. PURPOSE In this study, we introduce a novel approach to generate CT images at different dose levels. METHODS Our method involves the direct estimation of the quantum noise power spectrum (NPS) from patient CT images without the need for prior information. By modeling the anatomical NPS using a power-law function and estimating the quantum NPS from the measured NPS after removing the anatomical NPS, we create synthesized quantum noise by applying the estimated quantum NPS as a filter to random noise. By adding the synthesized noise to CT images, synthetic CT images can be generated as if they had been acquired at a lower dose. This leads to the generation of paired images at different dose levels for training denoising networks. RESULTS The proposed method accurately estimates the reference quantum NPS. The denoising network trained with paired data generated using synthesized quantum noise achieves denoising performance comparable to networks trained using Mayo Clinic data, as measured by mean-squared-error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) scores. CONCLUSIONS This approach offers a promising solution for LDCT image denoising network development without the need for multiple scans of the same patient at different doses.
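A condensed sketch of that pipeline, under simplifying assumptions (a deliberately crude whole-spectrum power-law fit, square patches, and caller-side scaling of the noise level), might look like this:

```python
# Estimate the measured NPS from image patches, remove a power-law
# "anatomical" component, then color white noise with the remaining
# (quantum) NPS via a frequency-domain filter.
import numpy as np

def synthesize_quantum_noise(patches, rng=np.random.default_rng(0)):
    """patches: (K, n, n) stack of zero-mean square ROIs from patient images."""
    nps_measured = (np.abs(np.fft.fft2(patches)) ** 2).mean(axis=0)
    n = patches.shape[-1]
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    f[0, 0] = f[f > 0].min()                  # avoid log(0) at the DC bin
    # Fit anatomical NPS ~ a * f^b (b < 0) on a log-log scale and remove it.
    b, log_a = np.polyfit(np.log(f.ravel()),
                          np.log(nps_measured.ravel() + 1e-12), 1)
    nps_quantum = np.maximum(nps_measured - np.exp(log_a) * f ** b, 0.0)
    # Color white noise with the estimated quantum NPS (caller rescales).
    white = np.fft.fft2(rng.standard_normal((n, n)))
    return np.real(np.fft.ifft2(white * np.sqrt(nps_quantum)))
```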
Affiliation(s)
- Minah Han
- Department of Artificial Intelligence, Yonsei University, Seoul, South Korea
- Bareunex Imaging Inc., Incheon, South Korea
- Jongduk Baek
- Department of Artificial Intelligence, Yonsei University, Seoul, South Korea
- Bareunex Imaging Inc., Incheon, South Korea
5. Yang CC, Hou KY. A CNN-based denoising method trained with images acquired with electron density phantoms for thin-sliced coronary artery calcium scans. J Appl Clin Med Phys 2024; 25:e14287. PMID: 38346094. DOI: 10.1002/acm2.14287.
Abstract
PURPOSE This work proposed a convolutional neural network (CNN)-based method trained with images acquired with electron density phantoms to reduce quantum noise for coronary artery calcium (CAC) scans reconstructed with slice thicknesses below 3 mm. METHODS A DenseNet model was used to estimate quantum noise for CAC scans reconstructed with slice thicknesses of 0.5, 1.0, and 1.5 mm. Training data were acquired using electron density phantoms in three different sizes. The label images of the CNN model were real noise maps, while the input images were pseudo noise maps. Image denoising was conducted by subtracting the CNN output images from the thin-sliced CAC scans. The efficacy of the proposed method was verified through both a phantom study and a patient study. RESULTS In the phantom study, the proposed method proved effective in reducing quantum noise in CAC scans reconstructed with 1.5-mm slice thickness without causing significant texture change or variation in HU values. In the patient study, calcifications were clearer on the denoised CAC scans reconstructed with slice thicknesses of 0.5, 1.0, and 1.5 mm than on 3-mm slice images, while no over-smoothing was observed in the denoised CAC scans reconstructed with 1.5-mm slice thickness. CONCLUSION Our results demonstrated that electron density phantoms can be used to generate training data for the proposed CNN-based denoising method to reduce quantum noise for CAC scans reconstructed with 1.5-mm slice thickness. Because an anthropomorphic phantom is not required, our method could make image denoising more practical in routine clinical practice.
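The denoising step itself is a residual subtraction: the network predicts a quantum-noise map, which is then subtracted from the thin-slice scan. A skeletal sketch (the tiny CNN here is a stand-in, not the DenseNet used in the study):

```python
# Residual denoising: predict a noise map, subtract it from the input slice.
import torch
import torch.nn as nn

noise_estimator = nn.Sequential(             # placeholder for the DenseNet
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def denoise(cac_slice: torch.Tensor) -> torch.Tensor:
    """cac_slice: (1, 1, H, W) thin-slice CAC image in HU."""
    with torch.no_grad():
        noise_map = noise_estimator(cac_slice)   # predicted quantum noise
    return cac_slice - noise_map                 # residual subtraction
```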
Affiliation(s)
- Ching-Ching Yang
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Kuei-Yuan Hou
- Department of Radiology, Cathay General Hospital, Taipei, Taiwan
6. Thomsen FSL, Iarussi E, Borggrefe J, Boyd SK, Wang Y, Battié MC. Bone-GAN: Generation of virtual bone microstructure of high resolution peripheral quantitative computed tomography. Med Phys 2023; 50:6943-6954. PMID: 37264564. DOI: 10.1002/mp.16482.
Abstract
BACKGROUND Data-driven development of medical biomarkers of bone requires a large amount of image data, but physical measurements are generally too restricted in size and quality to allow robust training. PURPOSE This study aims to provide a reliable in silico method for the generation of realistic bone microstructure with defined microarchitectural properties. Synthetic bone samples may improve the training of neural networks and serve for the development of new diagnostic parameters of bone architecture and mineralization. METHODS One hundred fifty cadaveric lumbar vertebrae from 48 different male human spines were scanned with a high-resolution peripheral quantitative CT scanner. After preprocessing the scans, we extracted 10,795 purely spongious bone patches, each with a side length of 32 voxels (5 mm) and an isotropic voxel size of 164 μm. We trained a volumetric generative adversarial network (GAN) in a progressive manner to create synthetic microstructural bone samples. We then added a style transfer technique to allow the generation of synthetic samples with defined microstructure and gestalt by simultaneously optimizing two entangled loss functions. Reliability testing was performed by comparing real and synthetic bone samples on 10 well-understood microstructural parameters. RESULTS The method was able to create synthetic bone samples whose visual and quantitative properties effectively matched those of the real samples. The GAN contained a well-formed latent space, allowing bone samples to be morphed smoothly by their microstructural parameters, their visual appearance, or both. Optimum performance was obtained for bone samples of size 32 × 32 × 32 voxels, but samples of size 64 × 64 × 64 voxels could also be synthesized. CONCLUSIONS Our two-step approach combines a parameter-agnostic GAN with a parameter-specific style transfer technique. It allows the generation of an unlimited, anonymous database of microstructural bone samples with sufficient realism to be used for the development of new data-driven bone-biomarker methods. In particular, the style transfer technique can generate datasets of bone samples with specific conditions to simulate certain bone pathologies.
Affiliation(s)
- Felix S L Thomsen
- National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Department of Electrical and Computer Engineering, Institute for Computer Science and Engineering, National University of the South (DIEC-ICIC-UNS), Bahía Blanca, Argentina
- Emmanuel Iarussi
- National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
- Laboratory of Artificial Intelligence, University Torcuato Di Tella, Buenos Aires, Argentina
- Jan Borggrefe
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Steven K Boyd
- McCaig Institute for Bone and Joint Health, University of Calgary, Canada
- Yue Wang
- Spine lab, Department of Orthopedic Surgery, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Michele C Battié
- Common Spinal Disorders Research Group, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, Canada
7. Chang S, Huber NR, Marsh JF, Koons EK, Gong H, Yu L, McCollough CH, Leng S. Pie-Net: Prior-information-enabled deep learning noise reduction for coronary CT angiography acquired with a photon counting detector CT. Med Phys 2023; 50:6283-6295. PMID: 37042049. PMCID: PMC10564970. DOI: 10.1002/mp.16411.
Abstract
BACKGROUND Photon-counting-detector CT (PCD-CT) enables the production of virtual monoenergetic images (VMIs) at high spatial resolution (HR) via simultaneous acquisition of multi-energy data. However, noise levels in these HR VMIs are markedly increased. PURPOSE To develop a deep learning technique that utilizes a lower-noise VMI as prior information to reduce image noise in HR, PCD-CT coronary CT angiography (CTA). METHODS Coronary CTA exams of 10 patients were acquired using PCD-CT (NAEOTOM Alpha, Siemens Healthineers). A prior-information-enabled neural network (Pie-Net) was developed, treating one lower-noise VMI (e.g., 70 keV) as a prior input and one noisy VMI (e.g., 50 keV or 100 keV) as another. For data preprocessing, noisy VMIs were reconstructed by filtered back-projection (FBP) and iterative reconstruction (IR), which were then subtracted to generate "noise-only" images. Spatial decoupling was applied to the noise-only images to mitigate overfitting and improve randomization. Thicker-slice averaging was used for the IR and prior images. The final training inputs for the convolutional neural network (CNN) inside the Pie-Net consisted of thicker-slice signal images with the spatially decoupled noise-only images reinserted, together with the thicker-slice prior images. The CNN training labels consisted of the corresponding thicker-slice label images without noise insertion. Pie-Net's performance was evaluated in terms of image noise, spatial detail preservation, and quantitative accuracy, and compared to a U-net-based method that did not include prior information. RESULTS Pie-Net provided strong noise reduction: 95 ± 1% relative to FBP and 60 ± 8% relative to IR. For HR VMIs at different keV (e.g., 50 keV or 100 keV), Pie-Net maintained spatial and spectral fidelity. Including prior information from the spectral domain of the PCD-CT data yielded more robust denoising than the U-net-based method, which lost some spatial detail and introduced some artifacts. CONCLUSION The proposed Pie-Net achieved substantial noise reduction while preserving the HR VMIs' spatial and spectral properties.
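The "noise-only" preprocessing can be summarized in a few lines. The sketch below, with invented array names, subtracts the smoother IR reconstruction from FBP to isolate noise, spatially decouples it by shuffling slices, and reinserts it onto a thicker-slice (lower-noise) signal image:

```python
# Noise-only image construction in the spirit of the paper's preprocessing.
import numpy as np

def make_training_input(fbp, ir, ir_thick, rng=np.random.default_rng(0)):
    """fbp, ir: (S, H, W) thin-slice stacks; ir_thick: thick-slice average."""
    noise_only = fbp - ir                     # isolates the noise component
    shuffled = noise_only[rng.permutation(noise_only.shape[0])]
    return ir_thick + shuffled                # noise reinserted on clean signal
```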
Affiliation(s)
- Shaojie Chang
- Department of Radiology, Mayo Clinic, Rochester, MN, US
- Hao Gong
- Department of Radiology, Mayo Clinic, Rochester, MN, US
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, MN, US
- Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN, US
8. Niu C, Li M, Fan F, Wu W, Guo X, Lyu Q, Wang G. Noise Suppression With Similarity-Based Self-Supervised Deep Learning. IEEE Trans Med Imaging 2023; 42:1590-1602. PMID: 37015446. PMCID: PMC10288330. DOI: 10.1109/tmi.2022.3231428.
Abstract
Image denoising is a prerequisite for downstream tasks in many fields. Low-dose and photon-counting computed tomography (CT) denoising can optimize diagnostic performance at a minimized radiation dose. Supervised deep denoising methods are popular but require paired clean or noisy samples that are often unavailable in practice. Limited by the independent-noise assumption, current self-supervised denoising methods cannot process correlated noise such as that in CT images. Here we propose the first-of-its-kind similarity-based self-supervised deep denoising approach, referred to as Noise2Sim, that works in a nonlocal and nonlinear fashion to suppress not only independent but also correlated noise. Theoretically, Noise2Sim is asymptotically equivalent to supervised learning methods under mild conditions. Experimentally, Noise2Sim recovers intrinsic features from noisy low-dose CT and photon-counting CT images as effectively as, or even better than, supervised learning methods on practical datasets, visually, quantitatively, and statistically. Noise2Sim is a general self-supervised denoising approach and has great potential in diverse applications.
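A minimal Noise2Sim-style training step, under the simplifying assumption that adjacent thin slices serve as the "similar" image pairs with independent noise realizations (the paper's pair construction is more general):

```python
# Self-supervised denoising from similar noisy pairs: the network maps one
# noisy slice to its neighbor, approximating supervised training without
# clean labels.
import torch
import torch.nn.functional as F

def noise2sim_step(model, volume, optimizer):
    """volume: (S, 1, H, W) stack of noisy, mutually similar slices."""
    src, tgt = volume[:-1], volume[1:]        # neighboring-slice pairs
    loss = F.mse_loss(model(src), tgt)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```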
9. Liu X, Liang X, Deng L, Tan S, Xie Y. Learning low-dose CT degradation from unpaired data with flow-based model. Med Phys 2022; 49:7516-7530. PMID: 35880375. DOI: 10.1002/mp.15886.
Abstract
BACKGROUND There has been growing interest in low-dose computed tomography (LDCT) for reducing X-ray radiation to patients. However, LDCT always suffers from complex noise in the reconstructed images. Although deep learning-based methods have shown strong performance in LDCT denoising, most of them require a large number of paired training data of normal-dose CT (NDCT) images and LDCT images, which are hard to acquire in the clinic. The lack of paired training data significantly undermines the practicability of supervised deep learning-based methods. To alleviate this problem, unsupervised or weakly supervised deep learning-based methods are required. PURPOSE We aimed to propose a method that achieves LDCT denoising without training pairs. Specifically, we first trained a neural network in a weakly supervised manner to simulate LDCT images from NDCT images. The simulated training pairs could then be used for supervised deep denoising networks. METHODS We proposed a weakly supervised method to learn the degradation of LDCT from unpaired LDCT and NDCT images. Concretely, LDCT and normal-dose images were fed into one shared flow-based model and projected to the latent space. Then, the degradation between low-dose and normal-dose images was modeled in the latent space. Finally, the model was trained by minimizing the negative log-likelihood loss, with no requirement for paired training data. After training, an NDCT image can be input to the trained flow-based model to generate the corresponding LDCT image. The simulated image pairs of NDCT and LDCT can be further used to train supervised denoising neural networks for testing. RESULTS Our method achieved much better performance on LDCT image simulation than the most widely used image-to-image translation method, CycleGAN, according to the radial noise power spectrum. The simulated image pairs could be used for any supervised LDCT denoising neural network. We validated the effectiveness of our generated image pairs on a classic convolutional neural network, REDCNN, and a novel transformer-based model, TransCT. Our method achieved a mean peak signal-to-noise ratio (PSNR) of 24.43 dB and a mean structural similarity (SSIM) of 0.785 on an abdominal CT dataset, and a mean PSNR of 33.88 dB and a mean SSIM of 0.797 on a chest CT dataset, outperforming several traditional CT denoising methods, the same network trained with CycleGAN-generated data, and a novel transfer learning method. Moreover, our method was on par with the supervised networks in terms of visual effects. CONCLUSION We proposed a flow-based method to learn LDCT degradation from only unpaired training data. It achieved impressive performance on LDCT synthesis. Neural networks can then be trained with the generated paired data for LDCT denoising. The denoising results are better than those of traditional and weakly supervised methods, and comparable to those of supervised deep learning methods.
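For context, the negative log-likelihood objective of any flow-based model rests on the standard change-of-variables formula; with an invertible network $f_\theta$ mapping an image $x$ to a latent code $z = f_\theta(x)$ and a simple prior $p_Z$, training minimizes

$$-\log p_X(x) = -\log p_Z\big(f_\theta(x)\big) - \log\left|\det\frac{\partial f_\theta(x)}{\partial x}\right|.$$

How the low-dose/normal-dose degradation is then modeled inside that shared latent space is specific to the paper.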
Affiliation(s)
- Xuan Liu
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Xiaokun Liang
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Deng
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shan Tan
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Yaoqin Xie
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
10. Design and manufacture of an X-ray generator by support vector machines. Evol Intell 2022. DOI: 10.1007/s12065-022-00754-7.
11. Effective Spatial Resolution of Photon Counting CT for Imaging of Trabecular Structures is Superior to Conventional Clinical CT and Similar to High Resolution Peripheral CT. Invest Radiol 2022; 57:620-626. PMID: 35318968. DOI: 10.1097/rli.0000000000000873.
Abstract
OBJECTIVES Photon-counting computed tomography (PCCT) might offer an effective spatial resolution that is significantly improved compared with conventional state-of-the-art computed tomography (CT), and might even provide a microstructural level of detail similar to high-resolution peripheral CT (HR-pQCT). The aim of this study was to evaluate the volumetric effective spatial resolution of clinically approved PCCT as an alternative to HR-pQCT for ex vivo or preclinical high-resolution imaging of bone microstructure. MATERIALS AND METHODS The experiment used 5 human vertebrae embedded in epoxy resin, which were each scanned 3 times on 3 different clinical CT scanners: a PCCT (Naeotom Alpha), a dual-energy CT (Somatom Force [SF]), and a single-energy CT (Somatom Sensation 40 [S40]), all manufactured by Siemens Healthineers (Erlangen, Germany). Scans were performed with a tube voltage of 120 kVp and, to provide maximum scan performance and minimum noise deterioration, with exposures of 1500 mAs (SF), 2400 mAs (S40), and 4500 mAs (PCCT) and low slice increments of 0.1 mm (PCCT) and 0.3 mm (SF, S40). Images were reconstructed with sharp and very sharp bone kernels: Br68 and Br76 (PCCT), Br64 (SF), and B65s and B75h (S40). Ground-truth information was obtained from an XtremeCT scanner (Scanco, Brüttisellen, Switzerland). Voxel-wise comparison was performed after registration, calibration, and resampling of the volumes to an isotropic voxel size of 0.164 mm. Three-dimensional point spread and modulation transfer functions were calculated with Wiener deconvolution in the anatomical trabecular structure, allowing optimum estimation of device- and kernel-specific smoothing properties as well as specimen-related diffraction effects on the measurement. RESULTS At high contrast (modulation transfer function [MTF] of 10%), the radial effective resolutions of PCCT were 10.5 lp/cm (minimum resolvable object size 476 μm) for kernel Br68 and 16.9 lp/cm (295 μm) for kernel Br76. At low contrast (MTF 5%), the radial effective spatial resolutions were 10.8 lp/cm (464 μm) for kernel Br68 and 30.5 lp/cm (164 μm) for kernel Br76. The axial effective resolutions of PCCT for both kernels were between 27.0 lp/cm (185 μm) and 29.9 lp/cm (167 μm). The spatial resolution with kernel Br76 might be higher still but was technically limited by the isotropic voxel size of 164 μm. The effective volumetric resolutions of PCCT with kernel Br76 ranged between 61.9 (MTF 10%) and 222.4 (MTF 5%) elements per cubic mm. PCCT improved the effective volumetric resolution by a factor of 5.5 (MTF 10%) and 18 (MTF 5%) compared with SF, and by a factor of 8.7 (MTF 10%) and 20 (MTF 5%) compared with S40. PCCT yielded structural information similar to that of HR-pQCT. CONCLUSIONS The effective spatial resolution of PCCT in trabecular bone imaging was comparable to that of HR-pQCT and more than 5 times higher than that of conventional CT. For ex vivo samples, and when patient radiation dose can be neglected, PCCT allows imaging bone microstructure at a preclinical level of detail.
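Reading an effective resolution off an MTF is straightforward once a point-spread-function estimate exists: the MTF is the normalized magnitude of the PSF's Fourier transform, and the cutoff is the frequency where it drops to 10% (or 5%). A sketch along one axis, not the authors' pipeline (the 0.164 mm voxel size is from the study; everything else is assumed):

```python
# MTF cutoff from a 1D PSF profile: FFT magnitude, normalize, find the last
# frequency where the MTF stays at or above the chosen contrast level.
import numpy as np

def mtf_cutoff(psf, voxel_mm=0.164, level=0.10):
    """psf: 1D point-spread profile; returns the cutoff frequency in lp/cm."""
    mtf = np.abs(np.fft.rfft(psf))
    mtf /= mtf[0]                                   # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(psf.size, d=voxel_mm)   # cycles per mm
    above = np.nonzero(mtf >= level)[0]
    return freqs[above[-1]] * 10.0                  # cycles/mm -> lp/cm
```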
12. Guo X, Zhang L, Xing Y. Analytical covariance estimation for iterative CT reconstruction methods. Biomed Phys Eng Express 2022; 8. PMID: 35213850. DOI: 10.1088/2057-1976/ac58bf.
Abstract
The covariance of reconstructed images is useful for analyzing the magnitude and correlation of noise when evaluating systems and reconstruction algorithms. Covariance estimation normally requires a large number of image samples, which are hard to acquire in practice. A method for propagating covariance from projections using only a few noisy realizations is studied in this work. Based on the properties of the convergent point of a cost function, the proposed method is composed of three steps: (1) construct a relationship between the covariance of the projections and that of the corresponding reconstruction from the cost function at its convergent point, (2) simplify the relationship constructed in (1) by introducing an approximate gradient of the penalty, and (3) obtain an analytical covariance estimate from the simplified relationship in (2). Three approximation methods for step (2) are studied: the linear approximation of the gradient of the penalty (LAM), the Taylor approximation (TAM), and a mixture of LAM and TAM (MAM). Experiments were performed on TV- and qGGMRF-penalized weighted least squares methods, with results from statistical methods used as reference. When the second derivative of the penalty is unstable, as for TV, the covariance image estimated by LAM agrees well with the reference but with smaller values, while the covariance estimated by TAM is far off. When the second derivative of the penalty is relatively stable, as for qGGMRF, TAM performs well and LAM again shows a negative bias in magnitude. MAM gives the best performance under both conditions by combining LAM and TAM. Results also show that a single noise realization is enough to obtain a reasonable analytical covariance estimate, which is important for practical usage. This work suggests the necessity of, and a new way of, estimating the covariance of non-quadratically penalized reconstructions. Currently, the proposed method is computationally expensive for large reconstructions; improving computational efficiency is the focus of our future work.
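For orientation, step (1) for a penalized weighted least squares cost $\Phi(x) = \tfrac{1}{2}(p - Ax)^{\top} W (p - Ax) + \beta R(x)$ leads to the textbook first-order propagation (a standard fixed-point result, not the paper's exact derivation):

$$\operatorname{Cov}(\hat{x}) \;\approx\; \left[A^{\top} W A + \beta \nabla^{2} R\right]^{-1} A^{\top} W \operatorname{Cov}(p)\, W A \left[A^{\top} W A + \beta \nabla^{2} R\right]^{-1}.$$

The practical difficulty is that $\nabla^{2} R$ is unstable for penalties such as TV, which is precisely what the LAM, TAM, and MAM approximations of the penalty gradient address.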
Affiliation(s)
- Xiaoyue Guo
- Department of Engineering Physics, Tsinghua University, Beijing, People's Republic of China
- Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, People's Republic of China
- Li Zhang
- Department of Engineering Physics, Tsinghua University, Beijing, People's Republic of China
- Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, People's Republic of China
- Yuxiang Xing
- Department of Engineering Physics, Tsinghua University, Beijing, People's Republic of China
- Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, People's Republic of China
13. Liu Y, Zhou L, Peng H. Machine learning based oxygen and carbon concentration derivation using dual-energy CT for PET-based dose verification in proton therapy. Med Phys 2022; 49:3347-3360. PMID: 35246842. DOI: 10.1002/mp.15581.
Abstract
PURPOSE Online dose verification based on proton-induced positron emitters requires high accuracy in the assignment of elemental composition (e.g., C and O). We developed a machine learning framework for deriving oxygen and carbon concentration based on dual-energy CT (DECT). METHODS Digital phantoms at the head site were constructed based on single-energy CT (SECT) and stoichiometric calibration. DECT images (80 kVp and 140 kVp) were synthesized using two methods: (1) theoretical CT numbers with Gaussian noise (method 1) and (2) forward/backward image reconstruction with a poly-energetic spectrum and Poisson noise modeled (method 2). Two convolutional neural network architectures, UNet and ResNet, were investigated to map from DECT images to C/O weights. Four cases (UNet-1: method 1 + UNet, ResNet-1: method 1 + ResNet, UNet-2: method 2 + UNet, and ResNet-2: method 2 + ResNet) were tested for different tissue types and different noise levels. Monte Carlo simulation was employed to identify the impact of fluctuations in oxygen and carbon concentration on the activity/dose distribution. RESULTS When no noise was present, all four cases achieved <2% mean absolute error (MAE) and <4% root mean square error (RMSE). For the worst image quality (e.g., lowest image SNR), the RMSE for O among all tissue types was 3.02% (UNet-1), 4.46% (ResNet-1), 4.38% (UNet-2), and 6.31% (ResNet-2), respectively. For UNet-1 and ResNet-1, the models performed slightly better in terms of RMSE for skeletal tissue than for soft tissues. Such a trend was not observed for UNet-2 and ResNet-2. Comparing UNet and ResNet, differences in accuracy and noise immunity were observed. The activity profiles exhibited a 3-5% difference in terms of mean relative error (MRE) between the ground truth and the machine learning outcome. CONCLUSION We explored the feasibility of a machine learning framework to derive the elemental concentrations of oxygen and carbon from DECT images. Two machine learning models, UNet and ResNet, were able to utilize spatial correlation and obtain accurate carbon and oxygen concentrations. This study lays a foundation for applying the proposed approach to clinical DECT images.
Affiliation(s)
- Yuxiang Liu
- Department of Medical Physics, Wuhan University, Wuhan, 430072, China
- Long Zhou
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Hao Peng
- Department of Medical Physics, Wuhan University, Wuhan, 430072, China
- ProtonSmart Ltd, Wuhan, 430072, China
14. Divel SE, Christensen S, Segars WP, Lansberg MG, Pelc NJ. A dynamic simulation framework for CT perfusion in stroke assessment built from first principles. Med Phys 2021; 48:3500-3510. PMID: 33877693. DOI: 10.1002/mp.14887.
Abstract
PURPOSE Physicians utilize cerebral perfusion maps (e.g., cerebral blood flow, cerebral blood volume, transit time) to prescribe the plan of care for stroke patients. Variability in scanning techniques and post-processing software can result in differences between these perfusion maps. To determine which techniques are acceptable for clinical care, it is important to validate the accuracy and reproducibility of the perfusion maps. Validation using clinical data is challenging due to the lack of a gold standard to assess cerebral perfusion and the impracticality of scanning patients multiple times with different scanning techniques. In contrast, simulated data from a realistic digital phantom of the cerebral perfusion in acute stroke patients would enable studies to optimize and validate the scanning and post-processing techniques. METHODS We describe a complete framework to simulate CT perfusion studies for stroke assessment. We begin by expanding the XCAT brain phantom to enable spatially varying contrast agent dynamics and incorporate a realistic model of the dynamics in the cerebral vasculature derived from first principles. A dynamic CT simulator utilizes the time-concentration curves to define the contrast agent concentration in the object at each time point and generates CT perfusion images compatible with commercially available post-processing software. We also generate ground truth perfusion maps to which the maps generated by post-processing software can be compared. RESULTS We demonstrate a dynamic CT perfusion study of a simulated patient with an ischemic stroke and the resulting perfusion maps generated by post-processing software. We include a visual comparison between the computer-generated perfusion maps and the ground truth perfusion maps. The framework is highly tunable; users can modify the perfusion properties (e.g., occlusion location, CBF, CBV, and MTT), scanner specifications (e.g., focal spot size and detector configuration), scanning protocol (e.g., kVp and mAs), and reconstruction parameters (e.g., slice thickness and reconstruction filter). CONCLUSIONS This framework provides realistic test data with the underlying ground truth that enables a robust assessment of CT perfusion techniques and post-processing methods for stroke assessment.
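Where the framework needs time-concentration curves, a gamma-variate bolus model is one common way to produce them; the sketch below is illustrative, and none of the parameter values come from the paper:

```python
# Gamma-variate contrast bolus model for synthetic time-concentration curves.
import numpy as np

def gamma_variate(t, t0=5.0, alpha=3.0, beta=1.5, scale=300.0):
    """Contrast enhancement (HU) versus time t in seconds; t0 = arrival time."""
    tau = np.maximum(t - t0, 0.0)            # no enhancement before arrival
    return scale * (tau ** alpha) * np.exp(-tau / beta)

t = np.arange(0, 60, 0.5)                    # one value per simulated frame
curve = gamma_variate(t)                     # feed to the dynamic CT simulator
```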
Affiliation(s)
- Sarah E Divel
- Departments of Electrical Engineering and Radiology, Stanford University, Stanford, CA, 94305, USA
- Soren Christensen
- Stanford Stroke Center, Stanford University School of Medicine, Stanford, CA, 94305, USA
- William P Segars
- Carl E. Ravin Advanced Imaging Laboratories, Departments of Radiology and Biomedical Engineering, Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
- Maarten G Lansberg
- Department of Neurology and Neurological Sciences and the Stanford Stroke Center, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Norbert J Pelc
- Departments of Bioengineering and Radiology, Stanford University, Stanford, CA, 94305, USA
15. Subhas N, Jun BJ, Mehta PN, Ricchetti ET, Obuchowski NA, Primak AN, Iannotti JP. Low-dose CT with metal artifact reduction in arthroplasty imaging: a cadaveric and clinical study. Skeletal Radiol 2021; 50:955-965. PMID: 33037447. DOI: 10.1007/s00256-020-03643-1.
Abstract
OBJECTIVE To determine whether a simulated low-dose metal artifact reduction (MAR) CT technique is comparable with a clinical-dose MAR technique for shoulder arthroplasty evaluation. MATERIALS AND METHODS Two shoulder arthroplasties in cadavers and 25 shoulder arthroplasties in patients were scanned using a clinical dose (140 kVp, 300 qrmAs); cadavers were also scanned at half dose (140 kVp, 150 qrmAs). Images were reconstructed using a MAR CT algorithm at full dose and a noise-insertion algorithm simulating 50% dose reduction. For the actual and simulated half-dose cadaver scans, differences in standard deviation (SD) for regions of interest were assessed, and streak artifact near the arthroplasty was graded by 3 blinded readers. Simulated half-dose scans were compared with full-dose scans in patients by measuring differences in implant position and by comparing readers' grades of periprosthetic osteolysis and muscle atrophy. RESULTS The mean difference in SD between the actual and simulated half-dose methods was 2.42 HU (95% CI [1.4, 3.4]). No differences in streak artifact grades were seen in 13/18 (72.2%) comparisons in cadavers. In patients, differences in implant position measurements were within 1° or 1 mm in 149/150 (99.3%) measurements. The inter-reader agreement rates were nearly identical for full-dose (77.3% [232/300] for osteolysis and 76.9% [173/225] for muscle atrophy) and simulated half-dose (76.7% [920/1200] for osteolysis and 74.0% [666/900] for muscle atrophy) scans. CONCLUSION A simulated half-dose MAR CT technique is comparable both quantitatively and qualitatively with a standard-dose technique for shoulder arthroplasty evaluation, demonstrating that this technique could be used to reduce dose in arthroplasty imaging.
Affiliation(s)
- Naveen Subhas
- Department of Diagnostic Radiology, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH, 44195, USA.
- Bong J Jun
- Department of Biomedical Engineering, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH, 44195, USA
- Parthiv N Mehta
- Department of Diagnostic Radiology, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH, 44195, USA
- Eric T Ricchetti
- Department of Orthopaedic Surgery, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH, 44195, USA
- Nancy A Obuchowski
- Department of Biostatistics, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH, 44195, USA
- Andrew N Primak
- Siemens Medical Solutions USA, Inc., Malvern, PA, 19355, USA
- Joseph P Iannotti
- Department of Orthopaedic Surgery, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH, 44195, USA
16. Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021; 71:102065. PMID: 33915472. DOI: 10.1016/j.media.2021.102065.
Abstract
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenge of mapping directly from low-dose to normal-dose CT images by progressively compensating for the small noise gap. To quantitatively evaluate diagnostic image quality, we present the noise power spectrum and signal detection accuracy, which correlate well with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases flexibility in data collection, allowing us to utilize any unpaired data at any dose level.
17. Shen C, Tsai MY, Chen L, Li S, Nguyen D, Wang J, Jiang SB, Jia X. On the robustness of deep learning-based lung-nodule classification for CT images with respect to image noise. Phys Med Biol 2020; 65:245037. PMID: 33152716. PMCID: PMC7870572. DOI: 10.1088/1361-6560/abc812.
Abstract
Robustness is an important aspect when evaluating a method of medical image analysis. In this study, we investigated the robustness of a deep learning (DL)-based lung-nodule classification model for CT images with respect to noise perturbations. A deep neural network (DNN) was established to classify 3D CT images of lung nodules into malignant or benign groups. The established DNN was able to predict the malignancy rate of lung nodules based on CT images, achieving an area under the curve of 0.91 for the testing dataset in a tenfold cross-validation, as compared to radiologists' predictions. We then evaluated its robustness against noise perturbations. We added noise signals to the input CT images, generated randomly or via an optimization scheme using a realistic noise model based on a noise power spectrum for a given mAs level, and monitored the DNN's output. The results showed that CT noise was able to affect the prediction results of the established DNN model. With random noise perturbations at 100 mAs, the DNN's predictions for 11.2% of training data and 17.4% of testing data were successfully altered at least once. The percentages increased to 23.4% and 34.3%, respectively, for optimization-based perturbations. We further evaluated the robustness of models with different architectures, parameters, numbers of output labels, etc., and robustness concerns were found in these models to varying degrees. To improve model robustness, we empirically proposed an adaptive training scheme. It fine-tuned the DNN model by including in the training dataset the perturbations that had successfully altered the DNN's predictions. The adaptive scheme was performed repeatedly to gradually improve the DNN's robustness. The numbers of perturbations at 100 mAs affecting the DNN's predictions were reduced to 10.8% for training and 21.1% for testing by the adaptive training scheme after two iterations. Our study illustrates that robustness may potentially be a concern for an exemplary DL-based lung-nodule classification model for CT images, indicating the need to evaluate and ensure model robustness when developing similar models. The proposed adaptive training scheme may be able to improve model robustness.
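A hedged sketch of the random-perturbation part of such a robustness check: NPS-shaped noise is added to the input volume repeatedly, and the fraction of flipped labels is recorded. The classifier callable, the precomputed sqrt-NPS filter (matching the volume's shape), and the trial count are placeholders, not the study's components:

```python
# Random-perturbation robustness test: color white noise with sqrt(NPS),
# add it to the CT volume, and count how often the predicted label flips.
import numpy as np

def flip_rate(predict, ct_volume, nps_filter, trials=20,
              rng=np.random.default_rng(0)):
    """predict: callable volume -> {0, 1}; nps_filter: sqrt(NPS) in k-space."""
    base = predict(ct_volume)
    flips = 0
    for _ in range(trials):
        white = np.fft.fftn(rng.standard_normal(ct_volume.shape))
        noise = np.real(np.fft.ifftn(white * nps_filter))
        flips += predict(ct_volume + noise) != base
    return flips / trials
```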
Affiliation(s)
- Chenyang Shen
- innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Min-Yu Tsai
- innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Liyuan Chen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Shulong Li
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Jing Wang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Steve B. Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Xun Jia
- innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235