1
Olasz C, Varga LG, Nagy A. Novel U-net based deep neural networks for transmission tomography. Journal of X-Ray Science and Technology 2022; 30:13-31. [PMID: 34806643 DOI: 10.3233/xst-210962]
Abstract
BACKGROUND The fusion of computed tomography and deep learning is an effective way of achieving improved image quality and artifact reduction in reconstructed images. OBJECTIVE In this paper, we present two novel neural network architectures for tomographic reconstruction with reduced effects of beam hardening and electrical noise. METHODS In the proposed architectures, the image reconstruction step is located inside the neural networks, which allows the networks to be trained while taking the mathematical model of the projections into account. This strong connection enables us to enhance the projection data and the reconstructed image together. We tested the two proposed models against three other methods on two datasets. The datasets contain physically correct simulated data, and they show strong signs of beam hardening and electrical noise. We also performed a numerical evaluation of the neural networks on the reconstructed images according to three error measurements and provided a scoring system of the methods derived from the three measures. RESULTS The results showed the superiority of the novel architecture called TomoNet2. Compared to the FBP method, TomoNet2 improved the average Structural Similarity Index of the images from 0.9372 to 0.9977 and from 0.9519 to 0.9886 on the two datasets. According to Peak Signal-to-Noise Ratio, this network also yielded the best results for 79.2 and 53.0 percent of the two datasets when compared to the other improvement techniques. CONCLUSIONS Our experimental results showed that a reconstruction step used within skip connections of deep neural networks improves the quality of the reconstructions. We are confident that our proposed method can be effectively applied to other datasets for tomographic purposes.
Affiliation(s)
- Antal Nagy
- University of Szeged, 6720, Szeged, Hungary
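A minimal sketch of the core idea in this entry, an image reconstruction step placed inside the network so that projection-domain and image-domain processing are trained jointly, is given below. This is not the authors' TomoNet/TomoNet2 architecture; the reconstruction operator is assumed to be available as a precomputed linear map (recon_matrix, a hypothetical stand-in for an FBP operator) that stays fixed while the surrounding layers train.

```python
import torch
import torch.nn as nn

class FixedReconstruction(nn.Module):
    """Non-trainable linear reconstruction operator (e.g., a precomputed FBP matrix)."""
    def __init__(self, recon_matrix):             # shape: (n_pixels, n_detector_bins)
        super().__init__()
        self.register_buffer("A", recon_matrix)   # buffer => excluded from optimisation

    def forward(self, sino):                       # sino: (batch, n_detector_bins)
        return sino @ self.A.T                     # (batch, n_pixels)

class ReconInsideNet(nn.Module):
    """Sinogram-domain net -> fixed reconstruction -> image-domain net,
    with the reconstruction of the raw data fed forward through a skip connection."""
    def __init__(self, recon_matrix, img_side):
        super().__init__()
        self.sino_net = nn.Sequential(
            nn.Linear(recon_matrix.shape[1], recon_matrix.shape[1]), nn.ReLU())
        self.recon = FixedReconstruction(recon_matrix)
        self.img_side = img_side
        self.img_net = nn.Sequential(              # 2 channels: enhanced + skip
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, sino):
        skip = self.recon(sino)                    # reconstruction of raw data
        enhanced = self.recon(self.sino_net(sino)) # reconstruction of enhanced data
        imgs = torch.stack([enhanced, skip], dim=1)
        imgs = imgs.view(-1, 2, self.img_side, self.img_side)
        return self.img_net(imgs)

# toy usage: random stand-in operator just to show the shapes involved
n_ang, n_det, side = 90, 64, 64
recon_matrix = torch.randn(side * side, n_ang * n_det) / (n_ang * n_det)
model = ReconInsideNet(recon_matrix, side)
out = model(torch.randn(2, n_ang * n_det))
print(out.shape)   # torch.Size([2, 1, 64, 64])
```

Because the operator is differentiable (a plain matrix product here), a loss on the output image back-propagates through it into the sinogram-domain layers, which is what lets projection data and reconstructed image be enhanced together.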
2
Gong H, Ren L, Hsieh SS, McCollough CH, Yu L. Deep learning enabled ultra-fast-pitch acquisition in clinical X-ray computed tomography. Med Phys 2021; 48:5712-5726. [PMID: 34415068 DOI: 10.1002/mp.15176]
Abstract
OBJECTIVE In X-ray computed tomography (CT), many important clinical applications may benefit from a fast acquisition speed. The helical scan is the most widely used acquisition mode in clinical CT, where a fast helical pitch can improve the acquisition speed. However, on a typical single-source helical CT (SSCT) system, the helical pitch p typically cannot exceed 1.5; otherwise, reconstruction artifacts will result from data insufficiency. The purpose of this work is to develop a deep convolutional neural network (CNN) to correct for artifacts caused by an ultra-fast pitch, which can enable faster acquisition speed than what is currently achievable. METHODS A customized CNN (denoted as ultra-fast-pitch network (UFP-net)) was developed to restore the underlying anatomical structure from the artifact-corrupted post-reconstruction data acquired from SSCT with ultra-fast pitch (i.e., p ≥ 2). UFP-net employed residual learning to capture the features of image artifacts. UFP-net further deployed in-house-customized functional blocks with spatial-domain local operators and frequency-domain non-local operators to explore multi-scale feature representation. Images of contrast-enhanced patient exams (n = 83) with routine pitch setting (i.e., p < 1) were retrospectively collected and used as training and testing datasets. This patient cohort involved CT exams over different scan ranges of anatomy (chest, abdomen, and pelvis) and CT systems (Siemens Definition, Definition Flash, Definition AS+, Siemens Healthcare, Inc.), and the corresponding base CT scanning protocols used consistent settings of major scan parameters (e.g., collimation and pitch). Forward projection of the original images was calculated to synthesize helical CT scans with one regular pitch setting (p = 1) and two ultra-fast-pitch settings (p = 2 and 3). All patient images were reconstructed using the standard filtered-back-projection (FBP) algorithm. A customized multi-stage training scheme was developed to incrementally optimize the parameters of UFP-net, using ultra-fast-pitch images as network inputs and regular-pitch images as labels. Visual inspection was conducted to evaluate image quality. Structural similarity index (SSIM) and relative root-mean-square error (rRMSE) were used as quantitative quality metrics. RESULTS UFP-net dramatically improved image quality over standard FBP at both ultra-fast-pitch settings. At p = 2, UFP-net yielded higher mean SSIM (> 0.98) with lower mean rRMSE (< 2.9%), compared to FBP (mean SSIM < 0.93; mean rRMSE > 9.1%). At p = 3, UFP-net gave mean SSIM in [0.86, 0.94] and mean rRMSE in [5.0%, 8.2%], versus FBP with mean SSIM in [0.36, 0.61] and mean rRMSE in [36.0%, 58.6%]. CONCLUSION The proposed UFP-net has the potential to enable ultra-fast data acquisition in clinical CT without sacrificing image quality. This method has demonstrated reasonable generalizability over different body parts when the corresponding CT exams involved consistent base scan parameters.
Affiliation(s)
- Hao Gong
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Liqiang Ren
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Scott S Hsieh
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
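The two quality metrics used above, SSIM and relative root-mean-square error, can be computed as in the sketch below. The abstract does not state the exact rRMSE normalisation; the version here divides the RMSE by the root-mean-square of the reference image, which is one common convention and should be read as an assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rrmse(reference, estimate):
    """Relative RMSE: RMSE of the error divided by the RMS of the reference.
    (One common convention; the paper's exact normalisation may differ.)"""
    err = np.sqrt(np.mean((estimate - reference) ** 2))
    return err / np.sqrt(np.mean(reference ** 2))

def ssim(reference, estimate):
    """Structural similarity on floating-point images."""
    rng = reference.max() - reference.min()
    return structural_similarity(reference, estimate, data_range=rng)

# toy usage with a noisy copy of a random "image"
ref = np.random.rand(128, 128).astype(np.float32)
est = ref + 0.05 * np.random.randn(128, 128).astype(np.float32)
print(f"SSIM = {ssim(ref, est):.4f}, rRMSE = {100 * rrmse(ref, est):.2f}%")
```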
3
Li M, Hsu W, Xie X, Cong J, Gao W. SACNN: Self-Attention Convolutional Neural Network for Low-Dose CT Denoising With Self-Supervised Perceptual Loss Network. IEEE Transactions on Medical Imaging 2020; 39:2289-2301. [PMID: 31985412 DOI: 10.1109/tmi.2020.2968472]
Abstract
Computed tomography (CT) is a widely used screening and diagnostic tool that allows clinicians to obtain a high-resolution, volumetric image of internal structures in a non-invasive manner. Increasingly, efforts have been made to improve the image quality of low-dose CT (LDCT) to reduce the cumulative radiation exposure of patients undergoing routine screening exams. The resurgence of deep learning has yielded a new approach to noise reduction: training a deep multi-layer convolutional neural network (CNN) to map low-dose CT images to their normal-dose counterparts. However, CNN-based methods rely heavily on convolutional kernels, which use fixed-size filters to process one local neighborhood within the receptive field at a time. As a result, they are not efficient at retrieving structural information across large regions. In this paper, we propose a novel 3D self-attention convolutional neural network for the LDCT denoising problem. Our 3D self-attention module leverages the 3D volume of CT images to capture a wide range of spatial information both within and between CT slices. With the help of the 3D self-attention module, CNNs are able to leverage pixels with stronger relationships regardless of their distance and achieve better denoising results. In addition, we propose a self-supervised learning scheme to train a domain-specific autoencoder as the perceptual loss function. We combine these two methods and demonstrate their effectiveness on both CNN-based and WGAN-based neural networks with comprehensive experiments. Tested on the AAPM-Mayo Clinic Low Dose CT Grand Challenge data set, our experiments demonstrate that the self-attention (SA) module and the autoencoder (AE) perceptual loss function can efficiently enhance traditional CNNs and achieve results comparable to or better than the state-of-the-art methods.
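The 3D self-attention idea can be sketched as a standard non-local attention block over a CT feature volume: 1x1x1 convolutions produce query, key and value maps, and a learnable gate adds the attended features back onto the input. The code below is a generic illustration of that pattern, not the authors' exact SACNN module, and is only practical on small patches because the attention matrix is quadratic in the number of voxels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention3D(nn.Module):
    """Generic non-local self-attention over a (D, H, W) feature volume."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 8, 1)
        self.query = nn.Conv3d(channels, reduced, kernel_size=1)
        self.key = nn.Conv3d(channels, reduced, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # residual gate, starts at identity

    def forward(self, x):                            # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).reshape(b, -1, n)          # (B, C', N)
        k = self.key(x).reshape(b, -1, n)            # (B, C', N)
        v = self.value(x).reshape(b, c, n)           # (B, C,  N)
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, N, N)
        out = torch.bmm(v, attn.transpose(1, 2)).reshape(b, c, d, h, w)
        return self.gamma * out + x                  # residual connection

# toy usage on a small low-dose-CT feature patch
block = SelfAttention3D(channels=32)
patch = torch.randn(1, 32, 4, 16, 16)
print(block(patch).shape)   # torch.Size([1, 32, 4, 16, 16])
```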
4
Boink YE, Manohar S, Brune C. A Partially-Learned Algorithm for Joint Photo-acoustic Reconstruction and Segmentation. IEEE Transactions on Medical Imaging 2020; 39:129-139. [PMID: 31180846 DOI: 10.1109/tmi.2019.2922026]
Abstract
In an inhomogeneously illuminated photoacoustic image, important information such as vascular geometry is not readily available when only the initial pressure is reconstructed. To obtain the desired information, algorithms for image segmentation are often applied as a post-processing step. In this article, we propose to jointly acquire the photoacoustic reconstruction and segmentation by modifying a recently developed partially learned algorithm based on a convolutional neural network. We investigate the stability of the algorithm against changes in initial pressures and photoacoustic system settings. These insights are used to develop an algorithm that is robust to input and system settings. Our approach can easily be applied to other imaging modalities and can be modified to perform high-level tasks other than segmentation. The method is validated on challenging synthetic and experimental photoacoustic tomography data in limited-angle and limited-view scenarios. It is computationally less expensive than classical iterative methods and enables higher-quality reconstructions and segmentations than the state-of-the-art learned and non-learned methods.
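A partially learned scheme of this kind interleaves the known physics operator with small learned updates and lets the final iterate carry both a reconstruction and a segmentation channel. The sketch below is a heavily simplified, generic unrolled gradient scheme with a dense matrix (A, hypothetical) standing in for the photoacoustic forward operator; it conveys the structure only and is not the authors' algorithm.

```python
import torch
import torch.nn as nn

class UnrolledReconSeg(nn.Module):
    """K unrolled iterations: x <- x - step * A^T(Ax - y), refined by a small CNN,
    followed by a head that emits a reconstruction and a segmentation map."""
    def __init__(self, A, img_side, iterations=5):
        super().__init__()
        self.register_buffer("A", A)                 # fixed forward operator (n_data, n_pix)
        self.img_side = img_side
        self.step = nn.Parameter(torch.tensor(1e-2))
        self.refine = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(iterations)])
        self.head = nn.Conv2d(1, 2, 1)               # channel 0: image, channel 1: seg logits

    def forward(self, y):                            # y: (batch, n_data)
        b = y.shape[0]
        x = torch.zeros(b, self.img_side ** 2, device=y.device)
        for refine in self.refine:
            grad = (x @ self.A.T - y) @ self.A       # A^T (A x - y), flattened
            x = x - self.step * grad
            img = x.view(b, 1, self.img_side, self.img_side)
            x = (img + refine(img)).view(b, -1)      # learned residual correction
        img = x.view(b, 1, self.img_side, self.img_side)
        out = self.head(img)
        return out[:, :1], torch.sigmoid(out[:, 1:])  # reconstruction, segmentation

# toy usage with a random stand-in operator
side, n_data = 32, 600
A = torch.randn(n_data, side * side) / n_data
model = UnrolledReconSeg(A, side)
recon, seg = model(torch.randn(2, n_data))
print(recon.shape, seg.shape)   # torch.Size([2, 1, 32, 32]) torch.Size([2, 1, 32, 32])
```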
5
Syben C, Michen M, Stimpel B, Seitz S, Ploner S, Maier AK. Technical Note: PYRO-NN: Python reconstruction operators in neural networks. Med Phys 2019; 46:5110-5115. [PMID: 31389023 PMCID: PMC6899669 DOI: 10.1002/mp.13753]
Abstract
PURPOSE Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the computed tomography (CT) reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches use workarounds for mathematically unambiguously solvable problems. METHODS PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework TensorFlow. The current status includes state-of-the-art parallel-, fan-, and cone-beam projectors and back-projectors, accelerated with CUDA and provided as TensorFlow layers. On top of this, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. RESULTS The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows simple use of the layers, as known from TensorFlow. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep-learning reconstruction frameworks. To demonstrate the capabilities of the layers, the framework comes with baseline experiments, which are described in the supplementary material. The framework is available as open-source software under the Apache 2.0 license at https://github.com/csyben/PYRO-NN. CONCLUSIONS PYRO-NN builds on the prevalent deep learning framework TensorFlow and allows end-to-end trainable neural networks to be set up in the medical image reconstruction context. We believe that the framework will be a step toward reproducible research and give the medical physics community a toolkit to elevate medical image reconstruction with new deep learning techniques.
Affiliation(s)
- Christopher Syben
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Markus Michen
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Bernhard Stimpel
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stephan Seitz
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stefan Ploner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Andreas K. Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
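The mechanism that frameworks such as PYRO-NN provide, projector and back-projector layers whose gradients are each other's adjoints, can be mimicked for a toy matrix-based geometry with tf.custom_gradient. The sketch below does not use the actual PYRO-NN layer names or signatures; the dense matrix A is a hypothetical stand-in for the CUDA-accelerated projectors the framework ships.

```python
import numpy as np
import tensorflow as tf

def make_projector(A):
    """Forward projector whose gradient is defined explicitly as the adjoint
    (back-projection); A is a dense stand-in for the CT system matrix."""
    A = tf.constant(A, dtype=tf.float32)              # shape: (n_data, n_pixels)

    @tf.custom_gradient
    def forward_project(x):                           # x: (batch, n_pixels)
        y = tf.matmul(x, A, transpose_b=True)         # A x    -> (batch, n_data)
        def grad(dy):
            return tf.matmul(dy, A)                   # A^T dy -> (batch, n_pixels)
        return y, grad

    return forward_project

# toy usage: gradients of a sinogram-domain loss flow back into image space
n_angles, n_det, n_pix = 90, 32, 32 * 32
A = np.random.randn(n_angles * n_det, n_pix).astype(np.float32) / n_pix
project = make_projector(A)

x = tf.Variable(tf.random.normal((2, n_pix)))
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(project(x)))
print(tape.gradient(loss, x).shape)                   # (2, 1024)
```

PYRO-NN itself implements this pattern with CUDA-accelerated parallel-, fan-, and cone-beam kernels exposed as TensorFlow layers, as described in the abstract above, so no dense system matrix needs to be formed in practice.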
6
Davoudi N, Deán-Ben XL, Razansky D. Deep learning optoacoustic tomography with sparse data. Nat Mach Intell 2019. [DOI: 10.1038/s42256-019-0095-3]
7
Micieli D, Minniti T, Evans LM, Gorini G. Accelerating Neutron Tomography experiments through Artificial Neural Network based reconstruction. Sci Rep 2019; 9:2450. [PMID: 30792423 PMCID: PMC6385317 DOI: 10.1038/s41598-019-38903-1]
Abstract
Neutron Tomography (NT) is a non-destructive technique to investigate the inner structure of a wide range of objects and, in some cases, provides valuable results in comparison to the more common X-ray imaging techniques. However, NT is time consuming, and scanning a set of similar objects during a beamtime leads to data redundancy and long acquisition times. Nowadays, NT is unfeasible for quality-checking studies of large quantities of similar objects. One way to decrease the total scan time is to reduce the number of projections. Analytical reconstruction methods are very fast but, under this condition, generate streaking artifacts in the reconstructed images. Iterative algorithms generally provide better reconstructions for limited-data problems, but at the expense of longer reconstruction times. In this study, we propose the recently introduced Neural Network Filtered Back-Projection (NN-FBP) method to optimize the time usage in NT experiments. Simulated and real neutron data were used to assess the performance of the NN-FBP method as a function of the number of projections. For the first time, a machine-learning-based algorithm is applied and tested on the NT image reconstruction problem. We demonstrate that the NN-FBP method can reliably reduce acquisition and reconstruction times and that it outperforms conventional reconstruction methods used in NT, providing high image quality for limited datasets.
Affiliation(s)
- Davide Micieli
- Università della Calabria, Dipartimento di Fisica, Arcavacata di Rende (Cosenza), 87036, Italy
- Università degli Studi Milano-Bicocca, Dipartimento di Fisica "G. Occhialini", Milano, 20126, Italy
- Triestino Minniti
- STFC, Rutherford Appleton Laboratory, ISIS Facility, Harwell, United Kingdom
- Llion Marc Evans
- Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxfordshire, United Kingdom
- College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea, United Kingdom
- Giuseppe Gorini
- Università degli Studi Milano-Bicocca, Dipartimento di Fisica "G. Occhialini", Milano, 20126, Italy
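The trade-off motivating this work, analytical FBP being fast but streaky at low projection counts while iterative methods cope better at higher computational cost, is easy to reproduce with scikit-image. The snippet below is a small demonstration of that trade-off on the Shepp-Logan phantom; it is not an implementation of the NN-FBP method itself.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart, resize

phantom = resize(shepp_logan_phantom(), (128, 128))

for n_proj in (180, 30):                       # full vs. heavily reduced projection count
    theta = np.linspace(0.0, 180.0, n_proj, endpoint=False)
    sinogram = radon(phantom, theta=theta)

    fbp = iradon(sinogram, theta=theta, filter_name="ramp")
    sart = iradon_sart(sinogram, theta=theta)
    for _ in range(2):                         # a few extra SART sweeps
        sart = iradon_sart(sinogram, theta=theta, image=sart)

    for name, rec in (("FBP", fbp), ("SART", sart)):
        rmse = np.sqrt(np.mean((rec - phantom) ** 2))
        print(f"{n_proj:3d} projections | {name:4s} RMSE = {rmse:.4f}")
```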
8
Deep Variational Networks with Exponential Weighting for Learning Computed Tomography. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32226-7_35]
9
Huang Y, Lu Y, Taubmann O, Lauritsch G, Maier A. Traditional machine learning for limited angle tomography. Int J Comput Assist Radiol Surg 2018; 14:11-19. [DOI: 10.1007/s11548-018-1851-2]
10
Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18:500-510.
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Affiliation(s)
- Ahmed Hosny
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Chintan Parmar
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- John Quackenbush
- Department of Biostatistics & Computational Biology, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Cancer Biology, Dana-Farber Cancer Institute, Boston, MA, USA
- Lawrence H Schwartz
- Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Department of Radiology, New York Presbyterian Hospital, New York, NY, USA
- Hugo J W L Aerts
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Department of Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
11
Würfl T, Hoffmann M, Christlein V, Breininger K, Huang Y, Unberath M, Maier AK. Deep Learning Computed Tomography: Learning Projection-Domain Weights From Image Domain in Limited Angle Problems. IEEE Transactions on Medical Imaging 2018; 37:1454-1463. [PMID: 29870373 DOI: 10.1109/tmi.2018.2833499]
Abstract
In this paper, we present a new deep learning framework for 3-D tomographic reconstruction. To this end, we map filtered back-projection-type algorithms to neural networks. However, the back-projection cannot be implemented as a fully connected layer due to its memory requirements. To overcome this problem, we propose a new type of cone-beam back-projection layer that efficiently calculates the forward pass. We derive this layer's backward pass as a projection operation. Unlike most deep learning approaches for reconstruction, our new layer permits joint optimization of correction steps in the volume and projection domains. Evaluation is performed numerically on a public data set in a limited-angle setting, showing a consistent improvement over analytical algorithms while, by design, keeping the same computational complexity at test time. In the region of interest, the peak signal-to-noise ratio increased by 23%. In addition, we show that the learned algorithm can be interpreted using known concepts from cone-beam reconstruction: the network is able to automatically learn strategies such as compensation weights and apodization windows.
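The projection-domain quantities this paper learns, compensation weights and an apodization window, can be exposed as trainable parameters acting on the sinogram before a fixed back-projection. The module below is a generic parallel/fan-beam style illustration rather than the authors' cone-beam layer: per-angle weights and a window on top of the ramp filter are learned, while the back-projection itself is assumed to follow as a fixed known operator.

```python
import torch
import torch.nn as nn

class LearnableSinogramFilter(nn.Module):
    """Trainable projection-domain weighting plus apodized ramp filtering.
    A fixed (known-operator) back-projection would be applied to the output."""
    def __init__(self, n_angles, n_det):
        super().__init__()
        self.angle_weights = nn.Parameter(torch.ones(n_angles, 1))    # redundancy weights
        self.window = nn.Parameter(torch.ones(n_det // 2 + 1))        # apodization window
        ramp = torch.abs(torch.fft.rfftfreq(n_det))                   # |f| ramp filter
        self.register_buffer("ramp", ramp)

    def forward(self, sino):                       # sino: (batch, n_angles, n_det)
        sino = sino * self.angle_weights           # broadcast over detector bins
        spectrum = torch.fft.rfft(sino, dim=-1)
        spectrum = spectrum * (self.ramp * self.window)
        return torch.fft.irfft(spectrum, n=sino.shape[-1], dim=-1)

# toy usage
n_angles, n_det = 360, 256
layer = LearnableSinogramFilter(n_angles, n_det)
filtered = layer(torch.randn(2, n_angles, n_det))
print(filtered.shape)   # torch.Size([2, 360, 256])
```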
12
Huang Y, Würfl T, Breininger K, Liu L, Lauritsch G, Maier A. Some Investigations on Robustness of Deep Learning in Limited Angle Tomography. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. [DOI: 10.1007/978-3-030-00928-1_17]