1
Qiu RLJ, Lei Y, Shelton J, Higgins K, Bradley JD, Curran WJ, Liu T, Kesarwala AH, Yang X. Deep learning-based thoracic CBCT correction with histogram matching. Biomed Phys Eng Express 2021; 7. [PMID: 34654011] [DOI: 10.1088/2057-1976/ac3055] [Received: 07/03/2021] [Accepted: 10/15/2021]
Abstract
Kilovoltage cone-beam computed tomography (CBCT)-based image-guided radiation therapy (IGRT) is used for daily delivery of radiation therapy, especially for stereotactic body radiation therapy (SBRT), which imposes particularly high demands on setup accuracy. The clinical applications of CBCT are constrained, however, by poor soft tissue contrast, image artifacts, and instability of Hounsfield unit (HU) values. Here, we propose a new deep learning-based method to generate synthetic CTs (sCTs) from thoracic CBCTs. A deep learning model that integrates histogram matching (HM) into a cycle-consistent adversarial network (Cycle-GAN) framework, called HM-Cycle-GAN, was trained to learn the mapping between thoracic CBCTs and paired planning CTs. Perceptual supervision was adopted to minimize blurring of tissue interfaces. An information-maximizing loss was calculated by feeding CBCTs into the HM-Cycle-GAN to evaluate the histogram match between the planning CTs and the sCTs. The proposed algorithm was evaluated using data from 20 SBRT patients who each received 5 fractions and therefore 5 thoracic CBCTs. To reduce the effect of anatomy mismatch, the original CBCT images were pre-processed via deformable image registration with the planning CT before being used in model training and result assessment. Planning CTs served as ground truth for the sCTs derived from the corresponding co-registered CBCTs. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) were adopted as evaluation metrics, with the standard Cycle-GAN as the benchmark. The average MAE, PSNR, and NCC of the sCTs generated by our method were 66.2 HU, 30.3 dB, and 0.95, respectively, over all CBCT fractions. The proposed method yielded superior image quality and reduced noise and artifact severity compared with the standard Cycle-GAN.
Our method could therefore improve the accuracy of IGRT, and the corrected CBCTs could improve online adaptive RT by offering better contouring accuracy and dose calculation.
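The three evaluation metrics named in this abstract (MAE, PSNR, NCC) are standard and easy to reproduce. The following numpy sketch computes them for a pair of co-registered volumes; the function names and conventions are ours for illustration, not the authors' code:

```python
import numpy as np

def mae(ct, sct):
    """Mean absolute error (in HU for CT data) between two images/volumes."""
    return float(np.mean(np.abs(ct.astype(float) - sct.astype(float))))

def psnr(ct, sct, data_range=None):
    """Peak signal-to-noise ratio in dB, using the reference dynamic range."""
    ct = ct.astype(float)
    sct = sct.astype(float)
    if data_range is None:
        data_range = ct.max() - ct.min()
    mse = np.mean((ct - sct) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(ct, sct):
    """Normalized cross-correlation (Pearson form), 1.0 for a perfect match."""
    a = ct.astype(float).ravel()
    b = sct.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Note that PSNR depends on the assumed dynamic range; for HU data a fixed clinical window is often used instead of the per-image range assumed here.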
Affiliation(s)
- Richard L J Qiu, Yang Lei, Joseph Shelton, Kristin Higgins, Jeffrey D Bradley, Walter J Curran, Tian Liu, Aparna H Kesarwala, Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America (all authors)
2
Jang H, Kim S, Yoo S, Han S, Sohn HG. Feature Matching Combining Radiometric and Geometric Characteristics of Images, Applied to Oblique- and Nadir-Looking Visible and TIR Sensors of UAV Imagery. Sensors (Basel) 2021; 21:s21134587. [PMID: 34283114] [PMCID: PMC8271569] [DOI: 10.3390/s21134587] [Received: 05/21/2021] [Revised: 06/25/2021] [Accepted: 06/30/2021]
Abstract
A large amount of information must be identified and produced in the course of carrying out projects of interest. Thermal infrared (TIR) images are extensively used because they provide information that cannot be extracted from visible images. In particular, TIR oblique images facilitate the acquisition of information on a building's facade that is difficult to obtain from a nadir image. When a TIR oblique image is combined with the 3D information acquired from conventional visible nadir imagery, great synergy for identifying surface information can be achieved. However, matching common points across such images is an onerous task. In this study, a robust method for matching image pairs that differ in both wavelength and geometry (i.e., visible nadir-looking vs. TIR oblique, and visible oblique vs. TIR nadir-looking) is proposed. Three main processes, phase congruency, histogram matching, and Image Matching by Affine Simulation (IMAS), were adjusted to accommodate the radiometric and geometric differences of the matched image pairs. The method was applied to Unmanned Aerial Vehicle (UAV) images of building and non-building areas, and the results were compared with frequently used matching techniques such as scale-invariant feature transform (SIFT), speeded-up robust features (SURF), synthetic aperture radar-SIFT (SAR-SIFT), and Affine SIFT (ASIFT). The method outperforms these techniques in root mean square error (RMSE) and matching performance (matched vs. not matched). The proposed method is believed to be a reliable solution for pinpointing surface information through matching of images with different geometries obtained via TIR and visible sensors.
Affiliation(s)
- Hyoseon Jang, Sangkyun Kim, Suhong Yoo
- School of Civil and Environmental Engineering, Yonsei University, Seoul 03722, Korea
- Soohee Han
- Department of Geoinformatics Engineering, Kyungil University, Gyeongsan 38428, Korea
- Hong-Gyoo Sohn (correspondence; Tel.: +82-2-2123-2809)
- School of Civil and Environmental Engineering, Yonsei University, Seoul 03722, Korea
3
Touati R, Le WT, Kadoury S. A feature invariant generative adversarial network for head and neck MRI/CT image synthesis. Phys Med Biol 2021; 66. [PMID: 33761478] [DOI: 10.1088/1361-6560/abf1bb] [Received: 12/07/2020] [Accepted: 03/24/2021]
Abstract
With the emergence of online MRI radiotherapy treatments, MR-based workflows have grown in clinical importance. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI in an unsupervised manner for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), learns a new invariant representation to generate synthetic CT (sCT) images from high frequency and appearance patterns. This representation encodes each convolutional feature map of the GAN discriminator, making the training of the proposed model particularly robust in terms of image synthesis quality. Our model includes an analysis of common histogram features in the training process, reinforcing the generator so that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by evaluating it over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shape sizes and spatial image resolutions. The results confirm the efficiency of the proposed model compared with other generative models: the mean absolute error yielded by our model was 26.44 (0.62), with a Hounsfield unit error of 45.3 (1.87) and an overall Dice coefficient of 0.74 (0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
Affiliation(s)
- Redha Touati, William Trung Le
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada; CHUM Research Center, Montreal, QC, Canada
4
Abstract
During the capture of a time-lapse sequence of fluorescently labeled samples, fluorescence intensity decays. This phenomenon, known as 'photobleaching', is a well-known problem in life science imaging. Photobleaching can be attenuated by tuning the imaging set-up, but when such adjustments work only partially, the image sequence can be corrected for the loss of intensity in order to segment the target structure precisely or to quantify true intensity dynamics. We implemented an ImageJ plugin that allows the user to compensate for photobleaching and estimate the non-bleaching condition with a choice of three algorithms: the simple ratio, exponential fitting, and histogram matching methods. The histogram matching method is a novel algorithm for photobleaching correction. This article presents the details and characteristics of each algorithm based on application to actual image sequences.
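The first two correction strategies named in the abstract, simple ratio and exponential fitting, can be sketched in a few lines of numpy. This is an illustration of the general algorithms, not the plugin's actual code, and it assumes a stack of shape (time, height, width):

```python
import numpy as np

def correct_simple_ratio(stack):
    """Scale each frame so its mean intensity matches the first frame's mean."""
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)
    factors = means[0] / means                 # per-frame gain
    return stack * factors[:, None, None]

def correct_exponential_fit(stack):
    """Fit mean intensity to I0 * exp(-k t), then divide out the fitted decay."""
    t = np.arange(stack.shape[0], dtype=float)
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)
    # Linear fit in log space: log(mean) ~ log(I0) - k * t
    slope, intercept = np.polyfit(t, np.log(means), 1)   # slope = -k
    decay = np.exp(slope * t)                  # normalized decay, decay[0] = 1
    return stack / decay[:, None, None]
```

The exponential fit is less sensitive to frame-to-frame noise than the simple ratio, because it uses the whole sequence to estimate a single decay rate.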
Affiliation(s)
- Kota Miura
- Nikon Imaging Center, University of Heidelberg, Heidelberg, 69120, Germany; Centre for Molecular and Cellular Imaging, EMBL, Heidelberg, 69117, Germany
5
Roy S, He Q, Sweeney E, Carass A, Reich DS, Prince JL, Pham DL. Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation. IEEE J Biomed Health Inform 2015; 19:1598-609. [PMID: 26340685] [PMCID: PMC4562064] [DOI: 10.1109/jbhi.2015.2439242]
Abstract
Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method for MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consist of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods, which require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and the training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. The subject patches are then modeled as sparse combinations of the learned atlas patches, yielding tissue memberships at each voxel. The combination of prior information within an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the-art approaches.
6
Chen CL, Ishikawa H, Wollstein G, Bilonick RA, Sigal IA, Kagemann L, Schuman JS. Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images. Invest Ophthalmol Vis Sci 2015; 56:3810-9. [PMID: 26066749] [PMCID: PMC4468911] [DOI: 10.1167/iovs.15-16502] [Received: 01/20/2015] [Accepted: 04/28/2015]
Abstract
PURPOSE: We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method.
METHODS: We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1-10) at the same visit. For each eye, the histogram of the image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping each input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as the original and HM measurements, respectively) and compared with the device output (device measurements). Nonlinear mixed-effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SS, which kept RNFL thickness within the variability margin of the manufacturer-recommended SS range (6-10), was determined for the device, original, and HM measurements.
RESULTS: The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing.
CONCLUSIONS: The HM method successfully extended the acceptable SS range of OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the application of OCT to a wider range of subjects.
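The core operation in this study, shaping an input histogram into a reference histogram, is classical CDF-based histogram specification. A generic numpy sketch follows, for illustration only; it is not the authors' implementation and ignores OCT-specific preprocessing:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source intensities so their distribution matches the reference.

    Each source value is replaced by the reference value found at the same
    cumulative-probability rank (classic histogram specification).
    """
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)
```

Because the mapping is monotonic in intensity, it changes the gray-level distribution without altering the spatial ordering of structures, which is why layer segmentation can still run on the matched image.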
Affiliation(s)
- Chieh-Li Chen, Hiroshi Ishikawa, Gadi Wollstein, Ian A. Sigal, Larry Kagemann, Joel S. Schuman
- UPMC Eye Center, Eye and Ear Institute, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, United States
- Department of Bioengineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Richard A. Bilonick
- UPMC Eye Center, Eye and Ear Institute, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, United States
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
7
Astola L, Molenaar J. A New Modified Histogram Matching Normalization for Time Series Microarray Analysis. Microarrays (Basel) 2014; 3:203-11. [PMID: 27600344] [PMCID: PMC4996360] [DOI: 10.3390/microarrays3030203] [Received: 03/24/2014] [Revised: 06/19/2014] [Accepted: 06/25/2014]
Abstract
Microarray data are often used to infer regulatory networks. Quantile normalization (QN) is a popular method for reducing array-to-array variation. We show that in the context of time series measurements, QN may not be the best choice for this task, especially when the inference is based on a continuous-time ODE model. We propose an alternative normalization method that is better suited to network inference from time series data.
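For context, the quantile normalization baseline that the paper argues against can be sketched in numpy. This is a naive illustration (ties are broken by sort order rather than by rank averaging), not the authors' proposed method:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile normalization of a genes-by-arrays matrix.

    Every column (array) is forced onto the same distribution: the
    across-array mean of the sorted columns.
    """
    order = np.argsort(X, axis=0)       # sort permutation of each column
    ranks = np.argsort(order, axis=0)   # rank of each entry in its column
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)
    return mean_quantiles[ranks]        # assign the mean quantile per rank
```

Because every array ends up with an identical value distribution, QN can flatten genuine global shifts over time, which is exactly the concern the authors raise for time series data.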
Affiliation(s)
- Laura Astola
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5612 AZ, The Netherlands
- Jaap Molenaar
- Biometris, Wageningen University and Research Centre, Wageningen 6708 PB, The Netherlands; Wageningen Centre for Systems Biology, Wageningen 6700 AC, The Netherlands
8
Roy S, Carass A, Jog A, Prince JL, Lee J. MR to CT Registration of Brains using Image Synthesis. Proc SPIE Int Soc Opt Eng 2014; 9034. [PMID: 25057341] [PMCID: PMC4104818] [DOI: 10.1117/12.2043954]
Abstract
Computed tomography (CT) is the standard imaging modality for patient dose calculation in radiation therapy. Magnetic resonance imaging (MRI) is used along with CT to identify brain structures because of its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as the similarity metric to register MRI to CT. However, unlike CT, MRI has no accepted calibrated intensity scale; MI-based MR-CT registration may therefore vary from scan to scan, as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration that synthesizes a CT image from the MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas, and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using deformable registration, and the computed deformation is applied to the MRI. In contrast to most existing methods, no manual intervention such as picking landmarks or regions of interest is needed. The proposed method was validated on ten brain cancer patient cases, showing a 25% improvement in MI and correlation between the MR and CT images after registration compared with state-of-the-art registration methods.
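The MI similarity metric discussed above is estimated from the joint intensity histogram of the two images, which is why it is sensitive to intensity-scale drift. A minimal numpy sketch (ours, for illustration; real registration packages use smoothed or normalized variants):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two co-registered images.

    Estimated from the joint intensity histogram:
    MI(A, B) = sum_ab p(a, b) * log( p(a, b) / (p(a) * p(b)) ).
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                      # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)           # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)           # marginal of B
    nonzero = p_ab > 0                              # avoid log(0)
    return float(np.sum(p_ab[nonzero]
                        * np.log(p_ab[nonzero] / (p_a @ p_b)[nonzero])))
```

MI is maximal when one image's intensities are a deterministic function of the other's, which is what makes it usable across modalities where a linear intensity relation does not hold.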
Affiliation(s)
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation
- Aaron Carass, Amod Jog, Jerry L. Prince
- Image Analysis and Communications Laboratory, The Johns Hopkins University
- Junghoon Lee
- Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine