1. Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K. Deep learning-based lung image registration: A review. Comput Biol Med 2023;165:107434. [PMID: 37696177] [DOI: 10.1016/j.compbiomed.2023.107434]
Abstract
Lung image registration can effectively describe the relative motion of lung tissues, thereby helping to solve a series of problems in clinical applications. Because the lungs are soft, fairly passive organs, they are influenced by respiration and heartbeat, resulting in discontinuous lung motion and large deformation of anatomic features. This poses great challenges for accurate registration of lung images and its applications. The recent application of deep learning (DL) methods to medical image registration has produced promising results. However, a versatile registration framework has not yet emerged because the registration challenges differ across regions of interest (ROI), and DL-based image registration methods developed for other ROIs cannot achieve satisfactory results in the lungs. In addition, few review articles are available on DL-based lung image registration. In this review, the development of conventional methods for lung image registration is briefly described, and a more comprehensive survey of DL-based methods for lung image registration is presented. The DL-based methods are classified by supervision type: fully supervised, weakly supervised, and unsupervised. The contributions of researchers in addressing various challenges are described, as are the limitations of these approaches. This review also presents a comprehensive statistical analysis of the cited papers in terms of evaluation metrics and loss functions. In addition, publicly available datasets for lung image registration are summarized. Finally, the remaining challenges and potential trends in DL-based lung image registration are discussed.
Affiliation(s)
- Hanguang Xiao, Xufeng Xue, Mi Zhu, Xin Jiang, Qingling Xia, Kai Chen, Huanqi Li, Li Long, Ke Peng: College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
2. Jiang J, Hong J, Tringale K, Reyngold M, Crane C, Tyagi N, Veeraraghavan H. Progressively refined deep joint registration segmentation (ProRSeg) of gastrointestinal organs at risk: Application to MRI and cone-beam CT. Med Phys 2023;50:4758-4774. [PMID: 37265185] [PMCID: PMC11009869] [DOI: 10.1002/mp.16527]
Abstract
BACKGROUND: Adaptive radiation treatment (ART) for locally advanced pancreatic cancer (LAPC) requires consistently accurate segmentation of the extremely mobile gastrointestinal (GI) organs at risk (OAR), including the stomach, duodenum, and large and small bowel. Also, due to the lack of sufficiently accurate and fast deformable image registration (DIR), accumulated dose to the GI OARs is currently only approximated, further limiting the ability to adapt treatments more precisely. PURPOSE: To develop a 3D progressively refined joint registration-segmentation (ProRSeg) deep network to deformably align and segment treatment-fraction magnetic resonance images (MRIs), then evaluate segmentation accuracy, registration consistency, and feasibility for OAR dose accumulation. METHODS: ProRSeg was trained using five-fold cross-validation with 110 T2-weighted MRIs acquired at five treatment fractions from 10 different patients, taking care that scans from the same patient were not placed in both training and testing folds. Segmentation accuracy was measured using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). Registration consistency was measured using the coefficient of variation (CV) in displacement of the OARs. Statistical comparisons to other deep learning and iterative registration methods were done using the Kruskal-Wallis test, followed by pairwise comparisons with Bonferroni correction for multiple testing. Ablation tests and accuracy comparisons against multiple methods were performed. Finally, the applicability of ProRSeg to segmenting cone-beam CT (CBCT) scans was evaluated on a publicly available dataset of 80 scans using five-fold cross-validation. RESULTS: ProRSeg processed 3D volumes (128 × 192 × 128) in 3 s on an NVIDIA Tesla V100 GPU. Its segmentations were significantly more accurate (p < 0.001) than the compared methods, achieving DSCs of 0.94 ± 0.02 for liver, 0.88 ± 0.04 for large bowel, 0.78 ± 0.03 for small bowel, and 0.82 ± 0.04 for stomach-duodenum from MRI, and 0.72 ± 0.01 for small bowel and 0.76 ± 0.03 for stomach-duodenum on the public CBCT dataset. ProRSeg registrations resulted in the lowest CV in displacement (stomach-duodenum: CVx 0.75%, CVy 0.73%, CVz 0.81%; small bowel: CVx 0.80%, CVy 0.80%, CVz 0.68%; large bowel: CVx 0.71%, CVy 0.81%, CVz 0.75%). ProRSeg-based dose accumulation accounting for intra-fraction (pre-treatment to post-treatment MRI) and inter-fraction motion showed that organ dose constraints were violated in four patients for stomach-duodenum and in three patients for small bowel. Study limitations include the lack of independent testing and of ground-truth phantom datasets to measure dose accumulation accuracy. CONCLUSIONS: ProRSeg produced more accurate and consistent GI OAR segmentation and DIR of MRIs and CBCTs than multiple compared methods. Preliminary results indicate the feasibility of OAR dose accumulation using ProRSeg.
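The two headline metrics above, DSC and the per-axis coefficient of variation in displacement, are simple to state. Below is a minimal sketch in Python/NumPy, assuming binary masks and a (3, D, H, W) displacement field restricted to one OAR; it illustrates the metrics only and is not code from the ProRSeg paper.

```python
# Minimal sketch (assumptions: binary masks; (3, D, H, W) displacement field).
import numpy as np

def dice(a, b):
    # DSC = 2|A ∩ B| / (|A| + |B|)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def displacement_cv(dvf, mask):
    # Coefficient of variation (std/mean) of displacement along x, y, z.
    d = dvf[:, mask]                       # (3, Nvoxels) displacements in the OAR
    return d.std(axis=1) / np.abs(d.mean(axis=1))

pred = np.zeros((32, 64, 64), bool); pred[10:20, 20:40, 20:40] = True
ref = np.zeros_like(pred);  ref[12:22, 20:40, 20:40] = True
print(f"DSC = {dice(pred, ref):.3f}")

dvf = np.random.normal(loc=2.0, scale=0.015, size=(3, 32, 64, 64))
print("CV per axis:", displacement_cv(dvf, ref))
```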
Affiliation(s)
- Jue Jiang, Jun Hong, Neelam Tyagi, Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Kathryn Tringale, Marsha Reyngold, Christopher Crane: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
3. He Y, Wang A, Li S, Hao A. Hierarchical anatomical structure-aware based thoracic CT images registration. Comput Biol Med 2022;148:105876. [PMID: 35863247] [DOI: 10.1016/j.compbiomed.2022.105876]
Abstract
Accurate thoracic CT image registration remains challenging due to complex joint deformations and different motion patterns in multiple organs/tissues during breathing. To combat this, we devise a hierarchical anatomical structure-aware registration framework. It affords a coordination scheme necessary for constraining a general free-form deformation (FFD) during thoracic CT registration. The key is to integrate the deformations of different anatomical structures in a divide-and-conquer way. Specifically, a deformation ability-aware dissimilarity metric is proposed for complex joint deformations comprising large-scale flexible deformation of the lung region, rigid displacement of the bone region, and small-scale flexible deformation of the remaining regions. Furthermore, a motion pattern-aware regularization is devised to handle the different motion patterns: sliding motion along the lung surface, almost no displacement of the spine, and smooth deformation of other regions. Moreover, to accommodate large-scale deformation, a novel hierarchical strategy, wherein different anatomical structures are fused on the same control lattice, registers images from coarse to fine via elaborate Gaussian pyramids. Extensive experiments and comprehensive evaluations were conducted on the 4D-CT DIR and 3D DIR COPD datasets. They confirm that the newly proposed method is locally comparable to state-of-the-art registration methods specializing in local deformations, while guaranteeing overall accuracy. Additionally, in contrast to current popular learning-based methods, which typically require dozens of hours or more of pre-training with powerful graphics cards, our method takes an average of only 63 s to register a case on an ordinary RTX 2080 SUPER graphics card, making it still worth promoting. Our code is available at https://github.com/heluxixue/Structure_Aware_Registration/tree/master.
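The divide-and-conquer idea, penalizing image dissimilarity differently for lung, bone, and remaining tissue, can be sketched as a region-weighted sum of per-region terms. The sketch below is a hypothetical simplification assuming precomputed region masks and SSD as the per-region dissimilarity; the paper's deformation ability-aware metric is more elaborate.

```python
# Minimal sketch (assumptions: precomputed region masks, SSD per region,
# hand-set weights; not the paper's metric).
import numpy as np

def region_weighted_ssd(fixed, warped, masks, weights):
    # Sum per-region SSD terms, each scaled by how stiff that region should be.
    return sum(w * ((fixed - warped)[m] ** 2).mean()
               for m, w in zip(masks, weights))

fixed = np.random.rand(32, 64, 64)
warped = fixed + 0.01 * np.random.randn(32, 64, 64)
lung = fixed > 0.6; bone = fixed < 0.2; rest = ~(lung | bone)
loss = region_weighted_ssd(fixed, warped, [lung, bone, rest],
                           weights=[1.0, 4.0, 2.0])  # stiffer regions weigh more
```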
Affiliation(s)
- Yuanbo He: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Peng Cheng Laboratory, Shenzhen 518055, China
- Aoyu Wang: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
- Shuai Li, Aimin Hao: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Peng Cheng Laboratory, Shenzhen 518055, China
4. Target organ non-rigid registration on abdominal CT images via deep-learning based detection. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102976]
5. Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021;66. [PMID: 34102630] [DOI: 10.1088/1361-6560/ac093b]
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliation(s)
- Andre Z Kyme: School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Roger R Fulton: Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, Australia
6. Shah KD, Shackleford JA, Kandasamy N, Sharp GC. A generalized framework for analytic regularization of uniform cubic B-spline displacement fields. Biomed Phys Eng Express 2021;7. [PMID: 33878749] [DOI: 10.1088/2057-1976/abf9e6]
Abstract
Image registration is an inherently ill-posed problem that lacks the constraints needed for a unique mapping between voxels of the two images being registered. As such, one must regularize the registration to achieve physically meaningful transforms. The regularization penalty is usually a function of derivatives of the displacement vector field and can be calculated either analytically or numerically. The numerical approach, however, is computationally expensive depending on the image size, so a computationally efficient analytic framework was developed. Using cubic B-splines as the registration transform, we develop a generalized mathematical framework that supports five distinct regularizers: diffusion, curvature, linear elastic, third-order, and total displacement. We validate our approach by comparing each with its numerical counterpart in terms of accuracy. We also provide benchmarking results showing that the analytic solutions run significantly faster, by up to two orders of magnitude, than finite-difference-based numerical implementations.
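For context, the numerical baseline the authors benchmark against evaluates the regularizer with finite differences on a dense displacement field. Below is a minimal sketch of the diffusion penalty under that assumption (unit grid spacing, central differences); the paper instead evaluates the penalty analytically on the B-spline coefficients.

```python
# Minimal sketch (assumption: dense (3, D, H, W) displacement field on a
# unit-spaced grid; the paper works analytically on B-spline coefficients).
import numpy as np

def diffusion_penalty(dvf):
    # S = 0.5 * sum over components i and axes j of (du_i/dx_j)^2.
    return 0.5 * sum((np.gradient(dvf[i], axis=ax) ** 2).sum()
                     for i in range(3) for ax in range(3))

dvf = 0.5 * np.random.randn(3, 16, 32, 32)
print(diffusion_penalty(dvf))
```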
Affiliation(s)
- Keyur D Shah, James A Shackleford, Nagarajan Kandasamy: Electrical and Computer Engineering Department, Drexel University, Philadelphia, PA 19104, USA
- Gregory C Sharp: Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA 02114, USA
7. Zhang Y, Wu X, Gach HM, Li H, Yang D. GroupRegNet: a groupwise one-shot deep learning-based 4D image registration method. Phys Med Biol 2021;66:045030. [PMID: 33412539] [DOI: 10.1088/1361-6560/abd956]
Abstract
Accurate deformable registration of four-dimensional (4D; three-dimensional in space plus time) medical images is essential in a variety of medical applications. Deep learning-based methods have recently gained popularity in this area for their significantly lower inference time. However, they suffer from non-optimal accuracy and the requirement of a large amount of training data. A new method named GroupRegNet is proposed to address both limitations. The deformation fields that warp all images in the group into a common template are obtained through one-shot learning. The use of an implicit template reduces the bias and accumulated error associated with a specified reference image. The one-shot learning strategy is similar to conventional iterative optimization, but the motion model and its parameters are replaced with a convolutional neural network and its weights. GroupRegNet also features a simpler network design and a more straightforward registration process, eliminating the need to break the input image into patches. The proposed method was quantitatively evaluated on two public respiratory-binned 4D computed tomography datasets. The results suggest that GroupRegNet outperforms the latest published deep learning-based methods and is comparable to the top conventional method, pTVreg. To facilitate future research, the source code is available at https://github.com/vincentme/GroupRegNet.
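The one-shot idea, fitting parameters directly on the image group instead of pre-training on a dataset, reduces to a short optimization loop. The sketch below is a hypothetical 2D toy in PyTorch: a free-form displacement tensor stands in for GroupRegNet's CNN, and the squared deviation of the warped group from its mean acts as the implicit-template similarity.

```python
# Minimal sketch (assumptions: 2D toy, free-form displacements instead of a
# CNN, SSD-to-mean groupwise similarity; not the paper's network).
import torch
import torch.nn.functional as F

def warp(img, dvf):
    # img: (N,1,H,W); dvf: (N,2,H,W), displacements in normalized [-1,1] coords.
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(img, base + dvf.permute(0, 2, 3, 1), align_corners=True)

phases = torch.rand(10, 1, 64, 64)                    # ten respiratory phases
dvf = torch.zeros(10, 2, 64, 64, requires_grad=True)  # stand-in for CNN weights
opt = torch.optim.Adam([dvf], lr=1e-2)
for _ in range(200):                                  # one-shot fit on this group
    warped = warp(phases, dvf)
    template = warped.mean(dim=0, keepdim=True)       # implicit common template
    loss = ((warped - template) ** 2).mean()          # groupwise similarity
    loss = loss + 1e-2 * (dvf[:, :, 1:, :] - dvf[:, :, :-1, :]).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Replacing the displacement tensor with a CNN whose output is the deformation fields recovers the paper's setup, with the network weights as the optimization variables.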
Affiliation(s)
- Yunlu Zhang: Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO 63110, USA
8. Huttinga NRF, Bruijnen T, van den Berg CAT, Sbrizzi A. Nonrigid 3D motion estimation at high temporal resolution from prospectively undersampled k-space data using low-rank MR-MOTUS. Magn Reson Med 2020;85:2309-2326. [PMID: 33169888] [PMCID: PMC7839760] [DOI: 10.1002/mrm.28562]
Abstract
Purpose: With the recent introduction of the MR-LINAC, an MR scanner combined with a radiotherapy LINAC, MR-based motion estimation has become of increasing interest for (retrospectively) characterizing tumor and organ-at-risk motion during radiotherapy. To this end, we introduce low-rank MR-MOTUS, a framework to retrospectively reconstruct time-resolved nonrigid 3D+t motion fields from a single low-resolution reference image and prospectively undersampled k-space data acquired during motion. Theory: Low-rank MR-MOTUS exploits spatiotemporal correlations in internal body motion with a low-rank motion model and inverts a signal model that relates motion fields directly to a reference image and k-space data. The low-rank model reduces the degrees of freedom, memory consumption, and reconstruction times by assuming a factorization of space-time motion fields into spatial and temporal components. Methods: Low-rank MR-MOTUS was employed to estimate motion in 2D/3D abdominothoracic scans and 3D head scans. Data were acquired using golden-ratio radial readouts. Reconstructed 2D and 3D respiratory motion fields were validated against time-resolved and respiratory-resolved image reconstructions, respectively, and the head motion against static image reconstructions from fully sampled data acquired right before and right after the motion. Results: 2D+t respiratory motion can be estimated retrospectively at 40.8 motion fields per second, 3D+t respiratory motion at 7.6 motion fields per second, and 3D+t head-neck motion at 9.3 motion fields per second. The validations show good consistency with image reconstructions. Conclusions: The proposed framework can estimate time-resolved nonrigid 3D motion fields, which allows characterization of drifts and intra- and inter-cycle patterns in breathing motion during radiotherapy, and could form the basis for real-time MR-guided radiotherapy.
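The central modeling assumption, that space-time motion fields factor into spatial and temporal components, is a low-rank matrix factorization. The sketch below builds a synthetic low-rank motion matrix and recovers rank-R factors with a truncated SVD; it illustrates the model only, whereas MR-MOTUS fits the factors directly to undersampled k-space data.

```python
# Minimal sketch (assumption: motion fields stacked as a (3*Nvox, Ntime)
# matrix; MR-MOTUS reconstructs Phi and Psi from k-space data instead).
import numpy as np

rng = np.random.default_rng(0)
nvox, ntime, rank = 5000, 120, 3
phi_true = rng.standard_normal((3 * nvox, rank))   # spatial basis
psi_true = rng.standard_normal((ntime, rank))      # temporal weights
dvf = phi_true @ psi_true.T + 0.01 * rng.standard_normal((3 * nvox, ntime))

u, s, vt = np.linalg.svd(dvf, full_matrices=False)
phi, psi = u[:, :rank] * s[:rank], vt[:rank].T     # DVF ≈ phi @ psi.T
err = np.linalg.norm(phi @ psi.T - dvf) / np.linalg.norm(dvf)
print(f"rank-{rank} relative error: {err:.4f}")    # small: motion is low-rank
```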
Affiliation(s)
- Niek R F Huttinga, Tom Bruijnen, Cornelis A T van den Berg, Alessandro Sbrizzi: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
9. Fu Y, Lei Y, Wang T, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching. Med Image Anal 2020;67:101845. [PMID: 33129147] [DOI: 10.1016/j.media.2020.101845]
Abstract
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud-based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; the network therefore implicitly models the underlying biomechanical constraint when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). The Jacobian determinant and strain tensors of the predicted deformation field were calculated to analyze its physical fidelity. On average, the mean and standard deviation were 0.94 ± 0.02 for DSC, 0.90 ± 0.23 mm for MSD, 2.96 ± 1.00 mm for HD, and 1.57 ± 0.77 mm for TRE. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrated that the proposed method can rapidly perform MR-TRUS image registration with good accuracy and robustness.
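The Jacobian-determinant check mentioned above is a standard physical plausibility test for deformation fields: det J > 0 everywhere means the deformation does not fold. Below is a minimal NumPy sketch assuming a dense displacement field on a unit-spaced grid; the paper's exact computation may differ.

```python
# Minimal sketch (assumption: dvf is a dense displacement field on a regular
# unit-spaced grid, shape (3, D, H, W)).
import numpy as np

def jacobian_determinant(dvf):
    # Deformation map phi(x) = x + u(x), so J = I + grad(u).
    grads = np.stack([np.stack(np.gradient(dvf[i]), axis=0) for i in range(3)])
    jac = grads + np.eye(3)[:, :, None, None, None]   # (3, 3, D, H, W)
    jac = np.moveaxis(jac, (0, 1), (-2, -1))          # (D, H, W, 3, 3)
    return np.linalg.det(jac)

det = jacobian_determinant(np.zeros((3, 32, 32, 32)))
print((det <= 0).sum(), "folding voxels")             # 0 for the identity field
```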
Affiliation(s)
- Yabo Fu, Yang Lei: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, USA
- Tonghe Wang, Pretesh Patel, Ashesh B Jani, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, USA; Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao: Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322, USA
10. Fu Y, Ippolito JE, Ludwig DR, Nizamuddin R, Li HH, Yang D. Technical Note: Automatic segmentation of CT images for ventral body composition analysis. Med Phys 2020;47:5723-5730. [PMID: 32969050] [DOI: 10.1002/mp.14465]
Abstract
PURPOSE: Body composition is known to be associated with many diseases, including diabetes, cancer, and cardiovascular disease. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments related to body composition analysis: subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments (the ventral cavity, lungs, and bones) were also segmented during the process to assist segmentation of the major compartments. METHODS: A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) scan using the CNN model, then further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. Segmenting the ventral cavity first is important because it allows accurate separation of compartments with similar Hounsfield units (HU) inside and outside the cavity. RESULTS: The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. Dice scores (mean ± standard deviation) for ventral cavity segmentation were 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets. CONCLUSION: A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
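To make the hysteresis-thresholding step concrete, the sketch below gates a CT volume to an adipose HU band and grows a fat mask from strongly fat-like seed voxels. The HU cutoffs and the cleanup step are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch (assumptions: adipose tissue roughly in [-190, -30] HU;
# seed/expansion thresholds are illustrative, not the paper's values).
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import remove_small_objects

ct = np.random.randint(-1000, 1000, (64, 256, 256)).astype(np.int16)  # stand-in CT

fat_band = (ct > -190) & (ct < -30)    # coarse HU gate for adipose tissue
# Hysteresis on negated HU: strong fat voxels (< -100 HU) seed the mask,
# which then grows through weaker fat voxels (< -30 HU) connected to them.
fat = apply_hysteresis_threshold(-ct.astype(float), 30, 100) & fat_band
fat = remove_small_objects(fat, min_size=64)   # morphological cleanup
```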
Affiliation(s)
- Yabo Fu, Joseph E Ippolito, Daniel R Ludwig, Rehan Nizamuddin, Harold H Li, Deshan Yang: Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO 63110, USA
11. Nie X, Huang K, Deasy J, Rimner A, Li G. Enhanced super-resolution reconstruction of T1w time-resolved 4DMRI in low-contrast tissue using 2-step hybrid deformable image registration. J Appl Clin Med Phys 2020;21:25-39. [PMID: 32961002] [PMCID: PMC7592986] [DOI: 10.1002/acm2.12988]
Abstract
Purpose: Deformable image registration (DIR) in low-contrast tissues is often suboptimal because of the low visibility of landmarks, low driving force to deform, and low penalty for misalignment. We aim to overcome these shortcomings for improved reconstruction of time-resolved four-dimensional magnetic resonance imaging (TR-4DMRI). Methods and Materials: Super-resolution TR-4DMRI reconstruction uses DIR to combine high-resolution (highR: 2 × 2 × 2 mm³) breath-hold (BH) and low-resolution (lowR: 5 × 5 × 5 mm³) free-breathing (FB) 3D cine (2 Hz) images to achieve clinically acceptable spatiotemporal resolution. A 2-step hybrid DIR approach was developed to segment low-dynamic-range (LDR) regions, the low-intensity lungs and the high-intensity "bodyshell" (body minus lungs), for DIR refinement after conventional DIR. The intensity in LDR regions was renormalized to the full dynamic range (FDR) to enhance local tissue contrast. A T1-mapped 4D XCAT digital phantom was created, and seven volunteers and five lung cancer patients were scanned with two BH and one 3D cine series per subject to compare the 1-step conventional and 2-step hybrid DIR using (a) the ground truth in the phantom, (b) highR BH references, which were used to simulate 3D cine images by down-sampling and adding Rayleigh noise, and (c) cross-verification between two TR-4DMRI images reconstructed from two BHs. To assess DIR improvement, 8-17 blood vessel bifurcations were used in the volunteers, and lung tumor position, size, and shape were used in the phantom and patients, together with voxel intensity correlation (VIC), structural similarity (SSIM), and a cross-consistency check (CCC). Results: The 2-step hybrid DIR improves contrast and DIR accuracy. In volunteers, it improves low-contrast alignment from 6.5 ± 1.8 mm to 3.3 ± 1.0 mm. In the phantom, it improves tumor center-of-mass alignment (COM = 1.3 ± 0.2 mm) and minimizes the DIR directional difference. In patients, it produces almost identical tumor COM, size, and shape (Dice > 0.85) to the reference. VIC and SSIM are significantly increased, and the number of CCC outliers is reduced by half. Conclusion: The 2-step hybrid DIR improves low-contrast tissue alignment and increases lung tumor fidelity, and is recommended for TR-4DMRI reconstruction.
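The LDR-to-FDR renormalization can be pictured as a per-region contrast stretch. The sketch below is a hypothetical min-max version; the mask and target range are illustrative, and the paper's renormalization may differ.

```python
# Minimal sketch (assumption: simple min-max stretch of a masked region to the
# image's full dynamic range; not necessarily the paper's mapping).
import numpy as np

def renormalize_region(img, mask, full_range):
    lo, hi = full_range
    region = img[mask]
    out = img.astype(float).copy()
    out[mask] = (region - region.min()) / max(region.ptp(), 1e-6) * (hi - lo) + lo
    return out

vol = np.random.rand(32, 64, 64) * 200           # stand-in MRI volume
lungs = vol < 40                                 # crude low-intensity lung mask
enhanced = renormalize_region(vol, lungs, (0.0, vol.max()))
```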
Affiliation(s)
- Xingyu Nie, Kirk Huang, Joseph Deasy, Guang Li: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Andreas Rimner: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
12. Lei Y, Fu Y, Wang T, Liu Y, Patel P, Curran WJ, Liu T, Yang X. 4D-CT deformable image registration using multiscale unsupervised deep learning. Phys Med Biol 2020;65:085003. [PMID: 32097902] [PMCID: PMC7775640] [DOI: 10.1088/1361-6560/ab79c4]
Abstract
Deformable image registration (DIR) of 4D-CT images is important in multiple radiation therapy applications, including motion tracking of soft tissue or fiducial markers, target definition, image fusion, dose accumulation, and treatment response evaluation. Registering abdominal 4D-CT images accurately and quickly is very challenging because of their large appearance variance and bulky size. In this study, we proposed an accurate and fast multiscale DIR network (MS-DIRNet) for abdominal 4D-CT registration. MS-DIRNet consists of a global network (GlobalNet) and a local network (LocalNet); GlobalNet was trained using down-sampled whole image volumes, while LocalNet was trained using sampled image patches. MS-DIRNet comprises a generator and a discriminator. The generator, implemented as a convolutional neural network with multiple attention gates, was trained to directly predict a deformation vector field (DVF) from the moving and target images. The discriminator was trained to differentiate the deformed images from the target images, providing additional DVF regularization. The loss function of MS-DIRNet includes three parts: an image similarity loss, an adversarial loss, and a DVF regularization loss. MS-DIRNet was trained in a completely unsupervised manner, meaning that ground-truth DVFs are not needed. Unlike traditional DIR methods that calculate the DVF iteratively, MS-DIRNet calculates the final DVF in a single forward prediction, which can significantly expedite the DIR process. MS-DIRNet was trained and tested on 25 patients' 4D-CT datasets using five-fold cross-validation. For registration accuracy evaluation, target registration errors (TREs) of MS-DIRNet were compared to clinically used software. Our results showed that MS-DIRNet, with an average TRE of 1.2 ± 0.8 mm, outperformed the commercial software, with an average TRE of 2.5 ± 0.8 mm, in 4D-CT abdominal DIR, demonstrating the superior performance of our method in fiducial marker tracking and overall soft tissue alignment.
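The three-part unsupervised loss can be sketched term by term. Below, global normalized cross-correlation stands in for the image similarity term and an L2 gradient penalty for the DVF regularization term, with the adversarial term omitted; these are common stand-ins assumed for illustration, not the paper's exact choices.

```python
# Minimal sketch (assumptions: global NCC similarity, L2 gradient smoothness,
# adversarial term omitted; the paper's terms may differ).
import numpy as np

def ncc_loss(a, b):
    a, b = a - a.mean(), b - b.mean()
    return 1.0 - (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum() + 1e-8)

def smoothness_loss(dvf):                     # dvf: (3, D, H, W)
    return sum((np.gradient(dvf[i], axis=ax) ** 2).mean()
               for i in range(3) for ax in range(3))

def total_loss(warped, target, dvf, lam=0.1):
    return ncc_loss(warped, target) + lam * smoothness_loss(dvf)

warped, target = np.random.rand(2, 32, 64, 64)
dvf = 0.1 * np.random.randn(3, 32, 64, 64)
print(total_loss(warped, target, dvf))
```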
Affiliation(s)
- Yang Lei: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
13. Fu Y, Lei Y, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Med Phys 2020;47:1763-1774. [PMID: 32017141] [PMCID: PMC7165051] [DOI: 10.1002/mp.14065]
Abstract
PURPOSE: To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to predict the deformation vector field (DVF) in a few forward predictions. We developed an unsupervised deep learning method for 4D-CT lung DIR with excellent performance in terms of registration accuracy, robustness, and computational speed. METHODS: A fast and accurate 4D-CT lung DIR method, named LungRegNet, was proposed using deep learning. LungRegNet consists of two subnetworks, CoarseNet and FineNet. As the names suggest, CoarseNet predicts large lung motion on a coarse-scale image, while FineNet predicts local lung motion on a fine-scale image. Both CoarseNet and FineNet include a generator and a discriminator. The generator was trained to directly predict the DVF that deforms the moving image; the discriminator was trained to distinguish the deformed images from the original images. CoarseNet was trained first to deform the moving images, and the deformed images were then used to train FineNet. To increase registration accuracy, we generated vessel-enhanced images by computing pulmonary vasculature probability maps prior to network prediction. RESULTS: We performed five-fold cross-validation on ten 4D-CT datasets from our department. For comparison with other methods, we also tested our method on 10 separate DIRLAB datasets, which provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggest that LungRegNet achieves better registration accuracy in terms of TRE than other deep learning-based methods reported in the literature on the DIRLAB datasets. Compared to conventional DIR methods, LungRegNet generated comparable registration accuracy with TRE smaller than 2 mm. Integrating both the discriminator and pulmonary vessel enhancement into the network was crucial to obtaining high registration accuracy for 4D-CT lung DIR. The mean and standard deviation of TRE were 1.00 ± 0.53 mm and 1.59 ± 1.58 mm on our datasets and the DIRLAB datasets, respectively. CONCLUSIONS: An unsupervised deep learning-based method has been developed to rapidly and accurately register 4D-CT lung images. LungRegNet outperformed its deep learning-based peers and achieved excellent registration accuracy in terms of TRE.
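TRE on DIRLAB landmark pairs is the headline metric here and in several neighboring entries. The sketch below shows one common way to compute it, propagating moving landmarks through the DVF and measuring the millimeter distance to their fixed counterparts; the array layout, nearest-voxel sampling, and spacing handling are assumptions, not LungRegNet's code.

```python
# Minimal sketch (assumptions: landmarks as (N, 3) voxel coordinates,
# nearest-voxel DVF sampling; DIRLAB provides 300 pairs per case).
import numpy as np

def target_registration_error(moving_pts, fixed_pts, dvf, spacing=(1.0, 1.0, 1.0)):
    # Propagate each moving landmark by the DVF sampled at its rounded voxel.
    idx = np.round(moving_pts).astype(int)
    disp = dvf[:, idx[:, 0], idx[:, 1], idx[:, 2]].T        # (N, 3)
    mapped = moving_pts + disp
    return np.linalg.norm((mapped - fixed_pts) * np.asarray(spacing), axis=1)

dvf = np.zeros((3, 64, 64, 64))                             # identity field
pts = np.random.rand(300, 3) * 63
print(target_registration_error(pts, pts, dvf).mean())      # 0.0 for identity
```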
Affiliation(s)
- Yabo Fu, Yang Lei, Tonghe Wang, Kristin Higgins, Jeffrey D Bradley, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
14. Gong L, Duan L, Dai Y, He Q, Zuo S, Fu T, Yang X, Zheng J. Locally Adaptive Total p-Variation Regularization for Non-Rigid Image Registration With Sliding Motion. IEEE Trans Biomed Eng 2020;67:2560-2571. [PMID: 31940514] [DOI: 10.1109/TBME.2020.2964695]
Abstract
Because complicated thoracic movements contain both sliding motion at the lung surfaces and smooth motion within individual organs, respiratory motion estimation is an intrinsically challenging task. In this paper, we propose a novel regularization term called locally adaptive total p-variation (LaTpV) and embed it into a parametric registration framework to accurately recover lung motion. LaTpV originates from a modified Lp-norm constraint (1 < p < 2), where a prior distribution on p, modeled by a Dirac-shaped function, assigns different values to individual voxels. LaTpV adaptively balances the smoothness and discontinuity of the displacement field to encourage an expected sliding interface. We also analytically derive the gradient of the cost function with respect to the transformation parameters. To validate the performance of LaTpV, we test it not only on two mono-modal databases, including synthetic images and pulmonary computed tomography (CT) images, but also, for the first time, on a more difficult thoracic CT and positron emission tomography (PET) dataset. For all experiments, both the quantitative and qualitative results indicate that LaTpV significantly surpasses existing regularizers such as bending energy and parametric total variation. The proposed LaTpV-based registration scheme may be better suited to sliding motion correction and holds potential for clinical applications such as the diagnosis of pleural mesothelioma and the adjustment of radiotherapy plans.
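The regularizer itself is compact: a p-norm of the displacement gradient magnitude with a voxel-wise exponent, where smaller p tolerates sharper (sliding) discontinuities. The sketch below is a minimal NumPy rendering with a hand-set p map; LaTpV's Dirac-shaped prior on p and its analytic gradients are not reproduced.

```python
# Minimal sketch (assumptions: central differences, a scalar p per voxel;
# not the paper's full formulation).
import numpy as np

def total_p_variation(dvf, p_map, eps=1e-8):
    # dvf: (3, D, H, W); p_map: (D, H, W) with values in (1, 2).
    grad_mag2 = np.zeros(dvf.shape[1:])
    for i in range(3):
        for g in np.gradient(dvf[i]):
            grad_mag2 += g ** 2
    return ((grad_mag2 + eps) ** (p_map / 2.0)).sum()

dvf = 0.1 * np.random.randn(3, 16, 32, 32)
p_map = np.full(dvf.shape[1:], 1.2)   # smaller p near expected sliding interfaces
print(total_p_variation(dvf, p_map))
```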
15. Fu Y, Wu X, Thomas AM, Li HH, Yang D. Automatic large quantity landmark pairs detection in 4DCT lung images. Med Phys 2019;46:4490-4501. [PMID: 31318989] [PMCID: PMC8311742] [DOI: 10.1002/mp.13726]
Abstract
PURPOSE: To automatically and precisely detect a large quantity of landmark pairs between two lung computed tomography (CT) images to support evaluation of deformable image registration (DIR). We expect the generated landmark pairs to significantly augment the current lung CT benchmark datasets in both quantity and positional accuracy. METHODS: A large number of landmark pairs were detected within the lung between the end-exhalation (EE) and end-inhalation (EI) phases of lung four-dimensional computed tomography (4DCT) datasets. Thousands of landmarks were detected by applying the Harris-Stephens corner detection algorithm to probability maps of the lung vasculature tree. A parametric image registration method (pTVreg) was used to establish initial landmark correspondence by registering the images at the EE and EI phases. A multi-stream pseudo-siamese (MSPS) network was then developed to further improve landmark pair positional accuracy by directly predicting three-dimensional (3D) shifts that optimally align the landmarks in EE to their counterparts in EI. Positional accuracies of the detected landmark pairs were evaluated using both digital phantoms and publicly available landmark pairs. RESULTS: Dense sets of landmark pairs were detected for 10 4DCT lung datasets, with an average of 1886 landmark pairs per case. The mean and standard deviation of target registration error (TRE) were 0.47 ± 0.45 mm, with 98% of landmark pairs having a TRE smaller than 2 mm, for 10 digital phantom cases. Tests using 300 manually labeled landmark pairs in 10 lung 4DCT benchmark datasets (DIRLAB) produced TRE results of 0.73 ± 0.53 mm, with 97% of landmark pairs having a TRE smaller than 2 mm. CONCLUSION: A new method was developed to automatically and precisely detect a large quantity of landmark pairs between lung CT image pairs. The detected landmark pairs could be used as benchmark datasets for more accurate and informative quantitative evaluation of DIR algorithms.
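As an illustration of the detection step, the sketch below runs Harris-Stephens corner detection on a 2D vesselness-probability slice with scikit-image. The parameters and the 2D restriction are assumptions for brevity; the paper detects corners on 3D vasculature probability maps.

```python
# Minimal sketch (assumptions: 2D slice of a vesselness/probability map,
# illustrative detector parameters).
import numpy as np
from skimage.feature import corner_harris, corner_peaks

vessel_prob = np.random.rand(256, 256)              # stand-in probability map
response = corner_harris(vessel_prob, sigma=2.0)    # Harris-Stephens response
landmarks = corner_peaks(response, min_distance=5)  # (N, 2) candidate landmarks
print(landmarks.shape)
```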
Affiliation(s)
- Yabo Fu, Xue Wu, Allan M. Thomas, Harold H. Li, Deshan Yang: Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
16. Takemura A, Nagano A, Kojima H, Ikeda T, Yokoyama N, Tsukamoto K, Noto K, Isomura N, Ueda S, Kawashima H. An uncertainty metric to evaluate deformation vector fields for dose accumulation in radiotherapy. Phys Imaging Radiat Oncol 2018;6:77-82. [PMID: 33458393] [PMCID: PMC7807581] [DOI: 10.1016/j.phro.2018.05.005]
Abstract
Background and purpose: In adaptive radiotherapy, deformable image registration (DIR) is used to propagate delineations of tumors and organs into a new therapy plan and to calculate the accumulated total dose. Many DIR accuracy metrics have been proposed; here we propose a local uncertainty (LU) metric for DIR results as an alternative. Materials and methods: The LU represents the uncertainty of each DIR position and is focused on evaluating deformation in uniformly dense regions. Four cases demonstrated the LU calculation: two head and neck cancer cases, a lung cancer case, and a prostate cancer case. Each patient underwent two CT examinations for radiotherapy planning. Results: LU maps were calculated from the DIR of each clinical case. Reduced fat regions had LUs of 4.6 ± 0.9 mm, 4.8 ± 1.0 mm, and 4.5 ± 0.7 mm, the shrunken left parotid gland had an LU of 4.1 ± 0.8 mm, and the shrunken lung tumor had an LU of 3.7 ± 0.7 mm. The bowels in the pelvic region had an LU of 10.2 ± 3.7 mm. LU histograms for the cases were similar, and 99% of the voxels had an LU < 3 mm. Conclusions: LU is a new uncertainty metric for DIR, demonstrated on clinical cases with a tolerance of <3 mm.
Affiliation(s)
- Akihiro Takemura, Hiroki Kawashima: Faculty of Health Sciences, Institution of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa 920-0942, Japan
- Akira Nagano: Division of Radiology, Okayama University Hospital, 2-5-1 Shikatacho, Kitaku, Okayama 700-8558, Japan
- Hironori Kojima: Department of Radiological Technology, Kanazawa University Hospital, 13-1 Takaramachi, Kanazawa 920-8641, Japan; Division of Health Sciences, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa 920-0942, Japan
- Tomohiro Ikeda: Department of Radiation Oncology, Southern Tohoku Proton Therapy Center, 7-115 Yatsuyamada, Koriyama-City, Fukushima-Pref. 963-8563, Japan
- Noriomi Yokoyama: Division of Health Sciences, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa 920-0942, Japan
- Kosuke Tsukamoto, Kimiya Noto, Naoki Isomura, Shinichi Ueda: Department of Radiological Technology, Kanazawa University Hospital, 13-1 Takaramachi, Kanazawa 920-8641, Japan