1
Li G, Wang G, Wei W, Li Z, Xiao Q, He H, Luo D, Chen L, Li J, Zhang X, Song Y, Bai S. Cardiorespiratory motion characteristics and their dosimetric impact on cardiac stereotactic body radiotherapy. Med Phys 2024. PMID: 38994881. DOI: 10.1002/mp.17284.
Abstract
BACKGROUND Cardiac stereotactic body radiotherapy (CSBRT) is an emerging and promising noninvasive technique for treating refractory arrhythmias using highly precise, single- or limited-fraction high-dose irradiation. This approach promises targeted therapy with minimal exposure of the surrounding healthy tissue. However, the dynamic nature of cardiorespiratory motion poses significant challenges to precise dose delivery in CSBRT, introducing variability that can impact treatment efficacy. The influence of cardiorespiratory motion on the dose distribution is compounded by interplay and blurring effects, which add further dose uncertainty. These effects, critical to understanding and improving the accuracy of CSBRT, remain unexplored, leaving a gap in the current clinical literature. PURPOSE To investigate cardiorespiratory motion characteristics in arrhythmia patients and the dosimetric impact of motion-induced interplay and blurring effects on CSBRT plan quality. METHODS Position and volume variations of the substrate target and cardiac substructures were evaluated in 12 arrhythmia patients using the displacement maximum (DMX) and volume metrics. Moreover, a four-dimensional (4D) dose reconstruction approach was employed to examine the dose uncertainty due to cardiorespiratory motion. RESULTS Cardiac pulsation induced lower DMX than respiratory motion but increased the coefficient of variation and relative range of cardiac substructure volumes. The mean DMX of the substrate target was 0.52 cm (range: 0.26-0.80 cm) for cardiac pulsation and 0.82 cm (range: 0.32-2.05 cm) for respiratory motion. The mean DMX of the cardiac structures ranged from 0.15 to 1.56 cm during cardiac pulsation and from 0.35 to 1.89 cm during respiratory motion.
Cardiac pulsation resulted in an average deviation of -0.73% (range: -4.01% to 4.47%) in V25 between the 3D and 4D doses. The mean deviations in the homogeneity index (HI) and gradient index (GI) were 1.70% (range: -3.10% to 4.36%) and 0.03 (range: -0.14 to 0.11), respectively. For cardiac substructures, the deviations in D50 due to cardiac pulsation ranged from -1.88% to 1.44%, whereas the deviations in Dmax ranged from -2.96% to 0.88% of the prescription dose. By contrast, respiratory motion led to a mean deviation of -1.50% (range: -10.73% to 4.23%) in V25. The mean deviations in HI and GI due to respiratory motion were 4.43% (range: -3.89% to 13.98%) and 0.18 (range: -0.01 to 0.47) (p < 0.05), respectively. Furthermore, the deviations in D50 and Dmax of the cardiac substructures for respiratory motion ranged from -0.28% to 4.24% and from -4.12% to 1.16%, respectively. CONCLUSIONS Cardiorespiratory motion characteristics vary among patients, with respiratory motion being the more significant. Intricate cardiorespiratory motion and CSBRT plan complexity can induce substantial dose uncertainty. Assessing individual motion characteristics and applying 4D dose reconstruction techniques are therefore critical for implementing CSBRT without compromising efficacy and safety.
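The two motion metrics above can be illustrated numerically. This is a minimal sketch assuming DMX is the largest pairwise centroid displacement across motion phases (the paper's exact definition may differ); the function names are hypothetical:

```python
import numpy as np

def displacement_maximum(centroids):
    """DMX sketch: largest pairwise distance between a structure's
    centroid positions across motion phases (assumed definition)."""
    c = np.asarray(centroids, dtype=float)      # shape (phases, 3)
    diffs = c[:, None, :] - c[None, :, :]       # all pairwise offsets
    return np.linalg.norm(diffs, axis=-1).max()

def volume_variation(volumes):
    """Coefficient of variation and relative range of per-phase volumes."""
    v = np.asarray(volumes, dtype=float)
    cv = v.std() / v.mean()
    rel_range = (v.max() - v.min()) / v.mean()
    return cv, rel_range
```

For example, centroids drifting 0.5 cm along one axis give a DMX of 0.5 cm regardless of the intermediate positions.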
Affiliation(s)
- Guangjun Li
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Guangyu Wang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weige Wei
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Zhibin Li
- Department of Radiotherapy & Oncology, The First Affiliated Hospital of Soochow University, Institute of Radiotherapy & Oncology, Soochow University, Suzhou, China
- Qing Xiao
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Haiping He
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Dashuang Luo
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Li Chen
- Department of Radiotherapy & Oncology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Jing Li
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Xiangyu Zhang
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Ying Song
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Sen Bai
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Radiotherapy Physics & Technology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
2
Bao L, Chen K, Kong D, Ying S, Zeng T. Time multiscale regularization for nonlinear image registration. Comput Med Imaging Graph 2024; 112:102331. PMID: 38199126. DOI: 10.1016/j.compmedimag.2024.102331.
Abstract
Regularization-based methods are commonly used for image registration. However, fixed regularizers have limitations in capturing details and describing the dynamic registration process. To address this issue, we propose a time multiscale framework for nonlinear image registration. Our approach replaces the fixed regularizer with a monotone decreasing sequence and iteratively uses the residual of the previous step as the input for registration. Specifically, we first introduce a dynamically varying regularization strategy that updates the regularizer at each iteration and incorporates it into a multiscale framework. This guarantees an overall smooth deformation field in the initial stage of registration and fine-tunes local details as the images become more similar. We then derive a convergence analysis under certain conditions on the regularizers and parameters. Further, we introduce a TV-like regularizer to demonstrate the efficiency of our method. Finally, we compare the proposed multiscale algorithm with existing methods on both synthetic images and pulmonary computed tomography (CT) images. The experimental results show that our algorithm outperforms the compared methods, especially in preserving details when registering images with sharp structures.
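The decreasing-regularizer, residual-driven iteration described above can be sketched in a simplified linear setting. This is only an illustrative analogue, with Tikhonov-regularized least squares standing in for the nonlinear registration solver; the function name and setup are assumptions, not the authors' algorithm:

```python
import numpy as np

def multiscale_fit(y, basis, lambdas):
    """Schematic of the decreasing-regularizer idea: at each step k, fit the
    *residual* of the previous step under a Tikhonov penalty lambda_k, then
    accumulate. Early (large-lambda) steps capture the smooth trend; later
    (small-lambda) steps recover fine detail."""
    coef_total = np.zeros(basis.shape[1])
    residual = y.copy()
    for lam in lambdas:                       # monotone decreasing sequence
        A = basis.T @ basis + lam * np.eye(basis.shape[1])
        coef = np.linalg.solve(A, basis.T @ residual)
        coef_total += coef
        residual = y - basis @ coef_total     # re-fit the remaining residual
    return coef_total
```

With a sequence such as `[10, 1, 0.1, 0.01, 0.001]`, the residual shrinks at every step, mirroring the coarse-to-fine behavior the abstract describes.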
Affiliation(s)
- Lili Bao
- Department of Mathematics, Shanghai University, Shanghai 200444, PR China
- Ke Chen
- Department of Mathematics and Statistics, University of Strathclyde, Glasgow, UK
- Dexing Kong
- School of Mathematical Science, Zhejiang University, Hangzhou 310027, PR China
- Shihui Ying
- Department of Mathematics, Shanghai University, Shanghai 200444, PR China
- Tieyong Zeng
- Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong
3
Zheng JQ, Wang Z, Huang B, Lim NH, Papież BW. Residual Aligner-based Network (RAN): Motion-separable structure for coarse-to-fine discontinuous deformable registration. Med Image Anal 2024; 91:103038. PMID: 38000258. DOI: 10.1016/j.media.2023.103038.
Abstract
Deformable image registration, the estimation of the spatial transformation between different images, is an important task in medical imaging. Deep learning techniques have been shown to perform 3D image registration efficiently. However, current registration strategies often focus only on deformation smoothness, which causes complicated motion patterns (e.g., separate or sliding motions) to be ignored, especially at the interfaces of organs. The performance when dealing with the discontinuous motions of multiple nearby objects is therefore limited, causing undesired outcomes in clinical use, such as misidentification and mislocalization of lesions or other abnormalities. We propose a novel registration method to address this issue: a new Motion Separable backbone is exploited to capture separate motions, with a theoretical analysis of the upper bound on the motions' discontinuity. In addition, a novel Residual Aligner module is used to disentangle and refine the predicted motions across multiple neighboring objects/organs. We evaluate our method, the Residual Aligner-based Network (RAN), on abdominal computed tomography (CT) scans, where it achieves among the most accurate unsupervised inter-subject registrations of nine organs, with the highest-ranked registration of the veins (Dice similarity coefficient (%) / average surface distance (mm): 62%/4.9 mm for the vena cava and 34%/7.9 mm for the portal and splenic vein), with a smaller model and less computation than state-of-the-art methods. Furthermore, when applied to lung CT, RAN achieves results comparable to the best-ranked networks (94%/3.0 mm), also with fewer parameters and less computation.
Affiliation(s)
- Jian-Qing Zheng
- The Kennedy Institute of Rheumatology, University of Oxford, UK
- Ziyang Wang
- Department of Computer Science, University of Oxford, Oxford, UK
- Baoru Huang
- The Hamlyn Centre for Robotic Surgery, Imperial College, London, UK
- Ngee Han Lim
- The Kennedy Institute of Rheumatology, University of Oxford, UK
4
Alscher T, Erleben K, Darkner S. Collision-constrained deformable image registration framework for discontinuity management. PLoS One 2023; 18:e0290243. PMID: 37594943. PMCID: PMC10437794. DOI: 10.1371/journal.pone.0290243.
Abstract
Topological changes such as sliding motion, sources, and sinks are a significant challenge in image registration. This work proposes the alternating direction method of multipliers as a general framework for preventing separate objects, registered with individual deformation fields, from overlapping during image registration. The constraint is enforced by introducing a collision detection algorithm from the field of computer graphics, resulting in a robust divide-and-conquer optimization strategy using free-form deformations. A series of experiments demonstrates that the proposed framework achieves superior performance in combining intersection prevention with image registration, including on synthetic examples containing complex displacement patterns. The results show compliance with the non-intersection constraints without a decrease in registration accuracy. Furthermore, application of the proposed algorithm to the DIR-Lab data set demonstrates that the framework generalizes to real data, validating it on a lung registration problem.
Affiliation(s)
- Thomas Alscher
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
- Kenny Erleben
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
- Sune Darkner
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
5
Ruthven M, Miquel ME, King AP. A segmentation-informed deep learning framework to register dynamic two-dimensional magnetic resonance images of the vocal tract during speech. Biomed Signal Process Control 2023; 80:104290. PMID: 36743699. PMCID: PMC9746295. DOI: 10.1016/j.bspc.2022.104290.
Abstract
Objective Dynamic magnetic resonance (MR) imaging enables visualisation of articulators during speech. There is growing interest in quantifying articulator motion in two-dimensional MR images of the vocal tract, to better understand speech production and potentially inform patient management decisions. Image registration is an established way to achieve this quantification. Recently, segmentation-informed deformable registration frameworks have been developed and have achieved state-of-the-art accuracy. This work aims to adapt such a framework and optimise it for estimating displacement fields between dynamic two-dimensional MR images of the vocal tract during speech. Methods A deep-learning-based registration framework was developed and compared with current state-of-the-art registration methods and frameworks (two traditional methods and three deep-learning-based frameworks, two of which are segmentation informed). The accuracy of the methods and frameworks was evaluated using the Dice coefficient (DSC), average surface distance (ASD) and a metric based on velopharyngeal closure. The metric evaluated if the fields captured a clinically relevant and quantifiable aspect of articulator motion. Results The segmentation-informed frameworks achieved higher DSCs and lower ASDs and captured more velopharyngeal closures than the traditional methods and the framework that was not segmentation informed. All segmentation-informed frameworks achieved similar DSCs and ASDs. However, the proposed framework captured the most velopharyngeal closures. Conclusions A framework was successfully developed and found to more accurately estimate articulator motion than five current state-of-the-art methods and frameworks. Significance The first deep-learning-based framework specifically for registering dynamic two-dimensional MR images of the vocal tract during speech has been developed and evaluated.
Affiliation(s)
- Matthieu Ruthven
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom
- Marc E. Miquel
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Digital Environment Research Institute (DERI), Empire House, 67-75 New Road, Queen Mary University of London, London E1 1HH, United Kingdom; Advanced Cardiovascular Imaging, Barts NIHR BRC, Queen Mary University of London, London EC1M 6BQ, United Kingdom
- Andrew P. King
- School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom
6
He Y, Wang A, Li S, Hao A. Hierarchical anatomical structure-aware based thoracic CT images registration. Comput Biol Med 2022; 148:105876. PMID: 35863247. DOI: 10.1016/j.compbiomed.2022.105876.
Abstract
Accurate thoracic CT image registration remains challenging due to complex joint deformations and different motion patterns in multiple organs/tissues during breathing. To address this, we devise a hierarchical anatomical-structure-aware registration framework. It provides the coordination scheme necessary for constraining a general free-form deformation (FFD) during thoracic CT registration; the key is to integrate the deformations of different anatomical structures in a divide-and-conquer way. Specifically, a deformation-ability-aware dissimilarity metric is proposed for complex joint deformations comprising large-scale flexible deformation of the lung region, rigid displacement of the bone region, and small-scale flexible deformation of the remaining regions. Furthermore, a motion-pattern-aware regularization is devised to handle different motion patterns: sliding motion along the lung surface, almost no displacement of the spine, and smooth deformation of other regions. Moreover, to accommodate large-scale deformation, a novel hierarchical strategy, wherein different anatomical structures are fused on the same control lattice, registers images from coarse to fine via Gaussian pyramids. Extensive experiments and comprehensive evaluations were executed on the 4D-CT DIR and 3D DIR COPD datasets. They confirm that the proposed method is locally comparable to state-of-the-art registration methods specializing in local deformations while guaranteeing overall accuracy. Additionally, in contrast to current popular learning-based methods, which typically require dozens of hours or more of pre-training with powerful graphics cards, our method takes an average of only 63 s to register a case with an ordinary RTX 2080 SUPER graphics card. Our code is available at https://github.com/heluxixue/Structure_Aware_Registration/tree/master.
Affiliation(s)
- Yuanbo He
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
- Aoyu Wang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Shuai Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
- Aimin Hao
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
7
Penarrubia L, Pinon N, Roux E, Dávila Serrano EE, Richard JC, Orkisz M, Sarrut D. Improving motion-mask segmentation in thoracic CT with multiplanar U-nets. Med Phys 2021; 49:420-431. PMID: 34778978. DOI: 10.1002/mp.15347.
Abstract
PURPOSE Motion-mask segmentation from thoracic computed tomography (CT) images is the process of extracting the region that encompasses the lungs and viscera, where large displacements occur during breathing. It has been shown to help image registration between different respiratory phases; this registration step is, for example, useful for radiotherapy planning or calculating local lung ventilation. Knowing the location of motion discontinuity, that is, sliding motion near the pleura, allows better control of the registration, preventing unrealistic estimates. Nevertheless, existing methods for motion-mask segmentation are not robust enough for clinical routine. This article shows that it is feasible to overcome this lack of robustness with a lightweight deep-learning approach usable on a standard computer, even without data augmentation or advanced model design. METHODS A convolutional neural-network architecture with three 2D U-nets for the three main orientations (sagittal, coronal, axial) was proposed. Predictions generated by the three U-nets were combined by majority voting to provide a single 3D segmentation of the motion mask. The networks were trained on a database of 4D CT images of 43 patients with non-small cell lung cancer. Training and evaluation used a K-fold cross-validation strategy. Evaluation was based on a visual grading by two experts according to the appropriateness of the segmented motion mask for the registration task, and on a comparison with motion masks obtained by a baseline method using level sets. A second database (76 CT images of patients with early-stage COVID-19), unseen during training, was used to assess the generalizability of the trained network. RESULTS The proposed approach outperformed the baseline method in quality and robustness: the success rate increased from 53% to 79% without producing any failure. It also achieved a speed-up factor of 60 with a GPU, or 17 with a CPU.
The memory footprint was low: less than 5 GB of GPU RAM for training and less than 1 GB of GPU RAM for inference. When evaluated on a dataset with images differing in several characteristics (CT device, pathology, and field of view), the proposed method improved the success rate from 53% to 83%. CONCLUSION With a 5-s processing time on a mid-range GPU and success rates around 80%, the proposed approach seems fast and robust enough for routine clinical use. The success rate can be further improved by incorporating more diversity in the training data via data augmentation and additional annotated images from different scanners and diseases. The code and trained model are publicly available.
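The majority-voting combination of the three orientation-specific predictions can be sketched as follows. `majority_vote` is a hypothetical name, and the inputs are assumed to be the three binary volumes already re-stacked from the per-slice 2D predictions:

```python
import numpy as np

def majority_vote(pred_sag, pred_cor, pred_ax):
    """Voxel-wise majority voting over three binary segmentations of the
    same 3D volume: a voxel is foreground when at least 2 of 3 agree."""
    votes = (pred_sag.astype(np.uint8)
             + pred_cor.astype(np.uint8)
             + pred_ax.astype(np.uint8))
    return votes >= 2
```

This rule suppresses errors made by any single orientation while keeping voxels on which two networks agree.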
Affiliation(s)
- Ludmilla Penarrubia
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Nicolas Pinon
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Emmanuel Roux
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Jean-Christophe Richard
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France; Service de Réanimation Médicale, Hôpital de la Croix Rousse, Hospices Civils de Lyon, France
- Maciej Orkisz
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- David Sarrut
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
8
Fu T, Fan J, Liu D, Song H, Zhang C, Ai D, Cheng Z, Liang P, Yang J. Divergence-Free Fitting-Based Incompressible Deformation Quantification of Liver. IEEE J Biomed Health Inform 2021; 25:720-736. PMID: 32750981. DOI: 10.1109/jbhi.2020.3013126.
Abstract
The liver is an incompressible organ that maintains its volume during respiration-induced deformation. Quantifying this deformation under the incompressibility constraint is important for liver tracking. The constraint can be enforced by retaining the divergence-free field obtained from a deformation decomposition. However, the decomposition process is time-consuming, and removing the non-divergence-free field weakens the deformation. In this study, a divergence-free fitting-based registration method is proposed to quantify the incompressible deformation rapidly and accurately. First, the deformation to be estimated is mapped to a velocity in a diffeomorphic space. This velocity is then decomposed by a fast Fourier-based Hodge-Helmholtz decomposition into divergence-free, curl-free, and harmonic fields. The curl-free field is replaced and fitted by the obtained harmonic field plus a translation field to generate a new divergence-free velocity. By optimizing this velocity, the final incompressible deformation is obtained. Moreover, a deep learning framework (DLF) is constructed to accelerate the incompressible deformation quantification. An incompressible respiratory motion model is built for the DLF using the proposed registration method and is then used to augment the training data. An encoder-decoder network is introduced to learn the appearance-velocity correlation at the patch scale. In experiments, we compare the proposed registration with three state-of-the-art methods. The results show that the proposed method accurately achieves incompressible registration of the liver, with a mean liver overlap ratio of 95.33%. Moreover, the DLF is nearly 15 times faster than the other methods.
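The "retain the divergence-free part" step can be illustrated with a spectral Helmholtz projection. This is only a 2D sketch of the Fourier-domain idea (the paper works in 3D and additionally fits the harmonic and translation fields), and it assumes odd grid sizes to sidestep Nyquist-mode bookkeeping:

```python
import numpy as np

def divergence_free_projection(vx, vy):
    """FFT-based Helmholtz projection in 2D: subtract the curl-free
    component k (k . v) / |k|^2 in the Fourier domain, keeping the
    divergence-free part. Sketch only; assumes odd grid sizes."""
    ny, nx = vx.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    vxh, vyh = np.fft.fft2(vx), np.fft.fft2(vy)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                    # avoid 0/0 at the zero frequency
    dot = (kx * vxh + ky * vyh) / k2
    dot[0, 0] = 0.0                   # leave the constant (mean) flow alone
    vxh -= kx * dot
    vyh -= ky * dot
    return np.fft.ifft2(vxh).real, np.fft.ifft2(vyh).real
```

After projection, the spectral divergence kx·Vx + ky·Vy vanishes at every nonzero frequency, which is the discrete analogue of incompressibility.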
9
Menchón-Lara RM, Royuela-Del-Val J, Simmross-Wattenberg F, Casaseca-de-la-Higuera P, Martín-Fernández M, Alberola-López C. Fast 4D elastic group-wise image registration. Convolutional interpolation revisited. Comput Methods Programs Biomed 2021; 200:105812. PMID: 33160691. DOI: 10.1016/j.cmpb.2020.105812.
Abstract
BACKGROUND AND OBJECTIVE This paper proposes a new and highly efficient implementation of 3D+t groupwise registration based on the free-form deformation paradigm. METHODS The deformation is posed as a cascade of 1D convolutions, achieving a great reduction in the execution time of transformation and gradient evaluations. RESULTS The proposed method has been applied to 4D cardiac MRI and 4D thoracic CT monomodal datasets. Results show an average runtime reduction above 90%, in both CPU and GPU executions, compared with the classical tensor-product formulation. CONCLUSIONS Our implementation, although developed for the sum-of-squared-differences metric, can be extended to other metrics, and its adaptation to multiresolution strategies is straightforward. It can therefore be extremely useful for speeding up image registration in applications involving high-dimensional data.
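The separability being exploited can be illustrated in 2D: filtering with a tensor-product kernel gives the same result as a cascade of per-axis 1D convolutions, at a fraction of the per-sample cost. The kernel and function names below are illustrative, not the paper's implementation:

```python
import numpy as np

# Illustrative separable smoothing kernel (sampled cubic B-spline weights).
b = np.array([1.0, 4.0, 1.0]) / 6.0

def tensor_product_2d(coeffs):
    """Direct 2D filtering with the full tensor-product kernel outer(b, b):
    O(len(b)**2) multiply-adds per output sample."""
    k2d = np.outer(b, b)
    n = len(b)
    padded = np.pad(coeffs, n // 2)
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            out[i, j] = np.sum(k2d * padded[i:i + n, j:j + n])
    return out

def cascade_1d(coeffs):
    """The same operator as a cascade of 1D convolutions, one per axis:
    O(2 * len(b)) multiply-adds per output sample -- the speed-up source."""
    rows = np.apply_along_axis(lambda r: np.convolve(r, b, mode="same"), 1, coeffs)
    return np.apply_along_axis(lambda c: np.convolve(c, b, mode="same"), 0, rows)
```

In 3D+t the saving compounds, since one d-dimensional tensor-product evaluation becomes d cheap 1D passes.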
Affiliation(s)
- Rosa-María Menchón-Lara
- Laboratorio de Procesado de Imagen, ETSI de Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Marcos Martín-Fernández
- Laboratorio de Procesado de Imagen, ETSI de Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Carlos Alberola-López
- Laboratorio de Procesado de Imagen, ETSI de Telecomunicación, Universidad de Valladolid, Valladolid, Spain
10
Bae JP, Yoon S, Vania M, Lee D. Spatiotemporal Free-Form Registration Method Assisted by a Minimum Spanning Tree During Discontinuous Transformations. J Digit Imaging 2021; 34:190-203. PMID: 33483863. DOI: 10.1007/s10278-020-00409-y.
Abstract
The sliding motion along the boundaries of discontinuous regions has been actively studied in the B-spline free-form deformation framework. This study focuses on sliding motion for a velocity field-based 3D+t registration. The discontinuity of the tangent direction guides the deformation of the object region, and separate control of the two regions provides better registration accuracy. Sliding motion under the velocity field-based transformation is handled with an α-Rényi entropy estimator built on a minimum spanning tree (MST) topology. Moreover, a new topology-changing method for the MST is proposed. The topology change is performed as follows: inserting random noise, constructing the MST, and removing the random noise while preserving the local connection consistency of the MST. This random noise process (RNP) prevents the α-Rényi entropy-based registration from degrading under sliding motion, because the RNP creates a small disturbance around special locations. Experiments were performed using two publicly available datasets: the DIR-Lab dataset, which consists of 4D pulmonary computed tomography (CT) images, and a benchmarking framework dataset for cardiac 3D ultrasound. For the 4D pulmonary CT images, the RNP produced a significantly improved result over the original MST with sliding motion (p < 0.05). For the cardiac 3D ultrasound dataset, only a discontinuity-based registration showed an effect of the RNP; the single MST without sliding motion did not show any improvement. These experiments demonstrate the effectiveness of the RNP for sliding motion.
Affiliation(s)
- Jang Pyo Bae
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea
- Siyeop Yoon
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea; Division of Bio-medical Science & Technology, KIST School, Korea University of Science and Technology, 02792, Seoul, Korea
- Malinda Vania
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea; Division of Bio-medical Science & Technology, KIST School, Korea University of Science and Technology, 02792, Seoul, Korea
- Deukhee Lee
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea; Division of Bio-medical Science & Technology, KIST School, Korea University of Science and Technology, 02792, Seoul, Korea
11
Gong L, Duan L, Dai Y, He Q, Zuo S, Fu T, Yang X, Zheng J. Locally Adaptive Total p-Variation Regularization for Non-Rigid Image Registration With Sliding Motion. IEEE Trans Biomed Eng 2020; 67:2560-2571. PMID: 31940514. DOI: 10.1109/tbme.2020.2964695.
Abstract
Because thoracic movement combines sliding motion at the lung surfaces with smooth motion within individual organs, respiratory motion estimation remains an intrinsically challenging task. In this paper, we propose a novel regularization term, locally adaptive total p-variation (LaTpV), and embed it into a parametric registration framework to accurately recover lung motion. LaTpV originates from a modified Lp-norm constraint (1 < p < 2), in which a prior distribution of p modeled by a Dirac-shaped function assigns different values to individual voxels. LaTpV adaptively balances the smoothness and discontinuity of the displacement field to encourage the expected sliding interface. Additionally, we analytically derive the gradient of the cost function with respect to the transformation parameters. To validate LaTpV, we test it not only on two mono-modal databases, comprising synthetic images and pulmonary computed tomography (CT) images, but also, for the first time, on a more difficult thoracic CT and positron emission tomography (PET) dataset. Across all experiments, both quantitative and qualitative results indicate that LaTpV significantly outperforms existing regularizers such as bending energy and parametric total variation. The proposed LaTpV-based registration scheme is thus better suited to sliding motion correction and shows promise for clinical applications such as the diagnosis of pleural mesothelioma and the adjustment of radiotherapy plans.
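The effect of a spatially varying exponent can be sketched numerically. The toy example below is our own illustration, not the authors' code: the Dirac-shaped prior that selects p per voxel is replaced by hand-set p maps, and the displacement jump is made larger than one voxel so the usual Lp ordering applies. It shows that a near-TV exponent (p close to 1) tolerates a sliding-type discontinuity that a near-quadratic exponent (p close to 2) penalizes heavily.

```python
import numpy as np

def latpv_penalty(u, p_map, eps=1e-8):
    """Total p-variation of a 2D displacement component with a per-voxel
    exponent p in (1, 2): sum over voxels of |grad u|^p(x)."""
    gy, gx = np.gradient(u)                  # finite-difference gradient
    mag = np.sqrt(gx**2 + gy**2 + eps)       # gradient magnitude
    return np.sum(mag ** p_map)

# toy displacement with a sliding-type jump along the middle row
u = np.zeros((64, 64))
u[32:, :] = 4.0                              # jump larger than one voxel
p_smooth = np.full_like(u, 1.9)              # near-quadratic everywhere
p_sliding = np.full_like(u, 1.1)             # near-TV at the interface
print(latpv_penalty(u, p_sliding) < latpv_penalty(u, p_smooth))  # True
```

In LaTpV proper, p would be lowered only in a band around the expected sliding interface, keeping near-quadratic smoothing inside each organ.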
|
12
|
Li D, Zhong W, Deh KM, Nguyen TD, Prince MR, Wang Y, Spincemaille P. Discontinuity Preserving Liver MR Registration with 3D Active Contour Motion Segmentation. IEEE Trans Biomed Eng 2018. [PMID: 30418878 PMCID: PMC6565504 DOI: 10.1109/tbme.2018.2880733]
Abstract
OBJECTIVE The sliding motion of the liver during respiration violates the homogeneous motion smoothness assumption in conventional non-rigid image registration and commonly results in compromised registration accuracy. This paper presents a novel approach, registration with 3D active contour motion segmentation (RAMS), to improve registration accuracy with discontinuity-aware motion regularization. METHODS A Markov random field-based discrete optimization with dense displacement sampling and self-similarity context metric is used for registration, while a graph cuts-based 3D active contour approach is applied to segment the sliding interface. In the first registration pass, a mask-free L1 regularization on an image-derived minimum spanning tree is performed to allow motion discontinuity. Based on the motion field estimates, a coarse segmentation finds the motion boundaries. Next, based on MR signal intensity, a fine segmentation aligns the motion boundaries with anatomical boundaries. In the second registration pass, smoothness constraints across the segmented sliding interface are removed by masked regularization on a minimum spanning forest and masked interpolation of the motion field. RESULTS For in vivo breath-hold abdominal MRI data, the motion masks calculated by RAMS are highly consistent with manual segmentations in terms of Dice similarity and bidirectional local distance measure. These automatically obtained masks are shown to substantially improve registration accuracy for both the proposed discrete registration as well as conventional continuous non-rigid algorithms. CONCLUSION/SIGNIFICANCE The presented results demonstrated the feasibility of automated segmentation of the respiratory sliding motion interface in liver MR images and the effectiveness of using the derived motion masks to preserve motion discontinuity.
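The masked-regularization idea of the second pass — smoothing the motion field within each segmented region so that no averaging crosses the sliding interface — can be sketched with normalized convolution. This is a simplified stand-in for the paper's minimum-spanning-forest regularization and masked interpolation; the function name, labels, and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_smooth(field, mask, sigma=2.0):
    """Smooth a displacement component separately inside each labelled
    region, so smoothing never averages across the sliding interface."""
    out = np.zeros_like(field)
    for label in np.unique(mask):
        region = (mask == label).astype(float)
        # normalized convolution: smooth field*indicator and divide by the
        # smoothed indicator, so values outside the region never leak in
        num = gaussian_filter(field * region, sigma)
        den = gaussian_filter(region, sigma)
        np.divide(num, den, out=num, where=den > 1e-6)
        out[mask == label] = num[mask == label]
    return out

# liver (label 1) slides against the body wall (label 0)
mask = np.zeros((64, 64), dtype=int)
mask[:, 32:] = 1
field = np.where(mask == 1, 1.0, -1.0)   # opposite motion on each side
smoothed = masked_smooth(field, mask)
print(np.allclose(smoothed, field))      # True: the jump is preserved
```

With a single unmasked Gaussian, the same field would be blurred into a smooth ramp across the interface, which is exactly the artifact masked regularization avoids.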
Affiliation(s)
- Dongxiao Li
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Wenxiong Zhong
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Kofi M. Deh
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Thanh D. Nguyen
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Martin R. Prince
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Yi Wang
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA; Department of Biomedical Engineering, Cornell University, Ithaca, NY 14853, USA
- Pascal Spincemaille
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
|
13
|
|
14
|
Papież BW, Franklin JM, Heinrich MP, Gleeson FV, Brady M, Schnabel JA. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications. J Med Imaging (Bellingham) 2018; 5:024001. [PMID: 29662918 PMCID: PMC5886381 DOI: 10.1117/1.jmi.5.2.024001]
Abstract
Deformable image registration, a key component of motion correction in medical imaging, must be efficient and must provide plausible spatial transformations that reliably approximate the biological aspects of complex human organ motion. Standard approaches such as Demons registration mostly use Gaussian regularization of organ motion, which, though computationally efficient, rules out application to intrinsically more complex motions such as sliding interfaces. We propose motion regularization based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing with fast, structure-preserving guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motion at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction compared with Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
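The substitution of guided filtering for Gaussian smoothing can be illustrated with a minimal guided filter (He et al.) applied to one displacement component. This is a simplified sketch, not the authors' supervoxel implementation: the guidance image, radius, and eps are our own toy choices, and the discontinuity is placed exactly on an intensity edge so the filter can preserve it.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-4):
    """Edge-preserving smoothing of `src` steered by `guide`: fit a local
    linear model src ~ a*guide + b in each window, then average a and b."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I**2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # strong edge in guide -> a follows it
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# displacement field with a sliding discontinuity that coincides with an
# intensity edge in the guidance image (e.g. a lung/chest-wall interface)
guide = np.zeros((64, 64)); guide[:, 32:] = 1.0
field = np.where(guide > 0.5, 1.0, -1.0)
out = guided_filter(guide, field)
# the jump across the interface survives filtering almost unchanged
print(abs(out[32, 33] - out[32, 31]) > 1.5)  # True
```

A Gaussian of comparable width would smear the same jump over roughly its support, averaging the two sides' opposing motions; the guided filter smooths within each side while leaving the interface sharp.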
Affiliation(s)
- Bartłomiej W Papież
- University of Oxford, Institute of Biomedical Engineering, Department of Engineering Science, Oxford, United Kingdom
- James M Franklin
- University of Oxford, Department of Oncology, Oxford, United Kingdom
- Fergus V Gleeson
- Oxford University Hospitals NHS Trust, Churchill Hospital, Department of Radiology, Oxford, United Kingdom
- Michael Brady
- University of Oxford, Department of Oncology, Oxford, United Kingdom
- Julia A Schnabel
- University of Oxford, Institute of Biomedical Engineering, Department of Engineering Science, Oxford, United Kingdom; King's College London, School of Biomedical Engineering and Imaging Sciences, London, United Kingdom
|