1.
He Y, Wang A, Li S, Hao A. Hierarchical anatomical structure-aware based thoracic CT images registration. Comput Biol Med 2022; 148:105876. PMID: 35863247. DOI: 10.1016/j.compbiomed.2022.105876. Received: 01/11/2022; Revised: 06/17/2022; Accepted: 07/09/2022.
Abstract
Accurate thoracic CT image registration remains challenging due to complex joint deformations and different motion patterns across multiple organs/tissues during breathing. To address this, we devise a hierarchical, anatomical structure-aware registration framework. It provides the coordination scheme needed to constrain a general free-form deformation (FFD) during thoracic CT registration. The key idea is to integrate the deformations of different anatomical structures in a divide-and-conquer way. Specifically, a deformation ability-aware dissimilarity metric is proposed for complex joint deformations, comprising large-scale flexible deformation of the lung region, rigid displacement of the bone region, and small-scale flexible deformation of the remaining regions. Furthermore, a motion pattern-aware regularization is devised to handle different motion patterns, which include sliding motion along the lung surface, near-zero displacement of the spine, and smooth deformation of other regions. Moreover, to accommodate large-scale deformation, a novel hierarchical strategy, wherein different anatomical structures are fused on the same control lattice, registers images from coarse to fine via elaborate Gaussian pyramids. Extensive experiments and comprehensive evaluations were conducted on the 4D-CT DIR and 3D DIR COPD datasets. The results confirm that the proposed method is locally comparable to state-of-the-art registration methods specializing in local deformations, while guaranteeing overall accuracy. Additionally, in contrast to currently popular learning-based methods, which typically require dozens of hours or more of pre-training on powerful graphics cards, our method takes an average of only 63 s to register a case on an ordinary RTX 2080 SUPER graphics card, making it a worthwhile practical alternative. Our code is available at https://github.com/heluxixue/Structure_Aware_Registration/tree/master.
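The coarse-to-fine strategy described above rests on a Gaussian image pyramid: each level is a smoothed, downsampled copy of the volume, and the deformation is estimated on the coarsest level first, then refined. The paper's actual fusion of anatomical structures on a shared control lattice is more involved; the following is only a minimal NumPy/SciPy sketch of the pyramid-construction step (function name and parameters are illustrative, not from the paper's code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(volume, levels=3, sigma=1.0):
    """Build a coarse-to-fine image pyramid: smooth, then subsample by 2.

    Returns the levels ordered from coarsest to finest, so a registration
    loop can estimate the deformation on the smallest volume first and
    progressively refine it at each finer level.
    """
    pyramid = [np.asarray(volume, dtype=float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma=sigma)  # anti-alias
        pyramid.append(smoothed[::2, ::2, ::2])               # halve each axis
    return pyramid[::-1]  # coarsest first

# Example: a 64^3 synthetic volume yields 16^3, 32^3, and 64^3 levels.
vol = np.random.default_rng(0).random((64, 64, 64))
levels = gaussian_pyramid(vol, levels=3)
print([lv.shape for lv in levels])  # [(16, 16, 16), (32, 32, 32), (64, 64, 64)]
```

Smoothing before subsampling suppresses aliasing, which is why the levels are built with a Gaussian filter rather than plain decimation.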
Affiliation(s)
- Yuanbo He
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aoyu Wang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China.
- Shuai Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aimin Hao
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
2.
Penarrubia L, Pinon N, Roux E, Dávila Serrano EE, Richard JC, Orkisz M, Sarrut D. Improving motion-mask segmentation in thoracic CT with multiplanar U-nets. Med Phys 2021; 49:420-431. PMID: 34778978. DOI: 10.1002/mp.15347. Received: 05/03/2021; Revised: 09/30/2021; Accepted: 10/19/2021.
Abstract
PURPOSE: Motion-mask segmentation from thoracic computed tomography (CT) images is the process of extracting the region that encompasses the lungs and viscera, where large displacements occur during breathing. It has been shown to help image registration between different respiratory phases. This registration step is useful, for example, for radiotherapy planning or calculating local lung ventilation. Knowing the location of motion discontinuity, that is, sliding motion near the pleura, allows better control of the registration, preventing unrealistic estimates. Nevertheless, existing methods for motion-mask segmentation are not robust enough to be used in clinical routine. This article shows that this lack of robustness can be overcome with a lightweight deep-learning approach usable on a standard computer, even without data augmentation or advanced model design.
METHODS: A convolutional neural-network architecture with three 2D U-nets for the three main orientations (sagittal, coronal, axial) was proposed. Predictions generated by the three U-nets were combined by majority voting to provide a single 3D segmentation of the motion mask. The networks were trained on a database of non-small-cell lung cancer 4D CT images of 43 patients. Training and evaluation were done with a K-fold cross-validation strategy. Evaluation was based on a visual grading by two experts according to the appropriateness of the segmented motion mask for the registration task, and on a comparison with motion masks obtained by a baseline method using level sets. A second database (76 CT images of patients with early-stage COVID-19), unseen during training, was used to assess the generalizability of the trained neural network.
RESULTS: The proposed approach outperformed the baseline method in terms of quality and robustness: the success rate increased from 53% to 79% without producing any failure. It also achieved a speed-up factor of 60 on GPU, or 17 on CPU. The memory footprint was low: less than 5 GB of GPU RAM for training and less than 1 GB for inference. When evaluated on a dataset with images differing by several characteristics (CT device, pathology, and field of view), the proposed method improved the success rate from 53% to 83%.
CONCLUSION: With a 5-s processing time on a mid-range GPU and success rates around 80%, the proposed approach seems fast and robust enough to be routinely used in clinical practice. The success rate can be further improved by incorporating more diversity in the training data via data augmentation and additional annotated images from different scanners and diseases. The code and trained model are publicly available.
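The fusion step in the METHODS section, combining the three per-orientation U-net predictions by per-voxel majority voting, can be sketched in a few lines of NumPy. This is only an illustrative reimplementation of the voting rule, not the authors' published code; the function name is hypothetical:

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentations by per-voxel majority voting.

    masks: iterable of equally shaped 0/1 (or boolean) arrays, e.g. the
    three per-orientation U-net predictions resampled to a common grid.
    A voxel is foreground when a strict majority of predictors agree.
    """
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    # 2 * votes > n_predictors  <=>  votes > n/2 (strict majority)
    return stack.sum(axis=0) * 2 > stack.shape[0]

# Toy example with three 2x2 "predictions":
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 1], [0, 1]])
c = np.array([[0, 0], [1, 0]])
fused = majority_vote([a, b, c])
print(fused.astype(int))  # [[1 0]
                          #  [1 1]]
```

With three predictors the rule reduces to "at least two agree", which is why an odd number of views avoids ties.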
Affiliation(s)
- Ludmilla Penarrubia
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Nicolas Pinon
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Emmanuel Roux
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Jean-Christophe Richard
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France; Service de Réanimation Médicale, Hôpital de la Croix Rousse, Hospices Civils de Lyon, France
- Maciej Orkisz
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- David Sarrut
- Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France