1. Chen YC, Lee CE, Lin FY, Li YJ, Lor KL, Chang YC, Chen CM. Longitudinal registration of thoracic CT images with radiation-induced lung diseases: A divide-and-conquer approach based on component structure wise registration using coherent point drift. Comput Methods Programs Biomed 2024; 256:108401. PMID: 39232374. DOI: 10.1016/j.cmpb.2024.108401.
Abstract
BACKGROUND AND OBJECTIVE Registration of pulmonary computed tomography (CT) images with radiation-induced lung diseases (RILD) is essential for investigating the voxel-wise relationship between the formation of RILD and the radiation dose received by different tissues. Although various approaches have been developed for the registration of lung CTs, their performance remains clinically unsatisfactory for lung CT images with RILD. The main difficulties arise from the longitudinal changes in lung parenchyma after radiation therapy, including RILD and the volumetric change of lung cancers, which lead to inaccurate registration and to artifacts caused by erroneous matching of the RILD tissues. METHODS To overcome the influence of these parenchymal changes, a divide-and-conquer approach rooted in the coherent point drift (CPD) paradigm was proposed. The method rests on two key ideas. The first is component-structure-wise registration: the method relaxes the intrinsic assumption of equal isotropic covariances in CPD by decomposing a lung and its surrounding tissues into component structures and registering the component structures pairwise and independently by CPD. The second is defining a vascular subtree centered at a matched branch point as a component structure. This not only provides a sufficient number of matched feature points within the parenchyma but also avoids corruption by false feature points residing in the RILD tissues, which arise from global, indiscriminate sampling with mathematical operators. The overall deformation model was built using a thin-plate spline (TPS) based on all matched points. RESULTS This study recruited 30 pairs of lung CT images with RILD, 15 of which were used for internal validation (leave-one-out cross-validation) and the other 15 for external validation. The experimental results showed that the proposed algorithm achieved a mean average surface distance below 2 mm and a mean of the maximum 1% of surface distances below 8 mm, as well as a mean target registration error below 2 mm and a maximum below 5 mm, on both the internal and external validation datasets. Paired two-sample t-tests corroborated that the proposed algorithm outperformed a recent method, Stavropoulou's method, on the external validation dataset (p < 0.05). CONCLUSIONS The proposed algorithm effectively reduces the influence of parenchymal changes, yielding a reasonably accurate and artifact-free registration.
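The last step of the method above, turning matched points into a dense deformation with a thin-plate spline, can be sketched as follows. This is a minimal illustration under our own assumptions (function names are ours, the 3D TPS radial kernel U(r) = -r is one common choice), not the authors' implementation:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit 3D thin-plate-spline coefficients mapping src landmarks to dst.
    src, dst: (n, 3) arrays of matched landmark coordinates."""
    n = src.shape[0]
    # 3D TPS radial kernel U(r) = -r
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = -r
    P = np.hstack([np.ones((n, 1)), src])  # affine part [1, x, y, z]
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst
    return np.linalg.solve(A, b)           # non-affine weights + affine coeffs

def tps_apply(coef, src, pts):
    """Evaluate the fitted spline at arbitrary points pts (m, 3)."""
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return (-r) @ coef[:src.shape[0]] + P @ coef[src.shape[0]:]

# The spline interpolates the matched landmarks exactly:
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 3))
dst = src + rng.normal(0, 2, size=(20, 3))   # small synthetic deformation
coef = tps_fit(src, dst)
print(np.allclose(tps_apply(coef, src, src), dst, atol=1e-5))  # → True
```

Any voxel grid pushed through `tps_apply` then yields the dense deformation field.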
Affiliation(s)
- Yi-Chang Chen
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan; Department of Medical Imaging, Cardinal Tien Hospital, New Taipei City, Taiwan
- Chi-En Lee
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Fan-Ya Lin
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Ya-Jing Li
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Kuo-Lung Lor
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
2. Long L, Xue X, Xiao H. CCMNet: Cross-scale correlation-aware mapping network for 3D lung CT image registration. Comput Biol Med 2024; 182:109103. PMID: 39244962. DOI: 10.1016/j.compbiomed.2024.109103.
Abstract
The lung is highly elastic and structurally complex, which means it can undergo complex deformation with substantial shape variation. Estimating such large deformations poses significant challenges for lung image registration. The traditional U-Net architecture struggles to cover complex deformation because of its limited receptive field. Moreover, the relationship between voxels weakens as the number of downsampling operations increases, i.e., the long-range dependence problem. In this paper, we propose a novel multilevel registration framework that strengthens the correspondence between voxels to improve the estimation of large deformations. Our approach consists of a convolutional neural network (CNN) with a two-stream registration structure and a cross-scale mapping attention (CSMA) mechanism. The former extracts robust features of image pairs within layers, while the latter establishes frequent connections between layers to maintain the correlation of the image pairs. The method fully exploits context information at different scales to establish the mapping between low-resolution and high-resolution feature maps. We achieved remarkable results on the DIRLAB (TRE 1.56 ± 1.60) and POPI (NCC 99.72%, SSIM 91.42%) datasets, demonstrating that this strategy can effectively address large deformations, mitigate the long-range dependence problem, and ultimately achieve more robust lung CT image registration.
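The NCC figure quoted above is the standard normalized cross-correlation between the warped moving image and the fixed image. A minimal reference implementation (our sketch, not the paper's evaluation code):

```python
import numpy as np

def ncc(a, b):
    """Global normalized cross-correlation between two same-shape images."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()                      # zero-mean both images
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

fixed = np.random.default_rng(1).random((32, 32, 32))
print(round(ncc(fixed, fixed), 6))     # → 1.0 (identical images)
print(round(ncc(fixed, 1 - fixed), 6)) # → -1.0 (perfectly anti-correlated)
```

A perfectly registered, identically contrasted pair scores 1.0; the 99.72% above is this quantity expressed as a percentage.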
Affiliation(s)
- Li Long
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Hanguang Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
3. Bhat I, Kuijf HJ, Viergever MA, Pluim JPW. Influence of learned landmark correspondences on lung CT registration. Med Phys 2024; 51:5321-5336. PMID: 38713916. DOI: 10.1002/mp.17120.
Abstract
BACKGROUND Disease or injury may change the biomechanical properties of the lungs, which can alter lung function. Image registration can be used to measure lung ventilation and quantify volume change, which can be a useful diagnostic aid. However, lung registration is a challenging problem because of the variation in deformation across the lungs, the sliding motion of the lungs along the ribs, and changes in density. PURPOSE Landmark correspondences have been used to make deformable image registration robust to large displacements. METHODS To tackle the challenging task of intra-patient lung computed tomography (CT) registration, we extend the landmark correspondence prediction model DCNN-Match by introducing a soft-mask loss term to encourage landmark correspondences in specific regions and avoid the use of a mask during inference. To produce realistic deformations for training the landmark correspondence model, we use data-driven synthetic transformations. We study the influence of these learned landmark correspondences on lung CT registration by integrating them into intensity-based registration as a distance-based penalty. RESULTS Our results on the public thoracic CT dataset COPDgene show that using learned landmark correspondences as a soft constraint can reduce the median registration error from approximately 5.46 to 4.08 mm compared with standard intensity-based registration, in the absence of lung masks. CONCLUSIONS We show that using landmark correspondences yields minor improvements in local alignment while significantly improving global alignment.
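The distance-based penalty described above, adding the mean residual distance of corresponding landmarks to an intensity objective, can be sketched in a minimal hypothetical form (names and the weighting scheme are our assumptions, not the paper's code):

```python
import numpy as np

def landmark_penalty(fixed_pts, moving_pts, disp_at_pts):
    """Mean distance between fixed landmarks and moving landmarks after
    applying the estimated displacement at those points (soft constraint)."""
    warped = moving_pts + disp_at_pts
    return float(np.mean(np.linalg.norm(warped - fixed_pts, axis=1)))

def registration_loss(intensity_term, fixed_pts, moving_pts, disp_at_pts, lam=0.01):
    # intensity similarity term plus weighted distance-based landmark penalty
    return intensity_term + lam * landmark_penalty(fixed_pts, moving_pts, disp_at_pts)

fixed_pts = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
moving_pts = fixed_pts + 5.0               # uniform 5-voxel offset in x, y, z
perfect = fixed_pts - moving_pts           # displacement that undoes the offset
print(landmark_penalty(fixed_pts, moving_pts, perfect))            # → 0.0
print(landmark_penalty(fixed_pts, moving_pts, np.zeros((2, 3))))   # → 8.66... (5·√3)
```

During optimization the penalty pulls the transform toward the learned correspondences while the intensity term handles the rest of the image.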
Affiliation(s)
- Ishaan Bhat
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Max A Viergever
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Josien P W Pluim
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
4. Rong C, Li Z, Li R, Wang Y. Spatial-aware contrastive learning for cross-domain medical image registration. Med Phys 2024. PMID: 39031488. DOI: 10.1002/mp.17311.
Abstract
BACKGROUND With the rapid advancement of medical imaging technologies, precise image analysis and diagnosis play a crucial role in enhancing treatment outcomes and patient care. Computed tomography (CT) and magnetic resonance imaging (MRI), as pivotal technologies in medical imaging, exhibit unique advantages in bone imaging and soft tissue contrast, respectively. However, cross-domain medical image registration confronts significant challenges due to the substantial differences in contrast, texture, and noise levels between imaging modalities. PURPOSE The purpose of this study is to address the major challenges of cross-domain medical image registration by proposing a spatial-aware contrastive learning approach that effectively integrates shared information from CT and MRI images. Our objective is to optimize the feature space representation by employing advanced reconstruction and contrastive loss functions, overcoming the limitations of traditional registration methods when dealing with different imaging modalities. Through this approach, we aim to enhance the model's ability to learn structural similarities across domain images, improve registration accuracy, and provide more precise imaging analysis tools for clinical diagnosis and treatment planning. METHODS Using the prior knowledge that the two image domains (CT and MRI) share the same content-style information, we extract equivalent feature spaces from both images, enabling accurate cross-domain point matching. We employ an autoencoder-like structure, augmented with purpose-designed reconstruction and contrastive losses. We also propose a region mask that resolves the conflict between spatial correlation and distinctiveness, yielding a better representation space. RESULTS Our results demonstrate the significant superiority of the proposed spatial-aware contrastive learning approach for cross-domain medical image registration. Quantitatively, our method achieved an average Dice similarity coefficient (DSC) of 85.68%, a target registration error (TRE) of 1.92 mm, and a mean Hausdorff distance (MHD) of 1.26 mm, surpassing current state-of-the-art methods. Additionally, the registration processing time was reduced to 2.67 s on a GPU, highlighting the efficiency of our approach. These outcomes not only validate the effectiveness of our method in improving the accuracy of cross-domain image registration but also demonstrate its adaptability across different medical image analysis scenarios, offering robust support for enhancing diagnostic precision and patient treatment outcomes. CONCLUSIONS The spatial-aware contrastive learning approach proposed in this paper introduces a new perspective and solution for cross-domain medical image registration. By optimizing the feature space representation through carefully designed reconstruction and contrastive loss functions, our method significantly improves the accuracy and stability of registration between CT and MRI images. The experimental results demonstrate clear advantages in cross-domain registration accuracy, with significant application value for precise diagnosis and personalized treatment planning. In the future, we look forward to exploring this method on a broader range of medical imaging datasets and its potential integration with other advanced technologies, contributing further innovations to medical image analysis and processing.
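For background on the kind of objective involved, a generic InfoNCE contrastive loss over matched feature pairs can be sketched as below. The paper's actual loss and region-mask mechanism are more elaborate, so treat this purely as an illustrative baseline (names and the temperature value are our assumptions):

```python
import numpy as np

def info_nce(q, k, tau=0.07):
    """InfoNCE loss: feature q[i] should match k[i] (positive) and repel
    every k[j], j != i (negatives). Lower loss = better-aligned pairs."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)   # L2-normalize rows
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = q @ k.T / tau                             # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)                  # row-wise softmax
    return float(-np.mean(np.log(np.diag(p))))         # -log p(positive)

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))         # 8 features from one domain
g = rng.normal(size=(8, 16))         # unrelated features
print(info_nce(f, f) < info_nce(f, g))  # → True (matched pairs score lower)
```

In a cross-domain setting, q and g would come from the CT and MRI feature extractors respectively, with positives defined by spatial correspondence.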
Affiliation(s)
- Chenchu Rong
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Zhiru Li
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Rui Li
- The Second Affiliated Hospital of Nantong University, Medical School of Nantong University, Nantong, China
- Yuanqing Wang
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
5. Gou F, Liu J, Xiao C, Wu J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics (Basel) 2024; 14:1472. PMID: 39061610. PMCID: PMC11275417. DOI: 10.3390/diagnostics14141472.
Abstract
With improving economic conditions and rising living standards, people are paying ever more attention to their health. They are beginning to place their hopes on machines, expecting artificial intelligence (AI) to provide a more humanized medical environment and personalized services, thereby greatly expanding supply and bridging the gap between healthcare resource supply and demand. The development of IoT technology, the arrival of the 5G and 6G communication era, and, in particular, the growth of computing capabilities have further promoted the development and application of AI-assisted healthcare. Currently, research on and application of artificial intelligence in medical assistance are continuously deepening and expanding. AI holds immense economic value and has many potential applications for medical institutions, patients, and healthcare professionals. It can enhance medical efficiency, reduce healthcare costs, improve the quality of healthcare services, and provide a more intelligent and humanized service experience for healthcare professionals and patients. This study elaborates on the history and timeline of AI development in the medical field, the types of AI technologies in healthcare informatics, the applications of AI in medicine, and the opportunities and challenges of AI in medicine. The combination of healthcare and artificial intelligence has a profound impact on human life, improving health levels and quality of life and changing lifestyles.
Affiliation(s)
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Chunwen Xiao
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
6. Cao YH, Bourbonne V, Lucia F, Schick U, Bert J, Jaouen V, Visvikis D. CT respiratory motion synthesis using joint supervised and adversarial learning. Phys Med Biol 2024; 69:095001. PMID: 38537289. DOI: 10.1088/1361-6560/ad388a.
Abstract
Objective. Four-dimensional computed tomography (4DCT) imaging consists of reconstructing a CT acquisition into multiple phases to track internal organ and tumor motion. It is commonly used in radiotherapy treatment planning to establish planning target volumes. However, 4DCT increases protocol complexity, may not align with patient breathing during treatment, and leads to higher radiation delivery. Approach. In this study, we propose a deep synthesis method to generate pseudo-respiratory CT phases from static images for motion-aware treatment planning. The model produces patient-specific deformation vector fields (DVFs) by conditioning synthesis on an external, patient surface-based estimation, mimicking respiratory monitoring devices. A key methodological contribution is to encourage DVF realism through supervised DVF training while applying an adversarial term jointly, not only to the warped image but also to the magnitude of the DVF itself. In this way, we avoid the excessive smoothness typically obtained through deep unsupervised learning and encourage correlations with the respiratory amplitude. Main results. Performance is evaluated using real 4DCT acquisitions with smaller tumor volumes than previously reported. The results demonstrate for the first time that the generated pseudo-respiratory CT phases can capture organ and tumor motion with accuracy similar to repeated 4DCT scans of the same patient. Mean inter-scan tumor center-of-mass distances and Dice similarity coefficients were 1.97 mm and 0.63, respectively, for real 4DCT phases and 2.35 mm and 0.71 for synthetic phases, comparing favorably with a state-of-the-art technique (RMSim). Significance. This study presents a deep image synthesis method that addresses the limitations of conventional 4DCT by generating pseudo-respiratory CT phases from static images. Although further studies are needed to assess the dosimetric impact of the proposed method, this approach has the potential to reduce radiation exposure in radiotherapy treatment planning while maintaining accurate motion representation. Our training and testing code can be found at https://github.com/cyiheng/Dynagan.
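Applying a predicted DVF to a static CT is a backward-warping operation: each output voxel samples the input volume at its displaced position. A minimal nearest-neighbour version (real pipelines use trilinear interpolation; names are ours) looks like:

```python
import numpy as np

def warp_nn(volume, dvf):
    """Backward-warp a 3D volume by a dense displacement field.
    dvf: (3, D, H, W) displacement in voxels; out(x) = volume(round(x + dvf(x)))."""
    grid = np.indices(volume.shape).astype(np.float64)  # identity coordinates
    coords = np.rint(grid + dvf).astype(int)            # displaced sample points
    for ax, size in enumerate(volume.shape):
        coords[ax] = np.clip(coords[ax], 0, size - 1)   # clamp at the borders
    return volume[tuple(coords)]                        # fancy-index gather

vol = np.zeros((8, 8, 8))
vol[4, 4, 4] = 1.0
dvf = np.ones((3, 8, 8, 8))       # uniform +1 voxel displacement everywhere
out = warp_nn(vol, dvf)
print(out[3, 3, 3])               # → 1.0 (pulled from voxel (4, 4, 4))
```

A synthesized phase is exactly this: the static volume resampled through the generated patient-specific DVF.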
Affiliation(s)
- Y-H Cao
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- V Bourbonne
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- F Lucia
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- U Schick
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- J Bert
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- V Jaouen
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- IMT Atlantique, Brest, France
- D Visvikis
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
7. Qiu W, Xiong L, Li N, Luo Z, Wang Y, Zhang Y. AEAU-Net: an unsupervised end-to-end registration network by combining affine transformation and deformable medical image registration. Med Biol Eng Comput 2023; 61:2859-2873. PMID: 37498511. DOI: 10.1007/s11517-023-02887-y.
Abstract
Deformable medical image registration plays an essential role in clinical diagnosis and treatment. However, because image deformations vary widely, unsupervised convolutional neural network (CNN)-based methods cannot extract global and local features simultaneously, nor capture the long-distance dependencies needed to handle excessive deformation. In this paper, an unsupervised end-to-end registration network named AEAU-Net is proposed for 3D MRI medical image registration. It comprises two stages, an affine transformation and a deformable registration, implemented by an affine transformation subnetwork and a deformable registration subnetwork, respectively. In the deformable registration subnetwork, termed EAU-Net, we designed an efficient attention mechanism (EAM) module and a recursive residual path (RSP) module. The EAM module is embedded in the bottom layer of EAU-Net to capture long-distance dependencies. The RSP module obtains effective features by fusing deep and shallow features. Extensive experiments on two datasets, LPBA40 and Mindboggle101, were conducted to verify the effectiveness of the proposed method. Compared with baseline methods, the proposed method obtained better registration performance. An ablation study further demonstrated the soundness and validity of the designed architecture.
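The first of the two stages above, the affine transformation, amounts to generating affine sampling coordinates that the deformable stage then refines voxel by voxel. A minimal sketch (function and variable names are ours, not the network's actual output format):

```python
import numpy as np

def affine_coords(shape, A, t):
    """Sampling coordinates for an affine pre-alignment: x maps to A @ x + t.
    shape: 3D volume shape; A: (3, 3) matrix; t: (3,) translation (voxels)."""
    grid = np.indices(shape).reshape(3, -1).astype(np.float64)  # (3, n) voxel coords
    out = A @ grid + t[:, None]                                 # transform each voxel
    return out.reshape((3,) + shape)                            # back to grid layout

A = np.eye(3)                      # identity rotation/scale
t = np.array([1.0, 0.0, 0.0])      # +1 voxel shift along axis 0
g = affine_coords((4, 4, 4), A, t)
print(g[0, 0, 0, 0])               # → 1.0 (axis-0 coordinate of voxel (0,0,0))
```

In a two-stage network, the deformable subnetwork predicts a residual displacement that is added to these affine coordinates before resampling the moving image.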
Affiliation(s)
- Wei Qiu
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, 621010, China
- Lianjin Xiong
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, 621010, China
- Ning Li
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, 621010, China
- Zhangrong Luo
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, 621010, China
- Yaobin Wang
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, 621010, China
- Yangsong Zhang
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, 621010, China
- NHC Key Laboratory of Nuclear Technology Medical Transformation (Mianyang Central Hospital), Mianyang, 621010, China
- Key Laboratory of Testing Technology for Manufacturing Process, Ministry of Education, Southwest University of Science and Technology, Mianyang, Sichuan, 621010, China
8. Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K. Deep learning-based lung image registration: A review. Comput Biol Med 2023; 165:107434. PMID: 37696177. DOI: 10.1016/j.compbiomed.2023.107434.
Abstract
Lung image registration can effectively describe the relative motion of lung tissues, thereby helping to solve a series of problems in clinical applications. Because the lungs are soft and fairly passive organs, they are influenced by respiration and heartbeat, resulting in discontinuous lung motion and large deformation of anatomic features. This poses great challenges for accurate lung image registration and its applications. The recent application of deep learning (DL) methods to medical image registration has brought promising results. However, a versatile registration framework has not yet emerged because the registration challenges differ across regions of interest (ROI), and DL-based registration methods designed for other ROIs do not achieve satisfactory results in the lungs. In addition, few review articles are available on DL-based lung image registration. In this review, the development of conventional methods for lung image registration is briefly described, and a more comprehensive survey of DL-based methods for lung image registration is presented. The DL-based methods are classified by supervision type: fully supervised, weakly supervised, and unsupervised. The contributions of researchers in addressing various challenges are described, as well as the limitations of these approaches. This review also presents a comprehensive statistical analysis of the cited papers in terms of evaluation metrics and loss functions. Publicly available datasets for lung image registration are also summarized. Finally, the remaining challenges and potential trends in DL-based lung image registration are discussed.
Affiliation(s)
- Hanguang Xiao
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Mi Zhu
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Xin Jiang
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Qingling Xia
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Kai Chen
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Huanqi Li
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Li Long
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Ke Peng
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
9. Li J, Ellis DG, Kodym O, Rauschenbach L, Rieß C, Sure U, Wrede KH, Alvarez CM, Wodzinski M, Daniol M, Hemmerling D, Mahdi H, Clement A, Kim E, Fishman Z, Whyne CM, Mainprize JG, Hardisty MR, Pathak S, Sindhura C, Gorthi RKSS, Kiran DV, Gorthi S, Yang B, Fang K, Li X, Kroviakov A, Yu L, Jin Y, Pepe A, Gsaxner C, Herout A, Alves V, Španěl M, Aizenberg MR, Kleesiek J, Egger J. Towards clinical applicability and computational efficiency in automatic cranial implant design: An overview of the AutoImplant 2021 cranial implant design challenge. Med Image Anal 2023; 88:102865. PMID: 37331241. DOI: 10.1016/j.media.2023.102865.
Abstract
Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to be available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, catering for the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training and 110 for evaluation), while Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms across diverse defect patterns. Track 2 also made progress over the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against imaging data from post-craniectomy as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement. This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.
Collapse
Affiliation(s)
- Jianning Li
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria.
- David G Ellis
  - Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Oldřich Kodym
  - Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Laurèl Rauschenbach
  - Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Christoph Rieß
  - Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Ulrich Sure
  - Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Karsten H Wrede
  - Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Carlos M Alvarez
  - Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Marek Wodzinski
  - AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland; University of Applied Sciences Western Switzerland (HES-SO Valais), Information Systems Institute, Sierre, Switzerland
- Mateusz Daniol
  - AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Daria Hemmerling
  - AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Hamza Mahdi
  - Sunnybrook Research Institute, Toronto, ON, Canada
- Evan Kim
  - Sunnybrook Research Institute, Toronto, ON, Canada
- Cari M Whyne
  - Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, M5T 1P5, Canada
- James G Mainprize
  - Sunnybrook Research Institute, Toronto, ON, Canada; Calavera Surgical Design Inc., Toronto, ON, Canada
- Michael R Hardisty
  - Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, M5T 1P5, Canada
- Shashwat Pathak
  - Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Chitimireddy Sindhura
  - Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Degala Venkata Kiran
  - Department of Mechanical Engineering, Indian Institute of Technology, Tirupati, India
- Subrahmanyam Gorthi
  - Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Bokai Yang
  - Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Ke Fang
  - Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Xingyu Li
  - Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Artem Kroviakov
  - Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Lei Yu
  - Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Yuan Jin
  - Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Antonio Pepe
  - Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
  - Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Adam Herout
  - Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Victor Alves
  - ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
- Michele R Aizenberg
  - Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Jens Kleesiek
  - Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jan Egger
  - Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria.
10
Tian L, Greer H, Vialard FX, Kwitt R, Estépar RSJ, Rushmore RJ, Makris N, Bouix S, Niethammer M. GradICON: Approximate Diffeomorphisms via Gradient Inverse Consistency. PROCEEDINGS. IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION 2023; 2023:18084-18094. [PMID: 39247628 PMCID: PMC11378329 DOI: 10.1109/cvpr52729.2023.01734] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2024]
Abstract
We present an approach to learning regular spatial transformations between image pairs in the context of medical image registration. Contrary to optimization-based registration techniques and many modern learning-based methods, we do not directly penalize transformation irregularities but instead promote transformation regularity via an inverse consistency penalty. We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images. Different from existing approaches, we compose these two resulting maps and regularize deviations of the Jacobian of this composition from the identity matrix. This regularizer - GradICON - results in much better convergence when training registration models compared to promoting inverse consistency of the composition of maps directly while retaining the desirable implicit regularization effects of the latter. We achieve state-of-the-art registration performance on a variety of real-world medical image datasets using a single set of hyperparameters and a single non-dataset-specific training protocol. Code is available at https://github.com/uncbiag/ICON.
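The GradICON penalty summarized in this abstract can be illustrated with a toy sketch. This is not the authors' implementation (their code is at the linked repository); `gradicon_penalty`, the 2-D setting, and the finite-difference step are illustrative assumptions. The idea: compose the forward and backward maps, estimate the Jacobian of the composition numerically, and penalize its deviation from the identity.

```python
import numpy as np

def gradicon_penalty(phi_ab, phi_ba, n=64, h=1e-3):
    """Toy finite-difference GradICON penalty on a 2-D unit square.

    phi_ab, phi_ba: callables mapping (N, 2) points to (N, 2) points,
    standing in for the network-predicted forward/backward transforms.
    Penalizes || d(phi_ba . phi_ab)/dx - I ||_F^2 averaged over a grid.
    """
    xs = np.linspace(0.1, 0.9, n)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

    comp = lambda p: phi_ba(phi_ab(p))  # ~identity if inverse consistent

    # Central differences give the columns of the composition's Jacobian.
    ex, ey = np.array([h, 0.0]), np.array([0.0, h])
    d_dx = (comp(grid + ex) - comp(grid - ex)) / (2 * h)
    d_dy = (comp(grid + ey) - comp(grid - ey)) / (2 * h)
    jac = np.stack([d_dx, d_dy], axis=-1)  # shape (N, 2, 2)

    diff = jac - np.eye(2)
    return float(np.mean(np.sum(diff ** 2, axis=(1, 2))))

# A perfectly inverse-consistent pair yields a (near-)zero penalty:
shift = np.array([0.05, -0.02])
print(gradicon_penalty(lambda p: p + shift, lambda p: p - shift))  # ~0
```

Note the contrast with a plain inverse-consistency loss: GradICON penalizes the *gradient* of the composition rather than the composition itself, which the paper reports gives much better training convergence.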
11
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:697-712. [PMID: 36264729 DOI: 10.1109/tmi.2022.3213983] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
12
Donovan GM, Noble PB, Langton D. Therapeutic response to bronchial thermoplasty: toward feasibility of patient selection based on modeling predictions. J Appl Physiol (1985) 2022; 133:1341-1348. [PMID: 36356255 DOI: 10.1152/japplphysiol.00493.2022] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Bronchial thermoplasty (BT) is a treatment for moderate-to-severe asthma in which the airway smooth muscle layer is targeted directly using thermal ablation. Although it has been shown to be safe and effective in long-term follow-up, questions remain about its mechanism of action, patient selection, and optimization of protocol based on structural phenotype. Using a cohort of 20 subjects who underwent thermoplasty and assessment by computed tomography (CT), we demonstrate that response to BT can be feasibly predicted based on pretreatment airway dimensions that inform a subject-specific computational model. Analysis revealed the need for CT assessment at total lung capacity, rather than functional residual capacity, which was less sensitive to the effects of BT. Final model predictions compared favorably with observed outcomes in terms of airway caliber and asthma control, suggesting that this approach could form the basis of improved clinical practice.NEW & NOTEWORTHY Bronchial thermoplasty is a treatment for asthma that targets the airway smooth muscle directly. We demonstrate the feasibility and constraints of predicting patient-specific response to thermoplasty using a computational model informed by pretreatment CT scans at different lung volumes. Predictions are compared with functional outcomes and posttreatment CT scans. This has the potential to form the basis for improved clinical practice.
Affiliation(s)
- G M Donovan
  - Department of Mathematics, The University of Auckland, Auckland, New Zealand
- P B Noble
  - School of Human Sciences, The University of Western Australia, Crawley, Western Australia, Australia
- D Langton
  - Faculty of Medicine, Nursing and Allied Health, Monash University, Melbourne, Victoria, Australia
13
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022; 32:330-342. [DOI: 10.1016/j.semradonc.2022.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
14
He Y, Wang A, Li S, Hao A. Hierarchical anatomical structure-aware based thoracic CT images registration. Comput Biol Med 2022; 148:105876. [PMID: 35863247 DOI: 10.1016/j.compbiomed.2022.105876] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 06/17/2022] [Accepted: 07/09/2022] [Indexed: 11/25/2022]
Abstract
Accurate thoracic CT image registration remains challenging due to complex joint deformations and different motion patterns in multiple organs/tissues during breathing. To combat this, we devise a hierarchical anatomical structure-aware based registration framework. It affords a coordination scheme necessary for constraining a general free-form deformation (FFD) during thoracic CT registration. The key is to integrate the deformations of different anatomical structures in a divide-and-conquer way. Specifically, a deformation ability-aware dissimilarity metric is proposed for complex joint deformations containing large-scale flexible deformation of the lung region, rigid displacement of the bone region, and small-scale flexible deformation of the rest region. Furthermore, a motion pattern-aware regularization is devised to handle different motion patterns, which contain sliding motion along the lung surface, almost no displacement of the spine and smooth deformation of other regions. Moreover, to accommodate large-scale deformation, a novel hierarchical strategy, wherein different anatomical structures are fused on the same control lattice, registers images from coarse to fine via elaborate Gaussian pyramids. Extensive experiments and comprehensive evaluations have been executed on the 4D-CT DIR and 3D DIR COPD datasets. It confirms that this newly proposed method is locally comparable to state-of-the-art registration methods specializing in local deformations, while guaranteeing overall accuracy. Additionally, in contrast to the current popular learning-based methods that typically require dozens of hours or more pre-training with powerful graphics cards, our method only takes an average of 63 s to register a case with an ordinary graphics card of RTX2080 SUPER, making our method still worth promoting. Our code is available at https://github.com/heluxixue/Structure_Aware_Registration/tree/master.
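The hierarchical coarse-to-fine strategy this abstract describes follows a generic Gaussian-pyramid pattern that can be sketched as follows. This is a hypothetical scaffold, not the authors' code (which is at the linked repository): `register_at_level` stands in for any single-level solver, and the block-average `downsample` omits the Gaussian blur a real pyramid would apply before decimation.

```python
import numpy as np

def downsample(img):
    # 2x2 block average; a real Gaussian pyramid would blur first.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine(fixed, moving, register_at_level, levels=3):
    """Generic pyramid scaffold for coarse-to-fine registration.

    register_at_level(fixed, moving, init_disp) -> refined displacement,
    a placeholder for any single-level solver. Displacements are
    (H, W, 2) fields in pixel units of the current level.
    """
    pyramid = [(fixed, moving)]
    for _ in range(levels - 1):
        f, m = pyramid[-1]
        pyramid.append((downsample(f), downsample(m)))

    disp = np.zeros(pyramid[-1][0].shape + (2,))  # start at coarsest level
    for f, m in reversed(pyramid):
        if disp.shape[:2] != f.shape:
            # Upsample the previous level's field: repeat pixels and
            # double magnitudes, since units are pixels of the new level.
            disp = 2.0 * np.repeat(np.repeat(disp, 2, axis=0), 2, axis=1)
        disp = register_at_level(f, m, disp)
    return disp
```

Each level refines the upsampled solution from the level below, which is what lets such schemes accommodate large-scale deformation before resolving fine detail.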
Affiliation(s)
- Yuanbo He
  - State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aoyu Wang
  - State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China.
- Shuai Li
  - State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aimin Hao
  - State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
15
Future frame prediction based on generative assistant discriminative network for anomaly detection. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03488-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
16
Hasenstab KA, Tabalon J, Yuan N, Retson T, Hsiao A. CNN-based Deformable Registration Facilitates Fast and Accurate Air Trapping Measurements at Inspiratory and Expiratory CT. Radiol Artif Intell 2022; 4:e210211. [PMID: 35146437 PMCID: PMC8823452 DOI: 10.1148/ryai.2021210211] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 10/05/2021] [Accepted: 10/22/2021] [Indexed: 06/14/2023]
Abstract
PURPOSE To develop a convolutional neural network (CNN)-based deformable lung registration algorithm to reduce computation time and assess its potential for lobar air trapping quantification. MATERIALS AND METHODS In this retrospective study, a CNN algorithm was developed to perform deformable registration of lung CT (LungReg) using data on 9118 patients from the COPDGene Study (data collected between 2007 and 2012). Loss function constraints included cross-correlation, displacement field regularization, lobar segmentation overlap, and the Jacobian determinant. LungReg was compared with a standard diffeomorphic registration (SyN) for lobar Dice overlap, percentage voxels with nonpositive Jacobian determinants, and inference runtime using paired t tests. Landmark colocalization error (LCE) across 10 patients was compared using a random effects model. Agreement between LungReg and SyN air trapping measurements was assessed using intraclass correlation coefficient. The ability of LungReg versus SyN emphysema and air trapping measurements to predict Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages was compared using area under the receiver operating characteristic curves. RESULTS Average performance of LungReg versus SyN showed lobar Dice overlap score of 0.91-0.97 versus 0.89-0.95, respectively (P < .001); percentage voxels with nonpositive Jacobian determinant of 0.04 versus 0.10, respectively (P < .001); inference run time of 0.99 second (graphics processing unit) and 2.27 seconds (central processing unit) versus 418.46 seconds (central processing unit) (P < .001); and LCE of 7.21 mm versus 6.93 mm (P < .001). LungReg and SyN whole-lung and lobar air trapping measurements achieved excellent agreement (intraclass correlation coefficients > 0.98). LungReg versus SyN area under the receiver operating characteristic curves for predicting GOLD stage were not statistically different (range, 0.88-0.95 vs 0.88-0.95, respectively; P = .31-.95). 
CONCLUSION CNN-based deformable lung registration is accurate and fully automated, with runtime feasible for clinical lobar air trapping quantification, and has potential to improve diagnosis of small airway diseases. Keywords: Air Trapping, Convolutional Neural Network, Deformable Registration, Small Airway Disease, CT, Lung, Semisupervised Learning, Unsupervised Learning. Supplemental material is available for this article. © RSNA, 2021. An earlier incorrect version of this article appeared online. This article was corrected on December 22, 2021.
Affiliation(s)
- Kyle A. Hasenstab
  - From the Department of Radiology, University of California San Diego, 9500 Gilman Dr, San Diego, CA 92093 (K.A.H., N.Y., T.R., A.H.); and Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H., J.T.)
- Joseph Tabalon
  - From the Department of Radiology, University of California San Diego, 9500 Gilman Dr, San Diego, CA 92093 (K.A.H., N.Y., T.R., A.H.); and Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H., J.T.)
- Nancy Yuan
  - From the Department of Radiology, University of California San Diego, 9500 Gilman Dr, San Diego, CA 92093 (K.A.H., N.Y., T.R., A.H.); and Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H., J.T.)
- Tara Retson
  - From the Department of Radiology, University of California San Diego, 9500 Gilman Dr, San Diego, CA 92093 (K.A.H., N.Y., T.R., A.H.); and Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H., J.T.)
- Albert Hsiao
  - From the Department of Radiology, University of California San Diego, 9500 Gilman Dr, San Diego, CA 92093 (K.A.H., N.Y., T.R., A.H.); and Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H., J.T.)
17
CNN-based lung CT registration with multiple anatomical constraints. Med Image Anal 2021; 72:102139. [PMID: 34216959 PMCID: PMC10369673 DOI: 10.1016/j.media.2021.102139] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 06/10/2021] [Accepted: 06/17/2021] [Indexed: 11/21/2022]
Abstract
Deep-learning-based registration methods emerged as a fast alternative to conventional registration methods. However, these methods often still cannot achieve the same performance as conventional registration methods because they are either limited to small deformations or they fail to handle a superposition of large and small deformations without producing implausible deformation fields with foldings inside. In this paper, we identify important strategies of conventional registration methods for lung registration and successfully develop their deep-learning counterparts. We employ a Gaussian-pyramid-based multilevel framework that can solve the image registration optimization in a coarse-to-fine fashion. Furthermore, we prevent foldings of the deformation field and restrict the determinant of the Jacobian to physiologically meaningful values by combining a volume change penalty with a curvature regularizer in the loss function. Keypoint correspondences are integrated to focus on the alignment of smaller structures. We perform an extensive evaluation to assess the accuracy, the robustness, the plausibility of the estimated deformation fields, and the transferability of our registration approach. We show that it achieves state-of-the-art results on the COPDGene dataset compared to a conventional registration method, with a much shorter execution time. In our experiments on the DIRLab exhale-to-inhale lung registration, we demonstrate substantial improvements (TRE below 1.2 mm) over other deep learning methods. Our algorithm is publicly available at https://grand-challenge.org/algorithms/deep-learning-based-ct-lung-registration/.
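The plausibility metric this entry and entry 16 both report (the fraction of voxels with a nonpositive Jacobian determinant, i.e., foldings) can be computed directly from a displacement field. A minimal NumPy sketch, assuming unit voxel spacing and a field given in voxel units (`folding_fraction` is an illustrative name, not from either paper):

```python
import numpy as np

def folding_fraction(disp):
    """Fraction of voxels with det(J) <= 0 for phi(x) = x + disp(x).

    disp: (D, H, W, 3) displacement field in voxel units (unit spacing).
    Central differences (np.gradient) approximate the spatial gradient.
    """
    # Row c of the Jacobian of disp is the gradient of component c.
    grads = np.stack(
        [np.stack(np.gradient(disp[..., c], axis=(0, 1, 2)), axis=-1)
         for c in range(3)],
        axis=-2)                     # shape (D, H, W, 3, 3)
    jac = grads + np.eye(3)          # Jacobian of the full transform
    det = np.linalg.det(jac)
    return float(np.mean(det <= 0))

# The identity transform has no foldings anywhere:
print(folding_fraction(np.zeros((8, 8, 8, 3))))  # 0.0
```

A determinant of exactly 1 means local volume preservation, values in (0, 1) mean compression, values above 1 mean expansion, and nonpositive values mean the transform locally folds space, which is physically implausible for tissue.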