1
Chen J, Liu Y, Wei S, Bian Z, Subramanian S, Carass A, Prince JL, Du Y. A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond. Med Image Anal 2025; 100:103385. [PMID: 39612808; PMCID: PMC11730935; DOI: 10.1016/j.media.2024.103385]
Abstract
Deep learning technologies have dramatically reshaped the field of medical image registration over the past decade. The initial developments, such as regression-based and U-Net-based networks, established the foundation for deep learning in image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, network architectures, and uncertainty estimation. These advancements have not only enriched the field of image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
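The core unsupervised objective referred to above, an image-similarity measure balanced against a deformation regularization, can be made concrete with a short sketch. The NumPy snippet below is a hypothetical illustration only and is not taken from the survey: the function names, the choice of mean-squared-error similarity, and the gradient-based smoothness penalty are all assumptions standing in for whichever terms a given method uses.

```python
# Minimal sketch of a typical unsupervised registration loss:
# similarity(fixed, warped moving image) + lambda * smoothness(displacement field).
import numpy as np

def mse_similarity(fixed, warped):
    """Mean squared intensity difference between the fixed and warped moving image."""
    return float(np.mean((fixed - warped) ** 2))

def smoothness_penalty(disp):
    """L2 norm of the spatial gradients of a displacement field of shape (3, D, H, W)."""
    penalty = 0.0
    for c in range(disp.shape[0]):
        for g in np.gradient(disp[c]):        # gradients along z, y, x
            penalty += float(np.mean(g ** 2))
    return penalty

def registration_loss(fixed, warped, disp, lam=0.01):
    """Similarity term plus weighted regularization, the standard unsupervised objective."""
    return mse_similarity(fixed, warped) + lam * smoothness_penalty(disp)

# Toy example on random volumes; in practice `warped` would come from resampling the
# moving image with the network-predicted displacement field.
rng = np.random.default_rng(0)
fixed = rng.random((32, 32, 32))
warped = rng.random((32, 32, 32))
disp = 0.1 * rng.standard_normal((3, 32, 32, 32))
print(registration_loss(fixed, warped, disp))
```

The weight `lam` is the usual knob trading image alignment against deformation smoothness; the survey discusses many alternatives for both terms.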
Affiliation(s)
- Junyu Chen
- Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Yihao Liu
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shuwen Wei
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Zhangxing Bian
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shalini Subramanian
- Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
2
Liu H, McKenzie E, Xu D, Xu Q, Chin RK, Ruan D, Sheng K. MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration. Med Image Anal 2025; 99:103351. [PMID: 39388843; DOI: 10.1016/j.media.2024.103351]
Abstract
Deep-learning-based deformable image registration (DL-DIR) has demonstrated improved accuracy compared to time-consuming non-DL methods across various anatomical sites. However, DL-DIR remains challenging in heterogeneous tissue regions with large deformation. In fact, several state-of-the-art DL-DIR methods fail to capture the large, anatomically plausible deformation when tested on head-and-neck computed tomography (CT) images. These results suggest that such complex head-and-neck deformation may be beyond the capacity of a single network structure or a homogeneous smoothness regularization. To address the challenge of combined multi-scale musculoskeletal motion and soft tissue deformation in the head-and-neck region, we propose a MUsculo-Skeleton-Aware (MUSA) framework to anatomically guide DL-DIR by leveraging an explicit multiresolution strategy and inhomogeneous deformation constraints between the bony structures and soft tissue. The proposed method decomposes the complex deformation into a bulk posture change and a residual fine deformation. It can accommodate both inter- and intra-subject registration. Our results show that the MUSA framework can consistently improve registration accuracy and, more importantly, the plausibility of deformation for various network architectures. The code will be publicly available at https://github.com/HengjieLiu/DIR-MUSA.
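The general idea described above, a bulk posture change composed with a residual fine deformation and regularized differently over bone and soft tissue, can be illustrated with a toy sketch. The NumPy example below is a hypothetical illustration of that concept only and is not the MUSA implementation: the function names, the affine parameterization of the bulk component, and the mask-weighted penalty are all assumptions.

```python
# Sketch: bulk affine transform composed with a residual dense displacement,
# plus a smoothness penalty that is stiffer inside a bone mask than in soft tissue.
import numpy as np

def compose_affine_and_residual(coords, A, t, residual):
    """Map voxel coordinates by x' = A @ x + t, then add a residual fine displacement.

    coords: (3, N) coordinates; A: 3x3 matrix; t: 3-vector; residual: (3, N).
    """
    return A @ coords + t[:, None] + residual

def weighted_smoothness(residual_field, bone_mask, w_bone=10.0, w_soft=1.0):
    """Penalize gradients of the residual field more heavily inside bone.

    residual_field: (3, D, H, W); bone_mask: (D, H, W) boolean.
    """
    weights = np.where(bone_mask, w_bone, w_soft)
    penalty = 0.0
    for c in range(residual_field.shape[0]):
        for g in np.gradient(residual_field[c]):
            penalty += float(np.mean(weights * g ** 2))
    return penalty

# Toy example.
rng = np.random.default_rng(1)
coords = rng.random((3, 5))
A = np.eye(3) + 0.01 * rng.standard_normal((3, 3))
t = rng.standard_normal(3)
residual_pts = 0.1 * rng.standard_normal((3, 5))
print(compose_affine_and_residual(coords, A, t, residual_pts).shape)

residual_field = 0.1 * rng.standard_normal((3, 16, 16, 16))
bone = np.zeros((16, 16, 16), dtype=bool)
bone[4:8] = True
print(weighted_smoothness(residual_field, bone))
```

The heavier weight inside the bone mask is one simple way to express the intuition that skeletal structures should move nearly rigidly while soft tissue is allowed to deform more freely.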
Affiliation(s)
- Hengjie Liu
- Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Elizabeth McKenzie
- Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Di Xu
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Qifan Xu
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Robert K Chin
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Dan Ruan
- Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Ke Sheng
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
3
Bosma LS, Hussein M, Jameson MG, Asghar S, Brock KK, McClelland JR, Poeta S, Yuen J, Zachiu C, Yeo AU. Tools and recommendations for commissioning and quality assurance of deformable image registration in radiotherapy. Phys Imaging Radiat Oncol 2024; 32:100647. [PMID: 39328928; PMCID: PMC11424976; DOI: 10.1016/j.phro.2024.100647]
Abstract
Multiple tools are available for commissioning and quality assurance of deformable image registration (DIR), each with their own advantages and disadvantages in the context of radiotherapy. The selection of appropriate tools should depend on the DIR application, with its corresponding available input, desired output, and time requirement. Discussions were hosted by the ESTRO Physics Workshop 2021 on Commissioning and Quality Assurance for DIR in Radiotherapy. A consensus was reached on the requirements for commissioning and quality assurance for different applications and on the combination of tools suited to each. For commissioning, we recommend the target registration error of manually annotated anatomical landmarks or the distance-to-agreement of manually delineated contours to evaluate alignment. These should be supplemented by the distance to discordance and/or biomechanical criteria to evaluate consistency and plausibility. Digital phantoms can be useful to evaluate DIR for dose accumulation but are currently only available for a limited range of anatomies, image modalities, and types of deformations. For quality assurance of DIR for contour propagation, we recommend at least a visual inspection of the registered image and contour. For quality assurance of DIR for warping quantitative information such as dose, Hounsfield units, or positron emission tomography data, we recommend visual inspection of the registered image together with image similarity to evaluate alignment, supplemented by an inspection of the Jacobian determinant or bending energy to evaluate plausibility, and by the dose (gradient) to evaluate relevance. We acknowledge that some of these metrics are still missing in currently available commercial solutions.
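Two of the metrics recommended above, the target registration error over annotated landmark pairs and the Jacobian determinant of the deformation as a plausibility check, are straightforward to compute. The sketch below is a minimal NumPy illustration with hypothetical function names; the folding criterion (determinant ≤ 0) is a common convention rather than something prescribed by this paper.

```python
# Sketch: target registration error (TRE) over landmark pairs, and the Jacobian
# determinant of a dense displacement field (negative values indicate folding).
import numpy as np

def target_registration_error(fixed_pts, mapped_pts):
    """Mean Euclidean distance between fixed landmarks and their mapped positions."""
    return float(np.mean(np.linalg.norm(fixed_pts - mapped_pts, axis=1)))

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """det(J) with J = I + grad(u) for a displacement field u of shape (3, D, H, W)."""
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        gz, gy, gx = np.gradient(disp[i], *spacing)   # derivatives along z, y, x
        J[..., i, 0], J[..., i, 1], J[..., i, 2] = gz, gy, gx
    J += np.eye(3)
    return np.linalg.det(J)

# Toy example.
rng = np.random.default_rng(2)
print(target_registration_error(rng.random((10, 3)), rng.random((10, 3))))
jac = jacobian_determinant(0.05 * rng.standard_normal((3, 16, 16, 16)))
print("fraction of folded voxels:", float(np.mean(jac <= 0)))
```

In practice the landmark coordinates and voxel spacing come from the clinical images, and the fraction of voxels with non-positive determinant is a simple summary of deformation plausibility.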
Affiliation(s)
- Lando S Bosma
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Mohammad Hussein
- Metrology for Medical Physics Centre, National Physical Laboratory, Teddington, UK
- Michael G Jameson
- GenesisCare, Sydney, Australia
- School of Clinical Medicine, Medicine and Health, University of New South Wales, Sydney, Australia
- S Asghar
- Kristy K Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jamie R McClelland
- Centre for Medical Image Computing and the Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Dept. Medical Physics and Biomedical Engineering, University College London, London, UK
- Sara Poeta
- Medical Physics Department, Institut Jules Bordet - Université Libre de Bruxelles, Belgium
- Johnson Yuen
- School of Clinical Medicine, Medicine and Health, University of New South Wales, Sydney, Australia
- St. George Hospital Cancer Care Centre, Sydney, NSW 2217, Australia
- Ingham Institute for Applied Medical Research, Sydney, Australia
- Cornel Zachiu
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Adam U Yeo
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- The Sir Peter MacCallum Department of Oncology, the University of Melbourne, Melbourne, VIC, Australia
4
Taleb A, Guigou C, Leclerc S, Lalande A, Bozorg Grayeli A. Image-to-Patient Registration in Computer-Assisted Surgery of Head and Neck: State-of-the-Art, Perspectives, and Challenges. J Clin Med 2023; 12:5398. [PMID: 37629441; PMCID: PMC10455300; DOI: 10.3390/jcm12165398]
Abstract
Today, image-guided systems play a significant role in improving the outcome of diagnostic and therapeutic interventions. They provide crucial anatomical information during the procedure to decrease the size and extent of the approach, to reduce intraoperative complications, and to increase accuracy, repeatability, and safety. Image-to-patient registration is the first step in image-guided procedures: it establishes a correspondence between the patient's preoperative imaging and the intraoperative data. In the head-and-neck region, the presence of many sensitive structures, such as the central nervous system or the neurosensory organs, requires millimetric precision. This review evaluates the characteristics and performance of the different registration methods used in the operating room for the head-and-neck region from the perspectives of accuracy, invasiveness, and processing time. Our work led to the conclusion that invasive marker-based methods are still considered the gold standard of image-to-patient registration. Surface-based methods are recommended for faster procedures and are applied to surface tissues, especially around the eyes. In the near future, computer vision technology is expected to enhance these systems by reducing human errors and cognitive load in the operating room.
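Marker-based (point-based) rigid registration, described above as the gold standard, is typically solved in closed form from corresponding fiducial positions. The sketch below is a minimal NumPy illustration using the SVD-based least-squares solution of Arun et al., with the fiducial registration error reported as the residual; the function names are hypothetical and not taken from this review.

```python
# Sketch: point-based rigid registration (Arun's SVD method) and fiducial
# registration error (FRE) between two sets of corresponding markers.
import numpy as np

def rigid_fit(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2; src, dst are (N, 3)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """Root-mean-square residual distance of the registered marker positions."""
    return float(np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1))))

# Toy example: recover a known rotation and translation from noiseless markers.
rng = np.random.default_rng(3)
src = rng.random((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(src, dst)
print(fiducial_registration_error(src, dst, R, t))   # ~0 for noiseless data
```

With noisy or deformed anatomy the residual no longer vanishes, which is one reason the review distinguishes marker-based accuracy from that of surface-based alternatives.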
Affiliation(s)
- Ali Taleb
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Caroline Guigou
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
- Sarah Leclerc
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Alain Lalande
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Medical Imaging Department, University Hospital of Dijon, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
5
Paganelli C, Meschini G, Molinelli S, Riboldi M, Baroni G. Patient-specific validation of deformable image registration in radiation therapy: Overview and caveats. Med Phys 2018; 45:e908-e922. [DOI: 10.1002/mp.13162]
Affiliation(s)
- Chiara Paganelli
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- Giorgia Meschini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- S Molinelli
- Marco Riboldi
- Department of Medical Physics, Ludwig-Maximilians-Universität München, Munich 80539, Germany
- Guido Baroni
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- Centro Nazionale di Adroterapia Oncologica, Pavia 27100, Italy