1
Cao YH, Bourbonne V, Lucia F, Schick U, Bert J, Jaouen V, Visvikis D. CT respiratory motion synthesis using joint supervised and adversarial learning. Phys Med Biol 2024; 69:095001. PMID: 38537289. DOI: 10.1088/1361-6560/ad388a.
Abstract
Objective. Four-dimensional computed tomography (4DCT) imaging consists of reconstructing a CT acquisition into multiple phases to track internal organ and tumor motion. It is commonly used in radiotherapy treatment planning to establish planning target volumes. However, 4DCT increases protocol complexity, may not align with patient breathing during treatment, and leads to higher radiation exposure.
Approach. In this study, we propose a deep synthesis method to generate pseudo-respiratory CT phases from static images for motion-aware treatment planning. The model produces patient-specific deformation vector fields (DVFs) by conditioning synthesis on an external, patient surface-based estimation, mimicking respiratory monitoring devices. A key methodological contribution is to encourage DVF realism by combining supervised DVF training with an adversarial term applied not only to the warped image but also to the magnitude of the DVF itself. In this way, we avoid the excessive smoothness typically obtained with deep unsupervised learning and encourage correlation with the respiratory amplitude.
Main results. Performance is evaluated on real 4DCT acquisitions with smaller tumor volumes than previously reported. The results demonstrate for the first time that the generated pseudo-respiratory CT phases can capture organ and tumor motion with accuracy similar to repeated 4DCT scans of the same patient. Mean inter-scan tumor center-of-mass distances and Dice similarity coefficients were 1.97 mm and 0.63, respectively, for real 4DCT phases, and 2.35 mm and 0.71 for synthetic phases, comparing favorably with a state-of-the-art technique (RMSim).
Significance. This study presents a deep image synthesis method that addresses the limitations of conventional 4DCT by generating pseudo-respiratory CT phases from static images. Although further studies are needed to assess the dosimetric impact of the proposed method, this approach has the potential to reduce radiation exposure in radiotherapy treatment planning while maintaining accurate motion representation. Our training and testing code can be found at https://github.com/cyiheng/Dynagan.
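The joint objective described in this abstract (a supervised DVF term plus adversarial terms on both the warped image and the DVF magnitude) can be pictured with a minimal numpy sketch. All names, the weights, and the non-saturating log form are illustrative assumptions on my part, not the authors' implementation; their actual code is in the linked repository.

```python
import numpy as np

def joint_dvf_loss(dvf_pred, dvf_ref, d_img_score, d_mag_score,
                   w_sup=1.0, w_adv=0.1):
    """Toy generator loss in the spirit of the abstract (not the paper's code).

    dvf_pred / dvf_ref : predicted and reference deformation vector fields.
    d_img_score        : discriminator score in (0, 1) for the warped image.
    d_mag_score        : discriminator score in (0, 1) for the DVF magnitude.
    """
    sup = float(np.mean(np.abs(dvf_pred - dvf_ref)))   # supervised L1 on the DVF
    adv_img = -np.log(d_img_score + 1e-8)              # encourage realistic warped image
    adv_mag = -np.log(d_mag_score + 1e-8)              # encourage realistic DVF magnitude
    return w_sup * sup + w_adv * (adv_img + adv_mag)
```

A perfect prediction with fully fooled discriminators drives all three terms toward zero; penalizing the DVF magnitude directly, rather than only the warped image, is what discourages the over-smooth fields mentioned in the abstract.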
Affiliation(s)
- Y-H Cao: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- V Bourbonne: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France; CHRU Brest University Hospital, Brest, France
- F Lucia: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France; CHRU Brest University Hospital, Brest, France
- U Schick: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France; CHRU Brest University Hospital, Brest, France
- J Bert: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France; CHRU Brest University Hospital, Brest, France
- V Jaouen: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France; IMT Atlantique, Brest, France
- D Visvikis: LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
2
Liang X, Lin S, Liu F, Schreiber D, Yip M. ORRN: An ODE-Based Recursive Registration Network for Deformable Respiratory Motion Estimation With Lung 4DCT Images. IEEE Trans Biomed Eng 2023; 70:3265-3276. PMID: 37279120. DOI: 10.1109/tbme.2023.3280463.
Abstract
OBJECTIVE: Deformable image registration (DIR) plays a significant role in quantifying deformation in medical data. Recent deep learning methods have shown promising accuracy and speedups for registering pairs of medical images. However, in 4D (3D + time) medical data, organ motion such as respiration and heartbeat cannot be effectively modeled by pair-wise methods, which are optimized for image pairs and do not account for the organ motion patterns present in 4D data. METHODS: This article presents ORRN, an ordinary differential equation (ODE)-based recursive image registration network. The network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data, and adopts a recursive registration strategy that progressively estimates a deformation field through ODE integration of the voxel velocities. RESULTS: We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme-inhale image for 3D+t deformation tracking, and 2) registering the extreme-exhale to the extreme-inhale phase image. Our method outperforms other learning-based methods on both tasks, producing the smallest target registration errors of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and computation takes less than 1 s per CT volume. CONCLUSION: ORRN demonstrates promising registration accuracy, deformation plausibility, and computational efficiency on group-wise and pair-wise registration tasks. SIGNIFICANCE: It has significant implications for fast and accurate respiratory motion estimation in radiation therapy treatment planning and robot motion planning for thoracic needle insertion.
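The core idea of integrating time-varying voxel velocities into a deformation field can be pictured as an explicit Euler loop. This is a hypothetical numpy sketch under stated assumptions: `velocity_fn` stands in for the learned network, and plain Euler stepping replaces whatever ODE solver ORRN actually uses.

```python
import numpy as np

def integrate_deformation(velocity_fn, n_steps=8, shape=(4, 4, 2)):
    """Accumulate a displacement field phi by Euler integration of a
    time-varying velocity field over t in [0, 1] (illustrative only).

    velocity_fn(t, phi) -> array of the same shape as phi, giving the
    per-voxel velocity at time t for the current displacement.
    """
    phi = np.zeros(shape)          # zero displacement at the reference phase
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        phi = phi + dt * velocity_fn(t, phi)   # phi(t+dt) = phi(t) + dt * v(t, phi)
    return phi
```

With a constant unit velocity the loop recovers a displacement of exactly 1 everywhere, which is a quick sanity check that the integration is wired correctly.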
3
Wei TT, Kuo C, Tseng YC, Chen JJ. MPVF: 4D Medical Image Inpainting by Multi-Pyramid Voxel Flows. IEEE J Biomed Health Inform 2023; 27:5872-5882. PMID: 37738187. DOI: 10.1109/jbhi.2023.3318127.
Abstract
Generating a detailed 4D medical image usually entails prolonged examination time and increased radiation exposure risk. Modern deep learning solutions have exploited interpolation mechanisms to generate a complete 4D image from fewer 3D volumes. However, existing solutions focus mostly on 2D slice information, thus missing changes along the z-axis. This article tackles the 4D cardiac and lung image interpolation problem by synthesizing 3D volumes directly. Although the heart and lungs account for only a fraction of the chest, they constantly undergo periodic motions of varying magnitudes, in contrast to the rest of the chest volume, which is more stationary. This poses significant challenges to existing models. To handle various magnitudes of motion, we propose a Multi-Pyramid Voxel Flows (MPVF) model that takes multiple multi-scale voxel flows into account. This provides the generation network with rich information during interpolation, both globally and regionally. Focusing on periodic medical imaging, MPVF takes the maximal and minimal phases of an organ motion cycle as inputs and can restore a 3D volume at any time point in between. MPVF features a Bilateral Voxel Flow (BVF) module that generates multi-pyramid voxel flows in an unsupervised manner and a Pyramid Fusion (PyFu) module that fuses multiple pyramids of 3D volumes. The model outperforms the state-of-the-art model on several indices with significantly less synthesis time.
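The phase-interpolation setting this abstract describes can be illustrated with a toy 1D example: scale a voxel flow by the intermediate time t, warp the minimal-phase volume, and blend with the maximal phase. This single-scale, nearest-neighbour sketch is purely illustrative and far simpler than the multi-pyramid, bilateral flows of MPVF; the function and parameter names are my own.

```python
import numpy as np

def interp_phase(vol_min, vol_max, flow, t):
    """Toy 1D flow-based interpolation between two extreme phases.

    vol_min, vol_max : 1D arrays, the minimal- and maximal-phase "volumes".
    flow             : per-voxel displacement (in voxels) from min to max phase.
    t                : time in [0, 1]; 0 returns vol_min, 1 returns vol_max.
    """
    n = vol_min.shape[0]
    # Backward warp: each output voxel samples the min phase at a position
    # displaced by t * flow (nearest neighbour, clamped at the borders).
    src = np.clip(np.round(np.arange(n) - t * flow).astype(int), 0, n - 1)
    warped = vol_min[src]
    # Blend the warped min phase with the max phase.
    return (1.0 - t) * warped + t * vol_max
```

At the endpoints the function reproduces the two input phases exactly; intermediate t values give the in-between volumes that a model like MPVF would synthesize at full 3D resolution.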
4
Zhang R, Liu Q. Learning with few samples in deep learning for image classification, a mini-review. Front Comput Neurosci 2023; 16:1075294. PMID: 36686199. PMCID: PMC9849670. DOI: 10.3389/fncom.2022.1075294.
Abstract
Deep learning has achieved enormous success in various computing tasks. This excellent performance depends heavily on adequate training datasets; however, abundant samples are difficult to obtain in practical applications. Few-shot learning addresses this data limitation by performing rapid learning with few samples through the use of prior knowledge. In this paper, we focus on few-shot classification and survey recent methods. First, we elaborate on the definition of the few-shot classification problem. We then propose a newly organized taxonomy, discuss the application scenarios in which each method is effective, and compare the pros and cons of different methods. We classify few-shot image classification methods from four perspectives: (i) data augmentation, which contains sample-level and task-level data augmentation; (ii) metric-based methods, which analyze both feature embedding and the metric function; (iii) optimization methods, which are compared from the aspects of self-learning and mutual learning; and (iv) model-based methods, which are discussed from the perspectives of memory, rapid adaptation, and multi-task learning. Finally, we conclude and discuss the prospects of this line of work.
5
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022; 32:330-342. DOI: 10.1016/j.semradonc.2022.06.003.
6
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67. PMID: 35421855. PMCID: PMC9870296. DOI: 10.1088/1361-6560/ac678a.
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought by new deep learning techniques such as convolutional neural networks for images, increased computational power, and the wider availability of large datasets. Most fields of medicine follow this trend, and radiation oncology is notably at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which degrades as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation in the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows, as well as vendors' perspectives on the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal: PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Margerie Huet Dastarac: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Camille Draguet: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems: ESAT/PSI, KU Leuven & MIRC, UZ Leuven, Belgium
- Kevin Souris: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium