1. Li X, Bellotti R, Bachtiary B, Hrbacek J, Weber DC, Lomax AJ, Buhmann JM, Zhang Y. A unified generation-registration framework for improved MR-based CT synthesis in proton therapy. Med Phys 2024; 51:8302-8316. [PMID: 39137294; DOI: 10.1002/mp.17338]
Abstract
BACKGROUND The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning. PURPOSE This study introduces a novel network that cohesively unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis. This is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). The method was validated on a dataset of 60 head-and-neck patients, with 12 cases reserved for holdout testing. RESULTS Compared with the baseline Pix2Pix method (MAE 124.95 ± 30.74 HU), the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. From a dosimetric perspective, plans recalculated on the resulting sCTs showed a markedly reduced discrepancy from the reference proton plans. CONCLUSIONS This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with varied anatomic changes between corresponding MR and CT scans.
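The alternating optimization described in METHODS can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' code: the registration network here is any module predicting a dense DVF (the paper uses an INR), and the `warp` helper, L1 losses, and magnitude penalty `lam` are assumptions.

```python
import torch
import torch.nn.functional as F

def warp(img, dvf):
    """Trilinearly warp img (N, C, D, H, W) by a displacement field
    dvf (N, 3, D, H, W) given in voxel units."""
    n, _, d, h, w = img.shape
    base = torch.stack(torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w),
        indexing="ij"), dim=0).float().to(img.device)
    coords = base.unsqueeze(0) + dvf
    for i, size in enumerate((d, h, w)):           # normalize to [-1, 1]
        coords[:, i] = 2 * coords[:, i] / (size - 1) - 1
    grid = coords.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(img, grid, align_corners=True)

def alternating_step(G, R, mr, ct, opt_G, opt_R, lam=0.01):
    # Update G with R frozen: supervise the sCT against the CT deformed
    # toward it, so residual misalignment is not baked into G.
    sct = G(mr)
    with torch.no_grad():
        ct_aligned = warp(ct, R(sct, ct))
    loss_G = F.l1_loss(sct, ct_aligned)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Update R with G frozen: align the CT to the current sCT, with a
    # simple displacement-magnitude penalty (a smoothness term is a
    # common alternative).
    sct_fixed = G(mr).detach()
    dvf = R(sct_fixed, ct)
    loss_R = F.l1_loss(warp(ct, dvf), sct_fixed) + lam * dvf.pow(2).mean()
    opt_R.zero_grad(); loss_R.backward(); opt_R.step()
```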
Affiliation(s)
- Xia Li
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Computer Science, ETH Zürich, Zürich, Switzerland
- Renato Bellotti
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Physics, ETH Zürich, Zürich, Switzerland
- Barbara Bachtiary
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Jan Hrbacek
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Damien C Weber
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Radiation Oncology, University Hospital of Zürich, Zürich, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Antony J Lomax
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Physics, ETH Zürich, Zürich, Switzerland
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
2. Liu H, McKenzie E, Xu D, Xu Q, Chin RK, Ruan D, Sheng K. MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration. Med Image Anal 2024; 99:103351. [PMID: 39388843; DOI: 10.1016/j.media.2024.103351]
Abstract
Deep-learning-based deformable image registration (DL-DIR) has demonstrated improved accuracy compared to time-consuming non-DL methods across various anatomical sites. However, DL-DIR remains challenging in heterogeneous tissue regions with large deformation. In fact, several state-of-the-art DL-DIR methods fail to capture large, anatomically plausible deformation when tested on head-and-neck computed tomography (CT) images. These results suggest that such complex head-and-neck deformation may be beyond the capacity of a single network structure or a homogeneous smoothness regularization. To address the challenge of combined multi-scale musculoskeletal motion and soft tissue deformation in the head-and-neck region, we propose a MUsculo-Skeleton-Aware (MUSA) framework to anatomically guide DL-DIR by leveraging an explicit multiresolution strategy and inhomogeneous deformation constraints between the bony structures and soft tissue. The proposed method decomposes the complex deformation into a bulk posture change and a residual fine deformation, and it accommodates both inter- and intra-subject registration. Our results show that the MUSA framework can consistently improve registration accuracy and, more importantly, the plausibility of deformation for various network architectures. The code will be publicly available at https://github.com/HengjieLiu/DIR-MUSA.
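The decomposition into a bulk posture change plus a residual fine deformation can be written compactly. The sketch below is a generic illustration under assumed interfaces (a 4x4 `affine` matrix for the bulk posture, a callable `residual_dvf_at` for the fine displacement), not the MUSA implementation.

```python
import numpy as np

def total_deformation(points, affine, residual_dvf_at):
    """Map physical points (N, 3) by a coarse 'bulk posture' affine
    transform, then add the fine residual displacement sampled at the
    bulk-transformed locations."""
    homog = np.c_[points, np.ones(len(points))]   # homogeneous coords
    bulk = (homog @ affine.T)[:, :3]              # bulk posture change
    return bulk + residual_dvf_at(bulk)           # residual fine motion

# Example: identity posture plus a constant 0.5 mm shift as the residual.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
out = total_deformation(pts, np.eye(4), lambda p: np.full_like(p, 0.5))
```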
Affiliation(s)
- Hengjie Liu
- Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Elizabeth McKenzie
- Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Di Xu
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Qifan Xu
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Robert K Chin
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Dan Ruan
- Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Ke Sheng
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
3. Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024; 114:102365. [PMID: 38471330; DOI: 10.1016/j.compmedimag.2024.102365]
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
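For reference, the cadaver statistics quoted above (median and interquartile range over implanted markers) correspond to a target registration error computation along the following lines; the marker arrays and the `transform` callable are assumed interfaces, not the paper's code.

```python
import numpy as np

def target_registration_error(fixed_markers, moving_markers, transform):
    """Distances between fixed-image marker positions and the registered
    (transformed) moving-image markers, summarized as median and IQR."""
    mapped = np.apply_along_axis(transform, 1, np.asarray(moving_markers))
    err = np.linalg.norm(mapped - np.asarray(fixed_markers), axis=1)
    q1, med, q3 = np.percentile(err, [25, 50, 75])
    return med, (q1, q3)
```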
Affiliation(s)
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
4. Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. [PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are covered:
- MR-based treatment planning and synthetic CT generation techniques
- Generation of synthetic CT images based on cone beam CT images
- Low-dose CT to high-dose CT generation
- Attenuation correction for PET images
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the presented methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all cited works were analyzed from various aspects, revealing that DL-based sCTs have achieved considerable popularity and showing the potential of this technology. To assess the clinical readiness of the presented methods, we also examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
5. Zhang Z, Li C, Wang W, Dong Z, Liu G, Dong Y, Zhang Y. Towards full-stack deep learning-empowered data processing pipeline for synchrotron tomography experiments. Innovation (N Y) 2024; 5:100539. [PMID: 38089566; PMCID: PMC10711238; DOI: 10.1016/j.xinn.2023.100539]
Abstract
Synchrotron tomography experiments are transitioning into multifunctional, cross-scale, and dynamic characterizations, enabled by new-generation synchrotron light sources and fast developments in beamline instrumentation. However, with the spatial and temporal resolving power entering a new era, this transition generates vast amounts of data, which imposes a significant burden on the data processing end. Today, as a highly accurate and efficient data processing method, deep learning shows great potential to address the big data challenge being encountered at future synchrotron beamlines. In this review, we discuss recent advances employing deep learning at different stages of the synchrotron tomography data processing pipeline. We also highlight how applications in other data-intensive fields, such as medical imaging and electron tomography, can be migrated to synchrotron tomography. Finally, we provide our thoughts on possible challenges and opportunities as well as the outlook, envisioning selected deep learning methods, curated big models, and customized learning strategies, all through an intelligent scheduling solution.
Affiliation(s)
- Zhen Zhang
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China
- Chun Li
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Wenhui Wang
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Zheng Dong
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Gongfa Liu
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China
- Yuhui Dong
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Yi Zhang
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
6. Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: Methods, applications and limitations. J Xray Sci Technol 2024; 32:857-911. [PMID: 38701131; DOI: 10.3233/xst-230429]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Korea
7. Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. [PMID: 37972540; PMCID: PMC10725576; DOI: 10.1088/1361-6560/ad0d8a]
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
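One local plausibility check that recurs in the DIR-uncertainty literature is the Jacobian determinant of the deformation, where values at or below zero flag physically implausible folding. A minimal NumPy sketch, assuming a dense displacement field stored as a (3, D, H, W) array, is given below; it is illustrative and not tied to any specific recommendation of this review.

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise determinant of J = I + du/dx for a displacement
    field dvf of shape (3, D, H, W) in physical units."""
    grads = [np.gradient(dvf[i], *spacing, axis=(0, 1, 2))
             for i in range(3)]                    # grads[i][j] = du_i/dx_j
    J = np.zeros(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (i == j)  # add identity
    return np.linalg.det(J)

# A simple red-flag summary: fraction of folded voxels.
# folded_fraction = (jacobian_determinant(dvf) <= 0).mean()
```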
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
- Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
8. Yuan S, Chen X, Liu Y, Zhu J, Men K, Dai J. Comprehensive evaluation of similarity between synthetic and real CT images for nasopharyngeal carcinoma. Radiat Oncol 2023; 18:182. [PMID: 37936196; PMCID: PMC10629140; DOI: 10.1186/s13014-023-02349-7]
Abstract
BACKGROUND Although magnetic resonance imaging (MRI)-to-computed tomography (CT) synthesis studies based on deep learning have progressed significantly, the similarity between synthetic CT (sCT) and real CT (rCT) has so far been evaluated only with image quality metrics (IQMs). To evaluate this similarity comprehensively, we assessed both IQMs and radiomic features for the first time. METHODS This study enrolled 127 patients with nasopharyngeal carcinoma who underwent CT and MRI scans. Supervised (Unet) and unsupervised (CycleGAN) learning methods were applied to build MRI-to-CT synthesis models. The regions of interest (ROIs) included the nasopharynx gross tumor volume (GTVnx), brainstem, parotid glands, and temporal lobes. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), root mean square error (RMSE), and structural similarity (SSIM) were used to evaluate image quality. Additionally, 837 radiomic features were extracted for each ROI, and their agreement was evaluated using the concordance correlation coefficient (CCC). RESULTS The MAE, RMSE, SSIM, and PSNR of the body were 91.99, 187.12, 0.97, and 51.15 for Unet and 108.30, 211.63, 0.96, and 49.84 for CycleGAN. For these metrics, Unet was superior to CycleGAN (P < 0.05). For the radiomic features, the percentages of the four agreement levels (excellent, good, moderate, and poor) were as follows: GTVnx, 8.5%, 14.6%, 26.5%, and 50.4% for Unet and 12.3%, 25%, 38.4%, and 24.4% for CycleGAN; other ROIs, 5.44% ± 3.27%, 5.56% ± 2.92%, 21.38% ± 6.91%, and 67.58% ± 8.96% for Unet and 5.16% ± 1.69%, 3.5% ± 1.52%, 12.68% ± 7.51%, and 78.62% ± 8.57% for CycleGAN. CONCLUSIONS Unet-sCT was superior to CycleGAN-sCT on the IQMs. However, neither exhibited absolute superiority in radiomic features, and both were far less similar to rCT in this respect. Further work is therefore required to improve the radiomic similarity of MRI-to-CT synthesis. TRIAL REGISTRATION This was a retrospective study, so registration was not required.
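The IQMs and the concordance correlation coefficient used in this evaluation are standard quantities; a minimal sketch of their computation follows (SSIM is omitted, since scikit-image's structural_similarity provides it; the array shapes and HU data range are assumptions).

```python
import numpy as np

def iqms(sct, rct, data_range=None):
    """MAE, RMSE, and PSNR between synthetic and real CT volumes (HU)."""
    sct, rct = np.asarray(sct, float), np.asarray(rct, float)
    mse = ((sct - rct) ** 2).mean()
    mae = np.abs(sct - rct).mean()
    rng = data_range if data_range is not None else rct.max() - rct.min()
    psnr = 10 * np.log10(rng ** 2 / mse)
    return mae, np.sqrt(mse), psnr

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    radiomic feature values from sCT (x) and rCT (y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = ((x - x.mean()) * (y - y.mean())).mean()
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```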
Affiliation(s)
- Siqi Yuan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
9. Zhang X, Gosnell J, Nainamalai V, Page S, Huang S, Haw M, Peng B, Vettukattil J, Jiang J. Advances in TEE-Centric Intraprocedural Multimodal Image Guidance for Congenital and Structural Heart Disease. Diagnostics (Basel) 2023; 13:2981. [PMID: 37761348; PMCID: PMC10530233; DOI: 10.3390/diagnostics13182981]
Abstract
Percutaneous interventions are gaining rapid acceptance in cardiology and revolutionizing the treatment of structural heart disease (SHD). As new percutaneous procedures for SHD are developed, their complexity and anatomical variability demand a high-resolution spatial understanding for intraprocedural image guidance. During the last decade, three-dimensional (3D) transesophageal echocardiography (TEE) has become one of the most widely used imaging methods for structural interventions. Although 3D-TEE can assess cardiac structures and function in real time, its limitations (e.g., limited field of view, image quality at large depth) must be addressed for its universal adoption and to improve the quality of its imaging and interventions. This review presents the role of TEE in the intraprocedural guidance of percutaneous structural interventions. We also focus on the current and future developments required in multimodal image integration when using TEE to enhance the management of congenital and SHD treatments.
Affiliation(s)
- Xinyue Zhang
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China; (X.Z.); (B.P.)
- Jordan Gosnell
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Varatharajan Nainamalai
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
- Savannah Page
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
- Sihong Huang
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Marcus Haw
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Bo Peng
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China; (X.Z.); (B.P.)
- Joseph Vettukattil
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Jingfeng Jiang
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
10. McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180; PMCID: PMC10525905; DOI: 10.3390/bioengineering10091078]
Abstract
BACKGROUND A CT scan is often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to its time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis, and the remaining studies investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
11. Zhao Y, Chen X, McDonald B, Yu C, Mohamed ASR, Fuller CD, Court LE, Pan T, Wang H, Wang X, Phan J, Yang J. A transformer-based hierarchical registration framework for multimodality deformable image registration. Comput Med Imaging Graph 2023; 108:102286. [PMID: 37625307; PMCID: PMC10873569; DOI: 10.1016/j.compmedimag.2023.102286]
Abstract
Deformable image registration (DIR) between daily and reference images is fundamentally important for adaptive radiotherapy. In the last decade, deep learning-based image registration methods have been developed with faster computation times and improved robustness compared to traditional methods. However, registration performance is often degraded in extra-cranial sites with large volumes containing multiple anatomic regions, such as the computed tomography (CT)/magnetic resonance (MR) images used in head-and-neck (HN) radiotherapy. In this study, we developed a hierarchical DIR framework, the Patch-based Registration Network (Patch-RegNet), to improve the accuracy and speed of CT-MR and MR-MR registration for head-and-neck MR-Linac treatments. Patch-RegNet includes three steps: whole-volume global registration, patch-based local rigid registration, and patch-based deformable registration. Following the whole-volume rigid registration, the input images are divided into overlapping patches, and a patch-based rigid registration is applied to achieve accurate local alignment for the subsequent DIR. We developed a ViT-Morph model, a combination of a convolutional neural network (CNN) and the Vision Transformer (ViT), for the patch-based DIR. A modality-independent neighborhood descriptor (MIND) was adopted as the similarity metric in our model to account for both inter-modality and intra-modality registration. The CT-MR and MR-MR DIR models were trained with 242 CT-MR and 213 MR-MR image pairs from 36 patients, respectively, and both were tested with 24 image pairs (CT-MR and MR-MR) from 6 other patients. Registration performance was evaluated on 7 manually contoured organs (brainstem, spinal cord, mandible, left/right parotids, left/right submandibular glands) by comparison with the traditional registration methods in the Monaco treatment planning system and the popular deep learning-based DIR framework VoxelMorph. Based on DSC measurements, our method outperformed VoxelMorph by 6% for CT-MR registration and 4% for MR-MR registration, demonstrating significantly improved DIR accuracy for both CT-MR and MR-MR registration in head-and-neck MR-guided adaptive radiotherapy.
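The modality-independent neighborhood descriptor can be approximated as follows. This simplified sketch uses six face-neighbor self-similarities with a box-filtered patch distance and circular-shift boundary handling, which are simplifying assumptions relative to the original MIND formulation, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def mind_descriptor(img, radius=1):
    """Simplified MIND-style self-similarity descriptor for a volume
    img of shape (N, 1, D, H, W): box-filtered patch distances to the
    six face neighbors, variance-normalized and exponentiated."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    k = 2 * radius + 1
    blur = lambda x: F.avg_pool3d(x, k, stride=1, padding=radius)
    dists = []
    for dz, dy, dx in offsets:
        shifted = torch.roll(img, shifts=(dz, dy, dx), dims=(2, 3, 4))
        dists.append(blur((img - shifted) ** 2))     # patch SSD per voxel
    d = torch.cat(dists, dim=1)                      # (N, 6, D, H, W)
    v = d.mean(dim=1, keepdim=True).clamp_min(1e-6)  # local noise estimate
    mind = torch.exp(-d / v)
    return mind / mind.amax(dim=1, keepdim=True)     # scale max channel to 1
```

An inter-modality registration loss can then compare descriptors channel-wise, e.g. `F.l1_loss(mind_descriptor(warped_ct), mind_descriptor(mr))`.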
Affiliation(s)
- Yao Zhao
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Xinru Chen
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Brigid McDonald
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Cenji Yu
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Abdalah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Laurence E Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Tinsu Pan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- He Wang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Xin Wang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Jack Phan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
12. Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. [PMID: 37349631; DOI: 10.1007/s13246-023-01290-z]
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies, and the DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were compared with the corresponding ground-truth FPD images using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD images was also compared with that of the DRR images to assess the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD images (0.12 ± 0.02) improved on that of the input DRR images (0.35 ± 0.08). The synthetic FPD images showed higher PSNRs (16.81 ± 1.54 dB) than the DRR images (8.74 ± 1.56 dB), while the SSIMs for both (0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, and SSIM 0.80 ± 0.04) compared to those for the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, and SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique would be useful for increasing throughput when images from two different modalities are compared by visual inspection.
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Masashi Koto
- QST hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
13. Chang HH. Multimodal Image Registration Using a Viscous Fluid Model with the Bhattacharyya Distance. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083278; DOI: 10.1109/embc40787.2023.10340615]
Abstract
Image registration is an elementary task in medical image processing and analysis and can be divided into monomodal and multimodal settings. Direct 3D multimodal registration of volumetric medical images can provide more insight for subsequent image processing applications than 2D methods. This paper is dedicated to the development of a 3D multimodal image registration algorithm based on a viscous fluid model associated with the Bhattacharyya distance. In our approach, a modified Navier-Stokes equation serves as the foundation of the multimodal registration framework. The hopscotch method is implemented numerically to solve for the velocity field: values at the explicit locations are computed first, and values at the implicit positions are then solved by transposition. The differential of the Bhattacharyya distance is incorporated into the body force function, the main driving force for deformation, to enable multimodal registration. A variety of simulated and real brain MR images were used to assess the proposed 3D multimodal image registration system. Preliminary experimental results indicated that our algorithm produced high registration accuracy in various scenarios and outperformed competing methods in many multimodal registration tasks. Clinical Relevance: This facilitates disease diagnosis and treatment planning that require accurate 3D multimodal image registration, without massive image data or extensive training, regardless of the imaging modality.
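For reference, the Bhattacharyya distance between two intensity distributions p and q is DB = -ln Σ √(p·q), summed over histogram bins; its differential with respect to the deformation supplies the body force described above. A minimal histogram-based sketch follows (the bin count is an arbitrary choice, not taken from the paper).

```python
import numpy as np

def bhattacharyya_distance(img_a, img_b, bins=64):
    """Bhattacharyya distance between the intensity histograms of two
    images; smaller values indicate more similar distributions."""
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    p, _ = np.histogram(img_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(img_b, bins=bins, range=(lo, hi))
    p = p / p.sum()                           # normalize to probabilities
    q = q / q.sum()
    bc = np.sqrt(p * q).sum()                 # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))
```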
14. Zhu J, Chen X, Liu Y, Yang B, Wei R, Qin S, Yang Z, Hu Z, Dai J, Men K. Improving accelerated 3D imaging in MRI-guided radiotherapy for prostate cancer using a deep learning method. Radiat Oncol 2023; 18:108. [PMID: 37393282; DOI: 10.1186/s13014-023-02306-4]
Abstract
PURPOSE This study aimed to improve image quality for high-speed MR imaging, using a deep learning method, for online adaptive radiotherapy of prostate cancer. We then evaluated its benefits for image registration. METHODS Sixty pairs of 1.5 T MR images acquired with an MR-linac were enrolled. The data included low-speed high-quality (LSHQ) and high-speed low-quality (HSLQ) MR images. We proposed a CycleGAN, based on a data augmentation technique, to learn the mapping between the HSLQ and LSHQ images and then generate synthetic LSHQ (synLSHQ) images from the HSLQ images. Five-fold cross-validation was employed to test the CycleGAN model. The normalized mean absolute error (nMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and edge keeping index (EKI) were calculated to quantify image quality. The Jacobian determinant value (JDV), Dice similarity coefficient (DSC), and mean distance to agreement (MDA) were used to analyze deformable registration. RESULTS Compared with the LSHQ, the proposed synLSHQ achieved comparable image quality while reducing imaging time by about 66%. Compared with the HSLQ, the synLSHQ had better image quality, with improvements of 57%, 3.4%, 26.9%, and 3.6% in nMAE, SSIM, PSNR, and EKI, respectively. Furthermore, the synLSHQ enhanced registration accuracy, with a superior mean JDV (6%) and preferable DSC and MDA values compared with the HSLQ. CONCLUSION The proposed method can generate high-quality images from high-speed scanning sequences. As a result, it shows potential to shorten scan time while ensuring the accuracy of radiotherapy.
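The HSLQ-to-LSHQ mapping follows the standard CycleGAN formulation; below is a minimal sketch of the generator-side objective with least-squares adversarial terms and cycle consistency. The handles `G`, `F_net`, `D_lshq`, `D_hslq` and the weight `lam` are illustrative assumptions, and the paper's data augmentation is not reproduced.

```python
import torch
import torch.nn.functional as F

def generator_objective(G, F_net, D_lshq, D_hslq, hslq, lshq, lam=10.0):
    """G: HSLQ -> LSHQ, F_net: LSHQ -> HSLQ. Least-squares adversarial
    losses push synthetic images toward the target domain; cycle terms
    enforce that translating there and back recovers the input."""
    fake_lshq, fake_hslq = G(hslq), F_net(lshq)
    pred_l, pred_h = D_lshq(fake_lshq), D_hslq(fake_hslq)
    adv = F.mse_loss(pred_l, torch.ones_like(pred_l)) \
        + F.mse_loss(pred_h, torch.ones_like(pred_h))
    cycle = F.l1_loss(F_net(fake_lshq), hslq) + F.l1_loss(G(fake_hslq), lshq)
    return adv + lam * cycle
```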
Affiliation(s)
- Ji Zhu
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xinyuan Chen
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yuxiang Liu
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- School of Physics and Technology, Wuhan University, Wuhan, 430072, China
- Bining Yang
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Ran Wei
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Shirui Qin
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Zhuanbo Yang
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Zhihui Hu
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianrong Dai
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Kuo Men
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
15. Zhong L, Huang P, Shu H, Li Y, Zhang Y, Feng Q, Wu Y, Yang W. United multi-task learning for abdominal contrast-enhanced CT synthesis through joint deformable registration. Comput Methods Programs Biomed 2023; 231:107391. [PMID: 36804266; DOI: 10.1016/j.cmpb.2023.107391]
Abstract
Synthesizing abdominal contrast-enhanced computed tomography (CECT) images from non-enhanced CT (NECT) images is of great importance for delineating radiotherapy target volumes: it reduces the risk associated with iodinated contrast agents and avoids the NECT-CECT registration error incurred when transferring delineations. NECT images contain structural information that can reflect the contrast difference between lesions and surrounding tissues. However, existing methods treat synthesis and registration as two separate tasks, which neglects their collaboration and fails to address the misalignment that remains after standard image pre-processing when training a CECT synthesis model. We therefore propose united multi-task learning (UMTL) for joint synthesis and deformable registration of abdominal CECT. Specifically, UMTL is an end-to-end multi-task framework that integrates a deformation field learning network for reducing misalignment errors with a 3D generator for synthesizing CECT images. Furthermore, learning of enhanced component images and a multi-term loss function are adopted to enhance the quality of the synthetic CECT images. The proposed method was evaluated on two datasets of different resolution and on a separate test dataset from another center. The synthetic venous-phase CECT images of the separate test dataset yielded a mean absolute error (MAE) of 32.78 ± 7.27 HU, a mean MAE of 24.15 ± 5.12 HU over the liver region, a mean peak signal-to-noise ratio (PSNR) of 27.59 ± 2.45 dB, and a mean structural similarity (SSIM) of 0.96 ± 0.01. The Dice similarity coefficients of the liver region between the true and synthetic venous-phase CECT images were 0.96 ± 0.05 (high resolution) and 0.95 ± 0.07 (low resolution), respectively. The proposed method has great potential for aiding the delineation of radiotherapy target volumes.
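The end-to-end coupling of deformation learning and synthesis can be sketched as a single joint objective. The sketch below assumes a differentiable `warp` resampler and an illustrative smoothness weight, and it omits the paper's enhanced-component learning.

```python
import torch.nn.functional as F

def joint_synthesis_registration_loss(syn_cect, true_cect, dvf, warp,
                                      lam_smooth=0.1):
    """Warp the reference CECT onto the NECT grid with the learned
    field, supervise the synthetic CECT against the warped target, and
    regularize the field with a first-order smoothness penalty."""
    target = warp(true_cect, dvf)                 # reduce misalignment
    fidelity = F.l1_loss(syn_cect, target)        # synthesis supervision
    smooth = sum((dvf.diff(dim=d) ** 2).mean() for d in (2, 3, 4))
    return fidelity + lam_smooth * smooth
```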
Affiliation(s)
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Pinyu Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
- Yin Li
- Department of Information, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510515, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Yuankui Wu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
16. Zhang Z, Wang Y, Zhou S, Li Z, Peng Y, Gao S, Zhu G, Wu F, Wu B. The automatic evaluation of steno-occlusive changes in time-of-flight magnetic resonance angiography of moyamoya patients using a 3D coordinate attention residual network. Quant Imaging Med Surg 2023; 13:1009-1022. [PMID: 36819290; PMCID: PMC9929428; DOI: 10.21037/qims-22-799]
Abstract
Background Moyamoya disease (MMD) is a rare cerebrovascular occlusive disease marked by progressive stenosis of the terminal portion of the internal carotid artery (ICA) and its main branches, which can cause complications such as a high risk of disability and increased mortality. Accurate and timely diagnosis may be difficult for physicians unfamiliar with MMD. This study therefore aims at a preoperative deep-learning-based evaluation of MMD by detecting steno-occlusive changes in the middle cerebral artery or distal ICA areas. Methods A fine-tuned deep learning model was developed using a three-dimensional (3D) coordinate attention residual network (3D CA-ResNet). This study enrolled 50 preoperative patients with MMD and 50 controls, and the corresponding time-of-flight magnetic resonance angiography (TOF-MRA) imaging data were acquired. The 3D CA-ResNet was trained on sub-volumes and tested using patch-based and subject-based methods. Its performance, evaluated by the area under the receiver-operating characteristic curve (AUC), was compared with that of three conventional 3D networks. Results The patch-based test achieved an AUC of 0.94 for the 3D CA-ResNet on 480 patches from 10 test patients and 10 test controls, significantly higher than the results of the other networks. The 3D CA-ResNet correctly classified the MMD patients and normal healthy controls, and the vascular lesion distribution in diseased subjects was investigated by generating a stenosis probability map and a 3D vascular structure segmentation. Conclusions The results demonstrate the reliability of the proposed 3D CA-ResNet in detecting stenotic areas on TOF-MRA imaging; it outperformed three other models in identifying vascular steno-occlusive changes in patients with MMD.
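Coordinate attention factorizes attention into per-axis components; a minimal 3D variant is sketched below to illustrate the mechanism. The layer choices (reduction ratio, BatchNorm/ReLU encoder) are assumptions, not the authors' exact 3D CA-ResNet block.

```python
import torch
import torch.nn as nn

class CoordinateAttention3D(nn.Module):
    """Minimal 3D coordinate attention: average-pool the feature map
    along each spatial axis, encode the pooled descriptors jointly,
    then re-weight the input per axis."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.encode = nn.Sequential(
            nn.Conv3d(channels, mid, 1), nn.BatchNorm3d(mid), nn.ReLU())
        self.att_d = nn.Conv3d(mid, channels, 1)
        self.att_h = nn.Conv3d(mid, channels, 1)
        self.att_w = nn.Conv3d(mid, channels, 1)

    def forward(self, x):                              # x: (N, C, D, H, W)
        n, c, d, h, w = x.shape
        pd = x.mean(dim=(3, 4), keepdim=True)           # (N, C, D, 1, 1)
        ph = x.mean(dim=(2, 4), keepdim=True)           # (N, C, 1, H, 1)
        pw = x.mean(dim=(2, 3), keepdim=True)           # (N, C, 1, 1, W)
        # Concatenate along one axis so a single encoder sees all three.
        y = torch.cat([pd, ph.permute(0, 1, 3, 2, 4),
                       pw.permute(0, 1, 4, 3, 2)], dim=2)
        y = self.encode(y)
        yd, yh, yw = torch.split(y, [d, h, w], dim=2)
        a_d = torch.sigmoid(self.att_d(yd))                         # depth
        a_h = torch.sigmoid(self.att_h(yh.permute(0, 1, 3, 2, 4)))  # height
        a_w = torch.sigmoid(self.att_w(yw.permute(0, 1, 4, 3, 2)))  # width
        return x * a_d * a_h * a_w
```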
Affiliation(s)
- Zeru Zhang
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China; The School of Health Humanities, Peking University, Beijing, China
- Yituo Wang
- Department of Radiology, Seventh Medical Center of Chinese PLA General Hospital, Beijing, China
- Shuai Zhou
- Department of Radiology, Shijiazhuang People’s Hospital, Shijiazhuang, China
- Zhaotong Li
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China; The School of Health Humanities, Peking University, Beijing, China
- Ying Peng
- Department of Radiology, Seventh Medical Center of Chinese PLA General Hospital, Beijing, China; The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Song Gao
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Guangming Zhu
- Department of Neurology, College of Medicine, University of Arizona, Tucson, Arizona, USA
- Fengliang Wu
- Beijing Key Laboratory of Spinal Disease Research, Engineering Research Center of Bone and Joint Precision Medicine, Department of Orthopedics, Peking University Third Hospital, Beijing, China
- Bing Wu
- Department of Radiology, Seventh Medical Center of Chinese PLA General Hospital, Beijing, China; The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
17. Zhong L, Chen Z, Shu H, Zheng Y, Zhang Y, Wu Y, Feng Q, Li Y, Yang W. QACL: Quartet attention aware closed-loop learning for abdominal MR-to-CT synthesis via simultaneous registration. Med Image Anal 2023; 83:102692. [PMID: 36442293; DOI: 10.1016/j.media.2022.102692]
Abstract
Synthesis of computed tomography (CT) images from magnetic resonance (MR) images is an important task to overcome the lack of electron density information in MR-only radiotherapy treatment planning (RTP). Some innovative methods have been proposed for abdominal MR-to-CT synthesis. However, it is still challenging due to the large misalignment between preprocessed abdominal MR and CT images and the insufficient feature information learned by models. Although several studies have used the MR-to-CT synthesis to alleviate the difficulty of multi-modal registration, this misalignment remains unsolved when training the MR-to-CT synthesis model. In this paper, we propose an end-to-end quartet attention aware closed-loop learning (QACL) framework for MR-to-CT synthesis via simultaneous registration. Specifically, the proposed quartet attention generator and mono-modal registration network form a closed-loop to improve the performance of MR-to-CT synthesis via simultaneous registration. In particular, a quartet-attention mechanism is developed to enlarge the receptive fields in networks to extract the long-range and cross-dimension spatial dependencies. Experimental results on two independent abdominal datasets demonstrate that our QACL achieves impressive results with MAE of 55.30±10.59 HU, PSNR of 22.85±1.43 dB, and SSIM of 0.83±0.04 for synthesis, and with Dice of 0.799±0.129 for registration. The proposed QACL outperforms the state-of-the-art MR-to-CT synthesis and multi-modal registration methods.
Collapse
Affiliation(s)
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
| | - Zeli Chen
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
| | - Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
| | - Yikai Zheng
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
| | - Yuankui Wu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
| | - Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
| | - Yin Li
- Department of Information, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510655, China.
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China.
| |
Collapse
|
18
|
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022; 32:330-342. [DOI: 10.1016/j.semradonc.2022.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
19
|
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/25/2022] [Indexed: 11/11/2022]
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, MICCAI, etc. Among them, 65 references concern automatic segmentation, 15 automatic landmark detection, and eight automatic registration. In the review, an overview of deep learning in MIC is first presented. The applications of deep learning methods are then systematically summarized according to clinical needs and grouped into segmentation, landmark detection and registration of head and neck medical images. For segmentation, the focus is on the automatic segmentation of organs at risk, head and neck tumors, skull structures and teeth, including an analysis of their advantages, differences and shortcomings. For landmark detection, the focus is on landmark detection in cephalometric and craniomaxillofacial images, with an analysis of their advantages and disadvantages. For registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, their shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers or doctors engaged in medical image analysis of head and neck surgery.
Collapse
|
20
|
Heo JU, Zhou F, Jones R, Zheng J, Song X, Qian P, Baydoun A, Traughber MS, Kuo JW, Helo RA, Thompson C, Avril N, DeVincent D, Hunt H, Gupta A, Faraji N, Kharouta MZ, Kardan A, Bitonte D, Langmack CB, Nelson A, Kruzer A, Yao M, Dorth J, Nakayama J, Waggoner SE, Biswas T, Harris E, Sandstrom S, Traughber BJ, Muzic RF. Abdominopelvic MR to CT registration using a synthetic CT intermediate. J Appl Clin Med Phys 2022; 23:e13731. [PMID: 35920116 PMCID: PMC9512351 DOI: 10.1002/acm2.13731] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 04/25/2022] [Accepted: 06/27/2022] [Indexed: 11/21/2022] Open
Abstract
Accurate coregistration of computed tomography (CT) and magnetic resonance (MR) imaging can provide clinically relevant, complementary information and can facilitate multiple clinical tasks, including surgical and radiation treatment planning and the generation of a virtual positron emission tomography (PET)/MR for sites that do not have a PET/MR system available. Despite the long-standing interest in multimodality coregistration, a robust, routine clinical solution remains an unmet need. Part of the challenge may be the use of mutual information (MI) maximization and local phase difference (LPD) as similarity metrics, which have limited robustness and efficiency and are difficult to optimize. Accordingly, we propose registering MR to CT by mapping the MR to a synthetic CT intermediate (sCT) and using it in an sCT-CT deformable image registration (DIR) that minimizes the sum of squared differences. The resulting deformation field of the sCT-CT DIR is applied to the MRI to register it with the CT. Twenty-five sets of abdominopelvic imaging data were used for evaluation. The proposed method is compared to standard MI- and LPD-based methods and to the multimodality DIR provided by a state-of-the-art, commercially available, FDA-cleared clinical software package. The results are compared using global similarity metrics, the modified Hausdorff distance, and the Dice similarity index on six structures. Further, four physicians visually assessed and scored the registered images for registration accuracy. As evident from both the quantitative and qualitative evaluations, the proposed method achieved registration accuracy superior to the LPD- and MI-based methods and can refine the results of the commercial package's DIR when using those results as a starting point. Supported by these findings, the manuscript concludes that the proposed registration method is more robust, accurate, and efficient than the MI- and LPD-based methods.
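The sCT-intermediate strategy lends itself to a short SimpleITK sketch: because the sCT and CT are the same modality, a mean-squares (SSD) metric can drive the DIR, and the resulting transform carries the MR onto the CT. This is an illustrative reconstruction assuming a B-spline deformation model, not the study's implementation.

```python
# Sketch (not the authors' code): register an assumed synthetic CT to the real
# CT with an SSD metric, then apply the resulting transform to the MR.
import SimpleITK as sitk

def register_via_sct(ct: sitk.Image, sct: sitk.Image, mr: sitk.Image) -> sitk.Image:
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()                        # sum of squared differences
    tx0 = sitk.BSplineTransformInitializer(ct, [8, 8, 8])
    reg.SetInitialTransform(tx0, inPlace=False)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)
    final_tx = reg.Execute(ct, sct)                     # mono-modal sCT-CT DIR
    # The sCT shares the MR geometry, so the same transform maps MR onto the CT
    return sitk.Resample(mr, ct, final_tx, sitk.sitkLinear, 0.0)

# usage: registered_mr = register_via_sct(ct_image, sct_image, mr_image)
```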
Collapse
Affiliation(s)
- Jin Uk Heo
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
| | - Feifei Zhou
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
| | - Robert Jones
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Jiamin Zheng
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
| | - Xin Song
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
| | - Pengjiang Qian
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
| | - Atallah Baydoun
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Department of Internal Medicine, Louis Stokes Cleveland VA Medical Center, Cleveland, Ohio, USA
| | - Melanie S Traughber
- Department of Radiation Oncology, Penn State University, Hershey, Pennsylvania, USA
| | - Jung-Wen Kuo
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
| | - Rose Al Helo
- Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Cheryl Thompson
- Department of Public Health Sciences, Penn State College of Medicine, Hershey, Pennsylvania, USA
| | - Norbert Avril
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Daniel DeVincent
- Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Harold Hunt
- Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Amit Gupta
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Navid Faraji
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Michael Z Kharouta
- Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Arash Kardan
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - David Bitonte
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Christian B Langmack
- Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | | | | | - Min Yao
- Department of Radiation Oncology, Penn State University, Hershey, Pennsylvania, USA
| | - Jennifer Dorth
- Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiation Oncology, Case Western Reserve University, Cleveland, Ohio, USA
| | - John Nakayama
- Department of Obstetrics and Gynecology, Allegheny Health Network, Pittsburgh, Pennsylvania, USA
| | - Steven E Waggoner
- Department of Obstetrics and Gynecology, Cleveland Clinic, Cleveland, Ohio, USA
| | - Tithi Biswas
- Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiation Oncology, Case Western Reserve University, Cleveland, Ohio, USA
| | - Eleanor Harris
- Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiation Oncology, Case Western Reserve University, Cleveland, Ohio, USA
| | - Susan Sandstrom
- Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Bryan J Traughber
- Department of Radiation Oncology, Penn State University, Hershey, Pennsylvania, USA
| | - Raymond F Muzic
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| |
Collapse
|
21
|
Danilevicz MF, Gill M, Anderson R, Batley J, Bennamoun M, Bayer PE, Edwards D. Plant Genotype to Phenotype Prediction Using Machine Learning. Front Genet 2022; 13:822173. [PMID: 35664329 PMCID: PMC9159391 DOI: 10.3389/fgene.2022.822173] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Accepted: 03/07/2022] [Indexed: 12/13/2022] Open
Abstract
Genomic prediction tools support crop breeding based on statistical methods such as genomic best linear unbiased prediction (GBLUP). However, these tools are not designed to capture non-linear relationships within multi-dimensional datasets or to handle high-dimensional data such as imagery collected by unmanned aerial vehicles. Machine learning (ML) algorithms have the potential to surpass the prediction accuracy of current genotype-to-phenotype prediction tools, owing to their capacity to autonomously extract data features and represent their relationships at multiple levels of abstraction. This review addresses the challenges of applying statistical and machine learning methods to predicting phenotypic traits from genetic markers, environment data, and imagery for crop breeding. We present the advantages and disadvantages of explainable model structures, discuss the potential of machine learning models for genotype-to-phenotype prediction in crop breeding, and outline the challenges, including the scarcity of high-quality datasets, inconsistent metadata annotation and the requirements of ML models.
Collapse
Affiliation(s)
- Monica F. Danilevicz
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA, Australia
| | - Mitchell Gill
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA, Australia
| | - Robyn Anderson
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA, Australia
| | - Jacqueline Batley
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA, Australia
| | - Mohammed Bennamoun
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Philipp E. Bayer
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA, Australia
| | - David Edwards
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA, Australia
- *Correspondence: David Edwards
| |
Collapse
|
22
|
Zhang J, Zhang E, Yuan C, Zhang H, Wang X, Yan F, Pei Y, Li Y, Wei M, Yang Z, Wang X, Dong L. Abnormal default mode network could be a potential prognostic marker in patients with disorders of consciousness. Clin Neurol Neurosurg 2022; 218:107294. [PMID: 35597165 DOI: 10.1016/j.clineuro.2022.107294] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 05/13/2022] [Indexed: 11/17/2022]
Abstract
OBJECTIVES The study aimed to investigate the mechanisms of disorders of consciousness (DOC) in patients with severe traumatic brain injury (sTBI) in relation to the default mode network (DMN) and to introduce a machine learning model that predicts the 6-month prognosis of these patients. METHODS sTBI patients suffering from DOC and healthy controls underwent functional magnetic resonance imaging. Patients with an Extended Glasgow Outcome Scale score ≥ 5 were defined as the good-outcome group; the remainder formed the poor-outcome group. Differences in the DMN between sTBI patients and healthy controls and between the good- and poor-outcome groups were compared. The brain regions with altered functional connectivity between the good- and poor-outcome groups were divided into 8 regions of interest according to side, and their Z values were extracted with REST 1.8. Based on these Z values, a subspace k-nearest-neighbor (subspace KNN) classifier was trained to predict the prognosis of sTBI patients suffering from DOC. RESULTS A total of 84 DMNs derived from patients and 45 DMNs from healthy controls were analyzed. DMN connectivity was significantly decreased in sTBI patients suffering from DOC (AlphaSim-corrected, P < 0.05). In addition, compared with the poor-outcome group (DMN samples = 60), the DMN regions with decreased functional connectivity in the good-outcome group (DMN samples = 24) included the following bilateral areas: Brodmann area 11, the anterior cingulate and paracingulate gyri, Brodmann area 25, and the olfactory cortex (AlphaSim-corrected, P < 0.05). The ability of the subspace KNN model to distinguish patient prognosis, measured as the area under the curve, was 0.97. CONCLUSIONS Interruption of the DMN may be one of the causes of DOC in patients with sTBI. Furthermore, based on the early DMN (1-4 weeks), subspace KNN machine learning has potential value for distinguishing the prognosis (6 months after brain trauma) of sTBI patients suffering from DOC.
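For readers unfamiliar with subspace KNN: it is a random-subspace ensemble of k-nearest-neighbor learners, and an approximation is straightforward in scikit-learn (version ≥ 1.2 for the `estimator` keyword). The feature matrix, labels, and hyperparameters below are synthetic stand-ins for the 8 ROI Z-values and 6-month outcome labels, not the study's data.

```python
# Approximate "Subspace KNN" as a random-subspace ensemble of k-NN classifiers.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(84, 8))       # Z values for 8 DMN regions of interest
y = rng.integers(0, 2, size=84)    # 1 = good outcome, 0 = poor outcome (toy labels)

clf = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5),
    n_estimators=30,
    max_features=4,                # each learner sees a random feature subspace
    bootstrap=False,               # subspace method resamples features, not rows
    random_state=0,
)
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, probs))
```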
Collapse
Affiliation(s)
- Jun Zhang
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Neurosurgical Institute of Fudan University, Shanghai 200040, China; Shanghai Clinical Medical Center of Neurosurgery, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai 200040, China; Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Enpeng Zhang
- Department of Neurosurgery, Yangzhou School of Clinical Medicine of Dalian Medical University, Yangzhou 225000, China
| | - Cong Yuan
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Neurosurgical Institute of Fudan University, Shanghai 200040, China; Shanghai Clinical Medical Center of Neurosurgery, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai 200040, China
| | - Hengzhu Zhang
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Xingdong Wang
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Fuli Yan
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Yunlong Pei
- Department of Neurosurgery, Yangzhou School of Clinical Medicine of Dalian Medical University, Yangzhou 225000, China
| | - Yuping Li
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Min Wei
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Zhijie Yang
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China
| | - Xiaodong Wang
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China.
| | - Lun Dong
- Department of Neurosurgery, Clinical Medical College, Yangzhou University, Yangzhou 225000, China.
| |
Collapse
|
23
|
Chen X, Yang B, Li J, Zhu J, Ma X, Chen D, Hu Z, Men K, Dai J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys Med Biol 2021; 66. [PMID: 34700300 DOI: 10.1088/1361-6560/ac3345] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 10/26/2021] [Indexed: 12/11/2022]
Abstract
Objective: Megavoltage computed tomography (MV-CT) is used for setup verification and adaptive radiotherapy in tomotherapy. However, its low contrast and high noise lead to poor image quality. This study aimed to develop a deep-learning-based method to generate synthetic kilovoltage CT (skV-CT) and then evaluate its ability to improve image quality and tumor segmentation. Approach: The planning kV-CT and MV-CT images of 270 patients with nasopharyngeal carcinoma (NPC) treated on an Accuray TomoHD system were used. An improved cycle-consistent adversarial network, which used residual blocks as its generator, was adopted to learn the mapping between MV-CT and kV-CT and then generate skV-CT from MV-CT. A Catphan 700 phantom and 30 patients with NPC were used to evaluate image quality. The quantitative indices included the contrast-to-noise ratio (CNR), uniformity, and signal-to-noise ratio (SNR) for the phantom, and the structural similarity index measure (SSIM), mean absolute error (MAE), and peak signal-to-noise ratio (PSNR) for patients. Next, we trained three models for segmentation of the clinical target volume (CTV): MV-CT only, skV-CT only, and MV-CT combined with skV-CT. Segmentation accuracy was compared using the Dice similarity coefficient (DSC) and mean distance to agreement (MDA). Main results: Compared with MV-CT, skV-CT showed significant improvements in CNR (184.0%), image uniformity (34.7%), and SNR (199.0%) in the phantom study, and improved SSIM (1.7%), MAE (24.7%), and PSNR (7.5%) in the patient study. For CTV segmentation with only MV-CT, only skV-CT, and MV-CT combined with skV-CT, the DSCs were 0.75 ± 0.04, 0.78 ± 0.04, and 0.79 ± 0.03, respectively, and the MDAs (in mm) were 3.69 ± 0.81, 3.14 ± 0.80, and 2.90 ± 0.62, respectively. Significance: The proposed method improved the image quality of MV-CT and thus tumor segmentation in helical tomotherapy. The method can potentially benefit adaptive radiotherapy.
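The patient-study indices (SSIM, MAE, PSNR) are standard and easy to reproduce; a sketch with scikit-image on random stand-in volumes is given below. The data and the unit intensity range are assumptions for illustration only.

```python
# Compute SSIM, MAE, and PSNR between a synthetic kV-CT and a reference kV-CT.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

kvct = np.random.rand(64, 64, 64).astype(np.float32)          # reference volume
skvct = np.clip(kvct + 0.05 * np.random.randn(64, 64, 64),    # noisy stand-in
                0, 1).astype(np.float32)

mae = float(np.mean(np.abs(skvct - kvct)))
psnr = peak_signal_noise_ratio(kvct, skvct, data_range=1.0)
ssim = structural_similarity(kvct, skvct, data_range=1.0)
print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```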
Collapse
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Jingwen Li
- Cloud Computing and Big Data Research Institute, China Academy of Information and Communications Technology, People's Republic of China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Deqi Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Zhihui Hu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| |
Collapse
|
24
|
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 96] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison of network architectures and metrics is reported. A detailed review of each category is given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Collapse
Affiliation(s)
- Maria Francesca Spadea
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
| | - Paolo Zaffino
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
| |
Collapse
|
25
|
McKenzie EM, Tong N, Ruan D, Cao M, Chin RK, Sheng K. Using neural networks to extend cropped medical images for deformable registration among images with differing scan extents. Med Phys 2021; 48:4459-4471. [PMID: 34101198 DOI: 10.1002/mp.15039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/07/2021] [Accepted: 05/27/2021] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize the cropped portions of head-and-neck CTs and then test its use in DIR. METHODS Using a training dataset of 409 head-and-neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically completed volume to a single cropped, full, or synthetically completed target volume. We additionally tested the method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy with our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions of an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error was 9.9 mm for full-image registration, versus 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered-contour error as a function of initial pre-registration error shows that our method is robust to registration difficulty. Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with a −0.04 mm change in average contour error for every additional millimeter of cropping. CONCLUSIONS Differing or inadequate scan extent is a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
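The 95% Hausdorff distance used to score contour agreement can be computed from two contour point clouds with scipy. The point sets below are synthetic stand-ins; this is a sketch of the metric, not the study's evaluation code.

```python
# Symmetric 95th-percentile Hausdorff distance between two contour point sets.
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Both inputs are (N, 3) arrays of contour points in mm."""
    d_ab = cKDTree(points_b).query(points_a)[0]   # each A point to nearest B
    d_ba = cKDTree(points_a).query(points_b)[0]   # each B point to nearest A
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))

a = np.random.rand(500, 3) * 100.0                # contour A
b = a + np.random.randn(500, 3)                   # contour B, perturbed
print(f"HD95 = {hd95(a, b):.2f} mm")
```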
Collapse
Affiliation(s)
- Elizabeth M McKenzie
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Nuo Tong
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Robert K Chin
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| |
Collapse
|
26
|
Field M, Hardcastle N, Jameson M, Aherne N, Holloway L. Machine learning applications in radiation oncology. PHYSICS & IMAGING IN RADIATION ONCOLOGY 2021; 19:13-24. [PMID: 34307915 PMCID: PMC8295850 DOI: 10.1016/j.phro.2021.05.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 05/19/2021] [Accepted: 05/22/2021] [Indexed: 12/23/2022]
Abstract
Machine learning technology has a growing impact on radiation oncology, with an increasing presence in research and industry. The prevalence of diverse data, including 3D imaging and 3D radiation dose delivery, presents potential for future automation and scope for treatment improvements for cancer patients. Harnessing this potential requires standardization of tools and data and focused collaboration between fields of expertise. The rapid advancement of radiation oncology treatment technologies presents opportunities for machine learning integration, with investment targeted towards data quality, data extraction, software, and engagement with clinical expertise. In this review, we provide an overview of machine learning concepts before reviewing advances in applying machine learning to radiation oncology and integrating these techniques into radiation oncology workflows. Several key areas of the workflow are outlined where machine learning has been applied and where it can have a significant impact on efficiency, consistency of treatment, and overall treatment outcomes. The review highlights that machine learning has key early applications in radiation oncology because many tasks are repetitive and currently subject to human review. Standardized management of routinely collected imaging and radiation dose data is also highlighted as enabling research that utilizes machine learning and the integration of these technologies into clinical workflows to benefit patients. Physicists need to be part of the conversation to facilitate this technical integration.
Collapse
Affiliation(s)
- Matthew Field
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
| | - Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
| | - Michael Jameson
- GenesisCare, Alexandria, NSW, Australia; St Vincent's Clinical School, Faculty of Medicine, University of New South Wales, Australia
| | - Noel Aherne
- Mid North Coast Cancer Institute, NSW, Australia; Rural Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
| | - Lois Holloway
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia; Cancer Therapy Centre, Liverpool Hospital, Sydney, NSW, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
| |
Collapse
|
27
|
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 102] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances, together with the related clinical applications of representative studies. The challenges among the reviewed studies are then summarized and discussed.
Collapse
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| |
Collapse
|