1
Wang H, Ou Y, Fang W, Ambalathankandy P, Goto N, Ota G, Okino T, Fukae J, Sutherland K, Ikebe M, Kamishima T. A deep registration method for accurate quantification of joint space narrowing progression in rheumatoid arthritis. Comput Med Imaging Graph 2023;108:102273. PMID: 37531811. DOI: 10.1016/j.compmedimag.2023.102273.
Abstract
Rheumatoid arthritis (RA) is a chronic autoimmune inflammatory disease that leads to progressive articular destruction and severe disability. Joint space narrowing (JSN) is regarded as an important indicator of RA progression and has received significant attention. Radiology plays a crucial role in the diagnosis and monitoring of RA through the assessment of joint space. A framework that monitors joint space by quantifying JSN progression through image registration of radiographic images has emerged as a promising research direction. It offers high accuracy, but challenges remain in reducing mismatches and improving reliability. In this work, we use a deep intra-subject rigid registration network to automatically quantify JSN progression in the early stages of RA. In our experiments, the mean-square error of the Euclidean distance between the moving and fixed images was 0.0031, the standard deviation was 0.0661 mm, and the mismatching rate was 0.48%. The method achieves sub-pixel accuracy, significantly surpassing manual measurement. It is robust to noise and to rotation and scaling of the joints, and it provides misalignment visualization that can help radiologists and rheumatologists assess the reliability of the quantification, showing potential for future clinical application. We are therefore optimistic that the proposed method will contribute to the automatic quantification of JSN progression in RA. Code is available at https://github.com/pokeblow/Deep-Registration-QJSN-Finger.git.
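The quantities reported in this abstract can be illustrated with a small sketch; the function names, the toy image, and the choice of the y axis as the joint axis are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def registration_mse(fixed: np.ndarray, moved: np.ndarray) -> float:
    """Mean-square error between the fixed image and the moved (warped)
    image -- the kind of residual the abstract reports (0.0031 on average)."""
    return float(np.mean((fixed.astype(float) - moved.astype(float)) ** 2))

def jsn_progression(proximal_shift_mm: np.ndarray, distal_shift_mm: np.ndarray) -> float:
    """Toy JSN-progression estimate: relative displacement of the two bone
    ends along the joint axis (assumed here to be the y axis), in mm.
    A negative value would indicate narrowing. Purely illustrative."""
    return float(distal_shift_mm[1] - proximal_shift_mm[1])

fixed = np.zeros((8, 8))
fixed[2:6, 2:6] = 1.0  # toy "bone" patch
print(registration_mse(fixed, fixed))  # a perfect registration gives 0.0
```

In the paper the rigid transforms come from the registration network; here the per-bone shifts are simply given as inputs.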
Affiliation(s)
- Haolin Wang
- Graduate School of Health Sciences, Hokkaido University, Sapporo, 060-0812, Hokkaido, Japan
- Yafei Ou
- Research Center for Integrated Quantum Electronics, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan; Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan
- Wanxuan Fang
- Graduate School of Health Sciences, Hokkaido University, Sapporo, 060-0812, Hokkaido, Japan
- Prasoon Ambalathankandy
- Research Center for Integrated Quantum Electronics, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan; Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan
- Naoto Goto
- Research Center for Integrated Quantum Electronics, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan; Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan
- Gen Ota
- Research Center for Integrated Quantum Electronics, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan; Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan
- Taichi Okino
- Department of Radiological Technology, Sapporo City General Hospital, Sapporo, 060-8604, Hokkaido, Japan
- Jun Fukae
- Kuriyama Red Cross Hospital, Yubari, 069-1513, Hokkaido, Japan
- Kenneth Sutherland
- Global Center for Biomedical Science and Engineering, Hokkaido University, Sapporo, 060-8638, Hokkaido, Japan
- Masayuki Ikebe
- Research Center for Integrated Quantum Electronics, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan; Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0813, Hokkaido, Japan
- Tamotsu Kamishima
- Faculty of Health Sciences, Hokkaido University, Sapporo, 060-0812, Hokkaido, Japan
2
Lallement A, Noblet V, Antoni D, Meyer P. Detecting and quantifying spatial misalignment between longitudinal kilovoltage computed tomography (kVCT) scans of the head and neck by using convolutional neural networks (CNNs). Technol Health Care 2023:THC220519. PMID: 36776082. DOI: 10.3233/THC-220519.
Abstract
BACKGROUND Adaptive radiotherapy (ART) aims to address anatomical changes that appear during treatment by modifying the treatment plan according to the daily positioning image. Clinical implementation of ART relies on the quality of the deformable image registration (DIR) algorithms included in the ART workflow, so translating ART into clinical practice requires automatic DIR assessment. OBJECTIVE This article aims to estimate spatial misalignment between two head and neck kilovoltage computed tomography (kVCT) images using two convolutional neural networks (CNNs). METHODS The first CNN quantifies misalignments between 0 mm and 15 mm, and the second detects and classifies misalignments into two classes (poor alignment and good alignment). Both networks take pairs of 33 x 33 x 33 mm3 patches as input and use only image intensity information. The training dataset was built by deforming kVCT images with basis splines (B-splines) to simulate DIR error maps. The test dataset was built from 2500 landmarks, consisting of hard- and soft-tissue landmarks annotated by 6 clinicians at 10 locations. RESULTS The quantification CNN reaches a mean error of 1.26 mm (± 1.75 mm) on the landmark set, which, depending on the location, has annotation errors between 1 mm and 2 mm. The errors obtained for the quantification network are consistent with the computed inter-operator error. The classification network achieves an overall accuracy of 79.32%; although it over-detects poor alignments, it correctly flags 90.4% of truly poor alignments. CONCLUSION The performance of the networks indicates the feasibility of using CNNs for an agnostic and generic approach to misalignment quantification and detection.
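The detection task above can be mimicked with a toy post-processing sketch: given a per-patch registration error (in the paper, predicted by the first CNN), label patches and score poor-alignment detection. The 3 mm good/poor threshold and all names here are illustrative assumptions, not the paper's criteria:

```python
import numpy as np

def classify_alignment(error_mm: np.ndarray, threshold_mm: float = 3.0) -> np.ndarray:
    """Binary labelling analogous to the second CNN's output: patches whose
    registration error exceeds `threshold_mm` are labelled 'poor'."""
    return np.where(error_mm > threshold_mm, "poor", "good")

def poor_detection_rate(pred: np.ndarray, true: np.ndarray) -> float:
    """Fraction of truly 'poor' patches flagged as 'poor'
    (the abstract reports 90.4% for the real network)."""
    poor = true == "poor"
    return float(np.mean(pred[poor] == "poor")) if poor.any() else float("nan")

labels = classify_alignment(np.array([0.5, 4.0, 10.0]))
print(labels)  # small errors pass, large errors are flagged
```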
Affiliation(s)
- Delphine Antoni
- Department of Radiation Therapy, Institut de Cancérologie de Strasbourg, Strasbourg, France
- Philippe Meyer
- ICube-UMR 7357, Strasbourg, France; Department of Medical Physics, Institut de Cancérologie de Strasbourg, Strasbourg, France
3
Claessens M, Oria CS, Brouwer CL, Ziemer BP, Scholey JE, Lin H, Witztum A, Morin O, Naqa IE, Van Elmpt W, Verellen D. Quality Assurance for AI-Based Applications in Radiation Therapy. Semin Radiat Oncol 2022;32:421-431. DOI: 10.1016/j.semradonc.2022.06.011.
4
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022;32:330-342. DOI: 10.1016/j.semradonc.2022.06.003.
5
Bierbrier J, Gueziri HE, Collins DL. Estimating medical image registration error and confidence: A taxonomy and scoping review. Med Image Anal 2022;81:102531. PMID: 35858506. DOI: 10.1016/j.media.2022.102531.
Abstract
Given that image registration is a fundamental and ubiquitous task in both the clinical and research domains of medicine, errors in registration can have serious consequences. Since such errors can mislead clinicians during image-guided therapies or bias the results of downstream analysis, methods to estimate registration error are becoming more popular. To give structure to this new, heterogeneous field, we developed a taxonomy and performed a scoping review of methods that quantitatively and automatically provide a dense estimation of registration error. The taxonomy breaks down error-estimation methods into Approach (image- or transformation-based), Framework (machine learning or direct), and Measurement (error or confidence) components. Following the PRISMA guidelines for scoping reviews, the 570 records found were reduced to twenty studies that met the inclusion criteria, which were then reviewed according to the proposed taxonomy. Trends in the field, advantages and disadvantages of the methods, and potential sources of bias are also discussed. We provide suggestions for best practices and identify areas of future research.
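The three-axis taxonomy lends itself to a small data structure; this sketch (all names assumed, not from the paper) shows how a reviewed method could be positioned along the Approach/Framework/Measurement axes:

```python
from dataclasses import dataclass
from enum import Enum

class Approach(Enum):
    IMAGE = "image-based"
    TRANSFORMATION = "transformation-based"

class Framework(Enum):
    MACHINE_LEARNING = "machine learning"
    DIRECT = "direct"

class Measurement(Enum):
    ERROR = "error"
    CONFIDENCE = "confidence"

@dataclass(frozen=True)
class ErrorEstimator:
    """One reviewed method, placed in the three-component taxonomy."""
    name: str
    approach: Approach
    framework: Framework
    measurement: Measurement

# Hypothetical entry, purely to show the structure:
m = ErrorEstimator("toy-method", Approach.IMAGE, Framework.MACHINE_LEARNING, Measurement.ERROR)
print(m.approach.value, m.framework.value, m.measurement.value)
```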
Affiliation(s)
- Joshua Bierbrier
- Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- D Louis Collins
- Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
6
Scalable quorum-based deep neural networks with adversarial learning for automated lung lobe segmentation in fast helical free-breathing CTs. Int J Comput Assist Radiol Surg 2021;16:1775-1784. PMID: 34378122. DOI: 10.1007/s11548-021-02454-6.
Abstract
PURPOSE Fast helical free-breathing CT (FHFBCT) scans are widely used for 5DCT and 5D cone-beam imaging protocols. For quantitative analysis of lung physiology and function, it is important to segment the lung lobes in these scans, and because the 5DCT protocols use up to 25 FHFBCT scans, this segmentation task must be automated. In this paper, we present a deep neural network (DNN) framework for segmenting the lung lobes in near real time. METHODS A total of 22 patient datasets (550 3D CT scans) were used for the study. Each lung lobe was manually segmented and considered ground truth. A supervised and constrained generative adversarial network (CGAN) was employed to learn each set of lobe segmentations for each patient, with 12 patients designated as training data. The resulting generator DNNs represented the lobe segmentations for each training dataset. A quorum-based algorithm was then evaluated on validation data consisting of 10 separate patient datasets (250 3D CTs). Each DNN predicted its corresponding lobes for the validation data, and equal weights were given to the 12 generator CGANs; the quorum process selected the weighted-average result of all 12 CGAN outputs for each lobe. RESULTS When evaluated against ground-truth segmentations, the quorum-based lobe segmentation had average structural similarity index, normalized cross-correlation coefficient, and Dice coefficient values of 0.929, 0.806, and 0.814, respectively, compared to 0.911, 0.698, and 0.696 using a conventional strategy. CONCLUSION The proposed quorum-based approach computed segmentations with clinically acceptable accuracy in near real time using a multi-GPU computing setup. The method is scalable, as more patient-specific CGANs can be added to the quorum over time.
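The Dice coefficient and the equal-weight quorum described above can be sketched as follows; the 0.5 consensus threshold and the function names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def quorum_segmentation(predictions, threshold: float = 0.5) -> np.ndarray:
    """Equal-weight quorum: average the per-network lobe maps and threshold
    the consensus, mirroring the equal weighting of the 12 patient-specific
    CGANs described in the abstract."""
    return np.mean(np.stack(predictions), axis=0) >= threshold

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice(a, b))  # 2 * |A ∩ B| / (|A| + |B|)
```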
7
Field M, Hardcastle N, Jameson M, Aherne N, Holloway L. Machine learning applications in radiation oncology. Phys Imaging Radiat Oncol 2021;19:13-24. PMID: 34307915. PMCID: PMC8295850. DOI: 10.1016/j.phro.2021.05.007.
Abstract
Machine learning technology has a growing impact on radiation oncology, with an increasing presence in research and industry. The prevalence of diverse data, including 3D imaging and 3D radiation dose delivery, presents potential for future automation and scope for treatment improvements for cancer patients. Harnessing this potential requires standardization of tools and data and focused collaboration between fields of expertise. The rapid advancement of radiation oncology treatment technologies presents opportunities for machine learning integration, with investments targeted towards data quality, data extraction, software, and engagement with clinical expertise. In this review, we provide an overview of machine learning concepts before reviewing advances in applying machine learning to radiation oncology and integrating these techniques into radiation oncology workflows. Several key areas of the radiation oncology workflow are outlined where machine learning has been applied and where it can have a significant impact on efficiency, consistency of treatment, and overall treatment outcomes. This review highlights that machine learning has key early applications in radiation oncology because of the repetitive nature of many tasks that currently also receive human review. Standardized management of routinely collected imaging and radiation dose data is also highlighted as enabling research that uses machine learning, and as enabling the integration of these technologies into clinical workflows to benefit patients. Physicists need to be part of the conversation to facilitate this technical integration.
Affiliation(s)
- Matthew Field
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
- Michael Jameson
- GenesisCare, Alexandria, NSW, Australia; St Vincent's Clinical School, Faculty of Medicine, University of New South Wales, Australia
- Noel Aherne
- Mid North Coast Cancer Institute, NSW, Australia; Rural Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
- Lois Holloway
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia; Cancer Therapy Centre, Liverpool Hospital, Sydney, NSW, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
8
Hussein M, Akintonde A, McClelland J, Speight R, Clark CH. Clinical use, challenges, and barriers to implementation of deformable image registration in radiotherapy - the need for guidance and QA tools. Br J Radiol 2021;94:20210001. PMID: 33882253. PMCID: PMC8173691. DOI: 10.1259/bjr.20210001.
Abstract
OBJECTIVE The aim of this study was to evaluate the current status of the clinical use of deformable image registration (DIR) in radiotherapy, to understand the challenges centres face in implementing DIR clinically, including commissioning and quality assurance (QA), and to determine the barriers faced. The goal was to inform whether additional guidance and QA tools are needed. METHODS A survey focused on clinical use, the metrics used, how centres would like to use DIR in the future, and the challenges faced was designed and sent to 71 radiotherapy centres in the UK. Data were gathered specifically on which centres were using DIR clinically, which applications were being used, what commissioning and QA tests were performed, and what barriers were preventing the integration of DIR into the clinical workflow. Centres that did not use DIR clinically were encouraged to fill in the survey and were asked whether they had any future plans and on what timescale. RESULTS 51 of 71 (70%) radiotherapy centres responded. 47 centres reported access to commercial software that could perform DIR. 20 centres already used DIR clinically, and 22 centres planned to implement an application of DIR within 3 years of the survey. The most common clinical application of DIR was propagating contours from one scan to another (19 centres). The commissioning and QA tests performed varied by type of application and between centres. Key barriers included determining when a DIR result is satisfactory, including which metrics to use, and a lack of resources. CONCLUSION The survey results highlighted the need for additional guidelines, training, and better tools for commissioning DIR software and for the QA of registration results, which should include developing or recommending quantitative metrics.
ADVANCES IN KNOWLEDGE This survey gives a useful picture of the clinical use, and lack of use, of DIR in UK radiotherapy centres. It provides insight into how centres commission and QA DIR applications, especially the variability among centres, highlights key barriers to implementation, and identifies factors that may help overcome them, including the need for application-specific guidance, better tools, and metrics.
Affiliation(s)
- Mohammad Hussein
- Metrology for Medical Physics Centre, National Physical Laboratory, Teddington, UK
- Adeyemi Akintonde
- Centre for Medical Image Computing, University College London, London, UK
- Jamie McClelland
- Centre for Medical Image Computing, University College London, London, UK
- Richard Speight
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
9
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
10
11
Poortmans PMP, Takanen S, Marta GN, Meattini I, Kaidar-Person O. Winter is over: The use of Artificial Intelligence to individualise radiation therapy for breast cancer. Breast 2020;49:194-200. PMID: 31931265. PMCID: PMC7375562. DOI: 10.1016/j.breast.2019.11.011.
Abstract
Artificial intelligence has demonstrated its value for automated contouring of organs at risk and target volumes, as well as for auto-planning of radiation dose distributions, by saving time, increasing consistency, and improving dose-volume parameters. Future developments include incorporating dose/outcome data to optimise dose distributions with optimal coverage of high-risk areas while limiting doses to low-risk areas. An infinite gradient of volumes and doses can be generated to deliver spatially adjusted radiation, making it possible to avoid unnecessary radiation to organs at risk. Therefore, data about patient-, tumour-, and treatment-related factors have to be combined with dose distributions and outcome-containing databases.
Affiliation(s)
- Silvia Takanen
- Institut Curie, Department of Radiation Oncology, Paris, France
- Gustavo Nader Marta
- Department of Radiation Oncology, Hospital Sírio-Libanês, Brazil; Department of Radiology and Oncology - Radiation Oncology, Instituto Do Câncer Do Estado de São Paulo (ICESP), Faculdade de Medicina da Universidade de São Paulo, Brazil
- Icro Meattini
- Department of Experimental and Clinical Biomedical Sciences "M. Serio", University of Florence, Florence, Italy; Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero-Universitaria Careggi, Florence, Italy
- Orit Kaidar-Person
- Radiation Oncology Unit, Breast Radiation Unit, Sheba Tel Ha'shomer, Ramat Gan, Israel
12
Galib SM, Lee HK, Guy CL, Riblett MJ, Hugo GD. A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks. Med Phys 2019;47:99-109. DOI: 10.1002/mp.13890.
Affiliation(s)
- Shaikat M. Galib
- Department of Nuclear Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Hyoung K. Lee
- Department of Nuclear Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Christopher L. Guy
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298, USA
- Matthew J. Riblett
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298, USA
- Geoffrey D. Hugo
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
13
Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G, Wintermark M. Applications of Deep Learning to Neuro-Imaging Techniques. Front Neurol 2019;10:869. PMID: 31474928. PMCID: PMC6702308. DOI: 10.3389/fneur.2019.00869.
Abstract
Many clinical applications of deep learning in radiology have been proposed and studied for classification, risk assessment, segmentation, diagnosis, prognosis, and even prediction of therapy response. There are many other innovative applications of AI in the technical aspects of medical imaging, particularly in image acquisition: removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article presents an overview of deep learning applied to neuroimaging techniques.
Affiliation(s)
- Max Wintermark
- Neuroradiology Section, Department of Radiology, Stanford Healthcare, Stanford, CA, United States
14
Rigaud B, Simon A, Castelli J, Lafond C, Acosta O, Haigron P, Cazoulat G, de Crevoisier R. Deformable image registration for radiation therapy: principle, methods, applications and evaluation. Acta Oncol 2019;58:1225-1237. PMID: 31155990. DOI: 10.1080/0284186X.2019.1620331.
Abstract
Background: Deformable image registration (DIR) is increasingly used in the field of radiation therapy (RT) to account for anatomical deformations. The aims of this paper are to describe the main applications of DIR in RT and to discuss current DIR evaluation methods. Methods: Articles on DIR published from January 2000 to October 2018 were extracted from PubMed and Science Direct. Our search was restricted to articles that report data obtained from humans, were written in English, and address DIR methods for RT. A total of 207 articles were selected from among the 2506 identified in the search process. Results: At planning, DIR is used for organ delineation using atlas-based segmentation, deformation-based planning target volume definition, functional planning, and magnetic resonance imaging-based dose calculation. In image-guided RT, DIR is used for contour propagation and dose calculation on per-treatment imaging. DIR is also used to determine the accumulated dose from fraction to fraction in external beam RT and brachytherapy, both for dose reporting and adaptive RT. In the case of re-irradiation, DIR can be used to estimate the cumulative dose of the two irradiations. Finally, DIR can be used to predict toxicity in voxel-wise population analyses. However, the evaluation of DIR remains an open issue, especially in complex cases such as the disappearance of matter. To quantify DIR uncertainties, most evaluation methods are limited to geometry-based metrics. Software companies have now integrated DIR tools into treatment planning systems for clinical use, such as contour propagation and fraction dose accumulation. Conclusions: DIR is increasingly important in RT applications, from planning to toxicity prediction. DIR is routinely used to reduce the workload of contour propagation. However, its use for complex dosimetric applications must be carefully evaluated by combining quantitative and qualitative analyses.
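A common geometry-based metric of the kind the abstract refers to is the landmark target registration error (TRE). A minimal sketch, with hypothetical function names and an assumed default voxel spacing:

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_warped,
                              spacing_mm=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Target registration error: Euclidean distance, in mm, between
    corresponding anatomical landmarks after registration. Landmarks are
    given in voxel coordinates and scaled by the voxel spacing."""
    d = (np.asarray(landmarks_warped, dtype=float)
         - np.asarray(landmarks_fixed, dtype=float)) * np.asarray(spacing_mm)
    return np.linalg.norm(d, axis=1)

# One landmark pair, 3-4-5 triangle in-plane:
print(target_registration_error([[0, 0, 0]], [[3, 4, 0]]))  # -> [5.]
```

Reporting the mean and maximum TRE over a landmark set is a typical way to summarise DIR geometric accuracy.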
Affiliation(s)
- Bastien Rigaud
- CLCC Eugène Marquis, University of Rennes, Inserm, Rennes, France
- Antoine Simon
- CLCC Eugène Marquis, University of Rennes, Inserm, Rennes, France
- Joël Castelli
- CLCC Eugène Marquis, University of Rennes, Inserm, Rennes, France
- Caroline Lafond
- CLCC Eugène Marquis, University of Rennes, Inserm, Rennes, France
- Oscar Acosta
- CLCC Eugène Marquis, University of Rennes, Inserm, Rennes, France
- Pascal Haigron
- CLCC Eugène Marquis, University of Rennes, Inserm, Rennes, France
- Guillaume Cazoulat
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
15
Chen X, Men K, Li Y, Yi J, Dai J. A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning. Med Phys 2019;46:56-64. PMID: 30367492. PMCID: PMC7379709. DOI: 10.1002/mp.13262.
Abstract
PURPOSE To develop a method for predicting optimal dose distributions, given the planning image and segmented anatomy, by applying deep learning techniques to a database of previously optimized and approved intensity-modulated radiation therapy treatment plans. METHODS Eighty cases of early-stage nasopharyngeal cancer (NPC) were included in the study. Seventy cases were chosen randomly as the training set and the remainder as the test set. The inputs were the images with structures, with each target and organ at risk (OAR) assigned a unique label. The outputs were dose maps, including coarse dose maps and fine dose maps (FDM) converted by convolution. Two types of input images with structures were used in model building: one used the images (with associated structures) without manipulation; the second modified the image gray labels with information from the radiation beam geometry. ResNet101 was chosen as the deep learning network for both. The accuracy of the predicted dose distributions was evaluated against the corresponding doses used in the clinic, using a global three-dimensional gamma analysis. RESULTS The model trained with either set of input images and structures could predict patient-specific dose distributions accurately. For out-of-field dose distributions, the model trained on input with radiation geometry performed better (dose difference in %, 4.7 ± 6.1% vs 5.5 ± 7.9%, P < 0.05). The mean gamma pass rates of dose distributions predicted with both types of input were comparable for most OARs (P > 0.05), except for the bilateral optic nerves and the optic chiasm. CONCLUSIONS The proposed system with radiation geometry added to the input is a promising method for generating patient-specific dose distributions for radiotherapy. It can be applied to obtain dose distributions slice by slice for planning quality assurance and for guiding automated planning.
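The gamma analysis used for evaluation can be illustrated with a brute-force 1-D sketch of the standard gamma index (Low et al. formulation). The 3%/3 mm criteria and global normalisation are illustrative assumptions; a clinical implementation works in 3-D with spatial interpolation:

```python
import numpy as np

def gamma_pass_rate(ref, test, coords_mm, dd=0.03, dta_mm=3.0) -> float:
    """Brute-force global 1-D gamma analysis: for each reference point,
    gamma is the minimum over test points of
    sqrt((dose difference / DD)^2 + (distance / DTA)^2),
    with dose difference normalised to the global maximum of the
    reference dose. A point passes when gamma <= 1; the pass rate is
    the fraction of passing points."""
    ref, test, coords_mm = map(np.asarray, (ref, test, coords_mm))
    dmax = ref.max()
    gammas = []
    for i, r in enumerate(ref):
        dose_term = ((test - r) / (dd * dmax)) ** 2
        dist_term = ((coords_mm - coords_mm[i]) / dta_mm) ** 2
        gammas.append(np.sqrt(dose_term + dist_term).min())
    return float(np.mean(np.asarray(gammas) <= 1.0))

dose = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
print(gamma_pass_rate(dose, dose, np.arange(5.0)))  # identical doses -> 1.0
```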
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
16
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2018;46:e1-e36. PMID: 30367497. DOI: 10.1002/mp.13264.
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
- Kenny H Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Maryellen L Giger
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
17
Paganelli C, Meschini G, Molinelli S, Riboldi M, Baroni G. Patient-specific validation of deformable image registration in radiation therapy: Overview and caveats. Med Phys 2018;45:e908-e922. DOI: 10.1002/mp.13162.
Affiliation(s)
- Chiara Paganelli
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- Giorgia Meschini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- Marco Riboldi
- Department of Medical Physics, Ludwig-Maximilians-Universität München, Munich 80539, Germany
- Guido Baroni
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy; Centro Nazionale di Adroterapia Oncologica, Pavia 27100, Italy