1.
Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024; 23:52. [PMID: 38851691; PMCID: PMC11162022; DOI: 10.1186/s12938-024-01238-8] [Received: 12/08/2023; Accepted: 04/11/2024]
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
2.
Irannejad M, Abedi I, Lonbani VD, Hassanvand M. Deep-neural network approaches for predicting 3D dose distribution in intensity-modulated radiotherapy of the brain tumors. J Appl Clin Med Phys 2024; 25:e14197. [PMID: 37933891; PMCID: PMC10962483; DOI: 10.1002/acm2.14197] [Received: 03/01/2023; Revised: 09/24/2023; Accepted: 10/23/2023]
Abstract
PURPOSE The aim of this study is to reduce treatment planning time for brain cancer patients by predicting the intensity-modulated radiotherapy (IMRT) 3D dose distribution using deep learning. Two approaches to dose prediction are employed and compared: first, only the planning target volume (PTV) as input to a U-net model, and second, the PTV together with the organs at risk (OARs). METHODS AND MATERIALS Data from 99 patients with glioma tumors referred for IMRT treatment were used; images from 90 patients formed the training set and the remaining patients the test set. All patients were manually planned and treated with six-field IMRT at a photon energy of 6 MV. The treatment plans were computed with the Collapsed Cone Convolution algorithm to deliver 60 Gy in 30 fractions. RESULTS Compared with the clinical dose distributions of the test patients, the average accuracy and similarity of the predicted doses in terms of (MSE, Dice metric, SSIM) were (0.05, 0.851, 0.83) for the Only-PTV method and (0.056, 0.842, 0.82) for the PTV-OARs method. Dose prediction was also extremely fast. CONCLUSION The near-identical results of the two methods indicate that adding OARs to the PTV provides no new knowledge to the network; the dose distribution can be predicted from the PTV and its location in the imaging slices alone. The Only-PTV method can therefore eliminate the process of introducing OARs and reduce the overall treatment planning time for IMRT in patients with glioma tumors.
Affiliation(s)
- Maziar Irannejad
- Department of Electrical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Iraj Abedi
- Medical Physics Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
3.
Kakkos I, Vagenas TP, Zygogianni A, Matsopoulos GK. Towards Automation in Radiotherapy Planning: A Deep Learning Approach for the Delineation of Parotid Glands in Head and Neck Cancer. Bioengineering (Basel) 2024; 11:214. [PMID: 38534488; DOI: 10.3390/bioengineering11030214] [Received: 12/27/2023; Revised: 02/19/2024; Accepted: 02/22/2024]
Abstract
The delineation of the parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Segmentation ensures precise target positioning and treatment precision, facilitates monitoring of anatomical changes, enables plan adaptation, and enhances overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven exceedingly effective in precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. The model is evaluated extensively on two public datasets and one private dataset, and its segmentation accuracy is compared with other state-of-the-art DL segmentation schemes. To assess the necessity of replanning during treatment, an additional registration method is applied to the segmentation output, aligning images of different modalities (computed tomography (CT) and cone-beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice similarity coefficient: 82.65% ± 1.03, Hausdorff distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure displays increased similarity, providing insights into the effects of RT procedures for treatment planning adaptations. These results indicate the effectiveness of DL not only for automatic delineation of anatomical structures but also for providing information to support adaptive RT.
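Since several of the studies in this list report the Dice similarity coefficient as their primary accuracy metric, a minimal sketch of how it is typically computed over binary segmentation masks may be useful (the function and variable names here are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

The same formula applies per slice (2D) or per volume (3D); papers reporting "volumetric DSC" compute it over the full 3D mask.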
Affiliation(s)
- Ioannis Kakkos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Theodoros P Vagenas
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Anna Zygogianni
- Radiation Oncology Unit, 1st Department of Radiology, ARETAIEION University Hospital, 11528 Athens, Greece
- George K Matsopoulos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
4.
Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160; PMCID: PMC10115982; DOI: 10.3389/fonc.2023.1137803] [Received: 01/04/2023; Accepted: 03/24/2023]
Abstract
INTRODUCTION Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL) based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. METHODS Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised contours from the MDAs and the DL model, were compared to the GS for that patient. RESULTS Mean time for initial MDA contouring was 2.3 hours (range 1.6-3.8 hours) and RO revision took 1.1 hours (range 0.4-4.4 hours), compared to 0.7 hours (range 0.1-2.0 hours) for RO revision of the DL contours. Total time was reduced by 76% (95% CI: 65%-88%) and RO revision time by 35% (95% CI: -39% to 91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to RO-revised MDA contours, including the volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance. 32 OARs (76%) had a mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. CONCLUSION DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
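The 95% Hausdorff distance reported in this and several other abstracts is a robust variant of the maximum surface distance; a minimal sketch over two sets of boundary points (names illustrative, not taken from the paper) might look like:

```python
import numpy as np

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets,
    e.g. boundary voxels of a predicted and a ground-truth contour.
    Taking the 95th percentile instead of the maximum suppresses the
    influence of a few outlier points."""
    # Pairwise Euclidean distances between every point in A and every point in B.
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(axis=-1))
    a_to_b = d.min(axis=1)  # for each point in A, distance to the nearest point in B
    b_to_a = d.min(axis=0)  # and vice versa
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))
```

The pairwise-distance matrix is quadratic in the number of boundary points, so production implementations typically use KD-trees or distance transforms instead; the definition is the same.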
Affiliation(s)
- J. John Lucido
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Correspondence: J. John Lucido
- Todd A. DeWees
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Todd R. Leavitt
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Aman Anand
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
- Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
- Justine R. Buroker
- Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
- Robert L. Foote
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Olivia R. Foss
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Angela M. Gleason
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Teresa L. Hodge
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Ashley E. Hunzeker
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Nadia N. Laack
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Tamra K. Lenz
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Douglas J. Moseley
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Lisa M. Undahl
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Yojan Patel
- Google Health, Mountain View, CA, United States
- Erik J. Tryggestad
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
5.
Udupa JK, Liu T, Jin C, Zhao L, Odhner D, Tong Y, Agrawal V, Pednekar G, Nag S, Kotia T, Goodman M, Wileyto EP, Mihailidis D, Lukens JN, Berman AT, Stambaugh J, Lim T, Chowdary R, Jalluri D, Jabbour SK, Kim S, Reyhan M, Robinson CG, Thorstad WL, Choi JI, Press R, Simone CB, Camaratta J, Owens S, Torigian DA. Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto-contouring. Med Phys 2022; 49:7118-7149. [PMID: 35833287; PMCID: PMC10087050; DOI: 10.1002/mp.15854] [Received: 01/05/2022; Revised: 06/20/2022; Accepted: 06/30/2022]
Abstract
BACKGROUND Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate its performance in radiation therapy (RT) planning via multisite clinical evaluation. METHODS The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination in (iv) constitute the HI system. RESULTS The HI system was tested on 26 organs in the neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Datasets from one separate independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 datasets from the four RT centers were utilized for testing on neck and thorax, respectively. In the testing datasets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers, using an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via the Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency via the human time saved in contouring by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved overall 70% of human contouring time, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS The HI system behaves like an expert human in the robustness of its contouring but is vastly more efficient. It appears to draw on NI where image information alone does not suffice, first for correct localization of the object and then for precise delineation of its boundary.
Affiliation(s)
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tiange Liu
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- Chao Jin
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Liming Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Gargi Pednekar
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Sanghita Nag
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Tarun Kotia
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- E. Paul Wileyto
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitris Mihailidis
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- John Nicholas Lukens
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Abigail T. Berman
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joann Stambaugh
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tristan Lim
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rupa Chowdary
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dheeraj Jalluri
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Salma K. Jabbour
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Sung Kim
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Meral Reyhan
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Wade L. Thorstad
- Department of Radiation Oncology, Washington University, St. Louis, Missouri, USA
- Joe Camaratta
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Steve Owens
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
6.
Ye X, Guo D, Ge J, Yan S, Xin Y, Song Y, Yan Y, Huang BS, Hung TM, Zhu Z, Peng L, Ren Y, Liu R, Zhang G, Mao M, Chen X, Lu Z, Li W, Chen Y, Huang L, Xiao J, Harrison AP, Lu L, Lin CY, Jin D, Ho TY. Comprehensive and clinically accurate head and neck cancer organs-at-risk delineation on a multi-institutional study. Nat Commun 2022; 13:6137. [PMID: 36253346; PMCID: PMC9576793; DOI: 10.1038/s41467-022-33178-z] [Received: 11/01/2021; Accepted: 09/07/2022]
Abstract
Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.
Affiliation(s)
- Xianghua Ye
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Dazhou Guo
- DAMO Academy, Alibaba Group, New York, NY, USA
- Jia Ge
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Senxiang Yan
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yi Xin
- Ping An Technology, Shenzhen, China
- Yuchen Song
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yongheng Yan
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Bing-shen Huang
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Tsung-Min Hung
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Zhuotun Zhu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Ling Peng
- Department of Respiratory Disease, Zhejiang Provincial People's Hospital, Hangzhou, Zhejiang, China
- Yanping Ren
- Department of Radiation Oncology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Rui Liu
- Department of Radiation Oncology, The First Affiliated Hospital, Xi'an Jiaotong University, Xi'an, China
- Gong Zhang
- Department of Radiation Oncology, People's Hospital of Shanxi Province, Shanxi, China
- Mengyuan Mao
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xiaohua Chen
- Department of Radiation Oncology, The First Hospital of Lanzhou University, Lanzhou, Gansu, China
- Zhongjie Lu
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Wenxiang Li
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yuzhen Chen
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Le Lu
- DAMO Academy, Alibaba Group, New York, NY, USA
- Chien-Yu Lin
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Particle Physics and Beam Delivery Core Laboratory, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan, ROC
- Dakai Jin
- DAMO Academy, Alibaba Group, New York, NY, USA
- Tsung-Ying Ho
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
7.
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. [PMID: 35741298; PMCID: PMC9222056; DOI: 10.3390/diagnostics12061489] [Received: 05/25/2022; Revised: 06/14/2022; Accepted: 06/14/2022]
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
8.
Luximon DC, Abdulkadir Y, Chow PE, Morris ED, Lamb JM. Machine-assisted interpolation algorithm for semi-automated segmentation of highly deformable organs. Med Phys 2022; 49:41-51. [PMID: 34783027; PMCID: PMC8758550; DOI: 10.1002/mp.15351] [Received: 05/03/2021; Revised: 09/03/2021; Accepted: 11/01/2021]
Abstract
PURPOSE Accurate and robust auto-segmentation of highly deformable organs (HDOs), for example, the stomach or bowel, remains an outstanding problem due to these organs' frequent and large anatomical variations. Yet, time-consuming manual segmentation of these organs presents a particular challenge to time-limited modern radiotherapy techniques such as on-line adaptive radiotherapy and high-dose-rate brachytherapy. We propose a machine-assisted interpolation (MAI) algorithm that uses prior information in the form of sparse manual delineations to facilitate rapid, accurate segmentation of the stomach from low-field magnetic resonance images (MRI) and the bowel from computed tomography (CT) images. METHODS Stomach MR images from 116 patients undergoing 0.35 T MRI-guided abdominal radiotherapy and bowel CT images from 120 patients undergoing high-dose-rate pelvic brachytherapy treatment were collected. For each patient volume, the manual delineation of the HDO was extracted from every 8th slice. These manually drawn contours were first interpolated to obtain an initial estimate of the HDO contour. A two-channel 64 × 64 pixel patch-based convolutional neural network (CNN) was trained to localize the position of the organ's boundary on each slice within a five-pixel-wide road, using the image and the interpolated contour estimate. This boundary prediction was then input, in conjunction with the image, to an organ-closing CNN, which output the final organ segmentation. A Dense-UNet architecture was used for both networks. The MAI algorithm was trained separately for stomach segmentation and bowel segmentation. Algorithm performance was compared against linear interpolation (LI) alone and against fully automated segmentation (FAS) using a Dense-UNet trained on the same datasets. The Dice similarity coefficient (DSC) and mean surface distance (MSD) metrics were used to compare the predictions of the three methods. Statistical significance was tested using Student's t test.
RESULTS For the stomach segmentation, the mean DSC from MAI (0.91 ± 0.02) was 5.0% and 10.0% higher than from LI and FAS, respectively. The average MSD from MAI (0.77 ± 0.25 mm) was 0.54 and 3.19 mm lower than from the two other methods. Only 7% of MAI stomach predictions resulted in a DSC < 0.8, compared to 30% and 28% for LI and FAS, respectively. For the bowel segmentation, the mean DSC of MAI (0.90 ± 0.04) was 6% and 18% higher, and the average MSD of MAI (0.93 ± 0.48 mm) was 0.42 and 4.9 mm lower, than those of LI and FAS. Sixteen percent of the contours predicted by MAI resulted in a DSC < 0.8, compared to 46% and 60% for FAS and LI, respectively. All comparisons between MAI and the baseline methods were statistically significant (p-value < 0.001). CONCLUSIONS The proposed MAI algorithm significantly outperformed LI in accuracy and robustness for both stomach segmentation from low-field MRIs and bowel segmentation from CT images. At this time, FAS methods for HDOs still require significant manual editing. We therefore believe that the MAI algorithm has the potential to expedite HDO delineation within the radiation therapy workflow.
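As a rough illustration of the kind of slice-wise interpolation the MAI pipeline starts from, the sketch below blends signed distance maps of two delineated slices and thresholds at zero to estimate an intermediate contour. This is generic shape-based interpolation under our own naming, not the paper's implementation (which adds the two CNN refinement stages described above); the brute-force distance transform is written out for clarity and would normally be replaced by `scipy.ndimage.distance_transform_edt`:

```python
import numpy as np

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map: positive inside the mask, negative outside.
    Brute force for clarity; adequate only for the small demo masks used here."""
    coords = np.argwhere(np.ones_like(mask, dtype=bool)).astype(float)
    inside = np.argwhere(mask).astype(float)
    outside = np.argwhere(~mask).astype(float)

    def nearest(points: np.ndarray) -> np.ndarray:
        # Distance from every pixel to its nearest point in `points`.
        if len(points) == 0:
            return np.full(mask.shape, np.inf)
        d = np.sqrt(((coords[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        return d.min(axis=1).reshape(mask.shape)

    return np.where(mask, nearest(outside), -nearest(inside))

def interpolate_slice(mask_a: np.ndarray, mask_b: np.ndarray, t: float) -> np.ndarray:
    """Estimate the organ mask on a slice a fraction t of the way from A to B
    by linearly blending the two signed distance maps and thresholding at zero."""
    sd = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd > 0
```

Blending in distance-map space rather than pixel space lets the interpolated contour grow and shrink smoothly between the two delineated slices, which is why shape-based interpolation is a common baseline for sparse manual contours.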
Affiliation(s)
- Dishane C Luximon
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Yasin Abdulkadir
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Phillip E Chow
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Eric D Morris
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- James M Lamb
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
9.
Chang Y, Wang Z, Peng Z, Zhou J, Pi Y, Xu XG, Pei X. Clinical application and improvement of a CNN-based autosegmentation model for clinical target volumes in cervical cancer radiotherapy. J Appl Clin Med Phys 2021; 22:115-125. [PMID: 34643320; PMCID: PMC8598149; DOI: 10.1002/acm2.13440] [Received: 06/22/2021; Revised: 09/16/2021; Accepted: 09/17/2021]
Abstract
OBJECTIVE Clinical target volume (CTV) autosegmentation for cervical cancer is desirable for radiation therapy, but data heterogeneity and interobserver variability (IOV) limit the clinical adaptability of such methods. An adaptive method is proposed to improve the adaptability of CNN-based autosegmentation of CTV contours in cervical cancer. METHODS This study included 400 cervical cancer treatment planning cases with CTVs delineated by radiation oncologists from three hospitals. The data were divided into five subdatasets (80 cases each). The cases in datasets 1, 2, and 3 were delineated by physicians A, B, and C, respectively; the cases in datasets 4 and 5 were delineated by multiple physicians. Dataset 1 was divided into training (50 cases), validation (10 cases), and testing (20 cases) cohorts, which were used to construct the pretrained model. Datasets 2-5 were treated as host datasets to evaluate the accuracy of the pretrained model. In the adaptive process, the pretrained model was fine-tuned by gradually adding training cases selected from the host datasets, and the resulting improvements were measured. The accuracy of the autosegmentation model on each host dataset was evaluated using the corresponding test cases, with the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD_95) as metrics. RESULTS Before and after the adaptive improvements, the average DSC values on the host datasets were 0.818 versus 0.882, 0.763 versus 0.810, 0.727 versus 0.772, and 0.679 versus 0.789, improvements of 7.82%, 6.16%, 6.19%, and 16.05%, respectively. The average HD_95 values were 11.143 mm versus 6.853 mm, 22.402 mm versus 14.076 mm, 28.145 mm versus 16.437 mm, and 33.034 mm versus 16.441 mm, improvements of 37.94%, 37.17%, 41.60%, and 50.23%, respectively. CONCLUSION The proposed method improved the adaptability of the CNN-based autosegmentation model when applied to host datasets.
Affiliation(s)
- Yankui Chang
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Zhi Wang
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China; Radiation Oncology Department, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Zhao Peng
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Jieping Zhou
- Radiation Oncology Department, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Yifei Pi
- Radiation Oncology Department, First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- X George Xu
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China; Radiation Oncology Department, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Xi Pei
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
|
10
|
Liu Z, Sun C, Wang H, Li Z, Gao Y, Lei W, Zhang S, Wang G, Zhang S. Automatic segmentation of organs-at-risks of nasopharynx cancer and lung cancer by cross-layer attention fusion network with TELD-Loss. Med Phys 2021; 48:6987-7002. [PMID: 34608652 DOI: 10.1002/mp.15260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 07/26/2021] [Accepted: 09/01/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE Radiotherapy is one of the main treatments for nasopharyngeal cancer (NPC) and lung cancer, and accurate segmentation of organs at risk (OARs) in CT images is a key step in radiotherapy planning for both diseases. However, OAR segmentation is hampered by the highly imbalanced sizes of organs, which often yields very poor results for small and difficult-to-segment organs; complex morphological variation and fuzzy boundaries of OARs pose further challenges. In this paper, we propose a cross-layer attention fusion network (CLAF-CNN) to segment OARs accurately. METHODS In CLAF-CNN, we integrate the spatial attention maps of adjacent spatial attention modules so that the network focuses more accurately on the segmentation targets and captures more target-related features; in this way, the spatial attention modules can be learned and optimized jointly. In addition, we introduce a new Top-K exponential logarithmic Dice loss (TELD-Loss) to address the imbalance problem in OAR segmentation. The TELD-Loss adds a Top-K optimization mechanism on top of Dice loss and exponential logarithmic loss, directing the network's attention toward small and difficult-to-segment organs and thereby enhancing the overall performance of the segmentation model. RESULTS We validated our framework on the head-and-neck and lung CT OAR segmentation datasets of the StructSeg 2019 challenge. Experiments show that CLAF-CNN outperforms state-of-the-art attention-based segmentation methods on the OAR segmentation task, with average Dice coefficients of 79.65% for head-and-neck OARs and 88.39% for lung OARs. CONCLUSIONS This work provides a new network, CLAF-CNN, which combines a cross-layer spatial attention map fusion architecture with the TELD-Loss for OAR segmentation.
The results demonstrate that the proposed method obtains accurate OAR segmentations and has the potential to improve the efficiency of radiotherapy planning for nasopharyngeal and lung cancer.
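A Top-K exponential-logarithmic Dice loss in the spirit of the TELD-Loss described above can be sketched as follows. The exact formulation is in the paper; the exponent `gamma`, the choice of `k`, and the per-organ Dice inputs here are assumptions for illustration.

```python
# Sketch of a Top-K exponential-logarithmic Dice loss: transform each organ's
# Dice score, then average over only the K hardest (lowest-Dice) organs so
# that small / difficult organs dominate the training signal.
import math

def exp_log_dice(dsc, gamma=0.3, eps=1e-7):
    """Exponential-logarithmic transform of one organ's Dice score."""
    return (-math.log(max(dsc, eps))) ** gamma

def topk_teld_loss(per_organ_dice, k=3, gamma=0.3):
    """Average the transformed losses of the K hardest organs."""
    losses = sorted((exp_log_dice(d, gamma) for d in per_organ_dice),
                    reverse=True)
    k = min(k, len(losses))
    return sum(losses[:k]) / k

# Large organs segment well; one small organ segments poorly:
dices = [0.95, 0.93, 0.90, 0.40, 0.55]
print(topk_teld_loss(dices, k=2) > topk_teld_loss([0.9] * 5, k=2))  # True
```

Because only the K worst organs contribute, a poorly segmented optic nerve cannot be averaged away by a dozen well-segmented large organs, which is the imbalance mechanism the abstract describes.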
Affiliation(s)
- Zuhao Liu
- Glasgow College, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Chao Sun
- Department of Radiology, Peking University People's Hospital, Beijing, 100044, China
- Huan Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhiqi Li
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yibo Gao
- Glasgow College, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Wenhui Lei
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China; SenseTime Research, Shanghai, 200233, China
|
11
|
Fang Y, Wang J, Ou X, Ying H, Hu C, Zhang Z, Hu W. The impact of training sample size on deep learning-based organ auto-segmentation for head-and-neck patients. Phys Med Biol 2021; 66. [PMID: 34450599 DOI: 10.1088/1361-6560/ac2206] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 08/27/2021] [Indexed: 12/23/2022]
Abstract
To investigate the impact of training sample size on the performance of deep learning-based organ auto-segmentation for head-and-neck cancer patients, a total of 1160 patients with head-and-neck cancer who received radiotherapy were enrolled in this study. Patient planning CT images and region-of-interest (ROI) delineations, including the brainstem, spinal cord, eyes, lenses, optic nerves, temporal lobes, parotids, larynx and body, were collected. An evaluation dataset of 200 randomly selected patients, together with the Dice similarity index, was used to evaluate model performance. Eleven training datasets with different sample sizes were randomly selected from the remaining 960 patients to train auto-segmentation models. All models used the same data augmentation methods, network structure and training hyperparameters. A model of performance as a function of training sample size, based on an inverse power law function, was established. Different organs showed different performance-change patterns: six organs performed best with 800 training samples, while the others peaked with 600 or 400 samples, and the benefit of enlarging the training dataset diminished gradually. Relative to their best performance, the optic nerves and lenses reached 95% of their best effect at a sample size of 200, and the other organs at 40. Regarding the fit of the inverse power law function, the fitted root mean square errors of all ROIs were less than 0.03 (left eye: 0.024; others: <0.01), and the R-squared of all ROIs except the body was greater than 0.5. Sample size has a significant impact on the performance of deep learning-based auto-segmentation, and the relationship between sample size and performance depends on the inherent characteristics of the organ. In some cases, relatively small samples can achieve satisfactory performance.
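The learning-curve model above is an inverse power law, roughly DSC(n) ≈ a − b·n^(−c). A minimal fitting sketch: if the asymptote a is assumed, the residual a − DSC is log-linear in n, so ordinary least squares on logs recovers b and c. The paper fits all parameters jointly; fixing a here is a simplification, and the sample DSC values are synthetic.

```python
# Fit DSC(n) = a - b * n**(-c) by linearizing: log(a - DSC) = log(b) - c*log(n).
import math

def fit_inverse_power(ns, dscs, a):
    """OLS fit of log(a - dsc) against log(n); returns (b, c)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(a - d) for d in dscs]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (b, c)

# Synthetic learning curve with a=0.90, b=0.5, c=0.6:
ns = [40, 100, 200, 400, 800]
dscs = [0.90 - 0.5 * n ** -0.6 for n in ns]
b, c = fit_inverse_power(ns, dscs, a=0.90)
print(round(b, 3), round(c, 3))  # 0.5 0.6
```

Such a fitted curve is what lets the study report "95% of best performance at n = 200" for some organs: once b and c are known, the sample size for any target DSC can be read off the curve.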
Affiliation(s)
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Xiaomin Ou
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Hongmei Ying
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Chaosu Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
|
12
|
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. [PMID: 34313006 DOI: 10.1111/1754-9485.13286] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 06/29/2021] [Indexed: 12/21/2022]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect radiation therapy outcomes. The recent surge of interest in deep neural networks has produced many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, and the majority of deep learning segmentation articles focused on head and neck normal tissue structures. The most common datasets were CT images from an in-house source, along with some public datasets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation datasets. This area of research is expanding rapidly. To facilitate comparison of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- GenesisCare, Sydney, New South Wales, Australia; St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
|
13
|
Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, De Fauw J, Patel Y, Meyer C, Askham H, Romera-Paredes B, Kelly C, Karthikesalingam A, Chu C, Carnell D, Boon C, D'Souza D, Moinuddin SA, Garie B, McQuinlan Y, Ireland S, Hampton K, Fuller K, Montgomery H, Rees G, Suleyman M, Back T, Hughes CO, Ledsam JR, Ronneberger O. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res 2021; 23:e26151. [PMID: 34255661 PMCID: PMC8314151 DOI: 10.2196/26151] [Citation(s) in RCA: 103] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/10/2021] [Accepted: 04/30/2021] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but manually delineating the radiosensitive organs at risk is time-consuming. This planning process can delay treatment and introduces interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ-at-risk definitions. RESULTS We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for comparing organ delineations that quantifies the deviation between organ-at-risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, reflecting centers and countries different from those used for model training. CONCLUSIONS Deep learning is an effective and clinically applicable technique for segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
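The surface Dice metric introduced above measures, instead of volume overlap, the fraction of each contour's surface lying within a tolerance τ of the other contour. A minimal 2D point-set sketch follows; clinical versions work on 3D surfaces with organ-specific tolerances, and the contours here are made up.

```python
# Surface Dice at tolerance `tol`: the fraction of boundary points of each
# surface that lie within `tol` of the other surface.

def surface_dice(sa, sb, tol):
    def near(p, s):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol * tol
                   for q in s)
    hits = sum(near(p, sb) for p in sa) + sum(near(q, sa) for q in sb)
    return hits / (len(sa) + len(sb))

# Two unit-spaced contours one pixel apart:
sa = [(x, 0) for x in range(10)]
sb = [(x, 1) for x in range(10)]
print(surface_dice(sa, sb, tol=1.0))  # 1.0 -- every deviation within tolerance
print(surface_dice(sa, sb, tol=0.5))  # 0.0 -- every point deviates beyond tol
```

Unlike volumetric DSC, this score reflects how much of a contour a clinician would actually have to correct, which is the motivation the abstract gives for the metric.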
Affiliation(s)
- Ruheena Mendes
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Dawn Carnell
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Cheng Boon
- Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, United Kingdom
- Derek D'Souza
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Ali Moinuddin
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Geraint Rees
- University College London, London, United Kingdom
|
14
|
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review. J Pers Med 2021; 11:629. [PMID: 34357096 PMCID: PMC8307673 DOI: 10.3390/jpm11070629] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 06/26/2021] [Accepted: 06/28/2021] [Indexed: 01/05/2023] Open
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and OMFS treatment planning. Segmented mandible structures are used to visualize mandible volumes effectively and to evaluate particular mandible properties quantitatively. However, mandible segmentation remains challenging for both clinicians and researchers, owing to complex structures and high-attenuation materials, such as teeth, fillings or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the fully automatic and semi-automatic segmentation methods of the mandible published in scientific articles. This review provides a vivid description of the scientific advancements in this field to help clinicians and researchers develop novel automatic methods for clinical applications.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
|
15
|
Zhong Y, Yang Y, Fang Y, Wang J, Hu W. A Preliminary Experience of Implementing Deep-Learning Based Auto-Segmentation in Head and Neck Cancer: A Study on Real-World Clinical Cases. Front Oncol 2021; 11:638197. [PMID: 34026615 PMCID: PMC8132944 DOI: 10.3389/fonc.2021.638197] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Accepted: 04/15/2021] [Indexed: 12/29/2022] Open
Abstract
Purpose While artificial intelligence has shown great promise in organs-at-risk (OAR) auto-segmentation for head and neck cancer (HNC) radiotherapy, reaching the level of clinical acceptance in real-world routine practice remains a challenge. The purpose of this study was to validate a U-net-based fully convolutional neural network (CNN) for automatic delineation of OARs in HNC, focusing on clinical implementation and evaluation. Methods In the first phase, the CNN was trained on 364 clinical HNC patients' CT images with contours annotated in routine clinical cases by different oncologists. Delineation accuracy was quantified using the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD). To assess efficiency, the time required to edit the auto-contours to a clinically acceptable standard was evaluated by a questionnaire. For subjective evaluation, expert oncologists (more than 10 years' experience) were randomly presented with automated delineations or manual contours of 15 OARs for 30 patient cases. In the second phase, the network was retrained with an additional 300 patients whose contours were generated by the pre-trained CNN and edited by oncologists until they met clinical acceptance. Results Based on DSC, the CNN performed best for the spinal cord, brainstem, temporal lobe, eyes, optic nerve, parotid glands and larynx (DSC > 0.7). Retraining our architecture achieved higher conformity of the OAR delineations, with the largest DSC improvement for the oral cavity (0.53 to 0.93). Compared with manual delineation, auto-contouring significantly shortened the required time, from hours to minutes. In the subjective evaluation, the two observers showed a clear preference for the automatic OAR contours, even at relatively low DSC values, and most of the automated OAR segmentations reached the clinical acceptance level compared with manual delineations. Conclusions After retraining, the CNN developed for automated OAR delineation in HNC proved more robust, efficient and consistent in clinical practice. Deep learning-based auto-segmentation shows great potential to alleviate the labor-intensive contouring of OARs for radiotherapy treatment planning.
Affiliation(s)
- Yang Zhong
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yanju Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
|
16
|
Zhang S, Wang H, Tian S, Zhang X, Li J, Lei R, Gao M, Liu C, Yang L, Bi X, Zhu L, Zhu S, Xu T, Yang R. A slice classification model-facilitated 3D encoder-decoder network for segmenting organs at risk in head and neck cancer. JOURNAL OF RADIATION RESEARCH 2021; 62:94-103. [PMID: 33029634 PMCID: PMC7779351 DOI: 10.1093/jrr/rraa094] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 05/30/2020] [Indexed: 06/06/2023]
Abstract
For deep learning networks used to segment organs at risk (OARs) in head and neck (H&N) cancers, the class-imbalance problem between small-volume OARs and whole computed tomography (CT) images results in delineations with serious false positives on irrelevant slices and unnecessary time-consuming calculations. To alleviate this problem, a slice classification model-facilitated 3D encoder-decoder network was developed and validated. In the developed two-step segmentation model, a slice classification model was first used to classify CT slices into six categories in the craniocaudal direction. The slices in the target categories for each OAR were then passed to the corresponding 3D encoder-decoder segmentation network. All patients were divided into training (n = 120), validation (n = 30) and testing (n = 20) datasets. The average accuracy of the slice classification model was 95.99%. The Dice similarity coefficient and 95% Hausdorff distance, respectively, for each OAR were as follows: right eye (0.88 ± 0.03 and 1.57 ± 0.92 mm), left eye (0.89 ± 0.03 and 1.35 ± 0.43 mm), right optic nerve (0.72 ± 0.09 and 1.79 ± 1.01 mm), left optic nerve (0.73 ± 0.09 and 1.60 ± 0.71 mm), brainstem (0.87 ± 0.04 and 2.28 ± 0.99 mm), right temporal lobe (0.81 ± 0.12 and 3.28 ± 2.27 mm), left temporal lobe (0.82 ± 0.09 and 3.73 ± 2.08 mm), right temporomandibular joint (0.70 ± 0.13 and 1.79 ± 0.79 mm), left temporomandibular joint (0.70 ± 0.16 and 1.98 ± 1.48 mm), mandible (0.89 ± 0.02 and 1.66 ± 0.51 mm), right parotid (0.77 ± 0.07 and 7.30 ± 4.19 mm) and left parotid (0.71 ± 0.12 and 8.41 ± 4.84 mm). The total segmentation time was 40.13 s. The 3D encoder-decoder network facilitated by the slice classification model demonstrated superior accuracy and efficiency in segmenting OARs in H&N CT images. This may significantly reduce the workload for radiation oncologists.
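The two-step pipeline above can be sketched as a router: a slice classifier assigns each axial slice to a craniocaudal category, and each organ's 3D segmentation network receives only the categories where that organ can appear. The category names, organ-to-category map, and pre-classified scan below are illustrative assumptions, not the paper's exact definitions.

```python
# Route classified CT slices to organ-specific segmenters, so each 3D network
# never sees slices where its organ cannot occur (fewer false positives,
# less computation).

ORGAN_CATEGORIES = {  # which craniocaudal categories feed each organ's model
    "eyes": {"orbital"},
    "brainstem": {"orbital", "skull_base"},
    "mandible": {"skull_base", "oral"},
}

def route_slices(slice_categories, organ):
    """Return the slice indices an organ-specific segmenter should process."""
    wanted = ORGAN_CATEGORIES[organ]
    return [i for i, cat in enumerate(slice_categories) if cat in wanted]

# A toy classified scan, ordered cranial to caudal:
cats = ["cranial"] * 3 + ["orbital"] * 4 + ["skull_base"] * 3 + ["oral"] * 2
print(route_slices(cats, "eyes"))      # [3, 4, 5, 6]
print(route_slices(cats, "mandible"))  # [7, 8, 9, 10, 11]
```

Restricting each segmenter's input this way is what lets the reported pipeline avoid false positives on irrelevant slices while keeping total runtime low.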
Affiliation(s)
- Shuming Zhang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Hao Wang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Xuyang Zhang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Cancer Center, Beijing Luhe Hospital, Capital Medical University, Beijing, China
- Jiaqi Li
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Department of Emergency, Beijing Children's Hospital, Capital Medical University, Beijing, China
- Runhong Lei
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Mingze Gao
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
- Chunlei Liu
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
- Li Yang
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
- Xinfang Bi
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
- Linlin Zhu
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
- Senhua Zhu
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
- Ting Xu
- Institute of Science and Technology Development, Beijing University of Posts and Telecommunications, Beijing, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
|
17
|
Wang H, Zhang H, Hu J, Song Y, Bai S, Yi Z. DeepEC: An error correction framework for dose prediction and organ segmentation using deep neural networks. INT J INTELL SYST 2020. [DOI: 10.1002/int.22280] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Han Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Haixian Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ying Song
- Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu, China
- Sen Bai
- Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu, China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
18
Yang X, Li X, Zhang X, Song F, Huang S, Xia Y. [Segmentation of organs at risk in nasopharyngeal cancer for radiotherapy using a self-adaptive Unet network]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2020; 40:1579-1586. [PMID: 33243744 PMCID: PMC7704375 DOI: 10.12122/j.issn.1673-4254.2020.11.07] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Indexed: 12/21/2022]
Abstract
OBJECTIVE To investigate the accuracy of automatic segmentation of organs at risk (OARs) in radiotherapy for nasopharyngeal carcinoma (NPC). METHODS The CT image data of 147 NPC patients with manual segmentation of the OARs were randomized into the training set (115 cases), validation set (12 cases), and the test set (20 cases). An improved network based on three-dimensional (3D) Unet was established (named as AUnet) and its efficiency was improved through end-to-end training. Organ size was introduced as a priori knowledge to improve the performance of the model in convolution kernel size design, which enabled the network to better extract the features of different organs of different sizes. The adaptive histogram equalization algorithm was used to preprocess the input CT images to facilitate contour recognition. The similarity evaluation indexes, including Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD), were calculated to verify the validity of segmentation. RESULTS DSC and HD of the test dataset were 0.86±0.02 and 4.0±2.0 mm, respectively. No significant difference was found between the results of AUnet and manual segmentation of the OARs (P > 0.05) except for the optic nerves and the optic chiasm. CONCLUSIONS AUnet, an improved deep learning neural network, is capable of automatic segmentation of the OARs in radiotherapy for NPC based on CT images, and for most organs, the results are comparable to those of manual segmentation.
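The preprocessing step above is adaptive histogram equalization. The following NumPy sketch shows the core remapping idea using plain (global) histogram equalization; the adaptive variant applies the same remapping per local tile. This is an illustration, not the paper's code, and the function name and the synthetic `ct_slice` are hypothetical:

```python
import numpy as np

def equalize_histogram(img, n_bins=256):
    """Global histogram equalization of a 2D image onto [0, 1].
    (The paper's preprocessing is the *adaptive* variant, which applies
    this remapping within local tiles; the CDF-mapping idea is the same.)"""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    # map each pixel through the CDF of its intensity bin
    idx = np.clip(np.digitize(img.ravel(), bin_edges[1:-1]), 0, n_bins - 1)
    return cdf[idx].reshape(img.shape)

rng = np.random.default_rng(0)
ct_slice = rng.normal(40.0, 10.0, size=(64, 64))  # synthetic HU-like values
eq = equalize_histogram(ct_slice)
```

Mapping intensities through their cumulative distribution flattens the histogram, which makes low-contrast soft-tissue boundaries easier for a contour-recognition network to pick up.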
Affiliation(s)
- Xin Yang
- Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Xueyan Li
- Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Xinhua College of Sun Yat-sen University, Guangzhou 510520, China
- Xiaoting Zhang
- Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Xinhua College of Sun Yat-sen University, Guangzhou 510520, China
- Fan Song
- Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Guangdong University of Technology, Guangzhou 510006, China
- Sijuan Huang
- Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Yunfei Xia
- Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
19
Wang Z, Chang Y, Peng Z, Lv Y, Shi W, Wang F, Pei X, Xu XG. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J Appl Clin Med Phys 2020; 21:272-279. [PMID: 33238060 PMCID: PMC7769393 DOI: 10.1002/acm2.13097] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 10/03/2020] [Accepted: 10/21/2020] [Indexed: 12/15/2022] Open
Abstract
Objective To evaluate the accuracy of a deep learning-based auto-segmentation model against manual contouring by a medical resident, where both tried to mimic the delineation "habits" of the same senior clinical physician. Methods This study included 125 cervical cancer patients whose clinical target volumes (CTVs) and organs at risk (OARs) were delineated by the same senior physician. Of these 125 cases, 100 were used for model training and the remaining 25 for model testing. In addition, the medical resident, instructed by the senior physician for approximately 8 months, delineated the CTVs and OARs for the testing cases. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) were used to evaluate the delineation accuracy for the CTV, bladder, rectum, small intestine, femoral-head-left, and femoral-head-right. Results The DSC values of the auto-segmentation model and manual contouring by the resident were, respectively, 0.86 and 0.83 for the CTV (P < 0.05), 0.91 and 0.91 for the bladder (P > 0.05), 0.88 and 0.84 for the femoral-head-right (P < 0.05), 0.88 and 0.84 for the femoral-head-left (P < 0.05), 0.86 and 0.81 for the small intestine (P < 0.05), and 0.81 and 0.84 for the rectum (P > 0.05). The HD (mm) values were, respectively, 14.84 and 18.37 for the CTV (P < 0.05), 7.82 and 7.63 for the bladder (P > 0.05), 6.18 and 6.75 for the femoral-head-right (P > 0.05), 6.17 and 6.31 for the femoral-head-left (P > 0.05), 22.21 and 26.70 for the small intestine (P > 0.05), and 7.04 and 6.13 for the rectum (P > 0.05). The auto-segmentation model took approximately 2 min to delineate the CTV and OARs, while the resident took approximately 90 min to complete the same task. Conclusion The auto-segmentation model was as accurate as the medical resident but far more efficient in this study. Furthermore, the auto-segmentation approach offers the additional perceivable advantages of being consistent and ever-improving when compared with manual approaches.
Affiliation(s)
- Zhi Wang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yankui Chang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Zhao Peng
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Yin Lv
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Weijiong Shi
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Fan Wang
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xi Pei
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
- X George Xu
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
20
Eppel S, Xu H, Bismuth M, Aspuru-Guzik A. Computer Vision for Recognition of Materials and Vessels in Chemistry Lab Settings and the Vector-LabPics Data Set. ACS Central Science 2020; 6:1743-1752. [PMID: 33145411 PMCID: PMC7596871 DOI: 10.1021/acscentsci.0c00460] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Indexed: 06/11/2023]
Abstract
This work presents a machine learning approach for the computer vision-based recognition of materials inside vessels in the chemistry lab and other settings. In addition, we release a data set associated with the training of the model for further model development. The task to learn is finding the region, boundaries, and category for each material phase and vessel in an image. Handling materials inside mostly transparent containers is the main activity performed by human and robotic chemists in the laboratory. Visual recognition of vessels and their contents is essential for performing this task. Modern machine-vision methods learn recognition tasks by using data sets containing a large number of annotated images. This work presents the Vector-LabPics data set, which consists of 2187 images of materials within mostly transparent vessels in a chemistry lab and other general settings. The images are annotated for both the vessels and the individual material phases inside them, and each instance is assigned one or more classes (liquid, solid, foam, suspension, powder, ...). The fill level, labels, corks, and parts of the vessel are also annotated. Several convolutional nets for semantic and instance segmentation were trained on this data set. The trained neural networks achieved good accuracy in detecting and segmenting vessels and material phases, and in classifying liquids and solids, but relatively low accuracy in segmenting multiphase systems such as phase-separating liquids.
Affiliation(s)
- Sagi Eppel
- Department of Chemistry, University of Toronto, Toronto, Ontario M5G 1Z8, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario M5G 1Z8, Canada
- Haoping Xu
- Department of Computer Science, University of Toronto, Toronto, Ontario M5G 1Z8, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario M5S 1M1, Canada
- Mor Bismuth
- Department of Cognitive Science, Open University of Israel, Raanana 43107, Israel
- Alan Aspuru-Guzik
- Department of Chemistry, University of Toronto, Toronto, Ontario M5G 1Z8, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario M5G 1Z8, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario M5S 1M1, Canada
- Canadian Institute for Advanced Research, Toronto, Ontario M5G 1M1, Canada
21
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra-/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provided critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with the corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
22
Cheng G, Ji H, Ding Z. Spatial-channel relation learning for brain tumor segmentation. Med Phys 2020; 47:4885-4894. [PMID: 32671845 DOI: 10.1002/mp.14392] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 07/02/2020] [Accepted: 07/07/2020] [Indexed: 11/06/2022] Open
Abstract
PURPOSE Recently, research on brain tumor segmentation has made great progress. However, ambiguous patterns in magnetic resonance imaging data, and linear fusion that omits the semantic gaps between features in different branches, remain challenging. We need a mechanism that fully utilizes the similarity within the spatial space and the channel space, and the correlation between these two spaces, to improve volumetric segmentation. METHODS We propose a revised cascade structure network. In each subnetwork, a context exploitation module is introduced between the encoder and decoder, in which a dual attention mechanism is adopted to learn the information within the spatial space and channel space, and space interaction learning is employed to model the relation between the spatial and channel spaces. RESULTS Extensive experiments on the BraTS19 dataset demonstrate that our approach improves the Dice coefficient (DC) by margins of 2.1, 2.0, and 1.4 for whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively, obtaining results competitive with state-of-the-art approaches for brain tumor segmentation. CONCLUSIONS Context exploitation in the embedding feature spaces, including intraspace and interspace relations, can effectively model dependency in semantic features and alleviate the semantic gap in multimodal data. Our approach is also robust to variations across different modalities.
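The channel half of such a dual attention mechanism can be sketched in a few lines of NumPy: build a channel-to-channel affinity matrix from the flattened features, softmax it, and mix channels by the resulting weights. This is a generic illustration of channel attention, not the authors' module (which additionally models spatial attention and the interaction between the two spaces):

```python
import numpy as np

def channel_attention(feat):
    """Channel-relation reweighting for a feature map feat of shape (C, H, W):
    each output channel is a softmax-weighted mixture of all input channels,
    weighted by channel-to-channel similarity."""
    c = feat.shape[0]
    x = feat.reshape(c, -1)                              # (C, H*W)
    energy = x @ x.T                                     # (C, C) affinity
    energy = energy - energy.max(axis=1, keepdims=True)  # numerically stable softmax
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)              # rows sum to 1
    return (attn @ x).reshape(feat.shape)                # channels mixed by affinity

feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
mixed = channel_attention(feat)
```

Because each output channel is a convex combination of the input channels, every output value stays within the range of the input feature map.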
Affiliation(s)
- Guohua Cheng
- Institute of Science and Technology for Brain-Inspired Intelligence, Ministry of Education-Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Shanghai, 200433, China
- Hongli Ji
- Jianpei Technology Co., Ltd., Hangzhou, 310000, China
- Zhongxiang Ding
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China
- Translational Medicine Research Center, Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China
23
Men K, Geng H, Biswas T, Liao Z, Xiao Y. Automated Quality Assurance of OAR Contouring for Lung Cancer Based on Segmentation With Deep Active Learning. Front Oncol 2020; 10:986. [PMID: 32719742 PMCID: PMC7350536 DOI: 10.3389/fonc.2020.00986] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Accepted: 05/19/2020] [Indexed: 12/25/2022] Open
Abstract
Purpose: Ensuring high-quality data for clinical trials in radiotherapy requires the generation of contours that comply with protocol definitions. The current workflow includes a manual review of the submitted contours, which is time-consuming and subjective. In this study, we developed an automated quality assurance (QA) system for lung cancer based on a segmentation model trained with deep active learning. Methods: The data included a gold atlas with 36 cases and 110 cases from the "NRG Oncology/RTOG 1308 Trial". The first 70 cases enrolled in RTOG 1308 formed the candidate set, and the remaining 40 cases were randomly assigned to validation and test sets (each with 20 cases). The organs at risk included the heart, esophagus, spinal cord, and lungs. A preliminary convolutional neural network segmentation model was trained with the gold standard atlas. To address the deficiency of the limited training data, we selected quality images from the candidate set to be added to the training set for fine-tuning of the model with deep active learning. The trained robust segmentation models were used for QA purposes. The segmentation evaluation metrics derived from the validation set, including the Dice and Hausdorff distances, were used to develop the criteria for QA decision making. The performance of the strategy was assessed using the test set. Results: The QA method achieved promising contouring-error detection, with the following metrics for the heart, esophagus, spinal cord, left lung, and right lung: balanced accuracy, 0.96, 0.95, 0.96, 0.97, and 0.97, respectively; sensitivity, 0.95, 0.98, 0.96, 1.0, and 1.0, respectively; specificity, 0.98, 0.92, 0.97, 0.94, and 0.94, respectively; and area under the receiver operating characteristic curve, 0.96, 0.95, 0.96, 0.97, and 0.94, respectively. Conclusions: The proposed system automatically detected contour errors for QA and could provide consistent and objective evaluations with much reduced investigator intervention in multicenter clinical trials.
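A QA rule of this kind flags a contour as erroneous when its metrics fall outside the validation-derived criteria, and the rule itself is then scored by sensitivity, specificity, and balanced accuracy, as reported above. A minimal sketch of that scoring (the `qa_metrics` helper and the toy flag lists are hypothetical, not from the paper):

```python
def qa_metrics(flagged, has_error):
    """Score a contour-QA rule: `flagged` is the rule's decision per case,
    `has_error` the ground truth. Returns (sensitivity, specificity,
    balanced accuracy)."""
    tp = sum(f and e for f, e in zip(flagged, has_error))            # correctly flagged
    tn = sum((not f) and (not e) for f, e in zip(flagged, has_error))  # correctly passed
    fp = sum(f and (not e) for f, e in zip(flagged, has_error))      # false alarms
    fn = sum((not f) and e for f, e in zip(flagged, has_error))      # missed errors
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, 0.5 * (sens + spec)

# toy example: 4 cases, one real error caught, one false alarm
sens, spec, bal = qa_metrics([True, True, False, False],
                             [True, False, False, False])
```

Balanced accuracy is the natural summary here because error cases are rare relative to acceptable contours, so plain accuracy would be dominated by the negatives.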
Affiliation(s)
- Kuo Men
- University of Pennsylvania, Philadelphia, PA, United States
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huaizhi Geng
- University of Pennsylvania, Philadelphia, PA, United States
- Tithi Biswas
- UH Cleveland Medical Center, Cleveland, OH, United States
- Zhongxing Liao
- MD Anderson Cancer Center, The University of Texas, Houston, TX, United States
- Ying Xiao
- University of Pennsylvania, Philadelphia, PA, United States
24
Chen X, Men K, Chen B, Tang Y, Zhang T, Wang S, Li Y, Dai J. CNN-Based Quality Assurance for Automatic Segmentation of Breast Cancer in Radiotherapy. Front Oncol 2020; 10:524. [PMID: 32426272 PMCID: PMC7212344 DOI: 10.3389/fonc.2020.00524] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Accepted: 03/24/2020] [Indexed: 11/28/2022] Open
Abstract
Purpose: More and more automatic segmentation tools are being introduced into routine clinical practice. However, physicians need to spend a considerable amount of time examining the generated contours slice by slice, which greatly reduces the benefit of the tools' automaticity. To overcome this shortcoming, we developed an automatic quality assurance (QA) method for automatic segmentation using convolutional neural networks (CNNs). Materials and Methods: The study cohort comprised 680 patients with early-stage breast cancer who received whole-breast radiation. The overall architecture of the automatic QA method for deep learning-based segmentation included two main parts: a segmentation CNN model and a QA network established based on ResNet-101. The inputs were from computed tomography, segmentation probability maps, and uncertainty maps. Two kinds of Dice similarity coefficient (DSC) outputs were tested. One predicted the DSC quality level of each slice ([0.95, 1] for "good," [0.8, 0.95] for "medium," and [0, 0.8] for "bad" quality), and the other predicted the DSC value of each slice directly. The performance of the method in predicting the quality levels was evaluated with quantitative metrics: balanced accuracy, F score, and the area under the receiver operating characteristic curve (AUC). The mean absolute error (MAE) was used to evaluate the DSC value outputs. Results: Both types of output achieved promising accuracy in predicting the quality level. For the good, medium, and bad quality levels, the balanced accuracy was 0.97, 0.94, and 0.89, respectively; the F score was 0.98, 0.91, and 0.81, respectively; and the AUC was 0.96, 0.93, and 0.88, respectively. For the DSC value prediction, the MAE was 0.06 ± 0.19. The prediction time was approximately 2 s per patient. Conclusions: Our method could predict segmentation quality automatically and can provide useful information for physicians regarding further verification and revision of automatic contours. Integrating our method into current automatic segmentation pipelines can improve the efficiency of radiotherapy contouring.
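The first output type bins a per-slice DSC into the three quality levels quoted above. As a one-function sketch (the function name is hypothetical; the interval boundaries are the ones stated in the abstract):

```python
def dsc_quality_level(dsc):
    """Map a per-slice Dice similarity coefficient to the quality level
    used in the paper: [0.95, 1] 'good', [0.8, 0.95) 'medium', below 0.8 'bad'."""
    if dsc >= 0.95:
        return "good"
    if dsc >= 0.8:
        return "medium"
    return "bad"
```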
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bo Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Tang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tao Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shulian Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
25
Mlynarski P, Delingette H, Alghamdi H, Bondiau PY, Ayache N. Anatomically consistent CNN-based segmentation of organs-at-risk in cranial radiotherapy. J Med Imaging (Bellingham) 2020; 7:014502. [PMID: 32064300 PMCID: PMC7016364 DOI: 10.1117/1.jmi.7.1.014502] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Accepted: 01/17/2020] [Indexed: 11/14/2022] Open
Abstract
Planning of radiotherapy involves accurate segmentation of a large number of organs at risk (OAR), i.e., organs for which irradiation doses should be minimized to avoid important side effects of the therapy. We propose a deep learning method for segmentation of OAR inside the head, from magnetic resonance images (MRIs). Our system performs segmentation of eight structures: eye, lens, optic nerve, optic chiasm, pituitary gland, hippocampus, brainstem, and brain. We propose an efficient algorithm to train neural networks for an end-to-end segmentation of multiple and nonexclusive classes, addressing problems related to computational costs and missing ground truth segmentations for a subset of classes. We enforce anatomical consistency of the result in a postprocessing step. In particular, we introduce a graph-based algorithm for segmentation of the optic nerves, enforcing the connectivity between the eyes and the optic chiasm. We report cross-validated quantitative results on a database of 44 contrast-enhanced T1-weighted MRIs with provided segmentations of the considered OAR, which were originally used for radiotherapy planning. In addition, the segmentations produced by our model on an independent test set of 50 MRIs were evaluated by an experienced radiotherapist in order to qualitatively assess their accuracy. The mean distances between produced segmentations and the ground truth ranged from 0.1 to 0.7 mm across different organs. A vast majority (96%) of the produced segmentations were found acceptable for radiotherapy planning.
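The connectivity constraint between the eyes and the optic chiasm can be illustrated with a toy 2D version: keep only the segmented component that a breadth-first search can reach from both landmarks. This is a simplified stand-in for the paper's graph-based algorithm; the helper names and the grid example are hypothetical:

```python
from collections import deque

def connected_to_seed(mask, seed):
    """All mask-true cells 4-connected to seed; mask is a 2D list of bools."""
    h, w = len(mask), len(mask[0])
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def keep_connecting_component(mask, eye, chiasm):
    """Keep only the segmented component touching both landmarks (empty if none),
    mimicking the anatomical constraint that the optic nerve must join them."""
    comp = connected_to_seed(mask, eye)
    return comp if chiasm in comp else set()

# toy "optic nerve" on a 3x3 grid: path from (0,0) down to (2,2)
mask = [[True, True, False],
        [False, True, False],
        [False, True, True]]
comp = keep_connecting_component(mask, (0, 0), (2, 2))
```

Isolated false-positive islands that do not lie on an eye-to-chiasm path are discarded, which is the anatomical-consistency behavior the postprocessing step enforces.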
Affiliation(s)
- Pawel Mlynarski
- Université Côte d’Azur, Inria, Epione Research Team, Nice, France
- Hervé Delingette
- Université Côte d’Azur, Inria, Epione Research Team, Nice, France
- Hamza Alghamdi
- Université Côte d’Azur, Centre Antoine Lacassagne, Nice, France
- Nicholas Ayache
- Université Côte d’Azur, Inria, Epione Research Team, Nice, France
26
Kajikawa T, Kadoya N, Ito K, Takayama Y, Chiba T, Tomori S, Nemoto H, Dobashi S, Takeda K, Jingu K. A convolutional neural network approach for IMRT dose distribution prediction in prostate cancer patients. Journal of Radiation Research 2019; 60:685-693. [PMID: 31322704 PMCID: PMC6805973 DOI: 10.1093/jrr/rrz051] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Revised: 05/06/2019] [Indexed: 06/10/2023]
Abstract
The purpose of this study was to compare a 3D convolutional neural network (CNN) with a conventional machine learning method for predicting intensity-modulated radiation therapy (IMRT) dose distributions using only contours in prostate cancer. This study included 95 IMRT-treated prostate cancer patients with available dose distributions and contours for the planning target volumes (PTVs) and organs at risk (OARs). A supervised-learning approach was used for training, where the dose for a voxel set in the dataset was defined as the label. The adaptive moment estimation algorithm was employed to optimize a 3D U-Net-like network. Eighty cases were used for the training and validation sets in 5-fold cross-validation, and the remaining 15 cases were used as the test set. The predicted dose distributions were compared with the clinical dose distributions, and model performance was evaluated by comparison with RapidPlan™. Dose-volume histogram (DVH) parameters were calculated for each contour as evaluation indices. The mean absolute errors (MAEs) with one standard deviation (1SD) between the clinical and CNN-predicted doses were 1.10% ± 0.64% for D2 and 2.50% ± 1.17% for D98 in PTV-1, 2.04% ± 1.40% for V65 in the rectum, and 2.08% ± 1.99% for V65 in the bladder, whereas the MAEs with 1SD between the clinical and RapidPlan™-generated doses were 1.01% ± 0.66%, 2.15% ± 1.25%, 5.34% ± 2.13%, and 3.04% ± 1.79%, respectively. Our CNN model could predict dose distributions superior or comparable to those generated by RapidPlan™, suggesting the potential of CNNs in dose distribution prediction.
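The DVH parameters quoted above follow the usual definitions: D_p is the minimum dose received by the hottest p% of a structure (the (100-p)th percentile of its voxel doses), and V_x is the percentage of the structure volume receiving at least x Gy. A minimal NumPy sketch under those standard definitions (not the study's code; the `doses` array is a synthetic example):

```python
import numpy as np

def dvh_d(dose, p):
    """D_p: minimum dose received by the hottest p% of the structure,
    i.e. the (100 - p)th percentile of the voxel doses."""
    return np.percentile(dose, 100.0 - p)

def dvh_v(dose, level):
    """V_level: percentage of the structure volume receiving >= level Gy."""
    return 100.0 * np.mean(dose >= level)

# toy structure with voxel doses 0, 1, ..., 100 Gy
doses = np.linspace(0.0, 100.0, 101)
```

With this toy structure, D2 is 98 Gy and V65 is the fraction of voxels at or above 65 Gy, so metrics like the MAEs above can be computed by evaluating these functions on clinical and predicted dose arrays.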
Affiliation(s)
- Tomohiro Kajikawa
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Noriyuki Kadoya
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kengo Ito
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Yoshiki Takayama
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takahito Chiba
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Seiji Tomori
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Radiology, National Hospital Organization Sendai Medical Center, Sendai, Japan
- Hikaru Nemoto
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Suguru Dobashi
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Tohoku University, Sendai, Japan
- Ken Takeda
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Tohoku University, Sendai, Japan
- Keiichi Jingu
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Japan