1
Kim S, Yuan L, Kim S, Suh TS. Generation of tissues outside the field of view (FOV) of radiation therapy simulation imaging based on machine learning and patient body outline (PBO). Radiat Oncol 2024;19:15. PMID: 38273278; PMCID: PMC10811833; DOI: 10.1186/s13014-023-02384-4.
Abstract
BACKGROUND It is not unusual for parts of the patient's tissue to be excluded from the field of view of CT simulation images. A typical mitigation is to avoid beams entering through the missing body parts, at the cost of sub-optimal planning. METHODS This study addresses the problem with three methods: (1) a deep learning (DL) mechanism for missing-tissue generation, (2) use of the patient body outline (PBO) based on surface imaging, and (3) a hybrid method combining DL and PBO. The DL model was built upon Globally and Locally Consistent Image Completion, a generative adversarial network that learns features through convolutional neural network-based inpainting. The database comprised 10,005 CT training slices from 322 lung cancer patients and 166 CT evaluation test slices from 15 patients. CT images came from the publicly available database of The Cancer Imaging Archive. Because existing data were used, PBOs were derived from the CT images. For evaluation, the Structural Similarity Index Metric (SSIM), Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR) were computed. For dosimetric validation, dynamic conformal arc plans were made with the ground-truth images and the images generated by the proposed method. Gamma analysis was conducted at the relatively strict criteria of 1%/1 mm (dose difference/distance to agreement) and 2%/2 mm, under three dose thresholds of 1%, 10% and 50% of the maximum dose in the plans made on the ground-truth image sets. RESULTS The average SSIM in the generated region alone was 0.06 at epoch 100 but reached 0.86 at epoch 1500. Accordingly, the average SSIM over the whole image improved from 0.86 to 0.97. At epoch 1500, the average RMSE and PSNR over the whole image were 7.4 and 30.9, respectively. Gamma analysis showed excellent agreement for the hybrid method (mean pass rate of 96.6% or higher across all scenarios).
CONCLUSIONS This is the first demonstration that missing tissue in simulation imaging can be generated with high similarity and that the associated dosimetric limitation can be overcome. The benefit of this approach would be considerably greater if MR-only simulation were considered.
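The image-similarity metrics named above have compact definitions. As an illustrative sketch (not the authors' code), the following computes RMSE, PSNR, and a simplified single-window SSIM with NumPy; the reference SSIM averages the same statistic over sliding local windows:

```python
import numpy as np

def rmse(gt, pred):
    """Root mean square error between a ground-truth and a generated image."""
    return float(np.sqrt(np.mean((gt - pred) ** 2)))

def psnr(gt, pred, data_range=255.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum possible pixel value."""
    mse = np.mean((gt - pred) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(gt, pred, data_range=255.0):
    """SSIM computed once over the whole image (the standard metric
    averages this statistic over sliding local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = gt.mean(), pred.mean()
    vx, vy = gt.var(), pred.var()
    cov = np.mean((gt - mx) * (pred - my))
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images give SSIM 1.0 and RMSE 0; PSNR grows as the error shrinks.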
Affiliation(s)
- Sunmi Kim
- Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Seoul, 03722, Republic of Korea
- Lulin Yuan
- Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, VA, 23284, USA
- Siyong Kim
- Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, VA, 23284, USA
- Tae Suk Suh
- Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
2
Swiderska K, Blackie CA, Maldonado-Codina C, Morgan PB, Read ML, Fergie M. A Deep Learning Approach for Meibomian Gland Appearance Evaluation. Ophthalmology Science 2023;3:100334. PMID: 37920420; PMCID: PMC10618829; DOI: 10.1016/j.xops.2023.100334.
Abstract
Purpose To develop and evaluate a deep learning algorithm for calculating Meibomian gland characteristics. Design Evaluation of diagnostic technology. Subjects A total of 1616 meibography images of the upper (697) and lower (919) eyelids from 282 individuals. Methods Images were collected using the LipiView II device. The data were split into training, validation, and test sets in proportions of 70/10/20% and included data from 2 optometry settings; each setting's data were partitioned separately with these proportions, giving a balanced distribution of data from both settings. Images were divided by patient identifier, so that all images from one participant could end up in only one set. The labeled images were used to train a deep learning model, which was subsequently used for Meibomian gland segmentation; the model was then applied to calculate individual Meibomian gland metrics. Interreader agreement, and agreement between the manual and automated segmentation methods, were also assessed to establish the accuracy of the automated approach. Main Outcome Measures Meibomian gland metrics, including length ratio, area, tortuosity, intensity, and width. Additionally, the performance of the automated algorithms was evaluated using the aggregated Jaccard index. Results The proposed semantic segmentation-based approach achieved a mean aggregated Jaccard index of 0.4718 (95% confidence interval [CI], 0.4680-0.4771) for the 'gland' class and 0.8470 (95% CI, 0.8432-0.8508) for the 'eyelid' class. The object detection-based approach achieved a mean of 0.4476 (95% CI, 0.4426-0.4533). Both artificial intelligence-based algorithms underestimated area, length ratio, tortuosity, and the mean, median, 10th-percentile, and 90th-percentile width; Meibomian gland intensity was overestimated by both algorithms compared with the manual approach.
The object detection-based algorithm appears to be as reliable as the manual approach only for the 10th-percentile width calculation. Conclusions The proposed approach can successfully segment Meibomian glands; however, it requires further development to overcome problems with gland overlap and lack of image sharpness. The study presents another approach to using automated, artificial intelligence-based methods in Meibomian gland health assessment, which may assist clinicians in the diagnosis, treatment, and management of Meibomian gland dysfunction. Financial Disclosures The authors have no proprietary or commercial interest in any materials discussed in this article.
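The aggregated Jaccard index used in this study extends the plain Jaccard index (intersection over union) by accumulating matched intersections and unions across all gland instances. A minimal sketch of the per-mask core, not the paper's implementation:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum() / union)
```

For instance-level scores, the same ratio would be summed over matched gland/prediction pairs before dividing.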
Affiliation(s)
- Kasandra Swiderska
- Eurolens Research, Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
- Carole Maldonado-Codina
- Eurolens Research, Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
- Philip B. Morgan
- Eurolens Research, Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
- Michael L. Read
- Eurolens Research, Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
- Martin Fergie
- Division of Informatics, Imaging and Data Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
3
Jo SW, Khil EK, Lee KY, Choi I, Yoon YS, Cha JG, Lee JH, Kim H, Lee SY. Deep learning system for automated detection of posterior ligamentous complex injury in patients with thoracolumbar fracture on MRI. Sci Rep 2023;13:19017. PMID: 37923853; PMCID: PMC10624679; DOI: 10.1038/s41598-023-46208-7.
Abstract
This study aimed to develop a deep learning (DL) algorithm for automated detection and localization of posterior ligamentous complex (PLC) injury in patients with acute thoracolumbar (TL) fracture on magnetic resonance imaging (MRI) and to evaluate its diagnostic performance. In this retrospective multicenter study, using midline sagittal T2-weighted images of fractures (with or without PLC injury), a training dataset and internal and external validation sets of 300, 100, and 100 patients were constructed, each with equal numbers of injured and normal PLCs. The DL algorithm was developed in two steps (Attention U-Net and Inception-ResNet-V2). We evaluated the diagnostic performance for PLC injury of the DL algorithm against radiologists with different levels of experience. The areas under the curve (AUCs) of the DL algorithm were 0.928 and 0.916 for internal and external validation, respectively; those of the two radiologists in the observer performance test were 0.930 and 0.830. Although no significant difference was found between the DL algorithm and the radiologists in diagnosing PLC injury, the DL algorithm showed a trend toward a higher AUC than the radiology trainee. Notably, the trainee's diagnostic performance improved significantly with DL algorithm assistance. The DL algorithm therefore exhibited high diagnostic performance in detecting PLC injury in acute TL fractures.
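The AUC values reported here can be read as the probability that a randomly chosen injured case receives a higher score than a randomly chosen normal one. A small pure-Python sketch of that rank-based (Mann-Whitney) formulation, assuming binary labels and continuous scores:

```python
def auc_from_scores(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs in which the positive case scores
    higher (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.928 thus means the model ranks an injured PLC above a normal one about 93% of the time.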
Affiliation(s)
- Sang Won Jo
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Eun Kyung Khil
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Department of Radiology, Fastbone Orthopedic Hospital, Hwaseong-si, Republic of Korea
- Kyoung Yeon Lee
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Il Choi
- Department of Neurologic Surgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong-si, Republic of Korea
- Yu Sung Yoon
- Department of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea
- Department of Radiology, Kyungpook National University Hospital, Daegu, Republic of Korea
- Jang Gyu Cha
- Department of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea
4
Shen DD, Bao SL, Wang Y, Chen YC, Zhang YC, Li XC, Ding YC, Jia ZZ. An automatic and accurate deep learning-based neuroimaging pipeline for the neonatal brain. Pediatr Radiol 2023;53:1685-1697. PMID: 36884052; DOI: 10.1007/s00247-023-05620-x.
Abstract
BACKGROUND Accurate segmentation of neonatal brain tissues and structures is crucial for studying normal development and diagnosing early neurodevelopmental disorders, yet there is no end-to-end pipeline for automated segmentation and imaging analysis of the normal and abnormal neonatal brain. OBJECTIVE To develop and validate a deep learning-based pipeline for neonatal brain segmentation and analysis of structural magnetic resonance images (MRI). MATERIALS AND METHODS Two cohorts were enrolled in the study: cohort 1 (582 neonates from the developing Human Connectome Project) and cohort 2 (37 neonates imaged using a 3.0-tesla MRI scanner in our hospital). We developed a deep learning-based architecture capable of segmenting the brain into 9 tissues and 87 structures. Extensive validations were then performed for accuracy, effectiveness, robustness and generality of the pipeline. Furthermore, regional volume and cortical surface estimation were measured with an in-house bash script implemented in FSL (Oxford Centre for Functional MRI of the Brain Software Library) to ensure reliability of the pipeline. The Dice similarity coefficient (DSC), the 95th-percentile Hausdorff distance (H95) and the intraclass correlation coefficient (ICC) were calculated to assess the quality of the pipeline. Finally, we fine-tuned and validated the pipeline on 2-dimensional thick-slice MRI in cohorts 1 and 2. RESULTS The deep learning-based model showed excellent performance for neonatal brain tissue and structural segmentation, with a best DSC of 0.96 and a best H95 of 0.99 mm. In regional volume and cortical surface analysis, the model showed good agreement with ground truth, and the ICC values for regional volume were all above 0.80. The same trend was observed for the thick-slice pipeline, with a best DSC of 0.92 and a best H95 of 3.00 mm;
here the regional volumes and surface curvature had ICC values just below 0.80. CONCLUSIONS We propose an automatic, accurate, stable and reliable pipeline for neonatal brain segmentation and analysis from thin- and thick-slice structural MRI. External validation showed very good reproducibility of the pipeline.
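The Dice similarity coefficient reported throughout this entry is the standard overlap measure for binary segmentation masks. A minimal NumPy sketch, illustrative only and not the authors' pipeline:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|)
    for two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

A DSC of 0.96 therefore indicates near-complete voxel overlap between the predicted and reference structures.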
Affiliation(s)
- Dan Dan Shen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Shan Lei Bao
- Department of Nuclear Medicine, Affiliated Hospital and Medical School of Nantong University, Jiangsu, People's Republic of China
- Yan Wang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Ying Chi Chen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Yu Cheng Zhang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Xing Can Li
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Yu Chen Ding
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Zhong Zheng Jia
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
5
Song Y, Hu J, Wang Q, Yu C, Su J, Chen L, Jiang X, Chen B, Zhang L, Yu Q, Li P, Wang F, Bai S, Luo Y, Yi Z. Young oncologists benefit more than experts from deep learning-based organs-at-risk contouring modeling in nasopharyngeal carcinoma radiotherapy: A multi-institution clinical study exploring working experience and institute group style factor. Clin Transl Radiat Oncol 2023;41:100635. PMID: 37251619; PMCID: PMC10213188; DOI: 10.1016/j.ctro.2023.100635.
Abstract
Background To comprehensively investigate the behaviors of oncologists with different working experience and institute group styles when using deep learning-based organs-at-risk (OAR) contouring. Methods A deep learning-based contouring system (DLCS) was modeled from 188 CT datasets of patients with nasopharyngeal carcinoma (NPC) at institute A. Three institute oncology groups, A, B, and C, were included, each containing a beginner and an expert. For each of the 28 OARs, two trials were performed on ten test cases: manual contouring first, then editing of the DLCS output. Contouring performance and group consistency were quantified by volumetric and surface Dice coefficients. A volume-based and a surface-based oncologist satisfaction rate (VOSR and SOSR) were defined to evaluate the oncologists' acceptance of the DLCS. Results With the DLCS, inconsistency between experience levels was eliminated. Intra-institute inconsistency was eliminated for group C but persisted for groups A and B. Group C benefited most from the DLCS, with the highest number of improved OARs (8 for volumetric Dice and 10 for surface Dice), followed by group B. Beginners obtained more improved OARs than experts (7 vs. 4 for volumetric Dice and 5 vs. 4 for surface Dice). VOSR and SOSR varied across institute groups, but for OARs with significant differences between experience groups, the beginners' rates were all significantly higher than the experts'. A remarkable positive linear relationship (coefficient 0.78) was found between VOSR and the volumetric Dice of the post-DLCS edits. Conclusions The DLCS was effective across institutes, and beginners benefited more than experts.
Affiliation(s)
- Ying Song
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Qiang Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Chengrong Yu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Jiachong Su
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Lin Chen
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Xiaorui Jiang
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Bo Chen
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Lei Zhang
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Qian Yu
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Ping Li
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Feng Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Sen Bai
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Yong Luo
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
6
Cubero L, Castelli J, Simon A, de Crevoisier R, Acosta O, Pascau J. Deep Learning-Based Segmentation of Head and Neck Organs-at-Risk with Clinical Partially Labeled Data. Entropy (Basel) 2022;24:e24111661. PMID: 36421515; PMCID: PMC9689629; DOI: 10.3390/e24111661.
Abstract
Radiotherapy is one of the main treatments for localized head and neck (HN) cancer. To design a personalized treatment with reduced radio-induced toxicity, accurate delineation of organs at risk (OARs) is a crucial step. Manual delineation is time-consuming, labor-intensive, and observer-dependent. Deep learning (DL)-based segmentation has proven to overcome some of these limitations, but it requires large databases of homogeneously contoured image sets for robust training. These are not easily obtained from standard clinical protocols, as the OARs delineated may vary with the patient's tumor site and specific treatment plan, resulting in incomplete or partially labeled data. This paper presents a solution for training a robust DL-based automated segmentation tool from a clinical partially labeled dataset. We propose a two-step workflow for OAR segmentation: first, we developed longitudinal OAR-specific 3D segmentation models for pseudo-contour generation, completing the missing contours for some patients; then, with all OARs available, we trained a multi-class 3D convolutional neural network (nnU-Net) for final OAR segmentation. Results obtained on 44 independent datasets showed the superior performance of the proposed methodology for the segmentation of fifteen OARs, with an average Dice similarity coefficient of 80.59% and an average surface Dice similarity coefficient of 88.74%. We demonstrated that the model can be straightforwardly integrated into the clinical workflow for standard and adaptive radiotherapy.
Affiliation(s)
- Lucía Cubero
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Joël Castelli
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Antoine Simon
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Renaud de Crevoisier
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Oscar Acosta
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Javier Pascau
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
7
Henderson EG, Vasquez Osorio EM, van Herk M, Green AF. Optimising a 3D convolutional neural network for head and neck computed tomography segmentation with limited training data. Phys Imaging Radiat Oncol 2022;22:44-50. PMID: 35514528; PMCID: PMC9065428; DOI: 10.1016/j.phro.2022.04.003.
Abstract
Convolutional neural networks (CNNs) are used for auto-segmentation in radiotherapy. However, CNNs rely on large, high-quality datasets: a scarcity in radiotherapy. We develop a CNN model, trained with limited data, for accurate segmentation. Multiple experiments were performed to optimise key features of our custom model. Our model is competitive with state-of-the-art methods on a public dataset.
Background and purpose Convolutional neural networks (CNNs) are increasingly used to automate segmentation for radiotherapy planning, where accurate segmentation of organs-at-risk (OARs) is crucial. Training CNNs often requires large amounts of data, yet large, high-quality datasets are scarce. The aim of this study was to develop a CNN capable of accurate 3D auto-segmentation of head and neck (HN) planning CT scans using a small training dataset (34 CTs). Materials and methods Elements of our custom CNN architecture were varied to optimise segmentation performance. We tested and evaluated the impact of: using multiple contrast channels for the CT scan input at specific soft-tissue and bony-anatomy windows; resize vs. transpose convolutions; and loss functions based on overlap metrics and cross-entropy in different combinations. Model segmentation performance was compared with the inter-observer deviation of two doctors' gold-standard segmentations using the 95th-percentile Hausdorff distance and the mean distance-to-agreement (mDTA). The best-performing configuration was further validated on a popular public dataset for comparison with state-of-the-art (SOTA) auto-segmentation methods. Results Our best-performing CNN configuration was competitive with current SOTA methods when evaluated on the public dataset, with mDTA of (0.81±0.31) mm for the brainstem, (0.20±0.08) mm for the mandible, (0.77±0.14) mm for the left parotid and (0.81±0.28) mm for the right parotid. Conclusions Through careful tuning and customisation we trained a 3D CNN with a small dataset to produce segmentations of HN OARs with an accuracy comparable with inter-clinician deviations. Our proposed model performed competitively with current SOTA methods.
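The two surface metrics used here, mean distance-to-agreement and the 95th-percentile Hausdorff distance, are both derived from nearest-neighbour distances between contour point sets. A brute-force NumPy sketch of one common symmetric variant (implementations differ in how the two directions are combined, so treat this as illustrative):

```python
import numpy as np

def nearest_distances(a, b):
    """Euclidean distance from each point in a to its nearest point in b."""
    a = np.asarray(a, float)[:, None, :]  # shape (Na, 1, d)
    b = np.asarray(b, float)[None, :, :]  # shape (1, Nb, d)
    return np.linalg.norm(a - b, axis=-1).min(axis=1)

def mdta(a, b):
    """Symmetric mean distance-to-agreement between two contours."""
    return float((nearest_distances(a, b).mean()
                  + nearest_distances(b, a).mean()) / 2.0)

def hausdorff95(a, b):
    """95th-percentile Hausdorff distance, pooled over both directions
    (robust to single outlier points, unlike the plain maximum)."""
    d = np.concatenate([nearest_distances(a, b), nearest_distances(b, a)])
    return float(np.percentile(d, 95))
```

In practice these would run on surface voxels or mesh vertices extracted from each segmentation.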
Affiliation(s)
- Edward G.A. Henderson
- The University of Manchester, Oxford Rd, Manchester M13 9PL, UK
- Corresponding author.
- Eliana M. Vasquez Osorio
- The University of Manchester, Oxford Rd, Manchester M13 9PL, UK
- Radiotherapy Related Research, The Christie NHS Foundation Trust, Manchester M20 4BX, UK
- Marcel van Herk
- The University of Manchester, Oxford Rd, Manchester M13 9PL, UK
- Radiotherapy Related Research, The Christie NHS Foundation Trust, Manchester M20 4BX, UK
- Andrew F. Green
- The University of Manchester, Oxford Rd, Manchester M13 9PL, UK
- Radiotherapy Related Research, The Christie NHS Foundation Trust, Manchester M20 4BX, UK