1
Xie L, Xu Y, Zheng M, Chen Y, Sun M, Archer MA, Mao W, Tong Y, Wan Y. An anthropomorphic diagnosis system of pulmonary nodules using weak annotation-based deep learning. Comput Med Imaging Graph 2024; 118:102438. [PMID: 39426342] [PMCID: PMC11620937] [DOI: 10.1016/j.compmedimag.2024.102438] [Received: 06/25/2024] [Revised: 09/18/2024] [Accepted: 09/19/2024] [Indexed: 10/21/2024]
Abstract
The accurate categorization of lung nodules in CT scans is essential for the prompt detection and diagnosis of lung cancer. Categorizing the grade and texture of nodules is particularly significant, since it can help radiologists and clinicians make better-informed decisions about nodule management. However, existing techniques perform only the single function of nodule classification and rely on extensive amounts of high-quality annotation data, which does not meet the requirements of clinical practice. To address this issue, we develop an anthropomorphic diagnosis system of pulmonary nodules (PNs) based on deep learning (DL) that is trained on weak annotation data and has performance comparable to that of diagnosis systems based on full annotations. The proposed system uses DL models to classify PNs (benign vs. malignant) with weak annotations, which eliminates the need for time-consuming and labor-intensive manual annotation of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform technique, can differentiate PNs with diverse labels, including pure ground-glass opacities, part-solid nodules, and solid nodules. Through 5-fold cross-validation on two datasets, the system achieved the following results: (1) an area under the curve (AUC) of 0.938 for PN localization and an AUC of 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases, and (2) an AUC of 0.943 for PN localization and an AUC of 0.815 for PN differential diagnosis on the in-house dataset of 822 testing cases. In summary, our system demonstrates efficient localization and differential diagnosis of PNs in a resource-limited environment and thus could be translated into clinical use in the future.
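The AUC values reported above summarize how well the classifier's scores rank malignant cases above benign ones. As background, here is a minimal pure-Python sketch of the rank-based (Mann-Whitney) AUC computation on binary labels; the function name and toy data are illustrative, not from the paper.

```python
def auc(labels, scores):
    """Rank-based (Mann-Whitney) AUC: the probability that a randomly
    chosen positive case is scored higher than a randomly chosen
    negative case, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One mis-ranked pair out of four gives 3/4.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Perfect separation yields 1.0 and chance-level ranking yields 0.5, which is why the 0.9+ values above indicate strong discrimination.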
Affiliation(s)
- Lipeng Xie
- School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou, China
- Yongrui Xu
- Department of Cardio-thoracic Surgery, Nanjing Medical University Affiliated Wuxi People's Hospital, Wuxi, Jiangsu, China; Nanjing Medical University, Nanjing, Jiangsu, China
- Mingfeng Zheng
- Department of Cardio-thoracic Surgery, Nanjing Medical University Affiliated Wuxi People's Hospital, Wuxi, Jiangsu, China; Nanjing Medical University, Nanjing, Jiangsu, China
- Yundi Chen
- Department of Biomedical Engineering, Binghamton University, Binghamton, NY, USA
- Min Sun
- Division of Oncology, University of Pittsburgh Medical Center Hillman Cancer Center at St. Margaret, Pittsburgh, PA, USA
- Michael A Archer
- Division of Thoracic Surgery, SUNY Upstate Medical University, USA
- Wenjun Mao
- Department of Cardio-thoracic Surgery, Nanjing Medical University Affiliated Wuxi People's Hospital, Wuxi, Jiangsu, China; Nanjing Medical University, Nanjing, Jiangsu, China
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA 19104, USA
- Yuan Wan
- Department of Biomedical Engineering, Binghamton University, Binghamton, NY, USA
2
Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024:10.1007/s00066-024-02262-2. [PMID: 39105745] [DOI: 10.1007/s00066-024-02262-2] [Received: 01/21/2024] [Accepted: 06/13/2024] [Indexed: 08/07/2024]
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on the delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and the literature, we demonstrate improved efficiency and consistency, as well as time savings, in different clinical scenarios. Despite challenges in clinical implementation such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
3
He D, Udupa JK, Tong Y, Torigian DA. Predicting the effort required to manually mend auto-segmentations. medRxiv 2024:2024.06.12.24308779. [PMID: 38947045] [PMCID: PMC11213037] [DOI: 10.1101/2024.06.12.24308779] [Indexed: 07/02/2024]
Abstract
Auto-segmentation is one of the critical and foundational steps for medical image analysis. The quality of auto-segmentation techniques influences the efficiency of precision radiology and radiation oncology, since high-quality auto-segmentations usually require limited manual correction. Segmentation metrics are necessary and important to evaluate auto-segmentation results and to guide the development of auto-segmentation techniques. Currently, widely applied segmentation metrics compare the auto-segmentation with the ground truth in terms of the overlapping area (e.g., the Dice coefficient (DC)) or the distance between boundaries (e.g., the Hausdorff distance (HD)). However, these metrics may not well indicate the manual mending effort required when observing auto-segmentation results in clinical practice. In this article, we study different segmentation metrics to explore the appropriate way of evaluating auto-segmentations in line with clinical demands. The time experts spend correcting auto-segmentations is recorded to indicate the required mending effort. Five well-defined metrics are discussed in correlation analysis and regression experiments: the overlapping area-based metric DC, the segmentation boundary distance-based metric HD, the segmentation boundary length-based metrics surface DC (surDC) and added path length (APL), and a newly proposed hybrid metric, the Mendability Index (MI). In addition to these explicitly defined metrics, we also preliminarily explore the feasibility of using deep learning models, which take segmentation masks and the original images as input, to predict the mending effort. Experiments are conducted using datasets of 7 objects from three different institutions, which contain the original computed tomography (CT) images, ground truth segmentations, auto-segmentations, corrected segmentations, and recorded mending times. According to the correlation analysis and regression experiments for the five well-defined metrics, variants of MI best indicate the mending effort for sparse objects, while variants of HD work best when assessing the mending effort for non-sparse objects. Moreover, the deep learning models predict the required mending effort well, even without ground truth segmentations, demonstrating the potential of a novel and easy way to evaluate and boost auto-segmentation techniques.
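For reference, the overlap metric DC discussed above takes only a few lines to compute. A minimal sketch on binary masks represented as coordinate sets; the names and toy masks are illustrative, not from the paper.

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks given as
    sets of voxel coordinates; 1.0 means perfect overlap."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree trivially
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}   # auto-segmentation voxels
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}  # ground-truth voxels
print(dice(auto, truth))  # 2*3 / (4+4) = 0.75
```

Note that a high DC does not guarantee low correction effort for thin or sparse objects, which is exactly the gap that motivates boundary-based metrics such as surDC and APL and the hybrid MI above.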
Affiliation(s)
- Da He
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Jayaram K. Udupa
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Yubing Tong
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Drew A. Torigian
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
4
Abdel-Wahab M, Coleman CN, Eriksen JG, Lee P, Kraus R, Harsdorf E, Lee B, Dicker A, Hahn E, Agarwal JP, Prasanna PGS, MacManus M, Keall P, Mayr NA, Jereczek-Fossa BA, Giammarile F, Kim IA, Aggarwal A, Lewison G, Lu JJ, Guedes de Castro D, Kong FMS, Afifi H, Sharp H, Vanderpuye V, Olasinde T, Atrash F, Goethals L, Corn BW. Addressing challenges in low-income and middle-income countries through novel radiotherapy research opportunities. Lancet Oncol 2024; 25:e270-e280. [PMID: 38821101] [PMCID: PMC11382686] [DOI: 10.1016/s1470-2045(24)00038-x] [Received: 11/15/2023] [Revised: 01/17/2024] [Accepted: 01/18/2024] [Indexed: 06/02/2024]
Abstract
Although radiotherapy continues to evolve as a mainstay of the oncological armamentarium, research and innovation in radiotherapy in low-income and middle-income countries (LMICs) face challenges. This third Series paper examines the current state of LMIC radiotherapy research and provides new data from a 2022 survey undertaken by the International Atomic Energy Agency, along with new data on funding. In the context of LMIC-related challenges and impediments, we explore several developments and advances, such as deep phenotyping, real-time targeting, and artificial intelligence, to flag specific opportunities with applicability and relevance for resource-constrained settings. Given the pressing nature of cancer in LMICs, we also highlight some best practices and address the broader need to develop the research workforce of the future. This Series paper thereby serves as a resource for radiation professionals.
Affiliation(s)
- May Abdel-Wahab
- Division of Human Health, International Atomic Energy Agency, Vienna, Austria.
- C Norman Coleman
- Radiation Research Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Jesper Grau Eriksen
- Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark
- Peter Lee
- Division of Human Health, International Atomic Energy Agency, Vienna, Austria
- Ryan Kraus
- Department of Radiation Oncology, Barrow Neurological Institute, Phoenix, AZ, USA
- Ekaterina Harsdorf
- Division of Human Health, International Atomic Energy Agency, Vienna, Austria
- Becky Lee
- Department of Radiation Medicine, Loma Linda University, Loma Linda, CA, USA; Department of Radiation Oncology, Summa Health, Akron, OH, USA
- Adam Dicker
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, PA, USA
- Ezra Hahn
- Department of Radiation Oncology, Radiation Medicine Program, Princess Margaret Cancer Centre, University of Toronto, ON, Canada
- Jai Prakash Agarwal
- Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Pataje G S Prasanna
- Radiation Research Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Michael MacManus
- Department of Radiation Oncology, Peter MacCallum Cancer Centre and the Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, VIC, Australia
- Paul Keall
- Image X Institute, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Nina A Mayr
- College of Human Medicine, Michigan State University, East Lansing, MI, USA
- Barbara Alicja Jereczek-Fossa
- Department of Oncology and Hemato-oncology, University of Milan, Milan, Italy; Division of Radiotherapy, European Institute of Oncology, IRCCS, Milan, Italy
- In Ah Kim
- Department of Radiation Oncology, Seoul National University Bundang Hospital, Seoul, South Korea; Seoul National University, College of Medicine, Seoul, South Korea
- Ajay Aggarwal
- Department of Health Services Research and Policy, London School of Hygiene & Tropical Medicine, London, UK; Guy's and St Thomas' NHS Foundation Trust, London, UK
- Grant Lewison
- Institute of Cancer Policy, King's College London, London, UK
- Jiade J Lu
- Shanghai Proton and Heavy Ion Centre, Fudan University School of Medicine, Shanghai, China
- Feng-Ming Spring Kong
- Department of Clinical Oncology, HKU-Shenzhen Hospital and Queen Mary Hospital, Li Ka Shing Faculty of Medicine, Hong Kong Special Administrative Region, China
- Haidy Afifi
- Division of Human Health, International Atomic Energy Agency, Vienna, Austria
- Hamish Sharp
- Institute of Cancer Policy, King's College London, London, UK
- Verna Vanderpuye
- National Center for Radiotherapy, Oncology and Nuclear Medicine, Korlebu Teaching Hospital, Accra, Ghana
- Fadi Atrash
- Augusta Victoria Hospital, Jerusalem, Israel
- Luc Goethals
- Division of Human Health, International Atomic Energy Agency, Vienna, Austria
5
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715] [PMCID: PMC11023777] [DOI: 10.1016/j.ijrobp.2023.10.033] [Received: 02/08/2023] [Revised: 09/16/2023] [Accepted: 10/14/2023] [Indexed: 11/19/2023]
Abstract
Deep learning neural networks (DLNNs), a branch of artificial intelligence (AI), have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based auto-segmentation models have shown high accuracy in early studies in research settings and controlled environments (single institutions). Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as stand-alone tools that provide a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Standardization of contouring nomenclature and guidelines has been a main task undertaken by NRG Oncology. In clinical trials, AI auto-segmentation holds the potential to reduce interobserver variations, nomenclature non-compliance, and contouring guideline deviations, while trial reviewers could use AI tools to verify the contour accuracy and compliance of submitted datasets. In recognition of the growing clinical utilization and potential of these commercial AI auto-segmentation tools, NRG Oncology has formed a working group to assess in-house and commercially available AI models, evaluation metrics, clinical challenges and limitations, and future developments in addressing these challenges. General recommendations are made regarding the implementation of these commercial AI models, along with precautions in recognizing their challenges and limitations.
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
- Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
- Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
- Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
- Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
- Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
- X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
6
Podobnik G, Ibragimov B, Peterlin P, Strojan P, Vrtovec T. vOARiability: Interobserver and intermodality variability analysis in OAR contouring from head and neck CT and MR images. Med Phys 2024; 51:2175-2186. [PMID: 38230752] [DOI: 10.1002/mp.16924] [Received: 04/26/2023] [Revised: 10/31/2023] [Accepted: 12/13/2023] [Indexed: 01/18/2024]
Abstract
BACKGROUND Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were then evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both by the Dice coefficient (DC) and the 95th percentile Hausdorff distance (HD95). RESULTS The mean (± standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were found only for specific OARs. The performed MR-to-CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect contouring performance. However, the results indicate that an OAR that is difficult to contour is difficult regardless of whether it is contoured in the CT or MR image, and that observer experience may be an important factor for OARs that are deemed difficult to contour. Several of the observed differences in variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, it can be concluded that almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
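The HD95 metric used above can be sketched for 2D point sets as follows. This toy version uses Euclidean point distances and a nearest-rank percentile convention; clinical implementations typically operate on 3D surfaces with voxel spacing, so treat it as an illustration only, with hypothetical names and data.

```python
import math

def hd95(contour_a, contour_b):
    """95th percentile Hausdorff distance: pool every point's distance
    to the nearest point of the other contour, then take the 95th
    percentile (nearest-rank convention) of the pooled distances."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    d = sorted([nearest(p, contour_b) for p in contour_a] +
               [nearest(q, contour_a) for q in contour_b])
    k = max(0, math.ceil(0.95 * len(d)) - 1)  # nearest-rank index
    return d[k]

# Identical contours agree perfectly; a 3-4-5 offset yields 5.0.
print(hd95([(0, 0), (1, 0)], [(0, 0), (1, 0)]))  # 0.0
print(hd95([(0, 0)], [(3, 4)]))                   # 5.0
```

Unlike the plain Hausdorff distance, the 95th percentile discards the most extreme outlier distances, which is why it is preferred for contour comparison.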
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
7
Dai J, Liu T, Torigian DA, Tong Y, Han S, Nie P, Zhang J, Li R, Xie F, Udupa JK. GA-Net: A geographical attention neural network for the segmentation of body torso tissue composition. Med Image Anal 2024; 91:102987. [PMID: 37837691] [PMCID: PMC10841506] [DOI: 10.1016/j.media.2023.102987] [Received: 10/25/2022] [Revised: 07/27/2023] [Accepted: 09/28/2023] [Indexed: 10/16/2023]
Abstract
PURPOSE Body composition analysis (BCA) of the body torso plays a vital role in the study of physical health and pathology and provides biomarkers that facilitate the diagnosis and treatment of many diseases, such as type 2 diabetes mellitus, cardiovascular disease, obstructive sleep apnea, and osteoarthritis. In this work, we propose a body composition tissue segmentation method that can automatically delineate the key tissues, including subcutaneous adipose tissue, skeleton, skeletal muscle tissue, and visceral adipose tissue, on positron emission tomography/computed tomography scans of the body torso. METHODS First, to provide the deep neural network with appropriate and precise semantic and spatial information strongly related to body composition tissues, we introduce the new concept of body areas and integrate it into our proposed segmentation network, called the Geographical Attention Network (GA-Net). The body areas are defined following anatomical principles such that the whole body torso region is partitioned into three non-overlapping body areas, with each body composition tissue of interest fully contained in exactly one specific minimal body area. Second, the proposed GA-Net has a novel dual-decoder schema composed of a tissue decoder and an area decoder. The tissue decoder segments the body composition tissues, while the area decoder segments the body areas as an auxiliary task. The features of body areas and body composition tissues are fused through a soft attention mechanism to gain geographical attention relevant to the body tissues. Third, we propose a body composition tissue annotation approach that takes the body area labels as the region of interest, which significantly improves the reproducibility, precision, and efficiency of delineating body composition tissues. RESULTS Our evaluations on 50 low-dose unenhanced CT images indicate that GA-Net statistically significantly outperforms other architectures on the Dice metric and also shows improvements on the 95% Hausdorff distance metric in most comparisons. Notably, GA-Net is more sensitive to subtle boundary information and produces more reliable and robust predictions for such structures, which are the most challenging parts to manually mend in practice, with potentially significant time savings in the post hoc correction of these subtle boundary placement errors. Due to the prior knowledge provided by body areas, GA-Net achieves competitive performance with less training data. Our extension of the dual-decoder schema to TransUNet and 3D U-Net demonstrates that the new schema significantly improves the performance of these classical neural networks as well. Heatmaps obtained from the attention gate layers further illustrate the geographical guidance function of body areas for identifying body tissues. CONCLUSIONS (i) Prior anatomic knowledge, supplied in the form of appropriately designed anatomic container objects, significantly improves the segmentation of bodily tissues. (ii) Of particular note are the improvements achieved in the delineation of subtle boundary features, which would otherwise take much effort to correct manually. (iii) The method can be easily extended to existing networks to improve their accuracy for this application.
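The soft attention fusion described above can be illustrated with a deliberately simplified, framework-free sketch: area-decoder activations are squashed into a [0, 1] gate that modulates the tissue-decoder features, so tissue evidence is emphasized inside its designated body area and suppressed elsewhere. All names and values are hypothetical; GA-Net's actual fusion operates on multi-channel convolutional feature maps inside the network.

```python
import math

def soft_attention_fuse(tissue_feat, area_feat):
    """Gate each tissue feature by a sigmoid of the corresponding
    area-decoder activation: strong area evidence lets the tissue
    feature pass, weak evidence suppresses it."""
    gate = [1.0 / (1.0 + math.exp(-a)) for a in area_feat]
    return [t * g for t, g in zip(tissue_feat, gate)]

# Identical tissue responses inside vs. outside the body area:
# the first is kept (gate near 1), the second suppressed (gate near 0).
print(soft_attention_fuse([1.0, 1.0], [8.0, -8.0]))
```

This gating is what lets the auxiliary area decoder act as a spatial prior without hard-masking the tissue predictions.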
Affiliation(s)
- Jian Dai
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America
- Shiwei Han
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Pengju Nie
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Jing Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Ran Li
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Fei Xie
- School of AOAIR, Xidian University, Xi'an 710071, Shaanxi, China
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America
Collapse
|
8
|
Xie F, Ju J, Zhang T, Wang H, Liu J, Wang J, Zhou Y, Zhao X. A Small Intestinal Stromal Tumor Detection Method Based on an Attention Balance Feature Pyramid. SENSORS (BASEL, SWITZERLAND) 2023; 23:9723. [PMID: 38139569 PMCID: PMC10747994 DOI: 10.3390/s23249723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Revised: 11/05/2023] [Accepted: 11/15/2023] [Indexed: 12/24/2023]
Abstract
Small intestinal stromal tumor (SIST) is a common gastrointestinal tumor. Currently, SIST diagnosis relies on clinical radiologists reviewing CT images from medical imaging sensors. However, this method is inefficient and heavily affected by subjective factors. Automatic detection methods based on computer vision technology can better address these problems. However, in CT images, SISTs vary in shape and size, have blurred edge texture, and differ little from surrounding normal tissue, which greatly challenges the use of computer vision technology for the automatic detection of stromal tumors. Moreover, an analysis of mainstream object detection models on SIST data revealed an imbalance among the features at different levels during the feature fusion stage of the network model. Therefore, this paper proposes an algorithm based on the attention balance feature pyramid (ABFP) for detecting SIST that addresses unbalanced feature fusion in the detection model. By combining weighted multi-level feature maps from the backbone network, the algorithm creates a balanced semantic feature map, which is then enhanced by spatial attention and channel attention modules. In the feature fusion stage, the algorithm scales the enhanced balanced semantic feature map to the size of each level's feature map and combines it with the original feature information, effectively addressing the imbalance between deep and shallow features. Consequently, the detection performance of the SIST detection model is significantly improved, and the method is highly versatile. Experimental results show that the ABFP method can enhance traditional object detection methods and is compatible with various models and feature fusion strategies.
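The balance-then-redistribute idea behind ABFP can be sketched on 1-D feature maps: resize every pyramid level to a common resolution, average them into one balanced semantic map, then rescale that map back to each level and add it residually. A minimal pure-Python sketch under stated simplifications (power-of-two sizes, attention modules omitted, function names `rescale`/`abfp_fuse` are my own):

```python
def rescale(feat, size):
    """Resize a 1-D feature map to `size` (power-of-two lengths assumed):
    average-pool to shrink, nearest-neighbour repeat to grow."""
    out = list(feat)
    while len(out) > size:
        out = [(out[i] + out[i + 1]) / 2 for i in range(0, len(out), 2)]
    if len(out) < size:
        r = size // len(out)
        out = [v for v in out for _ in range(r)]
    return out

def abfp_fuse(levels):
    """Balance multi-level backbone features: resize every level to the
    finest resolution, average them into one balanced semantic map, then
    scale that map back to each level's size and add it residually."""
    size = len(levels[0])                 # finest level defines the grid
    resized = [rescale(f, size) for f in levels]
    balanced = [sum(vs) / len(vs) for vs in zip(*resized)]
    return [[o + b for o, b in zip(f, rescale(balanced, len(f)))]
            for f in levels]
```

In the paper's version, the balanced map additionally passes through spatial and channel attention before being redistributed; that enhancement step is left out here for brevity.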
Affiliation(s)
- Fei Xie
- Xi’an Key Laboratory of Human–Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi’an 710123, China; (F.X.); (J.W.)
- School of AOAIR, Xidian University, Xi’an 710075, China
- Jianguo Ju
- School of Information Science and Technology, Northwest University, Xi’an 710126, China; (T.Z.); (J.L.); (Y.Z.)
- Tongtong Zhang
- School of Information Science and Technology, Northwest University, Xi’an 710126, China; (T.Z.); (J.L.); (Y.Z.)
- Hexu Wang
- Xi’an Key Laboratory of Human–Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi’an 710123, China; (F.X.); (J.W.)
- Jindong Liu
- School of Information Science and Technology, Northwest University, Xi’an 710126, China; (T.Z.); (J.L.); (Y.Z.)
- Juan Wang
- Xi’an Key Laboratory of Human–Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi’an 710123, China; (F.X.); (J.W.)
- Yang Zhou
- School of Information Science and Technology, Northwest University, Xi’an 710126, China; (T.Z.); (J.L.); (Y.Z.)
- Xuesong Zhao
- Departments of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
9
Hirotaki K, Tomizawa K, Moriya S, Oyoshi H, Raturi V, Ito M, Sakae T. Fully automated volumetric modulated arc therapy planning for locally advanced rectal cancer: feasibility and efficiency. Radiat Oncol 2023; 18:147. [PMID: 37670390 PMCID: PMC10481560 DOI: 10.1186/s13014-023-02334-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 08/21/2023] [Indexed: 09/07/2023] Open
Abstract
BACKGROUND Volumetric modulated arc therapy (VMAT) for locally advanced rectal cancer (LARC) has emerged as a promising technique, but the planning process can be time-consuming and dependent on planner expertise. We aimed to develop a fully automated VMAT planning program for LARC and evaluate its feasibility and efficiency. METHODS A total of 26 LARC patients who received VMAT treatment, together with their computed tomography (CT) scans, were included in this study. Clinical target volumes and organs at risk were contoured by radiation oncologists. The automatic planning program, developed within the RayStation treatment planning system, used its scripting capabilities and a Python environment to automate the entire planning process. The automated VMAT plan (auto-VMAT) was created by our program from the same 26 CT scans and regions of interest used in the manual VMAT plan (manual-VMAT). Dosimetric parameters and time efficiency were compared between the auto-VMAT and the manual-VMAT created by experienced planners. All results were analyzed using the Wilcoxon signed-rank test. RESULTS The auto-VMAT achieved comparable coverage of the target volume while demonstrating improved dose conformity and uniformity compared with the manual-VMAT. V30 and V40 in the small bowel were significantly lower in the auto-VMAT than in the manual-VMAT (p < 0.001 for both); the mean dose to the bladder was also significantly reduced in the auto-VMAT (p < 0.001). Furthermore, auto-VMAT plans were consistently generated with less variability in quality. In terms of efficiency, the auto-VMAT markedly reduced the time required for planning and expedited plan approval, with 93% of cases approved within one day. CONCLUSION We developed a feasible, fully automatic VMAT planning program for LARC. The auto-VMAT maintained target coverage while reducing dose to organs at risk, and the developed program dramatically reduced the time to approval.
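The Wilcoxon signed-rank test used above for paired auto-VMAT vs. manual-VMAT comparisons is simple enough to sketch directly. The toy below computes the test statistic W (the smaller of the positive- and negative-rank sums over the paired differences); in practice one would use a statistics library, and the function name `wilcoxon_w` is my own:

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic for paired samples: rank the
    absolute differences (zero differences dropped, ties given average
    ranks) and return min(sum of positive ranks, sum of negative ranks)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i                              # find the tie group [i, j]
        while j + 1 < len(order) and \
                abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1              # average rank for the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_pos, w_neg)
```

Deriving a p-value from W additionally requires the exact null distribution or a normal approximation, which a library such as SciPy handles.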
Affiliation(s)
- Kouta Hirotaki
- Doctoral Program in Medical Sciences, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba, Japan
- Kento Tomizawa
- Department of Radiation Oncology, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, 277-8577, Kashiwa, Chiba, Japan.
- Hajime Oyoshi
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba, Japan
- Vijay Raturi
- Department of Radiation Oncology, Apollomedics Hospital, Lucknow, India
- Masashi Ito
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba, Japan
- Takeji Sakae
- Faculty of Medicine, University of Tsukuba, Ibaraki, Japan
10
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/17/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
PURPOSE For cancer in the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches are focused on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. While maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases, or 75%) and test Set 2 (14 cases, or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN.
Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
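The distribution-preserving 75%/25% split described above is a stratified random split. A minimal stdlib sketch (the function name `stratified_split` and the `key` callback are my own, not from the HaN-Seg release):

```python
import random
from collections import defaultdict

def stratified_split(cases, key, frac=0.75, seed=0):
    """Randomly split cases into train/test sets while approximately
    preserving the distribution of the stratification key (e.g. gender)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for case in cases:                    # bucket cases by stratum
        groups[key(case)].append(case)
    train, test = [], []
    for members in groups.values():
        rng.shuffle(members)              # random order within stratum
        cut = round(len(members) * frac)
        train.extend(members[:cut])
        test.extend(members[cut:])
    return train, test
```

With 56 cases split 28/28 on a binary key, this yields 42 training and 14 test cases, matching the Set 1 / Set 2 sizes in the dataset description; stratifying on several keys at once (age, gender, and annotation type) would use a composite key.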
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
11
Akhtar Y, Udupa JK, Tong Y, Liu T, Wu C, Odhner D, Mcdonough JM, Lott C, Clark A, Anari JB, Cahill P, Torigian DA. Auto-segmentation of thoraco-abdominal organs in free breathing pediatric dynamic MRI. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2023; 12466:124660T. [PMID: 38957379 PMCID: PMC11218912 DOI: 10.1117/12.2654995] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/04/2024]
Abstract
Quantitative analysis of the dynamic properties of thoraco-abdominal organs such as lungs during respiration could lead to more accurate surgical planning for disorders such as Thoracic Insufficiency Syndrome (TIS). This analysis can be done from semi-automatic delineations of the aforesaid organs in scans of the thoraco-abdominal body region. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for this application, although automatic segmentation of the organs in these images is very challenging. In this paper, we describe an auto-segmentation system we built and evaluated based on dMRI acquisitions from 95 healthy subjects. For the three recognition approaches, the system achieves a best average location error (LE) of about 1 voxel for the lungs. The standard deviation (SD) of LE is about 1-2 voxels. For the delineation approach, the average Dice coefficient (DC) is about 0.95 for the lungs. The standard deviation of DC is about 0.01 to 0.02 for the lungs. The system seems to be able to cope with the challenges posed by low resolution, motion blur, inadequate contrast, and image intensity non-standardness quite well. We are in the process of testing its effectiveness on TIS patient dMRI data and on other thoraco-abdominal organs including liver, kidneys, and spleen.
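The Dice coefficient (DC) reported above is the standard overlap measure for delineation quality. A minimal sketch on flat binary masks (illustrative only; real evaluations operate on 3-D label volumes):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2*|A intersect B| / (|A| + |B|); 1.0 by convention when both are empty."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A DC of 0.95, as achieved here for the lungs, means the automatic and reference delineations share 95% of their combined volume in this sense.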
Affiliation(s)
- Yusuf Akhtar
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Jayaram K Udupa
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yubing Tong
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066004, China
- Caiyun Wu
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Dewey Odhner
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Joseph M Mcdonough
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, USA
- Carina Lott
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, USA
- Abbie Clark
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, USA
- Jason B Anari
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, USA
- Patrick Cahill
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, USA
- Drew A Torigian
- Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA