1. Ghaderi S, Mohammadi S, Ghaderi K, Kiasat F, Mohammadi M. Marker-controlled watershed algorithm and fuzzy C-means clustering machine learning: automated segmentation of glioblastoma from MRI images in a case series. Ann Med Surg (Lond) 2024; 86:1460-1475. [PMID: 38463066; PMCID: PMC10923355; DOI: 10.1097/ms9.0000000000001756]
Abstract
Introduction and importance Automated segmentation of glioblastoma multiforme (GBM) from MRI images is crucial for accurate diagnosis and treatment planning. This paper presents a new approach for automating the segmentation of GBM from MRI images using the marker-controlled watershed segmentation (MCWS) algorithm. Case presentation and methods The approach combines several image processing steps, including adaptive thresholding, morphological filtering, gradient magnitude calculation, and regional maxima identification. The MCWS algorithm efficiently segments images based on local intensity structures using the watershed transform, and fuzzy c-means (FCM) clustering improves segmentation accuracy. The presented approach achieved improved segmentation accuracy in detecting and segmenting GBM tumours from axial T2-weighted (T2-w) MRI images, as demonstrated by the mean performance metrics for GBM segmentation (sensitivity: 0.9905, specificity: 0.9483, accuracy: 0.9508, precision: 0.5481, F-measure: 0.7052, and Jaccard index: 0.9340). Clinical discussion The results of this study underline the importance of reliable and accurate image segmentation for effective diagnosis and treatment planning of GBM tumours. Conclusion The MCWS technique provides an effective and efficient approach for the segmentation of challenging medical images.
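For illustration only, the sketch below shows a generic marker-controlled watershed pipeline in scikit-image; it is not the authors' implementation, the fuzzy c-means refinement is omitted, and the marker fraction and minimum object size are assumed values.

```python
from scipy import ndimage as ndi
from skimage.filters import sobel, threshold_otsu
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

def mcws_segment(image_2d, marker_fraction=0.5, min_size=64):
    """Marker-controlled watershed on a single 2D slice (illustrative settings)."""
    # Rough foreground mask via thresholding plus morphological cleanup
    mask = remove_small_objects(image_2d > threshold_otsu(image_2d), min_size=min_size)
    # Gradient magnitude acts as the relief flooded by the watershed
    gradient = sobel(image_2d.astype(float))
    # Internal markers from regional maxima of the distance transform
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > marker_fraction * distance.max())
    # Watershed constrained by the markers and the foreground mask
    return watershed(gradient, markers, mask=mask)
```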
Affiliation(s)
- Sadegh Ghaderi: Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran
- Sana Mohammadi: Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran
- Kayvan Ghaderi: Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj
- Fereshteh Kiasat: Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj
- Mahdi Mohammadi: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
2. Mikic N, Gentilal N, Cao F, Lok E, Wong ET, Ballo M, Glas M, Miranda PC, Thielscher A, Korshoej AR. Tumor-treating fields dosimetry in glioblastoma: Insights into treatment planning, optimization, and dose-response relationships. Neurooncol Adv 2024; 6:vdae032. [PMID: 38560348; PMCID: PMC10981464; DOI: 10.1093/noajnl/vdae032]
Abstract
Tumor-treating fields (TTFields) are currently a Category 1A treatment recommendation by the US National Comprehensive Cancer Network for patients with newly diagnosed glioblastoma. Although the mechanism of action of TTFields has been partly elucidated, tangible and standardized metrics are lacking to assess antitumor dose and effects of the treatment. This paper outlines and evaluates the current standards and methodologies in the estimation of the TTFields distribution and dose measurement in the brain and highlights the most important principles governing TTFields dosimetry. The focus is on clinical utility to facilitate a practical understanding of these principles and how they can be used to guide treatment. The current evidence for a correlation between TTFields dose, tumor growth, and clinical outcome will be presented and discussed. Furthermore, we will provide perspectives and updated insights into the planning and optimization of TTFields therapy for glioblastoma by reviewing how the dose and thermal effects of TTFields are affected by factors such as tumor location and morphology, peritumoral edema, electrode array position, treatment duration (compliance), array "edge effect," electrical duty cycle, and skull-remodeling surgery. Finally, perspectives are provided on how to optimize the efficacy of future TTFields therapy.
Affiliation(s)
- Nikola Mikic: Department of Neurosurgery, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Nichal Gentilal: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisboa, Portugal
- Fang Cao: Department of Health Technology, Center for Magnetic Resonance, Technical University of Denmark, Kgs. Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Edwin Lok: Brain Tumor Center and Neuro-Oncology Unit, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Eric T Wong: Division of Hematology/Oncology, Department of Medicine, Rhode Island Hospital, Providence, Rhode Island, USA
- Matthew Ballo: Department of Radiation Oncology, West Cancer Center and Research Institute, Memphis, Tennessee, USA
- Martin Glas: Division of Clinical Neurooncology, Department of Neurology and German Cancer Consortium (DKTK) Partner Site, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Pedro C Miranda: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisboa, Portugal
- Axel Thielscher: Department of Health Technology, Center for Magnetic Resonance, Technical University of Denmark, Kgs. Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Anders R Korshoej: Department of Neurosurgery, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
3. Rapp J, Böhringer D, Schlunck G, Agostini H, Reinhard T, Bucher F. Addressing bias in manual segmentation of spheroid sprouting assays with U-Net. Mol Vis 2023; 29:197-205. [PMID: 38222450; PMCID: PMC10784213]
Abstract
Purpose Angiogenesis research faces the issue of false-positive findings due to the manual analysis pipelines involved in many assays. For example, the spheroid sprouting assay, one of the most prominent in vitro angiogenesis models, is commonly based on manual segmentation of sprouts. In this study, we propose a method for mitigating subconscious or fraudulent bias caused by manual segmentation. This approach involves training a U-Net model on manual segmentations and using the readout of this U-Net model instead of the potentially biased original segmentations. Our hypothesis is that U-Net will mitigate any bias in the manual segmentations because this will impose only random noise during model training. We assessed this idea using a simulation study. Methods The training data comprised 1531 phase contrast images and manual segmentations from various spheroid sprouting assays. We randomly divided the images 1:1 into two groups: a fictitious intervention group and a control group. Bias was simulated exclusively in the intervention group. We simulated two adversarial scenarios: 1) removal of a single randomly selected sprout and 2) systematic shortening of all sprouts. For both scenarios, we compared the original segmentation, adversarial segmentation, and respective U-Net readouts. In the second step, we assessed the sensitivity of this approach to detect a true positive effect. We sampled multiple treatment and control groups with decreasing treatment effects based on unbiased ground truth segmentation. Results This approach was able to mitigate bias in both adversarial scenarios. However, in both scenarios, U-Net detected the real treatment effects based on a comparison to the ground truth. Conclusions This method may prove useful for verifying positive findings in angiogenesis experiments with a manual analysis pipeline when full investigator masking has been neglected or is not feasible.
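A hypothetical sketch of the first adversarial manipulation (deleting one randomly selected sprout from a binary mask) is shown below; it treats every disconnected foreground object as a sprout, which is a simplification of the assay, and the function name is invented for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

def remove_random_sprout(mask, rng=None):
    """Delete one randomly chosen connected component from a binary mask
    (simplified stand-in for 'removal of a single randomly selected sprout')."""
    if rng is None:
        rng = np.random.default_rng()
    labels, n_objects = ndi.label(mask)
    out = mask.astype(bool).copy()
    if n_objects:
        out[labels == rng.integers(1, n_objects + 1)] = False
    return out
```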
Affiliation(s)
- Julian Rapp, Daniel Böhringer, Günther Schlunck, Hansjürgen Agostini, Thomas Reinhard, and Felicitas Bucher: Eye Center, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
4. Turcas A, Leucuta D, Balan C, Clementel E, Gheara C, Kacso A, Kelly SM, Tanasa D, Cernea D, Achimas-Cadariu P. Deep-learning magnetic resonance imaging-based automatic segmentation for organs-at-risk in the brain: Accuracy and impact on dose distribution. Phys Imaging Radiat Oncol 2023; 27:100454. [PMID: 37333894; PMCID: PMC10276287; DOI: 10.1016/j.phro.2023.100454]
Abstract
Background and purpose Normal tissue sparing in radiotherapy relies on proper delineation. While manual contouring is time consuming and subject to inter-observer variability, auto-contouring could optimize workflows and harmonize practice. We assessed the accuracy of a commercial, deep-learning, MRI-based tool for brain organs-at-risk delineation. Materials and methods Scans from thirty adult brain tumor patients were retrospectively recontoured manually. Two additional structure sets were obtained: AI (artificial intelligence) and AIedit (manually corrected auto-contours). For 15 selected cases, identical plans were optimized for each structure set. We used the Dice similarity coefficient (DSC) and mean surface distance (MSD) for geometric comparison, and gamma analysis and dose-volume-histogram comparison for dose metric evaluation. The Wilcoxon signed-rank test was used for paired data, the Spearman coefficient (ρ) for correlations, and Bland-Altman plots to assess the level of agreement. Results Auto-contouring was significantly faster than manual contouring (1.1/20 min, p < 0.01). Median DSC/MSD were 0.7/0.9 mm for AI and 0.8/0.5 mm for AIedit. DSC was significantly correlated with structure size (ρ = 0.76, p < 0.01), with higher DSC for large structures. Median gamma pass rate was 74% (71-81%) for Plan_AI and 82% (75-86%) for Plan_AIedit, with no correlation with DSC or MSD. Differences between Dmean_AI and Dmean_Ref were ≤ 0.2 Gy (p < 0.05). The dose difference was moderately correlated with DSC. Bland-Altman plots showed minimal discrepancy (0.1/0) between AI and reference Dmean/Dmax. Conclusions The AI model showed good accuracy for large structures, but developments are required for smaller ones. Auto-segmentation was significantly faster, with minor differences in dose distribution caused by geometric variations.
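The two geometric measures used here can be written compactly; the sketch below is a generic NumPy/SciPy formulation rather than the study's evaluation code, assuming two boolean masks on the same grid and a known voxel spacing.

```python
import numpy as np
from scipy import ndimage as ndi

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (in mm if the spacing is in mm)."""
    def surface(m):
        m = m.astype(bool)
        return m & ~ndi.binary_erosion(m)
    sa, sb = surface(a), surface(b)
    d_a_to_b = ndi.distance_transform_edt(~sb, sampling=spacing)[sa]
    d_b_to_a = ndi.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (d_a_to_b.mean() + d_b_to_a.mean()) / 2.0
```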
Affiliation(s)
- Andrada Turcas: The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium; SIOP Europe, The European Society for Paediatric Oncology (SIOPE), QUARTET Project, Brussels, Belgium; University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania; Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Daniel Leucuta: University of Medicine and Pharmacy “Iuliu Hatieganu”, Department of Medical Informatics and Biostatistics, Cluj-Napoca, Romania
- Cristina Balan: Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania; “Babes-Bolyai” University, Faculty of Physics, Cluj-Napoca, Romania
- Enrico Clementel: The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- Cristina Gheara: Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania; “Babes-Bolyai” University, Faculty of Physics, Cluj-Napoca, Romania
- Alex Kacso: University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania; Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Sarah M. Kelly: The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium; SIOP Europe, The European Society for Paediatric Oncology (SIOPE), QUARTET Project, Brussels, Belgium
- Delia Tanasa: Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Dana Cernea: Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Patriciu Achimas-Cadariu: University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania; Oncology Institute “Prof. Dr. Ion Chiricuta”, Surgery Department, Cluj-Napoca, Romania
5. Lin CY, Chou LS, Wu YH, Kuo JS, Mehta MP, Shiau AC, Liang JA, Hsu SM, Wang TH. Developing an AI-assisted planning pipeline for hippocampal avoidance whole brain radiotherapy. Radiother Oncol 2023; 181:109528. [PMID: 36773828; DOI: 10.1016/j.radonc.2023.109528]
Abstract
BACKGROUND AND PURPOSE Hippocampal avoidance whole brain radiotherapy (HA-WBRT) is effective for controlling disease and preserving neurocognitive function in patients with brain metastases. However, contouring and planning for HA-WBRT are complex and time-consuming. We designed and evaluated a pipeline using deep learning tools for a fully automated treatment planning workflow to generate HA-WBRT radiotherapy plans. MATERIALS AND METHODS We retrospectively collected data from 50 adult patients who received HA-WBRT. Following the RTOG 0933 clinical trial protocol guidelines, all organs-at-risk (OARs) and the clinical target volume (CTV) were contoured by experienced radiation oncologists. A deep-learning segmentation model was designed and trained. Next, we developed a volumetric-modulated arc therapy (VMAT) auto-planning algorithm for 30 Gy in 10 fractions. Automated segmentations were evaluated using the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95% HD). Auto-plans were evaluated by the percentage of the planning target volume (PTV) receiving 30 Gy (V30Gy), the conformity index (CI) and homogeneity index (HI) of the PTV, the minimum dose (D100%) and maximum dose (Dmax) for the hippocampus, and Dmax for the lens, eyes, optic nerve, brain stem, and chiasm. RESULTS We developed a deep-learning segmentation model and an auto-planning script. For the 10 cases in the independent test set, the overall average DSC was greater than 0.8 and the 95% HD was less than 7 mm. All auto-plans met the RTOG 0933 criteria. The time to automatically create an HA-WBRT plan was about 10 min. CONCLUSIONS An artificial intelligence (AI)-assisted pipeline using deep learning tools can rapidly and accurately generate clinically acceptable HA-WBRT plans with minimal manual intervention and increase the efficiency of this treatment for brain metastases.
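As a rough illustration of the reported plan metrics (V30Gy, Dmax, D100%), a 3D dose grid and a structure mask are sufficient; the sketch below uses simplified, assumed definitions and is not the auto-planning script developed in the study.

```python
import numpy as np

def dvh_metrics(dose_gy, structure_mask, prescription_gy=30.0):
    """Per-structure dose metrics from a 3D dose grid (illustrative definitions)."""
    d = dose_gy[structure_mask.astype(bool)]
    return {
        "V_rx_%": 100.0 * float(np.mean(d >= prescription_gy)),  # volume receiving the prescription dose
        "Dmax_Gy": float(d.max()),                               # maximum dose in the structure
        "D100%_Gy": float(d.min()),                              # dose covering 100% of the volume
    }
```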
Affiliation(s)
- Chih-Yuan Lin: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Lin-Shan Chou: Division of Radiation Oncology, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan
- Yuan-Hung Wu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Radiation Oncology, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- John S Kuo: Neuroscience and Brain Disease Center, China Medical University, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan; Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Minesh P Mehta: Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, Florida, USA; Florida International University, Miami, Florida, USA
- An-Cheng Shiau: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan; Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan
- Ji-An Liang: Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan; Department of Medicine, China Medical University, Taichung, Taiwan
- Shih-Ming Hsu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ti-Hao Wang: Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan
6. Krishnapriya S, Karuna Y. Pre-trained deep learning models for brain MRI image classification. Front Hum Neurosci 2023; 17:1150120. [PMID: 37151901; PMCID: PMC10157370; DOI: 10.3389/fnhum.2023.1150120]
Abstract
Brain tumors are serious conditions caused by uncontrolled and abnormal cell division. Tumors can have devastating implications if not accurately and promptly detected. Magnetic resonance imaging (MRI) is one of the methods frequently used to detect brain tumors owing to its excellent resolution. In the past few decades, substantial research has been conducted on classifying brain images, ranging from traditional methods to deep-learning techniques such as convolutional neural networks (CNNs). To accomplish classification, traditional machine-learning methods require manually created features. In contrast, a CNN achieves classification by extracting visual features from unprocessed images. The size of the training dataset has a significant impact on the features that a CNN extracts, and a CNN tends to overfit when the dataset is small. Deep CNNs (DCNNs) with transfer learning have therefore been developed. The aim of this work was to investigate the brain MR image categorization potential of pre-trained DCNN VGG-19, VGG-16, ResNet50, and Inception V3 models using data augmentation and transfer learning techniques. Validation on the test set using accuracy, recall, precision, and F1 score showed that the pre-trained VGG-19 model with transfer learning exhibited the best performance. In addition, these methods offer end-to-end classification of raw images without the need for manual attribute extraction.
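A minimal PyTorch/torchvision sketch of the transfer-learning recipe described (pre-trained backbone, new classification head) follows; the class count, learning rate, and choice of frozen layers are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # assumed number of brain-MRI classes, not the paper's value
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False  # freeze the convolutional feature extractor
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```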
7. Pálsson S, Cerri S, Poulsen HS, Urup T, Law I, Van Leemput K. Predicting survival of glioblastoma from automatic whole-brain and tumor segmentation of MR images. Sci Rep 2022; 12:19744. [PMID: 36396681; PMCID: PMC9671967; DOI: 10.1038/s41598-022-19223-3]
Abstract
Survival prediction models can potentially be used to guide treatment of glioblastoma patients. However, currently available MR imaging biomarkers holding prognostic information are often challenging to interpret, have difficulties generalizing across data acquisitions, or are only applicable to pre-operative MR data. In this paper we aim to address these issues by introducing novel imaging features that can be automatically computed from MR images and fed into machine learning models to predict patient survival. The features we propose have a direct anatomical-functional interpretation: They measure the deformation caused by the tumor on the surrounding brain structures, comparing the shape of various structures in the patient's brain to their expected shape in healthy individuals. To obtain the required segmentations, we use an automatic method that is contrast-adaptive and robust to missing modalities, making the features generalizable across scanners and imaging protocols. Since the features we propose do not depend on characteristics of the tumor region itself, they are also applicable to post-operative images, which have been much less studied in the context of survival prediction. Using experiments involving both pre- and post-operative data, we show that the proposed features carry prognostic value in terms of overall- and progression-free survival, over and above that of conventional non-imaging features.
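To make the modelling step concrete, a hedged sketch follows in which automatically computed deformation features are fed to a proportional-hazards model with the lifelines package; the feature name and all values are synthetic placeholders, not data or results from the paper.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic table: one row per patient, a deformation feature plus follow-up.
df = pd.DataFrame({
    "ventricle_deformation": [0.8, 1.5, 0.3, 2.1, 1.0, 0.6, 1.8, 0.9],
    "time_months": [14, 9, 20, 6, 16, 12, 7, 11],
    "event": [1, 1, 0, 1, 1, 1, 1, 0],  # 1 = death observed, 0 = censored
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
print(cph.summary[["coef", "p"]])
```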
Affiliation(s)
- Sveinn Pálsson: Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Stefano Cerri: Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Hans Skovgaard Poulsen: Department of Oncology, The Finsen Center, Rigshospitalet, Copenhagen, Denmark
- Thomas Urup: Department of Oncology, The Finsen Center, Rigshospitalet, Copenhagen, Denmark
- Ian Law: Department of Clinical Physiology, Nuclear Medicine and PET, Center of Diagnostic Investigation, Rigshospitalet, Copenhagen, Denmark
- Koen Van Leemput: Department of Health Technology, Technical University of Denmark, Lyngby, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, USA
8. Liu L, Liu Y, Zhou J, Guo C, Duan H. A novel MCF-Net: Multi-level context fusion network for 2D medical image segmentation. Comput Methods Programs Biomed 2022; 226:107160. [PMID: 36191351; DOI: 10.1016/j.cmpb.2022.107160]
Abstract
Medical image segmentation is a crucial step in clinical applications for the diagnosis and analysis of some diseases. U-Net-based convolutional neural networks have achieved impressive performance in medical image segmentation tasks. However, their multi-level contextual information integration capability and feature extraction ability are often insufficient. In this paper, we present a novel multi-level context fusion network (MCF-Net) that improves the performance of U-Net on various segmentation tasks by designing three modules, a hybrid attention-based residual atrous convolution (HARA) module, a multi-scale feature memory (MSFM) module, and a multi-receptive field fusion (MRFF) module, to fuse multi-scale contextual information. The HARA module effectively extracts multi-receptive-field features by combining atrous spatial pyramid pooling with an attention mechanism. We further design the MSFM and MRFF modules to fuse features of different levels and effectively extract contextual information. The proposed MCF-Net was evaluated on the ISIC 2018, DRIVE, BUSI, and Kvasir-SEG datasets, which contain challenging images of many sizes and widely varying anatomy. The experimental results show that MCF-Net is very competitive with other U-Net models, and it offers tremendous potential as a general-purpose deep learning model for 2D medical image segmentation.
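The atrous spatial pyramid pooling idea that the HARA module builds on can be sketched in a few lines of PyTorch; this generic module uses common default dilation rates and omits HARA's attention branch, so it is an illustration rather than the MCF-Net code.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions whose
    outputs are concatenated and fused into multi-receptive-field features."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))
```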
Affiliation(s)
- Lizhu Liu: Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China; National Engineering Laboratory of Robot Visual Perception and Control Technology, School of Robotics, Hunan University, Changsha 410082, China
- Yexin Liu: Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Jian Zhou: Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Cheng Guo: Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Huigao Duan: Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
10. Crouzen JA, Petoukhova AL, Wiggenraad RGJ, Hutschemaekers S, Gadellaa-van Hooijdonk CGM, van der Voort van Zyp NCMG, Mast ME, Zindler JD. Development and evaluation of an automated EPTN-consensus based organ at risk atlas in the brain on MRI. Radiother Oncol 2022; 173:262-268. [PMID: 35714807; DOI: 10.1016/j.radonc.2022.06.004]
Abstract
BACKGROUND AND PURPOSE During radiotherapy treatment planning, avoidance of organs at risk (OARs) is important. An international consensus-based delineation guideline with 34 OARs in the brain was recently published. We developed an MR-based OAR autosegmentation atlas and evaluated its performance compared to manual delineation. MATERIALS AND METHODS Anonymized cerebral T1-weighted MR scans (voxel size 0.9 × 0.9 × 0.9 mm³) were available. OARs were manually delineated according to the international consensus. Fifty MR scans were used to develop the autosegmentation atlas in a commercially available treatment planning system (RayStation®). The performance of this atlas was tested on another 40 MR scans by automatically delineating the 34 OARs defined by the 2018 EPTN consensus. Spatial overlap between manual and automated delineations was determined by calculating the Dice similarity coefficient (DSC). Two radiation oncologists rated the quality of each automatically delineated OAR. The time needed to delineate all OARs manually or to adjust automatically delineated OARs was recorded. RESULTS DSC was ≥0.75 for 31 (91%) of the 34 automated OAR delineations. Delineations were rated by the radiation oncologists as excellent or good for 29 (85%) of the 34 OARs, while 4 were rated fair (12%) and 1 was rated poor (3%). Interobserver agreement between the radiation oncologists ranged from 77% to 100% per OAR. The time to manually delineate all OARs was 88.5 minutes, while the time needed to adjust automatically delineated OARs was 15.8 minutes. CONCLUSION Autosegmentation of OARs enables high-quality contouring within a limited time. Accurate OAR delineation helps to define OAR constraints to mitigate serious complications and supports the development of NTCP models.
Affiliation(s)
- Jeroen A Crouzen: Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Anna L Petoukhova: Haaglanden Medical Center, Department of Medical Physics, BA Leidschendam, The Netherlands
- Ruud G J Wiggenraad: Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Stefan Hutschemaekers: Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Mirjam E Mast: Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Jaap D Zindler: Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
11. Ni J, Wu J, Elazab A, Tong J, Chen Z. DNL-Net: deformed non-local neural network for blood vessel segmentation. BMC Med Imaging 2022; 22:109. [PMID: 35668351; PMCID: PMC9169317; DOI: 10.1186/s12880-022-00836-z]
Abstract
BACKGROUND The non-local module has been used in the literature primarily to capture long-range dependencies. However, it suffers from prohibitive computational complexity and lacks interactions among positions across channels. METHODS We present a deformed non-local neural network (DNL-Net) for medical image segmentation, which has two prominent components: a deformed non-local (DNL) module and multi-scale feature fusion. The former optimizes the structure of the non-local block (NL) and hence significantly reduces its excessive computation and memory usage. The latter is derived from attention mechanisms to fuse the features of different levels and improve the ability to exchange information across channels. In addition, we introduce a residual squeeze-and-excitation pyramid pooling (RSEP) module that, like spatial pyramid pooling, effectively resamples the features at different scales and improves the network's receptive field. RESULTS The proposed method achieved 96.63% and 92.93% for the Dice coefficient and mean intersection over union, respectively, on the intracranial blood vessel dataset. Also, DNL-Net attained 86.64%, 96.10%, and 98.37% for sensitivity, accuracy, and area under the receiver operating characteristic curve, respectively, on the DRIVE dataset. CONCLUSIONS DNL-Net outperforms other current state-of-the-art vessel segmentation methods overall, which indicates that the proposed network is well suited to blood vessel segmentation and is of great clinical significance.
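For context, the standard embedded-Gaussian non-local block that DNL-Net sets out to streamline looks roughly like the PyTorch sketch below; the deformed variant proposed in the paper, which reduces the quadratic attention cost, is not reproduced here.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Standard non-local block: every spatial position attends to all others."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.phi(x).flatten(2)                     # B x C' x HW
        v = self.g(x).flatten(2).transpose(1, 2)       # B x HW x C'
        attn = torch.softmax(q @ k, dim=-1)            # B x HW x HW attention map
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection
```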
Affiliation(s)
- Jiajia Ni: College of Internet of Things Engineering, HoHai University, Changzhou, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jianhuang Wu: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ahmed Elazab: School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Computer Science Department, Misr Higher Institute for Commerce and Computers, Mansoura, Egypt
- Jing Tong: College of Internet of Things Engineering, HoHai University, Changzhou, China
- Zhengming Chen: College of Internet of Things Engineering, HoHai University, Changzhou, China
12. Shin H, Choi GS, Shon OJ, Kim GB, Chang MC. Development of convolutional neural network model for diagnosing meniscus tear using magnetic resonance image. BMC Musculoskelet Disord 2022; 23:510. [PMID: 35637451; PMCID: PMC9150332; DOI: 10.1186/s12891-022-05468-6]
Abstract
Background Deep learning (DL) is an advanced machine learning approach used in diverse areas, such as image analysis, bioinformatics, and natural language processing. A convolutional neural network (CNN) is a representative DL model that is advantageous for image recognition and classification. In this study, we aimed to develop a CNN to detect meniscal tears and classify tear types using coronal and sagittal magnetic resonance (MR) images of each patient. Methods We retrospectively collected 599 cases (medial meniscus tear = 384, lateral meniscus tear = 167, and medial and lateral meniscus tear = 48) of knee MR images from patients with meniscal tears and 449 cases of knee MR images from patients without meniscal tears. To develop the DL model for evaluating the presence of meniscal tears, all the collected knee MR images of 1048 cases were used. To develop the DL model for evaluating the type of meniscal tear, 538 cases with meniscal tears (horizontal tear = 268, complex tear = 147, radial tear = 48, and longitudinal tear = 75) and 449 cases without meniscal tears were used. Additionally, a CNN algorithm was used. To measure the model’s performance, 70% of the included data were randomly assigned to the training set, and the remaining 30% were assigned to the test set. Results The areas under the curve (AUCs) of our model were 0.889, 0.817, and 0.924 for medial meniscal tears, lateral meniscal tears, and medial and lateral meniscal tears, respectively. The AUCs for the horizontal, complex, radial, and longitudinal tears were 0.761, 0.850, 0.601, and 0.858, respectively. Conclusion Our study showed that the CNN model has the potential to be used in diagnosing the presence of meniscal tears and differentiating the types of meniscal tears.
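The reported AUC figures correspond to an evaluation like the short scikit-learn sketch below; the labels and probabilities shown are synthetic stand-ins rather than data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = tear present, 0 = absent; scores are hypothetical CNN output probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.18, 0.71, 0.64, 0.41, 0.09, 0.83, 0.33, 0.57, 0.26])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```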
Affiliation(s)
- Hyunkwang Shin: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si, Republic of Korea
- Gyu Sang Choi: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si, Republic of Korea
- Oog-Jin Shon: Department of Orthopedic Surgery, Yeungnam University College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 42415, Republic of Korea
- Gi Beom Kim: Department of Orthopedic Surgery, Yeungnam University College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 42415, Republic of Korea
- Min Cheol Chang: Department of Physical Medicine and Rehabilitation, College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 42415, Republic of Korea
13. Huang P, Li D, Jiao Z, Wei D, Cao B, Mo Z, Wang Q, Zhang H, Shen D. Common Feature Learning for Brain Tumor MRI Synthesis by Context-aware Generative Adversarial Network. Med Image Anal 2022; 79:102472. [DOI: 10.1016/j.media.2022.102472]
14. Li J, Yu H, Chen C, Ding M, Zha S. Category guided attention network for brain tumor segmentation in MRI. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac628a]
Abstract
Objective. Magnetic resonance imaging (MRI) has been widely used for the analysis and diagnosis of brain diseases. Accurate and automatic brain tumor segmentation is of paramount importance for radiation treatment. However, low tissue contrast in tumor regions makes it a challenging task. Approach. We propose a novel segmentation network named Category Guided Attention U-Net (CGA U-Net). In this model, we design a Supervised Attention Module (SAM) based on the attention mechanism, which can capture more accurate and stable long-range dependencies in feature maps without introducing much computational cost. Moreover, we propose an intra-class update approach to reconstruct feature maps by aggregating pixels of the same category. Main results. Experimental results on the BraTS 2019 dataset show that the proposed method outperforms the state-of-the-art algorithms in both segmentation performance and computational complexity. Significance. The CGA U-Net can effectively capture the global semantic information in the MRI image by using the SAM module, while significantly reducing the computational cost. Code is available at https://github.com/delugewalker/CGA-U-Net.
15. Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900; PMCID: PMC8943308; DOI: 10.1177/20552076221074122]
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as Dice score). Statistical tests We compared median Dice scores in segmenting the whole tumour, tumour core and enhanced tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning technology is cited the most, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
Affiliation(s)
- Jayendra M Bhalodiya: Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung: Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis: Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
16. Yoganathan SA, Zhang R. Segmentation of Organs and Tumor within Brain Magnetic Resonance Images Using K-Nearest Neighbor Classification. J Med Phys 2022; 47:40-49. [PMID: 35548028; PMCID: PMC9084578; DOI: 10.4103/jmp.jmp_87_21]
Abstract
PURPOSE To fully exploit the benefits of magnetic resonance imaging (MRI) for radiotherapy, it is desirable to develop segmentation methods to delineate patients' MRI images quickly and accurately. The purpose of this work is to develop a semi-automatic method to segment organs and tumor within the brain on standard T1- and T2-weighted MRI images. METHODS AND MATERIALS Twelve brain cancer patients were retrospectively included in this study, and a simple rigid registration was used to align all the images to the same spatial coordinates. Regions of interest were created for organ and tumor segmentations. The K-nearest neighbor (KNN) classification algorithm was used to characterize the knowledge of previous segmentations using 15 image features (T1 and T2 image intensity, 4 Gabor filtered images, 6 image gradients, and 3 Cartesian coordinates), and the trained models were used to predict organ and tumor contours. Dice similarity coefficient (DSC), normalized surface dice, sensitivity, specificity, and Hausdorff distance were used to evaluate the performance of the segmentations. RESULTS Our semi-automatic segmentations matched the ground truth closely. The mean DSC value was between 0.49 (optical chiasm) and 0.89 (right eye) for organ segmentations and was 0.87 for tumor segmentation. The overall performance of our method is comparable or superior to previous work, and the accuracy of our semi-automatic segmentation is generally better for large-volume objects. CONCLUSION The proposed KNN method can accurately segment organs and tumor using standard brain MRI images, provides fast and accurate image processing and planning tools, and paves the way for clinical implementation of MRI-guided radiotherapy and adaptive radiotherapy.
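The voxel classification step can be pictured with scikit-learn as below; the random feature matrices, neighbour count, and label count are placeholders, since the paper's actual feature extraction (intensities, Gabor responses, gradients, coordinates) is not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 15))    # 15 per-voxel features from contoured cases
y_train = rng.integers(0, 3, size=5000)  # 3 hypothetical structure labels
X_new = rng.normal(size=(1000, 15))      # features of voxels from a new scan

knn = KNeighborsClassifier(n_neighbors=15, weights="distance")
knn.fit(X_train, y_train)
predicted_labels = knn.predict(X_new)    # per-voxel structure prediction
```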
Affiliation(s)
- S. A. Yoganathan: Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA
- Rui Zhang (corresponding author): Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA; Department of Radiation Oncology, Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana, USA
17. Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021; 48:7806-7825. [PMID: 34668207; PMCID: PMC8678400; DOI: 10.1002/mp.15308]
Abstract
PURPOSE In the multi-atlas segmentation (MAS) method, a large enough atlas set, which can cover the complete spectrum of the whole population pattern of the target object, will benefit the segmentation quality. However, the difficulty in obtaining and generating such a large set of atlases and the computational burden required in the segmentation procedure make this approach impractical. In this paper, we propose a method called SOMA to select subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology, following the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases have no need to be globally similar to the target subject and also have no need to be overall similar to the target object. METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. Considering the computational complexity, we utilize an all-to-template strategy to align all images to the same image space, belonging to the root image determined by the minimum spanning tree (MST) strategy among a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best match for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and the subimage matching is conducted in a nonlocal search manner to further increase the accuracy of boundary matching. Delineation is based on a U-net-based deep learning network, where the original grayscale image together with the fuzzy map from refined recognition forms a two-channel input to the network, and the output is a segmentation map of the target object. RESULTS Experiments are conducted on computed tomography (CT) images with different qualities in two body regions - head and neck (H&N) and thorax - from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and also in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax. CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as the immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA, in which segmentation accuracy increases with precision atlases and gradually refined object matching.
Affiliation(s)
- Jieyu Li: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
18. Wang Z, Demarcy T, Vandersteen C, Gnansia D, Raffaelli C, Guevara N, Delingette H. Bayesian logistic shape model inference: Application to cochlear image segmentation. Med Image Anal 2021; 75:102268. [PMID: 34710654; DOI: 10.1016/j.media.2021.102268]
Abstract
Incorporating shape information is essential for the delineation of many organs and anatomical structures in medical images. While previous work has mainly focused on parametric spatial transformations applied to reference template shapes, in this paper, we address the Bayesian inference of parametric shape models for segmenting medical images with the objective of providing interpretable results. The proposed framework defines a likelihood appearance probability and a prior label probability based on a generic shape function through a logistic function. A reference length parameter defined in the sigmoid controls the trade-off between shape and appearance information. The inference of shape parameters is performed within an Expectation-Maximisation approach in which a Gauss-Newton optimization stage provides an approximation of the posterior probability of the shape parameters. This framework is applied to the segmentation of cochlear structures from clinical CT images constrained by a 10-parameter shape model. It is evaluated on three different datasets, one of which includes more than 200 patient images. The results show performances comparable to supervised methods and better than previously proposed unsupervised ones. It also enables an analysis of parameter distributions and the quantification of segmentation uncertainty, including the effect of the shape model.
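One way to read the prior-label construction is the toy formulation below, assuming the generic shape function behaves like a signed distance that is negative inside the structure; the full model, with its appearance likelihood and EM/Gauss-Newton inference, is far richer than this sketch.

```python
import numpy as np

def logistic_shape_prior(shape_value, reference_length):
    """Prior probability of the foreground label from a signed shape function
    (assumed negative inside the structure), squashed through a logistic
    function; the reference length controls how quickly the prior decays
    across the boundary, i.e. the shape-versus-appearance trade-off."""
    return 1.0 / (1.0 + np.exp(shape_value / reference_length))
```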
Affiliation(s)
- Zihao Wang: Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, France
- Thomas Demarcy: Oticon Medical, 14 Chemin de Saint-Bernard Porte, Vallauris 06220, France
- Clair Vandersteen: Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, France; Head and Neck University Institute, Nice University Hospital, 31 Avenue de Valombrose, Nice 06100, France
- Dan Gnansia: Oticon Medical, 14 Chemin de Saint-Bernard Porte, Vallauris 06220, France
- Charles Raffaelli: Department of Radiology, Centre Hospitalier Universitaire de Nice, 31 Avenue de Valombrose, Nice 06100, France
- Nicolas Guevara: Head and Neck University Institute, Nice University Hospital, 31 Avenue de Valombrose, Nice 06100, France
- Hervé Delingette: Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, France
19. Liu B, Jiang F, Sun J, Wang F, Liu K. Biomacromolecule-based photo-thermal agents for tumor treatment. J Mater Chem B 2021; 9:7007-7022. [PMID: 34023868; DOI: 10.1039/d1tb00725d]
Abstract
Cancer treatment has become one of the biggest challenges in modern medicine. Recently, many efforts have been devoted to treating tumors by surgical resection, radiotherapy, or chemotherapy. In comparison to these methods, photo-thermal therapy (PTT), with its noninvasive, controllable, direct, and precise characteristics, has received tremendous attention for eliminating tumor cells over the past decades. In particular, PTT based on biomacromolecule-based photo-thermal agents (PTAs) outperforms other systems, with high photo-thermal efficiency, simple coating, and low immunogenicity. Considering the unique advantages of biomacromolecule-based PTAs in tumor treatment, it is necessary to summarize the recent progress in the field of biomacromolecule-based PTAs for tumor treatment. Herein, this minireview outlines recent progress in the fabrication and applications of biomacromolecule-based PTAs. Within this framework, various types of biomacromolecule-based PTAs are highlighted, including cell-based, protein-based, nucleotide-based, and polysaccharide-based agents. In each section, the functional design, photo-thermal effects, and potential clinical applications of each type of PTA are discussed. Finally, a brief perspective on the development of biomacromolecule-based PTAs is presented.
Affiliation(s)
- Bin Liu: Department of Urology, China-Japan Union Hospital of Jilin University, Changchun 130033, China; State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022, China
- Fuquan Jiang: Department of Urology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Jing Sun: Institute of Organic Chemistry, University of Ulm, Albert-Einstein-Allee 11, 89081 Ulm, Germany
- Fan Wang: State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022, China
- Kai Liu: State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022, China; Department of Chemistry, Tsinghua University, Beijing 100084, China
20. Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13]
Abstract
Artificial intelligence (AI) is a branch of computer science that tries to mimic human-like intelligence in machines, using computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subfield of AI that uses data-driven algorithms that learn to imitate human behavior based on previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Each of the many algorithms used in radiation oncology has advantages and limitations, as well as different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order by identifying specific areas in which quality and efficiency can be improved by ML. The RT workflow is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms was performed for each stage.
Affiliation(s)
- Melek Yakar: Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey; Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
- Durmus Etiz: Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey; Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
21. Li J, Udupa JK, Tong Y, Wang L, Torigian DA. Segmentation evaluation with sparse ground truth data: Simulating true segmentations as perfect/imperfect as those generated by humans. Med Image Anal 2021; 69:101980. [PMID: 33588116; PMCID: PMC7933105; DOI: 10.1016/j.media.2021.101980]
Abstract
Fully annotated data sets play an important role in medical image segmentation and evaluation. Expense and imprecision are the two main issues in generating ground truth (GT) segmentations. In this paper, in an attempt to overcome these two issues jointly, we propose a method named SparseGT, which exploits variability among human segmenters to maximally reduce the manual workload of GT generation for evaluating actual segmentations by algorithms. Pseudo ground truth (p-GT) segmentations are created with only a small fraction of the workload and with human-level perfection/imperfection, and they can be used in practice as a substitute for fully manual GT when evaluating segmentation algorithms at the same precision. p-GT segmentations are generated by first selecting slices sparsely, conducting manual contouring only on these sparse slices, and subsequently filling in segmentations on the remaining slices automatically. By creating p-GT with different levels of sparseness, we determine the largest workload reduction achievable for each considered object such that the variability of the generated p-GT is statistically indistinguishable from inter-segmenter differences in fully manual GT segmentations for that object. Furthermore, we investigate the segmentation evaluation errors introduced by variability in manual GT by applying p-GT to the evaluation of actual segmentations produced by an algorithm. Experiments are conducted on ∼500 computed tomography (CT) studies involving six objects in two body regions, Head & Neck and Thorax, where the optimal sparseness and corresponding evaluation errors are determined for each object and each strategy. Our results indicate that creating p-GT by the concatenated strategy of uniformly selecting sparse slices and filling in segmentations via a deep-learning (DL) network yields the highest manual workload reduction, ∼80-96%, without sacrificing evaluation accuracy compared to fully manual GT. Nevertheless, the other strategies also make clear contributions in particular situations. A non-uniform slice-selection strategy shows its advantage for objects whose shape changes irregularly from slice to slice. An interpolation strategy for filling in segmentations can achieve ∼60-90% workload reduction while simulating human-level GT without the need for an actual training stage, and shows potential for enlarging data sets used to train p-GT generation networks. We conclude not only that over 90% workload reduction is feasible without sacrificing evaluation accuracy, but also that the suitable strategy and the optimal achievable sparseness level for creating p-GT are object- and application-specific.
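The interpolation strategy of filling slices between sparsely annotated key slices can be illustrated with a short sketch. The snippet below is a minimal, hypothetical stand-in (not the authors' code): it linearly interpolates signed distance maps of the annotated key slices to fill the slices in between; the paper's non-uniform slice selection and deep-learning filling strategies are not reproduced, and all variable names and the toy volume are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask2d):
    # Positive inside the object, negative outside (a common convention for shape interpolation)
    inside = distance_transform_edt(mask2d)
    outside = distance_transform_edt(~mask2d)
    return inside - outside

def fill_between(volume_mask, annotated_idx):
    """Fill unannotated slices of a 3D binary mask by interpolating the signed
    distance maps of the sparsely annotated key slices (illustrative stand-in
    for the interpolation strategy; the DL-based filling is not reproduced)."""
    filled = volume_mask.copy()
    annotated_idx = sorted(annotated_idx)
    for lo, hi in zip(annotated_idx[:-1], annotated_idx[1:]):
        sd_lo = signed_distance(volume_mask[lo])
        sd_hi = signed_distance(volume_mask[hi])
        for z in range(lo + 1, hi):
            w = (z - lo) / (hi - lo)                  # linear weight between key slices
            filled[z] = ((1 - w) * sd_lo + w * sd_hi) >= 0
    return filled

# Toy example: a square "organ" that grows with slice index; annotate every 5th slice
gt = np.zeros((40, 64, 64), dtype=bool)
for z in range(40):
    r = 6 + z // 4
    gt[z, 32 - r:32 + r, 32 - r:32 + r] = True
sparse_idx = sorted(set(range(0, 40, 5)) | {39})      # include the last slice as a key slice
p_gt = fill_between(gt, sparse_idx)
print("voxel agreement with dense GT:", (p_gt == gt).mean())
```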
Collapse
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
| | - Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States.
| | - Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
| | - Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China
| | - Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
| |
Collapse
|
22
|
Menze B, Isensee F, Wiest R, Wiestler B, Maier-Hein K, Reyes M, Bakas S. Analyzing magnetic resonance imaging data from glioma patients using deep learning. Comput Med Imaging Graph 2021; 88:101828. [PMID: 33571780 PMCID: PMC8040671 DOI: 10.1016/j.compmedimag.2020.101828] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 10/29/2020] [Accepted: 11/18/2020] [Indexed: 12/21/2022]
Abstract
The quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has seen a significant rise in the clinical use of computational tools. The technology underlying the vast majority of these tools is machine learning and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It provides an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with an emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. We further offer an overview of state-of-the-art methods in glioma image segmentation, again with an emphasis on publicly available tools and the deep learning algorithms that emerged in the context of the BraTS challenge.
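As a concrete illustration of how BraTS-style annotations are typically evaluated, the sketch below computes Dice scores for the three nested tumor regions commonly reported in the challenge (whole tumor, tumor core, enhancing tumor), assuming the conventional BraTS label encoding (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor). This is a generic evaluation snippet with toy data, not code from any of the reviewed tools; check the specific dataset release for the exact label convention.

```python
import numpy as np

# Assumed BraTS-style label groupings (verify against the dataset release notes)
REGIONS = {
    "whole_tumor":     {1, 2, 4},
    "tumor_core":      {1, 4},
    "enhancing_tumor": {4},
}

def dice(pred_labels, gt_labels, labels):
    """Dice overlap for the union of the given label values."""
    p = np.isin(pred_labels, list(labels))
    g = np.isin(gt_labels, list(labels))
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Toy usage with random label maps standing in for a real prediction and ground truth
rng = np.random.default_rng(0)
pred = rng.choice([0, 1, 2, 4], size=(64, 64, 64))
gt = rng.choice([0, 1, 2, 4], size=(64, 64, 64))
for name, labels in REGIONS.items():
    print(f"{name}: Dice = {dice(pred, gt, labels):.3f}")
```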
Collapse
Affiliation(s)
- Bjoern Menze
- Quantitative Biomedicine, University of Zurich, Zurich, Switzerland.
| | | | - Roland Wiest
- Support Center for Advanced Neuroimaging, Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern, Switzerland.
| | | | | | | | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
23
|
Test-time adaptable neural networks for robust medical image segmentation. Med Image Anal 2021; 68:101907. [DOI: 10.1016/j.media.2020.101907] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 11/11/2020] [Accepted: 11/12/2020] [Indexed: 11/20/2022]
|
24
|
Puonti O, Van Leemput K, Saturnino GB, Siebner HR, Madsen KH, Thielscher A. Accurate and robust whole-head segmentation from magnetic resonance images for individualized head modeling. Neuroimage 2020; 219:117044. [PMID: 32534963 PMCID: PMC8048089 DOI: 10.1016/j.neuroimage.2020.117044] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Revised: 05/15/2020] [Accepted: 06/09/2020] [Indexed: 12/18/2022] Open
Abstract
Transcranial brain stimulation (TBS) has been established as a method for modulating and mapping the function of the human brain, and as a potential treatment tool in several brain disorders. Typically, the stimulation is applied using a one-size-fits-all approach with predetermined locations for the electrodes, in transcranial electric stimulation (TES), or the coil, in transcranial magnetic stimulation (TMS), which disregards anatomical variability between individuals. However, the induced electric field distribution in the head depends largely on anatomical features, implying the need for individually tailored stimulation protocols for focal dosing. This requires detailed models of the individual head anatomy, combined with electric field simulations, to find an optimal stimulation protocol for a given cortical target. Considering the anatomical and functional complexity of different brain disorders and pathologies, it is crucial to account for anatomical variability in order to translate TBS from a research tool into a viable treatment option. In this article we present a new method, called CHARM, for automated segmentation of fifteen different head tissues from magnetic resonance (MR) scans. The new method compares favorably to two freely available software tools on a five-tissue segmentation task, while obtaining reasonable segmentation accuracy over all fifteen tissues. The method automatically adapts to variability in the input scans and can thus be applied directly to clinical or research scans acquired with different scanners, sequences, or settings. We show that an increase in automated segmentation accuracy results in a lower relative error in electric field simulations when compared to anatomical head models constructed from reference segmentations. However, the improved segmentations and, by implication, the electric field simulations are still affected by systematic artifacts in the input MR scans. As long as these artifacts are unaccounted for, they can lead to local simulation differences of up to 30% of the peak field strength relative to reference simulations. Finally, we demonstrate, as an example, the effect of including all fifteen tissue classes in the field simulations versus the standard approach of using only five tissue classes, and show that for specific stimulation configurations the local differences can reach 10% of the peak field strength.
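The reported local simulation differences, expressed as a percentage of the peak field strength in the reference simulation, correspond conceptually to a voxel-wise comparison of electric field magnitude maps. The sketch below shows one plausible way to compute such a map; the variable names, the toy field data, and the robust-percentile choice for the "peak" value are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def local_difference_percent(e_test, e_ref, peak_percentile=99.9):
    """Voxel-wise |E_test - E_ref| expressed as a percentage of the reference
    peak field. A high percentile is used here as a robust stand-in for the peak."""
    peak = np.percentile(e_ref, peak_percentile)
    return 100.0 * np.abs(e_test - e_ref) / peak

# Toy field magnitude maps (V/m) standing in for two simulation outputs
rng = np.random.default_rng(1)
e_ref = rng.gamma(shape=2.0, scale=0.05, size=(32, 32, 32))
e_test = e_ref * rng.normal(1.0, 0.05, size=e_ref.shape)

diff = local_difference_percent(e_test, e_ref)
print(f"maximum local difference: {diff.max():.1f}% of the reference peak field")
```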
Collapse
Affiliation(s)
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
| | - Koen Van Leemput
- Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
| | - Guilherme B Saturnino
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
| | - Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg, Copenhagen, Denmark; Institute for Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Copenhagen, Denmark
| | - Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
| | - Axel Thielscher
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark.
| |
Collapse
|
25
|
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra-/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in auto-segmentation over the past decade, with emerging trends that have shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored further in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed and the participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best-performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
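The recommendation to pair the Dice coefficient with at least one distance metric can be made concrete with a short sketch. The snippet below computes Dice together with the 95th-percentile symmetric surface distance (a common Hausdorff-style metric) using only NumPy and SciPy; it is a generic illustration with toy masks and assumed 1 mm isotropic voxels, not the evaluation code of any reviewed method.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary voxels of a binary mask."""
    return mask & ~binary_erosion(mask)

def dice(a, b):
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * (a & b).sum() / denom

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance, in the units of `spacing`."""
    sa, sb = surface(a), surface(b)
    # Distance from each surface voxel of one mask to the nearest surface voxel of the other
    d_to_b = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_to_a = distance_transform_edt(~sa, sampling=spacing)[sb]
    return np.percentile(np.concatenate([d_to_b, d_to_a]), 95)

# Toy masks standing in for a manually contoured OAR and its auto-segmentation
gt = np.zeros((48, 48, 48), dtype=bool)
gt[10:30, 10:30, 10:30] = True
pred = np.zeros_like(gt)
pred[12:32, 11:31, 10:30] = True
print(f"Dice = {dice(pred, gt):.3f},  HD95 = {hd95(pred, gt):.1f} mm (assuming 1 mm voxels)")
```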
Collapse
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
| | - Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
| |
Collapse
|
26
|
Rebsamen M, Knecht U, Reyes M, Wiest R, Meier R, McKinley R. Divide and Conquer: Stratifying Training Data by Tumor Grade Improves Deep Learning-Based Brain Tumor Segmentation. Front Neurosci 2019; 13:1182. [PMID: 31749678 PMCID: PMC6848279 DOI: 10.3389/fnins.2019.01182] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 10/18/2019] [Indexed: 11/13/2022] Open
Abstract
It is a general assumption in deep learning that more training data leads to better performance, and that models will learn to generalize well across heterogeneous input data as long as that variety is represented in the training set. Segmentation of brain tumors is a well-investigated topic in medical image computing, owing primarily to the availability of a large publicly available dataset arising from the long-running yearly Multimodal Brain Tumor Segmentation (BraTS) challenge. Research efforts and publications addressing this dataset focus predominantly on technical improvements of model architectures and less on properties of the underlying data. Using the dataset and the method ranked third in the BraTS 2018 challenge, we performed experiments to examine the impact of tumor type on segmentation performance. We propose to stratify the training dataset into high-grade glioma (HGG) and low-grade glioma (LGG) subjects and to train two separate models. Although we observed only minor gains in overall mean Dice scores from this stratification, examining the case-wise rankings of individual subjects revealed statistically significant improvements. Compared to a baseline model trained on both HGG and LGG cases, two separately trained models led to better performance in 64.9% of cases (p < 0.0001) for the tumor core. An analysis of subjects that did not benefit from stratified training revealed that the missegmented cases either had poor image quality or were clinically particularly challenging (e.g., underrepresented subtypes such as IDH1-mutant tumors), underlining the importance of such latent variables in the context of tumor segmentation. In summary, we found that segmentation models trained on the BraTS 2018 dataset, stratified according to tumor type, led to a significant increase in segmentation performance. Furthermore, we demonstrated that this gain in segmentation performance is evident in the case-wise ranking of individual subjects but not in summary statistics. We conclude that it may be useful to consider the segmentation of brain tumors of different types or grades as separate tasks, rather than developing one tool to segment them all. Consequently, making this information available for the test data should be considered, potentially leading to a more clinically relevant BraTS competition.
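The key methodological point, that a gain can show up in case-wise comparisons even when mean Dice barely moves, can be illustrated with a small paired analysis. The sketch below compares per-case Dice scores of a pooled model against grade-stratified models using a sign test on case-wise wins and a paired Wilcoxon test; the data are synthetic and the analysis is an assumed illustration along the lines described, not the authors' exact statistics.

```python
import numpy as np
from scipy.stats import binomtest, wilcoxon

rng = np.random.default_rng(42)
n_cases = 200
# Toy per-case Dice scores: stratified training gives a small but consistent edge
dice_pooled = np.clip(rng.normal(0.82, 0.08, n_cases), 0, 1)
dice_stratified = np.clip(dice_pooled + rng.normal(0.01, 0.02, n_cases), 0, 1)

wins = int((dice_stratified > dice_pooled).sum())
print(f"mean Dice: pooled {dice_pooled.mean():.3f} vs stratified {dice_stratified.mean():.3f}")
print(f"stratified better in {100 * wins / n_cases:.1f}% of cases")
# Sign test on case-wise wins and a paired Wilcoxon signed-rank test on the differences
print("sign test p    =", binomtest(wins, n_cases, 0.5).pvalue)
print("Wilcoxon test p =", wilcoxon(dice_stratified, dice_pooled).pvalue)
```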
Collapse
Affiliation(s)
- Michael Rebsamen
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
| | - Urspeter Knecht
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
| | - Mauricio Reyes
- Healthcare Imaging A.I. Lab, Insel Data Science Center, Inselspital, Bern University Hospital, Bern, Switzerland
| | - Roland Wiest
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Raphael Meier
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Richard McKinley
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| |
Collapse
|