1. Li B, Sun Q, Fang X, Yang Y, Li X. A novel metastatic tumor segmentation method with a new evaluation metric in clinic study. Front Med (Lausanne) 2024; 11:1375851. PMID: 39416869; PMCID: PMC11479867; DOI: 10.3389/fmed.2024.1375851.
Abstract
Background Brain metastases are the most common brain malignancies. Automatic detection and segmentation of brain metastases can substantially assist radiologists in locating lesions and making accurate clinical decisions on brain tumor type for precise treatment. Objectives Because brain metastases are small, existing segmentation methods produce unsatisfactory results and have not been evaluated on clinical datasets. Methodology We propose a new metastasis segmentation method, DRAU-Net, which integrates a new attention mechanism (a multi-branch weighted attention module) and a DResConv module, making the extraction of tumor boundaries more complete. To jointly evaluate segmentation quality and the number of detected targets, we propose a novel medical image segmentation evaluation metric, the multi-objective segmentation integrity metric, which improves evaluation on multiple small brain metastases. Results Experiments on the BraTS2023 dataset and collected clinical data show that the proposed method achieves excellent performance, with an average Dice coefficient of 0.6858 and a multi-objective segmentation integrity metric of 0.5582. Conclusion Compared with other methods, the proposed method achieved the best performance on the metastatic tumor segmentation task.
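The headline metric above, the Dice coefficient, is the standard overlap score for binary segmentation masks. A minimal, self-contained sketch of how it is computed (illustrative only, not the authors' code; the paper's multi-objective segmentation integrity metric is not reproduced, as its definition is not given in the abstract):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Example: two overlapping 3x3 squares share 4 voxels -> Dice = 8/18 ≈ 0.444.
pred = np.zeros((8, 8)); pred[2:5, 2:5] = 1
truth = np.zeros((8, 8)); truth[3:6, 3:6] = 1
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```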
Affiliation(s)
- Bin Li: Department of Neurology, The First Hospital of Anhui University of Science and Technology, Huainan, China
- Qiushi Sun: Department of Anesthesiology, Fudan University Affiliated Huashan Hospital, Shanghai, China
- Xianjin Fang: Department of Anesthesiology, Fudan University Affiliated Huashan Hospital, Huainan, China
- Yang Yang: Department of Anesthesiology, Fudan University Affiliated Huashan Hospital, Huainan, China
- Xiang Li: Department of Anesthesiology, Fudan University Affiliated Huashan Hospital, Huainan, China; School of Safety Science and Engineering, Anhui University of Science and Technology, Huainan, China
2. Kudus K, Wagner MW, Namdar K, Bennett J, Nobre L, Tabori U, Hawkins C, Ertl-Wagner BB, Khalvati F. Beyond hand-crafted features for pretherapeutic molecular status identification of pediatric low-grade gliomas. Sci Rep 2024; 14:19102. PMID: 39154039; PMCID: PMC11330469; DOI: 10.1038/s41598-024-69870-x.
Abstract
The use of targeted agents in the treatment of pediatric low-grade gliomas (pLGGs) relies on the determination of molecular status. It has been shown that genetic alterations in pLGG can be identified non-invasively using MRI-based radiomic features or convolutional neural networks (CNNs). We aimed to build and assess a combined radiomics and CNN non-invasive pLGG molecular status identification model. This retrospective study used the tumor regions, manually segmented from T2-FLAIR MR images, of 336 patients treated for pLGG between 1999 and 2018. We designed a CNN and a Random Forest radiomics model, along with a model relying on a combination of CNN and radiomic features, to predict the genetic status of pLGG. Additionally, we investigated whether CNNs could predict radiomic feature values from MR images. The combined model (mean AUC: 0.824) outperformed the radiomics model (0.802) and the CNN (0.764); the differences in model performance were statistically significant (p-values < 0.05). The CNN learned some predictive radiomic features well, such as surface-to-volume ratio (average correlation: 0.864) and difference matrix dependence non-uniformity normalized (0.924), but was unable to learn others, such as run-length matrix variance (-0.017) and non-uniformity normalized (-0.042). Our results show that a model relying on both CNN and radiomic features outperforms either approach separately in differentiating the genetic status of pLGGs, and that CNNs are unable to express all handcrafted features.
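The "combined" strategy described above amounts to concatenating a CNN embedding with handcrafted radiomic features and training a classifier on the joint vector. A hedged sketch under that reading, with synthetic stand-in features (the paper's feature-extraction pipeline is not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 336
cnn_features = rng.normal(size=(n_patients, 64))        # stand-in for a CNN's penultimate-layer embedding
radiomic_features = rng.normal(size=(n_patients, 100))  # stand-in for a handcrafted radiomic feature vector
labels = rng.integers(0, 2, size=n_patients)            # binary molecular status (synthetic)

combined = np.concatenate([cnn_features, radiomic_features], axis=1)
model = RandomForestClassifier(n_estimators=500, random_state=0)
aucs = cross_val_score(model, combined, labels, cv=5, scoring="roc_auc")
print(f"mean AUC = {aucs.mean():.3f}")  # ≈0.5 here by construction: the features are random noise
```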
Affiliation(s)
- Kareem Kudus: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada
- Matthias W Wagner: Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada; Department of Diagnostic and Interventional Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Khashayar Namdar: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada
- Julie Bennett: Division of Hematology and Oncology, The Hospital for Sick Children, Toronto, Canada; Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, Toronto, Canada; Department of Pediatrics, University of Toronto, Toronto, Canada
- Liana Nobre: Department of Paediatrics, University of Alberta, Edmonton, Canada; Division of Immunology, Hematology/Oncology and Palliative Care, Stollery Children's Hospital, Edmonton, Canada
- Uri Tabori: Division of Hematology and Oncology, The Hospital for Sick Children, Toronto, Canada
- Cynthia Hawkins: Paediatric Laboratory Medicine, Division of Pathology, The Hospital for Sick Children, Toronto, Canada
- Birgit Betina Ertl-Wagner: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada; Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, Canada
- Farzad Khalvati: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada; Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, Canada; Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
3. Moshe YH, Buchsweiler Y, Teicher M, Artzi M. Handling Missing MRI Data in Brain Tumors Classification Tasks: Usage of Synthetic Images vs. Duplicate Images and Empty Images. J Magn Reson Imaging 2024; 60:561-573. PMID: 37864370; DOI: 10.1002/jmri.29072.
Abstract
BACKGROUND Deep learning is widely used for lesion classification, but in the clinic patient data often have missing images. PURPOSE To evaluate the use of generated, duplicate, and empty (black) images to replace missing MRI data in AI brain tumor classification tasks. STUDY TYPE Retrospective. POPULATION 224 patients (local dataset; low-grade glioma (LGG) = 37, high-grade glioma (HGG) = 187) and 335 patients (public dataset (BraTS); LGG = 76, HGG = 259). The local dataset was divided into training (64%), validation (16%), and internal test (20%) sets, while the public dataset served as an independent test set. FIELD STRENGTH/SEQUENCE T1WI, T1WI+C, T2WI, and FLAIR images (1.5T/3.0T MR), obtained from different vendors. ASSESSMENT Three image-to-image translation generative adversarial network (Pix2Pix-GAN) models were trained on the local dataset to generate T1WI, T2WI, and FLAIR images. A rating-and-preference-judgment assessment was performed by three human readers (a radiologist (MD) and two MRI technicians). ResNet152 was used for classification, and inference was performed on both datasets with baseline input and with missing data replaced by (1) generated images, (2) duplicates of existing images, and (3) black images. STATISTICAL TESTS The similarity between the generated and original images was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Classification results were evaluated using accuracy, F1-score, and the Kolmogorov-Smirnov test and distance. RESULTS At baseline, the classification model reached accuracies of 0.93 and 0.82 on the local and public datasets. For the missing-data methods, high similarity was obtained between the generated and original images, with mean PSNR of 35.65 and 32.94 and SSIM of 0.87 and 0.91 on the local and public datasets; 39% of the generated images were labeled as real by the human readers. The classification model using generated images to replace missing images produced the highest results, with mean accuracies of 0.91 and 0.82, compared with 0.85 and 0.79 for duplicated images and 0.77 and 0.68 for black images. DATA CONCLUSION The feasibility of running an inference classification model on an MRI dataset with missing images using Pix2Pix-GAN-generated images was shown. The stability and generalization ability of the model were demonstrated by consistent results on two independent datasets. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY: Stage 5.
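PSNR and SSIM, the two similarity measures used above, are both available in scikit-image. A small sketch comparing an image with a perturbed copy (illustrative; real use would compare original and GAN-generated slices):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
original = rng.random((256, 256))  # stand-in for a real MR slice, scaled to [0, 1]
generated = np.clip(original + rng.normal(scale=0.05, size=original.shape), 0.0, 1.0)

psnr = peak_signal_noise_ratio(original, generated, data_range=1.0)
ssim = structural_similarity(original, generated, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```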
Affiliation(s)
- Yael H Moshe: Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Department of Mathematics, Bar Ilan University, Ramat Gan, Israel
- Yuval Buchsweiler: Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; The Iby and Aladar Fleischman Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel
- Mina Teicher: Department of Mathematics, Bar Ilan University, Ramat Gan, Israel; Gonda Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Moran Artzi: Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
4. Aung MTZ, Lim SH, Han J, Yang S, Kang JH, Kim JE, Huh KH, Yi WJ, Heo MS, Lee SS. Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study. Imaging Sci Dent 2024; 54:81-91. PMID: 38571772; PMCID: PMC10985527; DOI: 10.5624/isd.20230245.
Abstract
Purpose The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
Affiliation(s)
- Moe Thu Zar Aung: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea; Department of Oral Medicine, University of Dental Medicine, Mandalay, Myanmar
- Sang-Heon Lim: Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Jiyong Han: Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Su Yang: Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Ju-Hee Kang: Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, Korea
- Jo-Eun Kim: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Kyung-Hoe Huh: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Won-Jin Yi: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea; Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea; Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Min-Suk Heo: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Sam-Sun Lee: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
5. Fairchild A, Salama JK, Godfrey D, Wiggins WF, Ackerson BG, Oyekunle T, Niedzwiecki D, Fecci PE, Kirkpatrick JP, Floyd SR. Incidence and imaging characteristics of difficult to detect retrospectively identified brain metastases in patients receiving repeat courses of stereotactic radiosurgery. J Neurooncol 2024. PMID: 38340295; DOI: 10.1007/s11060-024-04594-6.
Abstract
PURPOSE During stereotactic radiosurgery (SRS) planning for brain metastases (BM), brain MRIs are reviewed to select appropriate targets based on radiographic characteristics. Some BM are difficult to detect and/or definitively identify and may go untreated initially, only to become apparent on future imaging. We hypothesized that in patients receiving multiple courses of SRS, reviewing the initial planning MRI would reveal early evidence of lesions that developed into metastases requiring SRS. METHODS Patients undergoing two or more courses of SRS to BM within 6 months between 2016 and 2018 were included in this single-institution, retrospective study. Brain MRIs from the initial course were reviewed for lesions at the same location as subsequently treated metastases; if present, such a lesion was classified as a "retrospectively identified metastasis" (RIM). RIMs were subcategorized as meeting (+DC) or not meeting (-DC) diagnostic imaging criteria for BM. RESULTS Among 683 patients undergoing 923 SRS courses, 98 patients met inclusion criteria. There were 115 repeat courses of SRS, with 345 treated metastases in the subsequent course, 128 of which were associated with RIMs found on a prior MRI. 58% of RIMs were +DC. Seventeen (15%) of the subsequent courses consisted solely of metastases associated with +DC RIMs. CONCLUSION Radiographic evidence of brain metastases requiring future treatment was occasionally present on brain MRIs from prior SRS treatments. Most RIMs were +DC, and some subsequent SRS courses treated only +DC RIMs. These findings suggest enhanced BM detection might enable earlier treatment and reduce the need for additional SRS.
Affiliation(s)
- Andrew Fairchild: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Piedmont Radiation Oncology, 3333 Silas Creek Parkway, Winston Salem, NC, 27103, USA
- Joseph K Salama: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Radiation Oncology Service, Durham VA Medical Center, Durham, NC, USA
- Devon Godfrey: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Walter F Wiggins: Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Bradley G Ackerson: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Taofik Oyekunle: Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Donna Niedzwiecki: Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Peter E Fecci: Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- John P Kirkpatrick: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Scott R Floyd: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
6. Helland RH, Ferles A, Pedersen A, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Dunås T, Nibali MC, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Tewari RN, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sagberg LM, Sciortino T, Aalders T, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, Majewska PL, Jakola AS, Solheim O, Hamer PCDW, Reinertsen I, Eijgelaar RS, Bouget D. Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks. Sci Rep 2023; 13:18897. PMID: 37919325; PMCID: PMC10622432; DOI: 10.1038/s41598-023-45456-x.
Abstract
Extent of resection is one of the main prognostic factors for patients diagnosed with glioblastoma. To estimate it, accurate segmentation and classification of residual tumor in post-operative MR images is essential. The current standard method for estimating extent of resection is subject to high inter- and intra-rater variability, and an automated method for segmenting residual tumor in early post-operative MRI could yield a more accurate estimate. In this study, two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task. The models were extensively validated on a multicenter dataset of nearly 1,000 patients from 12 hospitals in Europe and the United States. The best segmentation performance was a Dice score of 61%, and the best classification performance was about 80% balanced accuracy, with a demonstrated ability to generalize across hospitals. In addition, the segmentation performance of the best models was on par with that of human expert raters. The predicted segmentations can be used to accurately classify patients into those with residual tumor and those with gross total resection.
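The final classification step described above reduces to thresholding the volume of the predicted residual-tumor mask. A minimal sketch, with the voxel volume and the 0.175 mL cut-off chosen purely for illustration (the study's actual rule may differ):

```python
import numpy as np

def classify_resection(pred_mask: np.ndarray, voxel_volume_ml: float,
                       threshold_ml: float = 0.175) -> str:
    """Label a patient by the volume of the predicted residual-tumor mask."""
    residual_ml = float(pred_mask.sum()) * voxel_volume_ml
    return "residual tumor" if residual_ml > threshold_ml else "gross total resection"

mask = np.zeros((240, 240, 155), dtype=np.uint8)
mask[100:110, 100:110, 70:75] = 1                       # 500 predicted tumor voxels
print(classify_resection(mask, voxel_volume_ml=0.001))  # 0.5 mL -> "residual tumor"
```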
Affiliation(s)
- Ragnhild Holden Helland: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491, Trondheim, Norway
- Alexandros Ferles: Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- André Pedersen: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ivar Kommers: Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands; Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- Hilko Ardon: Department of Neurosurgery, Twee Steden Hospital, 5042 AD, Tilburg, The Netherlands
- Frederik Barkhof: Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands; Institutes of Neurology and Healthcare Engineering, University College London, London, WC1E 6BT, UK
- Lorenzo Bello: Neurosurgical Oncology Unit, Department of Oncology and Hemato-oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122, Milan, Italy
- Mitchel S Berger: Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, 94143, USA
- Tora Dunås: Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, 405 30, Gothenburg, Sweden
- Julia Furtner: Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090, Vienna, Austria; Research Center for Medical Image Analysis and Artificial Intelligence (MIAAI), Faculty of Medicine and Dentistry, Danube Private University, 3500, Krems, Austria
- Shawn Hervey-Jumper: Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, 94143, USA
- Albert J S Idema: Department of Neurosurgery, Northwest Clinics, 1815 JD, Alkmaar, The Netherlands
- Barbara Kiesel: Department of Neurosurgery, Medical University Vienna, 1090, Vienna, Austria
- Rishi Nandoe Tewari: Department of Neurosurgery, Haaglanden Medical Center, 2512 VA, The Hague, The Netherlands
- Emmanuel Mandonnet: Department of Neurological Surgery, Hôpital Lariboisière, 75010, Paris, France
- Domenique M J Müller: Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands; Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- Pierre A Robe: Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX, Utrecht, The Netherlands
- Marco Rossi: Department of Medical Biotechnology and Translational Medicine, Università Degli Studi di Milano, 20122, Milan, Italy
- Lisa M Sagberg: Department of Neurosurgery, St. Olavs hospital, Trondheim University Hospital, 7030, Trondheim, Norway; Department of Public Health and Nursing, Norwegian University of Science and Technology, 7491, Trondheim, Norway
- Tom Aalders: Department of Neurosurgery, Isala, 8025 AB, Zwolle, The Netherlands
- Michiel Wagemakers: Department of Neurosurgery, University Medical Center Groningen, University of Groningen, 9713 GZ, Groningen, The Netherlands
- Georg Widhalm: Department of Neurosurgery, Medical University Vienna, 1090, Vienna, Austria
- Marnix G Witte: Department of Radiation Oncology, The Netherlands Cancer Institute, 1066 CX, Amsterdam, The Netherlands
- Aeilko H Zwinderman: Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, 1105 AZ, Amsterdam, The Netherlands
- Paulina L Majewska: Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX, Utrecht, The Netherlands; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, 7491, Trondheim, Norway
- Asgeir S Jakola: Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, 405 30, Gothenburg, Sweden; Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Ole Solheim: Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX, Utrecht, The Netherlands; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, 7491, Trondheim, Norway
- Philip C De Witt Hamer: Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands; Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- Ingerid Reinertsen: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491, Trondheim, Norway
- Roelant S Eijgelaar: Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands; Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- David Bouget: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
7. Bouget D, Alsinan D, Gaitan V, Helland RH, Pedersen A, Solheim O, Reinertsen I. Raidionics: an open software for pre- and postoperative central nervous system tumor segmentation and standardized reporting. Sci Rep 2023; 13:15570. PMID: 37730820; PMCID: PMC10511510; DOI: 10.1038/s41598-023-42048-7.
Abstract
For patients suffering from central nervous system tumors, prognosis estimation, treatment decisions, and postoperative assessments are made from the analysis of a set of magnetic resonance (MR) scans. Currently, the lack of open tools for standardized, automatic tumor segmentation and for generating clinical reports that incorporate relevant tumor characteristics leaves these decisions subject to their inherent subjectivity. To tackle this problem, the Raidionics open-source software has been developed, offering both a user-friendly graphical user interface and a stable processing backend. The software includes preoperative segmentation models for each of the most common tumor types (i.e., glioblastomas, lower grade gliomas, meningiomas, and metastases), together with one early postoperative glioblastoma segmentation model. Preoperative segmentation performance was quite homogeneous across the four brain tumor types, with an average Dice around 85% and patient-wise recall and precision around 95%. Postoperative performance was lower, with an average Dice of 41%. Overall, generating a standardized clinical report, including tumor segmentation and feature computation, takes about ten minutes on a regular laptop. Raidionics is the first open solution enabling easy use of state-of-the-art segmentation models for all major tumor types, including preoperative and postsurgical standardized reports.
Affiliation(s)
- David Bouget: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Demah Alsinan: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Valeria Gaitan: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ragnhild Holden Helland: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
- André Pedersen: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ole Solheim: Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, 7491, Trondheim, Norway; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
- Ingerid Reinertsen: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
8. Chen M, Guo Y, Wang P, Chen Q, Bai L, Wang S, Su Y, Wang L, Gong G. An Effective Approach to Improve the Automatic Segmentation and Classification Accuracy of Brain Metastasis by Combining Multi-phase Delay Enhanced MR Images. J Digit Imaging 2023; 36:1782-1793. PMID: 37259008; PMCID: PMC10406988; DOI: 10.1007/s10278-023-00856-3.
Abstract
The objective of this study was to analyse the diffusion of contrast media in multi-phase delayed enhanced T1-weighted magnetic resonance (MR) images using radiomics and to construct an automatic classification and segmentation model of brain metastases (BM) based on a support vector machine (SVM) and Dpn-UNet. A total of 189 BM patients with 1,047 metastases were enrolled. Contrast-enhanced MR images were obtained at 1, 3, 5, 10, 18, and 20 min following contrast medium injection. The tumour target volume was delineated, and radiomics features were extracted and analysed. BM segmentation and classification models for the MR images at the different enhancement phases were constructed using Dpn-UNet and SVM, and differences across enhancement times were compared. (1) The signal intensity of BM peaked at 3 min and decreased with longer time delays. (2) Among the 144 optimal radiomics features, 22 showed strong correlation with time (highest R-value = 0.82), while 41 showed strong correlation with volume (highest R-value = 0.99). (3) The average Dice similarity coefficients for automatic BM segmentation were highest at 10 min for both the training and test sets, reaching 0.92 and 0.82, respectively. (4) The area under the curve (AUC) for classifying BM pathology type from single-phase MRI was highest at 10 min, reaching 0.674, whereas the AUC using the six-phase image combination was the highest overall, reaching 0.9596, an improvement of 42.3% over the best single-phase result at 10 min. The dynamic diffusion of contrast media in BM can be captured by multi-phase delayed enhancement with radiomics, which more objectively reflects pathological type and significantly improves the accuracy of BM segmentation and classification.
Affiliation(s)
- Mingming Chen: Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China; College of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250117, China
- Yujie Guo: Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Pengcheng Wang: College of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250117, China
- Qi Chen: MedMind Technology Co., Ltd, 100084, Beijing, China
- Lu Bai: MedMind Technology Co., Ltd, 100084, Beijing, China
- Shaobin Wang: MedMind Technology Co., Ltd, 100084, Beijing, China
- Ya Su: Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Lizhen Wang: Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Guanzhong Gong: Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China; Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
9. Ozkara BB, Federau C, Dagher SA, Pattnaik D, Ucisik FE, Chen MM, Wintermark M. Correlating volumetric and linear measurements of brain metastases on MRI scans using intelligent automation software: a preliminary study. J Neurooncol 2023; 162:363-371. PMID: 36988746; DOI: 10.1007/s11060-023-04297-4.
Abstract
PURPOSE The Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) working group proposed a guide for assessing treatment response of BMs using the longest diameter; however, despite recognizing that many patients with BMs have sub-centimeter lesions, the group deemed these lesions unmeasurable owing to issues with repeatability and interpretation. In light of the RANO-BM recommendations, we aimed to correlate linear and volumetric measurements of sub-centimeter BMs on contrast-enhanced MRI using intelligent automation software. METHODS In this retrospective study, patients with BMs scanned with MRI between January 1, 2018, and December 31, 2021, were screened. Inclusion criteria were: (1) at least one sub-centimeter BM with an integer-millimeter longest diameter noted in the MRI report; (2) age of at least 18 years; (3) an available pre-treatment three-dimensional T1-weighted spoiled gradient-echo MRI scan. Screening was terminated once there were 20 lesions in each group. Lesion volumes were measured by two readers with the intelligent automation software Jazz (AI Medical, Zollikon, Switzerland). The Kruskal-Wallis test was used to compare volumetric differences. RESULTS Our study included 180 patients. Agreement between the two readers' volumetric measurements was excellent. The volumes of the following diameter groups were not significantly different: 1-2 mm, 1-3 mm, 1-4 mm, 2-3 mm, 2-4 mm, 3-4 mm, 3-5 mm, 4-5 mm, 5-6 mm, 5-7 mm, 6-7 mm, 6-8 mm, 6-9 mm, 7-8 mm, 7-9 mm, and 8-9 mm. CONCLUSION Our findings indicate that the largest diameter of a lesion may not accurately represent its volume. Additional research is required to determine which method is superior for measuring radiologic response to therapy and which parameter correlates best with clinical improvement or deterioration.
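The conclusion is easy to see from geometry: for an idealized spherical lesion, volume grows with the cube of the longest diameter, so a single integer-millimeter reporting step can nearly double the volume.

```latex
V = \frac{4}{3}\pi\left(\frac{d}{2}\right)^{3} = \frac{\pi d^{3}}{6},
\qquad
V(4\,\mathrm{mm}) = \frac{64\pi}{6} \approx 33.5\,\mathrm{mm}^{3},
\quad
V(5\,\mathrm{mm}) = \frac{125\pi}{6} \approx 65.4\,\mathrm{mm}^{3}.
```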
Affiliation(s)
- Burak B Ozkara: Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Christian Federau: Faculty of Medicine, University of Zurich, Pestalozzistrasse 3, Zurich, CH-8032, Switzerland
- Samir A Dagher: Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Debajani Pattnaik: Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- F Eymen Ucisik: Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Melissa M Chen: Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Max Wintermark: Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
10. A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023; 115:779-793. PMID: 36289038; DOI: 10.1016/j.ijrobp.2022.09.068.
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small, inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) data set containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent 2 consecutive courses of SRS, treatment planning MRIs from their initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in their second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases from their appearance on the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who only underwent a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMs, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79. CONCLUSIONS Our CAD model, trained on a novel data set and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs, the lesion types most in need of human performance augmentation.
11. Ottesen JA, Yi D, Tong E, Iv M, Latysheva A, Saxhaug C, Jacobsen KD, Helland Å, Emblem KE, Rubin DL, Bjørnerud A, Zaharchuk G, Grøvik E. 2.5D and 3D segmentation of brain metastases with deep learning on multinational MRI data. Front Neuroinform 2023; 16:1056068. PMID: 36743439; PMCID: PMC9889663; DOI: 10.3389/fninf.2022.1056068.
Abstract
Introduction Management of patients with brain metastases is often based on manual lesion detection and segmentation by an expert reader. This is a time- and labor-intensive process, and to that end, this work proposes an end-to-end deep learning segmentation network that handles a varying number of available MRI sequences. Methods We adapt and evaluate a 2.5D and a 3D convolutional neural network, trained and tested on a retrospective multinational study from two independent centers; in addition, nnU-Net was adapted as a comparative benchmark. Segmentation and detection performance was evaluated by (1) the Dice similarity coefficient, (2) per-metastasis and average detection sensitivity, and (3) the number of false positives. Results The 2.5D and 3D models achieved similar results, albeit the 2.5D model had a better detection rate, the 3D model had fewer false positive predictions, and nnU-Net had the fewest false positives but the lowest detection rate. On MRI data from center 1, the 2.5D, 3D, and nnU-Net models detected 79%, 71%, and 65% of all metastases; had an average per-patient sensitivity of 0.88, 0.84, and 0.76; and produced on average 6.2, 3.2, and 1.7 false positive predictions per patient, respectively. For center 2, the corresponding figures were 88%, 86%, and 78% of all metastases detected; average per-patient sensitivities of 0.92, 0.91, and 0.85; and on average 1.0, 0.4, and 0.1 false positive predictions per patient. Discussion/Conclusion Our results show that deep learning can yield highly accurate segmentations of brain metastases with few false positives in multinational data, but accuracy degrades for metastases with an area smaller than 0.4 cm².
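Per-metastasis detection sensitivity and false-positive counts of the kind reported above are typically computed by matching connected components between prediction and ground truth. A hedged sketch of that bookkeeping (the paper's exact matching rule is not specified in the abstract):

```python
import numpy as np
from scipy import ndimage

def detection_stats(pred: np.ndarray, truth: np.ndarray):
    """Count ground-truth lesions overlapped by the prediction, and predicted
    components with no ground-truth overlap (false positives)."""
    truth_lbl, n_truth = ndimage.label(truth)
    pred_lbl, n_pred = ndimage.label(pred)
    detected = sum(pred[truth_lbl == i].any() for i in range(1, n_truth + 1))
    false_pos = sum(not truth[pred_lbl == j].any() for j in range(1, n_pred + 1))
    sensitivity = detected / n_truth if n_truth else float("nan")
    return sensitivity, false_pos

truth = np.zeros((64, 64), dtype=np.uint8)
truth[10:14, 10:14] = 1; truth[40:44, 40:44] = 1   # two ground-truth metastases
pred = np.zeros_like(truth)
pred[11:15, 11:15] = 1; pred[50:54, 50:54] = 1     # one hit, one false positive
print(detection_stats(pred, truth))                # (0.5, 1)
```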
Affiliation(s)
- Jon André Ottesen: CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Darvin Yi: Department of Ophthalmology, University of Illinois, Chicago, IL, United States
- Elizabeth Tong: Department of Radiology, Stanford University, Stanford, CA, United States
- Michael Iv: Department of Radiology, Stanford University, Stanford, CA, United States
- Anna Latysheva: Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Cathrine Saxhaug: Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Åslaug Helland: Department of Oncology, Oslo University Hospital, Oslo, Norway
- Kyrre Eeg Emblem: Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Daniel L. Rubin: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Atle Bjørnerud: CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, CA, United States
- Endre Grøvik: Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway; Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
12. Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15:334. PMID: 36672286; PMCID: PMC9857123; DOI: 10.3390/cancers15020334.
Abstract
Since manual detection of brain metastases (BMs) is time consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted until 30 September 2022. Inclusion criteria were: patients with BMs; deep learning applied to MRI images to detect BMs; sufficient data on detection performance; original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data on detection performance; classical machine learning (rather than deep learning) used to detect BMs; articles not written in English. The Quality Assessment of Diagnostic Accuracy Studies-2 tool and the Checklist for Artificial Intelligence in Medical Imaging were used to assess quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of both patient-wise and lesion-wise detectability was 89%. Articles should adhere to the checklists more strictly. Deep learning algorithms effectively detect BMs; a pooled analysis of false positive rates could not be estimated due to reporting differences.
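A pooled proportion of the kind reported above combines study-level proportions with inverse-variance weights, usually on the logit scale. A fixed-effect sketch with made-up numbers (the review itself would use a random-effects model):

```python
import numpy as np

events = np.array([45, 88, 130, 52])   # patients with detected BMs per study (synthetic)
totals = np.array([50, 100, 145, 60])  # patients per study (synthetic)

p = events / totals
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (totals - events)  # variance of each study's logit proportion
w = 1 / var
pooled_logit = (w * logit).sum() / w.sum()
pooled = 1 / (1 + np.exp(-pooled_logit))  # back-transform to a proportion
print(f"pooled detectability = {pooled:.3f}")
```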
13. Fauzi A, Yueniwati Y, Naba A, Rahayu RF. Performance of deep learning in classifying malignant primary and metastatic brain tumors using different MRI sequences: A medical analysis study. J Xray Sci Technol 2023; 31:893-914. PMID: 37355932; DOI: 10.3233/xst-230046.
Abstract
BACKGROUND Malignant Primary Brain Tumor (MPBT) and Metastatic Brain Tumor (MBT) are the most common types of brain tumors and require different management approaches. Magnetic Resonance Imaging (MRI) is the most frequently used modality for assessing these tumors. Deep Learning (DL) is expected to assist clinicians in classifying MPBT and MBT more effectively. OBJECTIVE This study aims to examine the influence of MRI sequences on the classification performance of DL techniques for distinguishing between MPBT and MBT and to analyze the results from a medical perspective. METHODS A total of 1,360 images from 4 different MRI sequences were collected and preprocessed. VGG19 and ResNet101 models were trained and evaluated using consistent parameters. Model performance was assessed using accuracy, sensitivity, and other precision metrics based on confusion matrix analysis. RESULTS The ResNet101 model achieves the highest accuracy of 83% for MPBT classification, correctly identifying 90 out of 102 images. The VGG19 model achieves an accuracy of 81% for MBT classification, accurately classifying 86 out of 102 images. The T2 sequence shows the highest sensitivity for MPBT, while the T1C and T1 sequences exhibit the highest sensitivity for MBT. CONCLUSIONS DL models, particularly ResNet101 and VGG19, demonstrate promising performance in classifying MPBT and MBT from MRI images. The choice of MRI sequence can affect the sensitivity of tumor detection. These findings contribute to the advancement of DL-based brain tumor classification and its potential to improve patient outcomes and healthcare efficiency.
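The two classifiers compared above are standard ImageNet architectures with the final layer swapped for the two-class problem. A minimal PyTorch sketch (preprocessing and training loop omitted; in practice one would start from pretrained weights):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights=None)         # in practice: load a pretrained weight enum
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: MPBT vs. MBT

x = torch.randn(4, 3, 224, 224)                # batch of preprocessed MRI slices
logits = model(x)
print(logits.shape)                            # torch.Size([4, 2])
```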
Affiliation(s)
- Adam Fauzi: Study Program of Master in Biomedical Science, Faculty of Medicine, Universitas Brawijaya Malang, Malang, Indonesia
- Yuyun Yueniwati: Department of Radiology, Faculty of Medicine, Universitas Brawijaya, Saiful Anwar General Hospital, Malang, Indonesia
- Agus Naba: Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Brawijaya, Malang, Indonesia
- Rachmi Fauziah Rahayu: Department of Radiology, Faculty of Medicine, Sebelas Maret University, Dr. Moewardi Hospital, Surakarta, Indonesia
14. Chartrand G, Emiliani RD, Pawlowski SA, Markel DA, Bahig H, Cengarle-Samak A, Rajakesari S, Lavoie J, Ducharme S, Roberge D. Automated Detection of Brain Metastases on T1-Weighted MRI Using a Convolutional Neural Network: Impact of Volume Aware Loss and Sampling Strategy. J Magn Reson Imaging 2022; 56:1885-1898. PMID: 35624544; DOI: 10.1002/jmri.28274.
Abstract
BACKGROUND Detection of brain metastases (BM) and segmentation for treatment planning could be optimized with machine learning methods. Convolutional neural networks (CNNs) are promising, but their trade-offs between sensitivity and precision frequently lead to missing small lesions. HYPOTHESIS Combining a volume-aware (VA) loss function and sampling strategy could improve BM detection sensitivity. STUDY TYPE Retrospective. POPULATION A total of 530 radiation oncology patients (55% women), split into a training/validation set (433 patients/1,460 BM) and an independent test set (97 patients/296 BM). FIELD STRENGTH/SEQUENCE 1.5 T and 3 T, contrast-enhanced three-dimensional (3D) T1-weighted fast gradient echo sequences. ASSESSMENT Ground truth masks were based on radiotherapy treatment planning contours reviewed by experts. A U-Net-inspired model was trained. Three loss functions (Dice, Dice + boundary, and VA) and two sampling methods (label and VA) were compared. Results were reported with Dice scores, volumetric error, lesion detection sensitivity, and precision; a detected voxel within the ground truth constituted a true positive. STATISTICAL TESTS McNemar's exact test to compare detected lesions between models; Pearson's correlation coefficient and Bland-Altman analysis to compare agreement between predicted and ground truth volumes. Statistical significance was set at P ≤ 0.05. RESULTS Combining VA loss and VA sampling performed best, with an overall sensitivity of 91% and precision of 81%. For BM in the 2.5-6 mm estimated sphere diameter range, VA loss reduced false negatives by 58%, and VA sampling reduced them by a further 30%. In the same range, the boundary loss achieved the highest precision at 81%, but with low sensitivity (24%) and a 31% loss in Dice. DATA CONCLUSION Considering BM size in the loss and sampling functions of a CNN may increase detection sensitivity for small BM. Our pipeline, relying on a single contrast-enhanced T1-weighted MRI sequence, reached a detection sensitivity of 91% with an average of only 0.66 false positives per scan. EVIDENCE LEVEL 3 TECHNICAL EFFICACY: Stage 2.
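The abstract does not define the volume-aware (VA) loss, so the following is only one plausible construction: a soft Dice loss whose per-voxel weights scale inversely with the volume of the lesion each voxel belongs to, so that small metastases contribute as much to the loss as large ones.

```python
import torch

def volume_aware_dice_loss(probs: torch.Tensor, target: torch.Tensor,
                           lesion_labels: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """probs/target: (N,) flattened; lesion_labels: (N,) connected-component ids, 0 = background.
    Hypothetical VA loss: weight each lesion's voxels by the inverse of its volume."""
    weights = torch.ones_like(probs)
    for lid in lesion_labels.unique():
        if lid == 0:
            continue
        voxels = lesion_labels == lid
        weights[voxels] = 1.0 / voxels.sum().clamp(min=1)
    inter = (weights * probs * target).sum()
    denom = (weights * probs).sum() + (weights * target).sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```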
Affiliation(s)
- Daniel A Markel: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Houda Bahig: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Selvan Rajakesari: Department of Radiation Oncology, Hopital Charles Lemoyne, Greenfield Park, Québec, Canada
- Simon Ducharme: AFX Medical Inc., Montréal, Canada; Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montréal, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
- David Roberge: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
15. Zhou JX, Yang Z, Xi DH, Dai SJ, Feng ZQ, Li JY, Xu W, Wang H. Enhanced segmentation of gastrointestinal polyps from capsule endoscopy images with artifacts using ensemble learning. World J Gastroenterol 2022; 28:5931-5943. PMID: 36405108; PMCID: PMC9669827; DOI: 10.3748/wjg.v28.i41.5931.
Abstract
BACKGROUND Endoscopy artifacts are widespread in real capsule endoscopy (CE) images but not in high-quality standard datasets.
AIM To improve the segmentation performance of polyps from CE images with artifacts based on ensemble learning.
METHODS We collected 277 polyp images with CE artifacts from 5,760 h of videos from 480 patients at Guangzhou First People's Hospital from January 2016 to December 2019. Two public high-quality standard external datasets were retrieved and used for comparison experiments. For each dataset, we randomly split the data into training, validation, and testing sets for model training, selection, and testing. We compared the performance of the base models and the ensemble model in segmenting polyps from images with artifacts.
RESULTS The performance of the semantic segmentation models was degraded by artifacts in the sample images, which also affected single-model polyp detection on CE. Evaluation on both the real dataset with artifacts and the standard datasets showed that an ensemble of all state-of-the-art models performed better than the best corresponding base learner on the real dataset with artifacts. Compared with the corresponding optimal base learners, the intersection over union (IoU) and Dice of the ensemble learning model increased by 0.08% to 7.01% and 0.61% to 4.93%, respectively. On the standard datasets without artifacts, most of the ensemble models were slightly better than the base learners, with IoU and Dice changes ranging from -0.28% to 1.20% and -0.61% to 0.76%, respectively.
CONCLUSION Ensemble learning can improve the segmentation accuracy of polyps in CE images with artifacts. Our results demonstrated an improvement in the polyp detection rate despite interference from artifacts.
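Mask-level ensembling of segmentation models is often as simple as a per-pixel majority vote over the base learners' binary outputs, compared here with IoU. An illustrative sketch with shifted copies of a square standing in for base-model predictions:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    return float((pred & truth).sum() / ((pred | truth).sum() + eps))

def majority_vote(masks: list) -> np.ndarray:
    stack = np.stack(masks).astype(np.uint8)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)  # strict per-pixel majority

truth = np.zeros((32, 32), np.uint8); truth[8:20, 8:20] = 1
m1 = np.roll(truth, 1, axis=0)   # three imperfect base predictions,
m2 = np.roll(truth, -1, axis=1)  # each shifted off the ground truth
m3 = np.roll(truth, 1, axis=1)
ensemble = majority_vote([m1, m2, m3])
print(f"best base IoU = {max(iou(m, truth) for m in (m1, m2, m3)):.3f}")  # ≈0.846
print(f"ensemble IoU  = {iou(ensemble, truth):.3f}")                      # ≈0.986
```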
Affiliation(s)
- Jun-Xiao Zhou: Department of Gastroenterology and Hepatology, Guangzhou First People's Hospital, Guangzhou 510180, Guangdong Province, China
- Zhan Yang: School of Information, Renmin University of China, Beijing 100872, China
- Ding-Hao Xi: School of Information, Renmin University of China, Beijing 100872, China
- Shou-Jun Dai: Department of Gastroenterology and Hepatology, Guangzhou First People's Hospital, Guangzhou 510180, Guangdong Province, China
- Zhi-Qiang Feng: Department of Gastroenterology and Hepatology, Guangzhou First People's Hospital, Guangzhou 510180, Guangdong Province, China
- Jun-Yan Li: Department of Gastroenterology and Hepatology, Guangzhou First People's Hospital, Guangzhou 510180, Guangdong Province, China
- Wei Xu: School of Information, Renmin University of China, Beijing 100872, China
- Hong Wang: Department of Gastroenterology and Hepatology, Guangzhou First People's Hospital, Guangzhou 510180, Guangdong Province, China
16. Liang Y, Lee K, Bovi JA, Palmer JD, Brown PD, Gondi V, Tomé WA, Benzinger TLS, Mehta MP, Li XA. Deep Learning-Based Automatic Detection of Brain Metastases in Heterogenous Multi-Institutional Magnetic Resonance Imaging Sets: An Exploratory Analysis of NRG-CC001. Int J Radiat Oncol Biol Phys 2022; 114:529-536. PMID: 35787927; PMCID: PMC9641965; DOI: 10.1016/j.ijrobp.2022.06.081.
Abstract
PURPOSE Deep learning-based algorithms have been shown to automatically detect and segment brain metastases (BMs) in magnetic resonance imaging, mostly on single-institutional data sets. This work investigated the use of deep convolutional neural networks (DCNN) for BM detection and segmentation on a highly heterogeneous multi-institutional data set. METHODS AND MATERIALS A total of 407 patients from 98 institutions were randomly split into 326 patients from 78 institutions for training/validation and 81 patients from 20 institutions for unbiased testing. The data set contained gadolinium-enhanced T1-weighted and T2-weighted fluid-attenuated inversion recovery magnetic resonance imaging acquired on diverse scanners using different pulse sequences and various acquisition parameters. Several variants of 3-dimensional U-Net-based DCNN models were trained and tuned using 5-fold cross validation on the training set. Performances of different models were compared based on the Dice similarity coefficient for segmentation and on sensitivity and false positive rate (FPR) for detection, and the best performing model was evaluated on the test set. RESULTS A DCNN with an input size of 64 × 64 × 64 and an equal number of 128 kernels for all convolutional layers using instance normalization was identified as the best performing model (Dice similarity coefficient 0.73, sensitivity 0.86, and FPR 1.9) in the 5-fold cross validation experiments. The best performing model behaved consistently on the test set (Dice similarity coefficient 0.73, sensitivity 0.91, and FPR 1.7) and successfully detected 7 BMs (out of 327) that were missed during manual delineation. For large BMs with diameters greater than 12 mm, the sensitivity and FPR improved to 0.98 and 0.3, respectively. CONCLUSIONS The DCNN model developed can automatically detect and segment brain metastases with reasonable accuracy, high sensitivity, and low FPR on a multi-institutional data set with nonprespecified and highly variable magnetic resonance imaging sequences. For large BMs, the model achieved clinically relevant results. The model is robust and may potentially be used in real-world settings.
Collapse
Affiliation(s)
- Ying Liang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
| | - Karen Lee
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
| | - Joseph A Bovi
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
| | - Joshua D Palmer
- Department of Radiation Oncology, The James Cancer Hospital and Solove Research Institute at the Ohio State University, Columbus, Ohio
| | - Paul D Brown
- Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota
| | - Vinai Gondi
- Department of Radiation Oncology, Northwestern Medicine Cancer Center and Proton Center, Warrenville, Illinois
| | - Wolfgang A Tomé
- Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York
| | - Tammie L S Benzinger
- Department of Radiology, Washington University School of Medicine, St Louis, Missouri
| | | | - X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin.
| |
Collapse
|
17
|
Gurney-Champion OJ, Landry G, Redalen KR, Thorwarth D. Potential of Deep Learning in Quantitative Magnetic Resonance Imaging for Personalized Radiotherapy. Semin Radiat Oncol 2022; 32:377-388. [DOI: 10.1016/j.semradonc.2022.06.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
18
|
Wu J, Kang T, Lan X, Chen X, Wu Z, Wang J, Lin L, Cai C, Lin J, Ding X, Cai S. IMPULSED model based cytological feature estimation with U-Net: Application to human brain tumor at 3T. Magn Reson Med 2022; 89:411-422. [PMID: 36063493 DOI: 10.1002/mrm.29429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 07/06/2022] [Accepted: 08/08/2022] [Indexed: 11/06/2022]
Abstract
PURPOSE This work introduces and validates a deep-learning-based fitting method, which can rapidly provide accurate and robust estimation of cytological features of brain tumors based on the IMPULSED (imaging microstructural parameters using limited spectrally edited diffusion) model fitting with diffusion-weighted MRI data. METHODS The U-Net was applied to rapidly quantify the extracellular diffusion coefficient (D_ex), cell size (d), and intracellular volume fraction (v_in) of brain tumors. At the training stage, the image-based training data, synthesized by randomizing quantifiable microstructural parameters within specific ranges, was used to train the U-Net. At the test stage, the pre-trained U-Net was applied to estimate the microstructural parameters from simulated data and the in vivo data acquired on patients at 3T. The U-Net was compared with conventional non-linear least-squares (NLLS) fitting in simulations in terms of estimation accuracy and precision. RESULTS Our results confirm that the proposed method yields better fidelity in simulations and is more robust to noise than the NLLS fitting. For in vivo data, the U-Net yields obvious quality improvement in parameter maps, and the estimations of all parameters are in good agreement with the NLLS fitting. Moreover, our method is several orders of magnitude faster than the NLLS fitting (from about 5 min to <1 s). CONCLUSION The image-based training scheme proposed herein helps to improve the quality of the estimated parameters. Our deep-learning-based fitting method can estimate the cell microstructural parameters fast and accurately.
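The image-based training scheme lends itself to a compact illustration: parameter maps are drawn at random within plausible ranges and pushed through a forward signal model to synthesize the network inputs. The sketch below uses a simple two-compartment stand-in for the forward model; the actual IMPULSED signal model, parameter ranges, and sequence timings are more involved, so everything here is an assumption for illustration.

```python
# Illustrative synthesis of one (signal stack, parameter map) training pair.
# The bi-exponential forward model is a stand-in for the IMPULSED model.
import numpy as np

rng = np.random.default_rng(0)

def synthesize_training_pair(shape=(64, 64), b_values=(0, 500, 1000, 1500, 2000)):
    """Return (signal stack, parameter maps) for one synthetic image."""
    d_ex = rng.uniform(0.5, 3.0, shape)     # extracellular diffusivity, um^2/ms
    d_cell = rng.uniform(5.0, 20.0, shape)  # cell size, um
    v_in = rng.uniform(0.1, 0.9, shape)     # intracellular volume fraction
    # Stand-in: intracellular water diffuses slowly, with an apparent
    # diffusivity that shrinks as cells get smaller.
    d_in = 0.1 + 0.01 * d_cell
    b = np.asarray(b_values).reshape(-1, 1, 1) * 1e-3  # s/mm^2 -> ms/um^2
    signal = v_in * np.exp(-b * d_in) + (1 - v_in) * np.exp(-b * d_ex)
    return signal.astype(np.float32), np.stack([d_ex, d_cell, v_in])

signals, params = synthesize_training_pair()
print(signals.shape, params.shape)  # (5, 64, 64) (3, 64, 64)
```

A network trained to map such signal stacks back to the underlying (D_ex, d, v_in) maps replaces the per-voxel NLLS fit at inference time, which is where the reported speedup (about 5 min down to <1 s) comes from.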
Collapse
Affiliation(s)
- Jian Wu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
| | - Taishan Kang
- Department of Radiology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
| | - Xinli Lan
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
| | - Xinran Chen
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
| | - Zhigang Wu
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
| | - Jiazheng Wang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
| | - Liangjie Lin
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
| | - Congbo Cai
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
| | - Jianzhong Lin
- Department of Radiology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
| | - Xin Ding
- Department of Pathology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
| | - Shuhui Cai
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
| |
Collapse
|
19
|
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022] Open
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as for radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Collapse
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Peng Cheng Laboratory, Shenzhen, 518066, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
| |
Collapse
|
20
|
Bouget D, Pedersen A, Jakola AS, Kavouridis V, Emblem KE, Eijgelaar RS, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sciortino T, Van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, De Witt Hamer PC, Solheim O, Reinertsen I. Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting. Front Neurol 2022; 13:932219. [PMID: 35968292 PMCID: PMC9364874 DOI: 10.3389/fneur.2022.932219] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 06/23/2022] [Indexed: 11/23/2022] Open
Abstract
For patients suffering from brain tumors, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and generation of clinical reports, incorporating a wide range of tumor characteristics, represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performances were assessed in depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performances were quite homogeneous across the four different brain tumor types, with an average true positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the other metrics identified as most relevant were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16-54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including the tumor segmentation and features computation, 5-15 min are necessary. All trained models have been made open-access together with the source code for both software solutions and validation metrics computation. In the future, a method to convert results from a set of metrics into a final single score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
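The patient-wise recall and precision figures quoted above are detection metrics computed over individual tumor components rather than voxels. The sketch below shows one plausible way to compute them with connected-component labeling; matching components by any overlap is an assumption, and the authors' open-sourced validation code may apply stricter criteria.

```python
# Patient-wise detection metrics over connected tumor components, plus the
# "true positive Dice" (Dice restricted to overlapping foreground).
import numpy as np
from scipy import ndimage

def patientwise_detection(gt: np.ndarray, pred: np.ndarray):
    """Return (recall, precision) over connected tumor components."""
    gt_lab, n_gt = ndimage.label(gt)
    pr_lab, n_pr = ndimage.label(pred)
    hit_gt = {g for g in range(1, n_gt + 1) if pred[gt_lab == g].any()}
    hit_pr = {p for p in range(1, n_pr + 1) if gt[pr_lab == p].any()}
    recall = len(hit_gt) / n_gt if n_gt else 1.0
    precision = len(hit_pr) / n_pr if n_pr else 1.0
    return recall, precision

def true_positive_dice(gt: np.ndarray, pred: np.ndarray) -> float:
    """Voxel-wise Dice; NaN when both masks are empty."""
    inter = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    return 2 * inter / denom if denom else float("nan")

# Tiny usage example: one true lesion, detected with partial overlap.
gt = np.zeros((32, 32, 32), dtype=np.uint8)
pred = np.zeros_like(gt)
gt[5:8, 5:8, 5:8] = 1
pred[6:9, 6:9, 6:9] = 1
print(patientwise_detection(gt, pred), true_positive_dice(gt, pred))
```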
Collapse
Affiliation(s)
- David Bouget
- Department of Health Research, SINTEF Digital, Trondheim, Norway
| | - André Pedersen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Asgeir S. Jakola
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Vasileios Kavouridis
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Kyrre E. Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
| | - Roelant S. Eijgelaar
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Ivar Kommers
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Hilko Ardon
- Department of Neurosurgery, Twee Steden Hospital, Tilburg, Netherlands
| | - Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Institutes of Neurology and Healthcare Engineering, University College London, London, United Kingdom
| | - Lorenzo Bello
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Mitchel S. Berger
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
| | - Marco Conti Nibali
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Julia Furtner
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Vienna, Austria
| | - Shawn Hervey-Jumper
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
| | | | - Barbara Kiesel
- Department of Neurosurgery, Medical University Vienna, Vienna, Austria
| | - Alfred Kloet
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Netherlands
| | | | - Domenique M. J. Müller
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Pierre A. Robe
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, Netherlands
| | - Marco Rossi
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Tommaso Sciortino
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | | | - Michiel Wagemakers
- Department of Neurosurgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Georg Widhalm
- Department of Neurosurgery, Medical University Vienna, Vienna, Austria
| | - Marnix G. Witte
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Aeilko H. Zwinderman
- Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
| | - Philip C. De Witt Hamer
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
| | - Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
21
|
Zhou T, Vera P, Canu S, Ruan S. Missing Data Imputation via Conditional Generator and Correlation Learning for Multimodal Brain Tumor Segmentation. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.04.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
22
|
Lotan E, Zhang B, Dogra S, Wang W, Carbone D, Fatterpekar G, Oermann E, Lui Y. Development and Practical Implementation of a Deep Learning-Based Pipeline for Automated Pre- and Postoperative Glioma Segmentation. AJNR Am J Neuroradiol 2022; 43:24-32. [PMID: 34857514 PMCID: PMC8757542 DOI: 10.3174/ajnr.a7363] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 09/22/2021] [Indexed: 01/03/2023]
Abstract
BACKGROUND AND PURPOSE Quantitative volumetric segmentation of gliomas has important implications for diagnosis, treatment, and prognosis. We present a deep-learning model that accommodates automated preoperative and postoperative glioma segmentation with a pipeline for clinical implementation. Developed and engineered in concert, the work seeks to accelerate clinical realization of such tools. MATERIALS AND METHODS A deep learning model, autoencoder regularization-cascaded anisotropic, was developed, trained, and tested, fusing key elements of autoencoder regularization with a cascaded anisotropic convolutional neural network. We constructed a dataset consisting of 437 cases, with 40 cases reserved as a held-out test set and the remainder split 80:20 for training and validation. We performed data augmentation and hyperparameter optimization and used the mean Dice score to evaluate against baseline models. To facilitate clinical adoption, we developed the model with an end-to-end pipeline including routing, preprocessing, and end-user interaction. RESULTS The autoencoder regularization-cascaded anisotropic model achieved median and mean Dice scores of 0.88/0.83 (SD, 0.09), 0.89/0.84 (SD, 0.08), and 0.81/0.72 (SD, 0.1) for whole-tumor, tumor core/resection cavity, and enhancing tumor subregions, respectively, including both preoperative and postoperative follow-up cases. The overall processing time per case was ∼10 minutes, including data routing (∼1 minute), preprocessing (∼6 minutes), segmentation (∼1-2 minutes), and postprocessing (∼1 minute). Implementation challenges are discussed. CONCLUSIONS We show the feasibility and advantages of building a coordinated model with a clinical pipeline for the rapid and accurate deep learning segmentation of both preoperative and postoperative gliomas. The ability of the model to accommodate cases of postoperative glioma is clinically important for follow-up. An end-to-end approach, such as the one used here, may lead us toward successful clinical translation of tools for quantitative volume measures for glioma.
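The per-stage timings reported above suggest a simple orchestration pattern: run each stage in order and record elapsed time so the bottleneck (here, preprocessing) stays visible. The sketch below is a generic skeleton with placeholder stage bodies; the deployed system's actual routing and preprocessing interfaces are not described in enough detail to reproduce, so every name here is hypothetical.

```python
# Generic staged-pipeline skeleton with per-stage timing. Stage bodies are
# placeholders standing in for DICOM routing, preprocessing, inference, and
# report generation.
import time
from typing import Callable, Dict

def run_pipeline(case_id: str, stages: Dict[str, Callable[[str], None]]) -> Dict[str, float]:
    """Execute stages in insertion order; return elapsed seconds per stage."""
    timings = {}
    for name, stage in stages.items():
        start = time.perf_counter()
        stage(case_id)
        timings[name] = time.perf_counter() - start
    return timings

stages = {
    "routing": lambda c: time.sleep(0.01),        # fetch images for the case
    "preprocessing": lambda c: time.sleep(0.01),  # registration, skull strip
    "segmentation": lambda c: time.sleep(0.01),   # model inference
    "postprocessing": lambda c: time.sleep(0.01), # volumes, report push
}
print(run_pipeline("case-001", stages))
```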
Collapse
Affiliation(s)
- E. Lotan
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
| | - B. Zhang
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
| | - S. Dogra
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
| | | | - D. Carbone
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
| | - G. Fatterpekar
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
| | - E.K. Oermann
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.) and Neurosurgery, School of Medicine (E.K.O.), NYU Langone Health, New York, New York
| | - Y.W. Lui
- From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
| |
Collapse
|
23
|
Rahimpour M, Bertels J, Radwan A, Vandermeulen H, Sunaert S, Vandermeulen D, Maes F, Goffin K, Koole M. Cross-modal distillation to improve MRI-based brain tumor segmentation with missing MRI sequences. IEEE Trans Biomed Eng 2021; 69:2153-2164. [PMID: 34941496 DOI: 10.1109/tbme.2021.3137561] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Convolutional neural networks (CNNs) for brain tumor segmentation are generally developed using complete sets of magnetic resonance imaging (MRI) sequences for both training and inference. As such, these algorithms are not trained for realistic, clinical scenarios where some of the MRI sequences that were used for training are missing during inference. To increase clinical applicability, we proposed a cross-modal distillation approach to leverage the availability of multi-sequence MRI data for training and generate an enriched CNN model which uses only single-sequence MRI data for inference but outperforms a single-sequence CNN model. We assessed the performance of the proposed method for whole tumor and tumor core segmentation with multi-sequence MRI data available for training but only T1-weighted (T1w) sequence data available for inference, using both the BraTS 2018 and in-house datasets. Results showed that cross-modal distillation significantly improved the Dice score for both whole tumor and tumor core segmentation when only T1w sequence data were available for inference. For the evaluation using the in-house dataset, cross-modal distillation achieved an average Dice score of 79.04% and 69.39% for whole tumor and tumor core segmentation, respectively, while a single-sequence U-Net model using T1w sequence data for both training and inference achieved an average Dice score of 73.60% and 62.62%, respectively. These findings confirmed cross-modal distillation as an effective method to increase the potential of single-sequence CNN models such that segmentation performance is less compromised by missing MRI sequences or by having only one MRI sequence available for segmentation.
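A distillation objective of this kind is typically a weighted combination of a supervised loss against the ground truth and a soft-label matching term against the multi-sequence teacher. The sketch below shows one common formulation (temperature-scaled KL divergence plus cross-entropy); the temperature, weighting, and loss choices are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a cross-modal distillation loss: a T1w-only student is
# trained to match both the labels and the multi-sequence teacher's soft
# voxel predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      target: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Weighted sum of supervised CE and soft-label KL on voxel predictions."""
    supervised = F.cross_entropy(student_logits, target)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    return alpha * supervised + (1 - alpha) * (temperature ** 2) * distill

# Example: 2-class (background/tumor) voxel logits for a tiny 3-D patch.
student = torch.randn(1, 2, 8, 8, 8, requires_grad=True)
teacher = torch.randn(1, 2, 8, 8, 8)
labels = torch.randint(0, 2, (1, 8, 8, 8))
print(distillation_loss(student, teacher, labels).item())
```

At inference time only the student is kept, so the model consumes a single T1w volume while still benefiting from the multi-sequence information seen during training.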
Collapse
|