1
Yalcinkaya DM, Youssef K, Heydari B, Wei J, Merz NB, Judd R, Dharmakumar R, Simonetti OP, Weinsaft JW, Raman SV, Sharif B. Improved Robustness for Deep Learning-based Segmentation of Multi-Center Myocardial Perfusion MRI Datasets Using Data Adaptive Uncertainty-guided Space-time Analysis. arXiv 2024; arXiv:2408.04805v1. PMID: 39148930; PMCID: PMC11326424.
Abstract
Background Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. Methods Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). Results The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed the established approach on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005).
Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include blood pool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). Conclusions The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location, or scanner vendor.
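The DAUGS selection step described above (segment the test case with every pool member, then keep the solution whose uncertainty map is least uncertain) can be sketched as follows. The mean-uncertainty criterion and all names here are illustrative assumptions, not the paper's exact selection rule.

```python
# Hypothetical sketch of DAUGS-style solution selection: each DNN in the
# pool yields a segmentation plus a pixel-wise uncertainty map, and the
# candidate whose map has the lowest aggregate uncertainty is kept.

def mean_uncertainty(uncertainty_map):
    """Average pixel-wise uncertainty over a 2D map (list of rows)."""
    values = [v for row in uncertainty_map for v in row]
    return sum(values) / len(values)

def select_best_solution(candidates):
    """candidates: list of (segmentation, uncertainty_map) pairs.
    Returns the segmentation with the lowest mean uncertainty."""
    return min(candidates, key=lambda c: mean_uncertainty(c[1]))[0]

# Toy example: three pool members segment the same test case.
pool_outputs = [
    ("seg_A", [[0.9, 0.8], [0.7, 0.9]]),  # high uncertainty
    ("seg_B", [[0.1, 0.2], [0.1, 0.1]]),  # low uncertainty, so chosen
    ("seg_C", [[0.5, 0.4], [0.6, 0.5]]),
]
print(select_best_solution(pool_outputs))  # -> seg_B
```

In a real pipeline the "segmentation" entries would be label volumes and the aggregation might weight myocardial pixels differently; the point is only that selection is driven by the uncertainty maps, not by ground truth.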
Affiliation(s)
- Dilek M. Yalcinkaya
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Khalid Youssef
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
- Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
- Bobak Heydari
- Stephenson Cardiac Imaging Centre, Department of Cardiac Sciences, University of Calgary, Alberta, Canada
- Janet Wei
- Barbra Streisand Women’s Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Noel Bairey Merz
- Barbra Streisand Women’s Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Robert Judd
- Division of Cardiology, Department of Medicine, Duke University, Durham, NC, USA
- Rohan Dharmakumar
- Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Orlando P. Simonetti
- Department of Medicine, Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, USA
- Jonathan W. Weinsaft
- Division of Cardiology at NY Presbyterian Hospital, Weill Cornell Medical Center, New York, NY, USA
- Subha V. Raman
- Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
- OhioHealth, Columbus, OH, USA
- Behzad Sharif
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
2
Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. PMID: 38790322; PMCID: PMC11117895; DOI: 10.3390/bioengineering11050454.
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and their predominantly multifocal presentation, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician's manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
Affiliation(s)
- Matthew Kim
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Weiguo Lu
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Hao Jiang
- NeuralRad LLC, Madison, WI 53717, USA
- Zabi Wardak
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Cynthia Chuang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Gregory Szalkowski
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Erqi Pollom
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Elham Rahimy
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Scott Soltys
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mingli Chen
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
3
Rudie JD, Saluja R, Weiss DA, Nedelec P, Calabrese E, Colby JB, Laguna B, Mongan J, Braunstein S, Hess CP, Rauschecker AM, Sugrue LP, Villanueva-Meyer JE. The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset. Radiol Artif Intell 2024; 6:e230126. PMID: 38381038; PMCID: PMC10982817; DOI: 10.1148/ryai.230126.
Abstract
Supplemental material is available for this article.
Affiliation(s)
- Jeffrey D. Rudie, David A. Weiss, Pierre Nedelec, Evan Calabrese, John B. Colby, Benjamin Laguna, John Mongan, Steve Braunstein, Christopher P. Hess, Andreas M. Rauschecker, Leo P. Sugrue, Javier E. Villanueva-Meyer
- From the Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging (J.D.R., D.A.W., P.N., E.C., J.B.C., B.L., J.M., C.P.H., A.M.R., L.P.S., J.E.V.M.) and Department of Radiation Oncology (S.B.), University of California San Francisco, 513 Parnassus Ave, Rm S-261, Box 0628, San Francisco, CA 94143-0628; Department of Radiology, University of California San Diego, San Diego, Calif (J.D.R.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (R.S.); and Department of Radiology, Duke University School of Medicine, Durham, NC (E.C.)
4
Fairchild A, Salama JK, Godfrey D, Wiggins WF, Ackerson BG, Oyekunle T, Niedzwiecki D, Fecci PE, Kirkpatrick JP, Floyd SR. Incidence and imaging characteristics of difficult to detect retrospectively identified brain metastases in patients receiving repeat courses of stereotactic radiosurgery. J Neurooncol 2024. PMID: 38340295; DOI: 10.1007/s11060-024-04594-6.
Abstract
PURPOSE During stereotactic radiosurgery (SRS) planning for brain metastases (BM), brain MRIs are reviewed to select appropriate targets based on radiographic characteristics. Some BM are difficult to detect and/or definitively identify and may go untreated initially, only to become apparent on future imaging. We hypothesized that in patients receiving multiple courses of SRS, reviewing the initial planning MRI would reveal early evidence of lesions that developed into metastases requiring SRS. METHODS Patients undergoing two or more courses of SRS to BM within 6 months between 2016 and 2018 were included in this single-institution, retrospective study. Brain MRIs from the initial course were reviewed for lesions at the same location as subsequently treated metastases; if present, this lesion was classified as a "retrospectively identified metastasis" or RIM. RIMs were subcategorized as meeting or not meeting diagnostic imaging criteria for BM (+DC or -DC, respectively). RESULTS Among 683 patients undergoing 923 SRS courses, 98 patients met inclusion criteria. There were 115 repeat courses of SRS, with 345 treated metastases in the subsequent course, 128 of which were associated with RIMs found in a prior MRI. 58% of RIMs were +DC. 17 (15%) of subsequent courses consisted solely of metastases associated with +DC RIMs. CONCLUSION Radiographic evidence of brain metastases requiring future treatment was occasionally present on brain MRIs from prior SRS treatments. Most RIMs were +DC, and some subsequent SRS courses treated only +DC RIMs. These findings suggest enhanced BM detection might enable earlier treatment and reduce the need for additional SRS.
Affiliation(s)
- Andrew Fairchild
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Piedmont Radiation Oncology, 3333 Silas Creek Parkway, Winston Salem, NC, 27103, USA
- Joseph K Salama
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Radiation Oncology Service, Durham VA Medical Center, Durham, NC, USA
- Devon Godfrey
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Walter F Wiggins
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Bradley G Ackerson
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Taofik Oyekunle
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Donna Niedzwiecki
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Peter E Fecci
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- John P Kirkpatrick
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Scott R Floyd
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
5
Wang TW, Hsu MS, Lee WK, Pan HC, Yang HC, Lee CC, Wu YT. Brain metastasis tumor segmentation and detection using deep learning algorithms: A systematic review and meta-analysis. Radiother Oncol 2024; 190:110007. PMID: 37967585; DOI: 10.1016/j.radonc.2023.110007.
Abstract
BACKGROUND Manual detection of brain metastases is both laborious and inconsistent, driving the need for more efficient solutions. Accordingly, our systematic review and meta-analysis assessed the efficacy of deep learning algorithms in detecting and segmenting brain metastases from various primary origins in MRI images. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to May 24, 2023, which yielded 42 relevant studies for our analysis. We assessed the quality of these studies using the QUADAS-2 and CLAIM tools. Using a random-effect model, we calculated the pooled lesion-wise dice score as well as patient-wise and lesion-wise sensitivity. We performed subgroup analyses to investigate the influence of factors such as publication year, study design, training center of the model, validation methods, slice thickness, model input dimensions, MRI sequences fed to the model, and the specific deep learning algorithms employed. Additionally, meta-regression analyses were carried out considering the number of patients in the studies, count of MRI manufacturers, count of MRI models, training sample size, and lesion number. RESULTS Our analysis highlighted that deep learning models, particularly the U-Net and its variants, demonstrated superior segmentation accuracy. Enhanced detection sensitivity was observed with an increased diversity in MRI hardware, both in terms of manufacturer and model variety. Furthermore, slice thickness was identified as a significant factor influencing lesion-wise detection sensitivity. Overall, the pooled results indicated a lesion-wise dice score of 79%, with patient-wise and lesion-wise sensitivities at 86% and 87%, respectively. CONCLUSIONS The study underscores the potential of deep learning in improving brain metastasis diagnostics and treatment planning. Still, more extensive cohorts and larger meta-analyses are needed for more practical and generalizable algorithms.
Future research should prioritize these areas to advance the field. This study was funded by the Gen. & Mrs. M.C. Peng Fellowship and registered under PROSPERO (CRD42023427776).
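The random-effect pooling the review describes can be illustrated with the standard DerSimonian-Laird estimator; the sketch below uses made-up per-study Dice scores and variances, and the review's actual statistical pipeline is not specified at this level of detail.

```python
def pooled_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate.
    effects: per-study effect sizes (e.g. Dice scores);
    variances: their within-study variances."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)

# Toy example: three studies reporting lesion-wise Dice scores.
dice = [0.82, 0.75, 0.80]
var = [0.001, 0.002, 0.0015]
pooled = pooled_random_effects(dice, var)
print(round(pooled, 3))  # -> 0.798
```

When between-study heterogeneity (tau2) is zero, as in this toy case, the estimate reduces to the fixed-effect inverse-variance average; with real multi-site data tau2 is usually positive and down-weights precise outlier studies.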
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ming-Sheng Hsu
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Kai Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
- Hung-Chuan Pan
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan
- Huai-Che Yang
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Cheng-Chia Lee
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan; National Yang Ming Chiao Tung University, Brain Research Center, Taiwan; National Yang Ming Chiao Tung University, College Medical Device Innovation and Translation Center, Taiwan
6
Prezelski K, Hsu DG, del Balzo L, Heller E, Ma J, Pike LRG, Ballangrud Å, Aristophanous M. Artificial-intelligence-driven measurements of brain metastases' response to SRS compare favorably with current manual standards of assessment. Neurooncol Adv 2024; 6:vdae015. PMID: 38464949; PMCID: PMC10924534; DOI: 10.1093/noajnl/vdae015.
Abstract
Background Evaluation of treatment response for brain metastases (BMs) following stereotactic radiosurgery (SRS) becomes complex as the number of treated BMs increases. This study uses artificial intelligence (AI) to track BMs after SRS and validates its output compared with manual measurements. Methods Patients with BMs who received at least one course of SRS and followed up with MRI scans were retrospectively identified. A tool for automated detection, segmentation, and tracking of intracranial metastases on longitudinal imaging, MEtastasis Tracking with Repeated Observations (METRO), was applied to the dataset. The longest three-dimensional (3D) diameter identified with METRO was compared with manual measurements of maximum axial BM diameter, and their correlation was analyzed. Change in size of the measured BM identified with METRO after SRS treatment was used to classify BMs as responding, or not responding, to treatment, and its accuracy was determined relative to manual measurements. Results From 71 patients, 176 BMs were identified and measured with METRO and manual methods. Based on a one-to-one correlation analysis, the correlation coefficient was R2 = 0.76 (P = .0001). Using modified response classifications based on change in BM size, the longest 3D diameter data identified with METRO had a sensitivity of 0.72 and a specificity of 0.95 in identifying lesions that responded to SRS, when using manual axial diameter measurements as the ground truth. Conclusions Using AI to automatically measure and track BM volumes following SRS treatment, this study showed a strong correlation between AI-driven measurements and the current clinically used method: manual axial diameter measurements.
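The validation step above can be sketched as labeling each lesion by fractional diameter change and then scoring the automated labels against the manual ones. The 30% shrinkage threshold and the toy diameters below are illustrative assumptions, not METRO's actual response criteria.

```python
def responded(d_before, d_after, shrink_threshold=0.30):
    """True if the lesion's diameter shrank by at least the threshold fraction."""
    return (d_before - d_after) / d_before >= shrink_threshold

def sensitivity_specificity(predicted, truth):
    """Sensitivity and specificity of predicted responder labels vs. ground truth."""
    tp = sum(1 for p, t in zip(predicted, truth) if p and t)
    tn = sum(1 for p, t in zip(predicted, truth) if not p and not t)
    fn = sum(1 for p, t in zip(predicted, truth) if not p and t)
    fp = sum(1 for p, t in zip(predicted, truth) if p and not t)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort: (before, after) diameters in mm from the automated and manual readers.
auto_pairs = [(10, 6), (8, 7), (12, 12), (15, 9), (9, 8)]
manual_pairs = [(11, 6), (8, 5), (12, 11), (14, 9), (9, 9)]
auto_labels = [responded(a, b) for a, b in auto_pairs]
manual_labels = [responded(a, b) for a, b in manual_pairs]
sens, spec = sensitivity_specificity(auto_labels, manual_labels)
print(round(sens, 3), spec)  # -> 0.667 1.0
```

The study's numbers (sensitivity 0.72, specificity 0.95) come from this kind of confusion-matrix comparison, with manual axial diameters as ground truth.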
Affiliation(s)
- Kayla Prezelski
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Saint Louis University School of Medicine, St. Louis, Missouri, USA
- Dylan G Hsu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Luke del Balzo
- Medical College of Georgia, Athens, Georgia, USA
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Erica Heller
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jennifer Ma
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Luke R G Pike
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Biomarker Development Program, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michalis Aristophanous
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
7
Yalcinkaya DM, Youssef K, Heydari B, Simonetti O, Dharmakumar R, Raman S, Sharif B. Temporal Uncertainty Localization to Enable Human-in-the-loop Analysis of Dynamic Contrast-enhanced Cardiac MRI Datasets. arXiv 2023; arXiv:2308.13488v2. PMID: 37664410; PMCID: PMC10473819.
Abstract
Dynamic contrast-enhanced (DCE) cardiac magnetic resonance imaging (CMRI) is a widely used modality for diagnosing myocardial blood flow (perfusion) abnormalities. During a typical free-breathing DCE-CMRI scan, close to 300 time-resolved images of myocardial perfusion are acquired at various contrast "wash in/out" phases. Manual segmentation of myocardial contours in each time-frame of a DCE image series can be tedious and time-consuming, particularly when non-rigid motion correction has failed or is unavailable. While deep neural networks (DNNs) have shown promise for analyzing DCE-CMRI datasets, a "dynamic quality control" (dQC) technique for reliably detecting failed segmentations is lacking. Here we propose a new space-time uncertainty metric as a dQC tool for DNN-based segmentation of free-breathing DCE-CMRI datasets by validating the proposed metric on an external dataset and establishing a human-in-the-loop framework to improve the segmentation results. In the proposed approach, we referred the top 10% most uncertain segmentations as detected by our dQC tool to the human expert for refinement. This approach resulted in a significant increase in the Dice score (p < 0.001) and a notable decrease in the number of images with failed segmentation (16.2% to 11.3%) whereas the alternative approach of randomly selecting the same number of segmentations for human referral did not achieve any significant improvement. Our results suggest that the proposed dQC framework has the potential to accurately identify poor-quality segmentations and may enable efficient DNN-based analysis of DCE-CMRI in a human-in-the-loop pipeline for clinical interpretation and reporting of dynamic CMRI datasets.
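The human-in-the-loop referral rule is simple to express: rank frames by the dQC uncertainty metric and send the top decile to the expert. The sketch below assumes a per-frame scalar score; the space-time uncertainty metric itself is the paper's contribution and is not reproduced here.

```python
def refer_most_uncertain(dqc_scores, fraction=0.10):
    """Indices of the highest-uncertainty segmentations to refer for expert review."""
    n_refer = max(1, round(len(dqc_scores) * fraction))
    ranked = sorted(range(len(dqc_scores)), key=lambda i: dqc_scores[i], reverse=True)
    return ranked[:n_refer]

# Toy example: dQC scores for 10 segmented time-frames.
scores = [0.12, 0.45, 0.08, 0.91, 0.33, 0.27, 0.05, 0.60, 0.18, 0.22]
print(refer_most_uncertain(scores))  # top 10% of 10 frames -> [3]
```

The paper's control condition (randomly referring the same number of frames) amounts to replacing the sort with a random sample, which is why the comparison isolates the value of the uncertainty ranking.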
Affiliation(s)
- Dilek M Yalcinkaya
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
- Elmore Family School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, USA
- Khalid Youssef
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Bobak Heydari
- Stephenson Cardiac Imaging Centre, University of Calgary, Alberta, Canada
- Orlando Simonetti
- Department of Internal Medicine, Division of Cardiovascular Medicine, Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, USA
- Rohan Dharmakumar
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Weldon School of Biomedical Eng., Purdue University, West Lafayette, IN, USA
- Subha Raman
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Weldon School of Biomedical Eng., Purdue University, West Lafayette, IN, USA
- Behzad Sharif
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Weldon School of Biomedical Eng., Purdue University, West Lafayette, IN, USA
8
Wahlig SG, Nedelec P, Weiss DA, Rudie JD, Sugrue LP, Rauschecker AM. 3D U-Net for automated detection of multiple sclerosis lesions: utility of transfer learning from other pathologies. Front Neurosci 2023; 17:1188336. PMID: 37965219; PMCID: PMC10641790; DOI: 10.3389/fnins.2023.1188336.
Abstract
Background and purpose Deep learning algorithms for segmentation of multiple sclerosis (MS) plaques generally require training on large datasets. This manuscript evaluates the effect of transfer learning from segmentation of another pathology to facilitate use of smaller MS-specific training datasets. That is, a model trained for detection of one type of pathology was re-trained to identify MS lesions and active demyelination. Materials and methods In this retrospective study using MRI exams from 149 patients spanning 4/18/2014 to 7/8/2021, 3D convolutional neural networks were trained with a variable number of manually segmented MS studies. Models were trained for FLAIR lesion segmentation at a single timepoint, new FLAIR lesion segmentation comparing two timepoints, and enhancing (actively demyelinating) lesion segmentation on T1 post-contrast imaging. Models were trained either de novo or fine-tuned with transfer learning applied to a pre-existing model initially trained on non-MS data. Performance was evaluated with lesionwise sensitivity and positive predictive value (PPV). Results For single timepoint FLAIR lesion segmentation with 10 training studies, a fine-tuned model demonstrated improved performance [lesionwise sensitivity 0.55 ± 0.02 (mean ± standard error), PPV 0.66 ± 0.02] compared to a de novo model (sensitivity 0.49 ± 0.02, p = 0.001; PPV 0.32 ± 0.02, p < 0.001). For new lesion segmentation with 30 training studies and their prior comparisons, a fine-tuned model demonstrated similar sensitivity (0.49 ± 0.05) and significantly improved PPV (0.60 ± 0.05) compared to a de novo model (sensitivity 0.51 ± 0.04, p = 0.437; PPV 0.43 ± 0.04, p = 0.002). For enhancement segmentation with 20 training studies, a fine-tuned model demonstrated significantly improved overall performance (sensitivity 0.74 ± 0.06, PPV 0.69 ± 0.05) compared to a de novo model (sensitivity 0.44 ± 0.09, p = 0.001; PPV 0.37 ± 0.05, p = 0.001).
Conclusion By fine-tuning models trained for other disease pathologies with MS-specific data, competitive models identifying existing MS plaques, new MS plaques, and active demyelination can be built with substantially smaller datasets than would otherwise be required to train new models.
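The fine-tuning strategy described above reuses weights learned on another pathology and re-initializes only the task-specific parts of the network. A minimal NumPy sketch of that weight-transfer step, assuming a toy model whose layer names and shapes are hypothetical (not taken from the paper's architecture):

```python
import numpy as np

def transfer_weights(pretrained, target_shapes, rng):
    """Initialize a new model from a pretrained one (hypothetical helper).

    Layers whose weight shapes match are copied over (transfer learning);
    the rest (e.g. a task-specific output head with a different number of
    classes) are re-initialized from scratch.
    """
    new_weights, copied = {}, []
    for name, shape in target_shapes.items():
        w = pretrained.get(name)
        if w is not None and w.shape == shape:
            new_weights[name] = w.copy()  # reuse features learned on non-MS data
            copied.append(name)
        else:
            new_weights[name] = rng.normal(0.0, 0.01, size=shape)  # fresh head
    return new_weights, copied

rng = np.random.default_rng(0)
# Hypothetical 3-layer segmentation net pretrained on another pathology.
pretrained = {
    "enc.conv1": rng.normal(size=(16, 1, 3, 3, 3)),
    "enc.conv2": rng.normal(size=(32, 16, 3, 3, 3)),
    "head": rng.normal(size=(4, 32, 1, 1, 1)),  # 4 output classes
}
# The MS model keeps the encoder but needs a 2-class head (lesion vs background).
target_shapes = {
    "enc.conv1": (16, 1, 3, 3, 3),
    "enc.conv2": (32, 16, 3, 3, 3),
    "head": (2, 32, 1, 1, 1),
}
weights, copied = transfer_weights(pretrained, target_shapes, rng)
```

In practice the transferred weights would then be fine-tuned, typically with a reduced learning rate, on the small MS-specific training set.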
Affiliation(s)
- Stephen G. Wahlig
- Center for Intelligent Imaging (ci2), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Pierre Nedelec
- Center for Intelligent Imaging (ci2), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- David A. Weiss
- Center for Intelligent Imaging (ci2), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Jeffrey D. Rudie
- Center for Intelligent Imaging (ci2), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Department of Radiology, University of California, San Diego, San Diego, CA, United States
- Leo P. Sugrue
- Center for Intelligent Imaging (ci2), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Andreas M. Rauschecker
- Center for Intelligent Imaging (ci2), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
9
Yalcinkaya DM, Youssef K, Heydari B, Simonetti O, Dharmakumar R, Raman S, Sharif B. Temporal Uncertainty Localization to Enable Human-in-the-Loop Analysis of Dynamic Contrast-Enhanced Cardiac MRI Datasets. Med Image Comput Comput Assist Interv 2023;14222:453-462. [PMID: 38204763; PMCID: PMC10775176; DOI: 10.1007/978-3-031-43898-1_44]
Abstract
Dynamic contrast-enhanced (DCE) cardiac magnetic resonance imaging (CMRI) is a widely used modality for diagnosing myocardial blood flow (perfusion) abnormalities. During a typical free-breathing DCE-CMRI scan, close to 300 time-resolved images of myocardial perfusion are acquired at various contrast "wash in/out" phases. Manual segmentation of myocardial contours in each time-frame of a DCE image series can be tedious and time-consuming, particularly when non-rigid motion correction has failed or is unavailable. While deep neural networks (DNNs) have shown promise for analyzing DCE-CMRI datasets, a "dynamic quality control" (dQC) technique for reliably detecting failed segmentations is lacking. Here we propose a new space-time uncertainty metric as a dQC tool for DNN-based segmentation of free-breathing DCE-CMRI datasets by validating the proposed metric on an external dataset and establishing a human-in-the-loop framework to improve the segmentation results. In the proposed approach, we referred the top 10% most uncertain segmentations as detected by our dQC tool to the human expert for refinement. This approach resulted in a significant increase in the Dice score (p < 0.001) and a notable decrease in the number of images with failed segmentation (16.2% to 11.3%) whereas the alternative approach of randomly selecting the same number of segmentations for human referral did not achieve any significant improvement. Our results suggest that the proposed dQC framework has the potential to accurately identify poor-quality segmentations and may enable efficient DNN-based analysis of DCE-CMRI in a human-in-the-loop pipeline for clinical interpretation and reporting of dynamic CMRI datasets.
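The human-in-the-loop triage step above, referring the top 10% most uncertain segmentations for expert refinement, can be sketched as follows. The scalar per-case uncertainty score is an assumption of this sketch; the paper derives it from a space-time uncertainty map produced during sliding-patch segmentation:

```python
import numpy as np

def refer_most_uncertain(uncertainty_scores, frac=0.10):
    """Return indices of the top `frac` most uncertain segmentations,
    highest uncertainty first, for referral to a human expert."""
    scores = np.asarray(uncertainty_scores, dtype=float)
    n_refer = max(1, int(round(frac * scores.size)))
    return np.argsort(scores)[::-1][:n_refer]

# Toy example: 20 segmented cases with synthetic uncertainty scores.
rng = np.random.default_rng(7)
scores = rng.random(20)
referred = refer_most_uncertain(scores)  # 10% of 20 cases -> 2 referrals
```

The remaining 90% of cases would be accepted as-is; only the referred indices go back to the expert for manual refinement.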
Affiliation(s)
- Dilek M Yalcinkaya
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Khalid Youssef
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Bobak Heydari
- Stephenson Cardiac Imaging Centre, University of Calgary, Alberta, Canada
- Orlando Simonetti
- Department of Internal Medicine, Division of Cardiovascular Medicine, Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, USA
- Rohan Dharmakumar
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Subha Raman
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Behzad Sharif
- Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
- Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
10
Heyn C, Moody AR, Tseng CL, Wong E, Kang T, Kapadia A, Howard P, Maralani P, Symons S, Goubran M, Martel A, Chen H, Myrehaug S, Detsky J, Sahgal A, Soliman H. Segmentation of Brain Metastases Using Background Layer Statistics (BLAST). AJNR Am J Neuroradiol 2023;44:1135-1143. [PMID: 37735088; PMCID: PMC10549939; DOI: 10.3174/ajnr.a7998]
Abstract
BACKGROUND AND PURPOSE Accurate segmentation of brain metastases is important for treatment planning and evaluating response. The aim of this study was to assess the performance of a semiautomated algorithm for brain metastases segmentation using Background Layer Statistics (BLAST). MATERIALS AND METHODS Nineteen patients with 48 parenchymal and dural brain metastases were included. Segmentation was performed by 4 neuroradiologists and 1 radiation oncologist. K-means clustering was used to identify normal gray and white matter (background layer) in a 2D parameter space of signal intensities from postcontrast T2 FLAIR and T1 MPRAGE sequences. The background layer was subtracted and operator-defined thresholds were applied in parameter space to segment brain metastases. The remaining voxels were back-projected to visualize segmentations in image space and evaluated by the operators. Segmentation performance was measured by calculating the Dice-Sørensen coefficient and Hausdorff distance using ground truth segmentations made by the investigators. Contours derived from the segmentations were evaluated for clinical acceptance using a 5-point Likert scale. RESULTS The median Dice-Sørensen coefficient was 0.82 for all brain metastases and 0.9 for brain metastases of ≥10 mm. The median Hausdorff distance was 1.4 mm. Excellent interreader agreement for brain metastases volumes was found with an intraclass correlation coefficient = 0.9978. The median segmentation time was 2.8 minutes/metastasis. Forty-five contours (94%) had a Likert score of 4 or 5, indicating that the contours were acceptable for treatment, requiring no changes or minor edits. CONCLUSIONS We show accurate and reproducible segmentation of brain metastases using BLAST and demonstrate its potential as a tool for radiation planning and evaluating treatment response.
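The two segmentation metrics reported above can be computed directly from binary masks. A minimal NumPy sketch (the brute-force pairwise Hausdorff distance here is fine for small masks but is not optimized for full 3D volumes):

```python
import numpy as np

def dice(a, b):
    """Dice-Sørensen coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance (in voxel units) between the
    foreground point sets of two binary masks."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    # Pairwise distances between all foreground voxels of a and b.
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy ground-truth and predicted masks: two overlapping 4x4 squares.
gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True
d = dice(gt, pred)        # 0.5625
hd = hausdorff(gt, pred)  # sqrt(2), i.e. one diagonal voxel step
```

For in-plane distances in millimeters (as reported in the study), the voxel coordinates would first be scaled by the pixel spacing.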
Affiliation(s)
- Chris Heyn
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Alan R Moody
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Chia-Lin Tseng
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Erin Wong
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Tony Kang
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Anish Kapadia
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Peter Howard
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Pejman Maralani
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sean Symons
- Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Maged Goubran
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Department of Medical Biophysics (M.G., A.M.), University of Toronto, Toronto, Ontario, Canada
- Anne Martel
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Department of Medical Biophysics (M.G., A.M.), University of Toronto, Toronto, Ontario, Canada
- Hanbo Chen
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sten Myrehaug
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Jay Detsky
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Arjun Sahgal
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Hany Soliman
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
11
Mahajan A, Burrewar M, Agarwal U, Kss B, Mlv A, Guha A, Sahu A, Choudhari A, Pawar V, Punia V, Epari S, Sahay A, Gupta T, Chinnaswamy G, Shetty P, Moiyadi A. Deep learning based clinico-radiological model for paediatric brain tumor detection and subtype prediction. Explor Target Antitumor Ther 2023;4:669-684. [PMID: 37720352; PMCID: PMC10501890; DOI: 10.37349/etat.2023.00159]
Abstract
Aim Early diagnosis of paediatric brain tumors significantly improves the outcome. The aim is to study magnetic resonance imaging (MRI) features of paediatric brain tumors and to develop an automated segmentation (AS) tool which could segment and classify tumors using deep learning methods and compare with radiologist assessment. Methods This study included 94 cases, of which 75 were diagnosed cases of ependymoma, medulloblastoma, brainstem glioma, and pilocytic astrocytoma and 19 were normal MRI brain cases. The data were randomized into training data (64 cases), test data (21 cases), and validation data (9 cases) to devise a deep learning algorithm to segment the paediatric brain tumor. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the deep learning model were compared with the radiologist's findings. Performance evaluation of AS was done based on Dice score and Hausdorff95 distance. Results Analysis of MRI semantic features identified necrosis and haemorrhage as predictive features for ependymoma, while diffusion restriction and cystic changes were predictors for medulloblastoma. The accuracy of detecting abnormalities was 90%, with a specificity of 100%. Further segmentation of the tumor into enhancing and non-enhancing components was done. The segmentation results for whole tumor (WT), enhancing tumor (ET), and non-enhancing tumor (NET) were analyzed by Dice score and Hausdorff95 distance. The accuracy of prediction of all MRI features was compared with the experienced radiologist's findings. Substantial agreement was observed between the model's classification and the radiologist's classification [κ = 0.695, where κ is Cohen's kappa score for interrater reliability]. Conclusions The deep learning model had very high accuracy and specificity for predicting the magnetic resonance (MR) characteristics and close to 80% accuracy in predicting tumor type.
This model can serve as a potential tool to make a timely and accurate diagnosis for radiologists not trained in neuroradiology.
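The interrater statistic reported above, Cohen's kappa, corrects the observed agreement between two raters for the agreement expected by chance. A minimal NumPy sketch with hypothetical labels (not the study's data):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two equal-length lists of categorical labels."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    classes = np.union1d(a, b)
    p_o = float(np.mean(a == b))  # observed agreement
    # Chance agreement: product of each rater's marginal class frequencies.
    p_e = float(sum(np.mean(a == c) * np.mean(b == c) for c in classes))
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical subtype calls for six cases: model vs radiologist.
model = ["ependymoma", "ependymoma", "medulloblastoma",
         "medulloblastoma", "glioma", "glioma"]
radiologist = ["ependymoma", "ependymoma", "medulloblastoma",
               "medulloblastoma", "glioma", "medulloblastoma"]
kappa = cohens_kappa(model, radiologist)  # 0.75 for this toy example
```

Values around 0.61-0.80 are conventionally read as substantial agreement, which is the range the study's κ = 0.695 falls into.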
Affiliation(s)
- Abhishek Mahajan
- Clatterbridge Centre for Oncology NHS Foundation Trust, L7 8YA, Liverpool, UK
- Mayur Burrewar
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Ujjwal Agarwal
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Apparao Mlv
- Endimension Technology Pvt Ltd, Maharashtra, India
- Amrita Guha
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Arpita Sahu
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Amit Choudhari
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Vivek Pawar
- Endimension Technology Pvt Ltd, Maharashtra, India
- Vivek Punia
- Endimension Technology Pvt Ltd, Maharashtra, India
- Sridhar Epari
- Department of Pathology, Tata Memorial Hospital, Parel, Mumbai 400012, India
- Ayushi Sahay
- Department of Pathology, Tata Memorial Hospital, Parel, Mumbai 400012, India
- Tejpal Gupta
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Girish Chinnaswamy
- Department of Paediatric Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, India
- Prakash Shetty
- Department of Surgical Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, India
- Aliasgar Moiyadi
- Department of Surgical Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, India
12
Hsu DG, Ballangrud Å, Prezelski K, Swinburne NC, Young R, Beal K, Deasy JO, Cerviño L, Aristophanous M. Automatically tracking brain metastases after stereotactic radiosurgery. Phys Imaging Radiat Oncol 2023;27:100452. [PMID: 37720463; PMCID: PMC10500025; DOI: 10.1016/j.phro.2023.100452]
Abstract
Background and purpose Patients with brain metastases (BMs) are surviving longer and returning for multiple courses of stereotactic radiosurgery. BMs are monitored after radiation with follow-up magnetic resonance (MR) imaging every 2-3 months. This study investigated whether it is possible to automatically track BMs on longitudinal imaging and quantify the tumor response after radiotherapy. Methods The METRO process (MEtastasis Tracking with Repeated Observations) was developed to automatically process patient data and track BMs. A longitudinal intrapatient registration method for post-Gd T1 MR images was conceived and validated on 20 patients. Detections and volumetric measurements of BMs were obtained from a deep learning model. BM tracking was validated on 32 separate patients by comparing results with manual measurements of BM response and radiologists' assessments of new BMs. Linear regression and residual analysis were used to assess accuracy in determining tumor response and size change. Results A total of 123 irradiated BMs and 38 new BMs were successfully tracked. Sixty-six irradiated BMs were visible on follow-up imaging 3-9 months after radiotherapy. Comparing their longest diameter changes measured manually vs. METRO, the Pearson correlation coefficient was 0.88 (p < 0.001); the mean residual error was -8 ± 17%. The mean registration error was 1.5 ± 0.2 mm. Conclusions Automatic, longitudinal tracking of BMs using deep learning methods is feasible. In particular, the software system METRO fulfills a need to automatically track and quantify volumetric changes of BMs prior to, and in response to, radiation therapy.
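The agreement analysis above, Pearson correlation plus residuals between manual and automated longest-diameter changes, can be sketched as follows. The diameter-change values here are illustrative, not the study's measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# Hypothetical longest-diameter changes (%) for five tracked metastases,
# measured manually and by an automated pipeline.
manual = np.array([-40.0, -10.0, 5.0, 30.0, -25.0])
auto = np.array([-45.0, -12.0, 0.0, 25.0, -30.0])

r = pearson_r(manual, auto)
residuals = auto - manual              # automated minus manual, in % points
mean_res = residuals.mean()            # systematic bias
sd_res = residuals.std(ddof=1)         # spread of the disagreement
```

Reporting the mean ± SD of the residuals alongside r, as the study does, separates systematic bias from random measurement disagreement.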
Affiliation(s)
- Dylan G. Hsu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Kayla Prezelski
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Nathaniel C. Swinburne
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Robert Young
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Kathryn Beal
- Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, United States
- Joseph O. Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Laura Cerviño
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Michalis Aristophanous
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
13
Kim DY, Woo S, Roh JY, Choi JY, Kim KA, Cha JY, Kim N, Kim SJ. Subregional pharyngeal changes after orthognathic surgery in skeletal Class III patients analyzed by convolutional neural networks-based segmentation. J Dent 2023:104565. [PMID: 37308053; DOI: 10.1016/j.jdent.2023.104565]
Abstract
OBJECTIVES To evaluate the accuracy of fully automatic segmentation of pharyngeal volumes of interest (VOIs) before and after orthognathic surgery in skeletal Class III patients using a convolutional neural network (CNN) model and to investigate the clinical applicability of artificial intelligence for quantitative evaluation of treatment changes in pharyngeal VOIs. METHODS 310 cone-beam computed tomography (CBCT) images were divided into a training set (n=150), validation set (n=40), and test set (n=120). The test datasets comprised matched pairs of pre- and posttreatment images of 60 skeletal Class III patients (mean age 23.1±5.0 years; ANB < -2°) who underwent bimaxillary orthognathic surgery with orthodontic treatment. A 3D U-Net CNN model was applied for fully automatic segmentation and measurement of subregional pharyngeal volumes on pretreatment (T0) and posttreatment (T1) scans. The model's accuracy was compared to semi-automatic segmentation outcomes by humans using the Dice similarity coefficient (DSC) and volume similarity (VS). The correlation between surgical skeletal changes and model accuracy was obtained. RESULTS The proposed model achieved high performance for subregional pharyngeal segmentation on both T0 and T1 images, with a significant T1-T0 difference in DSC only in the nasopharynx. Region-specific differences among pharyngeal VOIs, which were observed at T0, disappeared on the T1 images. The decreased DSC of nasopharyngeal segmentation after treatment was weakly correlated with the amount of maxillary advancement. There was no correlation between the mandibular setback amount and model accuracy. CONCLUSIONS The proposed model offers fast and accurate subregional pharyngeal segmentation on both pretreatment and posttreatment CBCT images in skeletal Class III patients.
CLINICAL SIGNIFICANCE We elucidated the clinical applicability of the CNN model to quantitatively evaluate subregional pharyngeal changes after surgical-orthodontic treatment, which offers a basis for developing a fully integrated multiclass CNN model to predict pharyngeal responses after dentoskeletal treatments.
Affiliation(s)
- Dong-Yul Kim
- Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Seoyeon Woo
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jae-Yon Roh
- Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jin-Young Choi
- Department of Orthodontics, Kyung Hee University Dental Hospital, 23, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Kyung-A Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jung-Yul Cha
- Department of Orthodontics, The Institute of Craniofacial Deformity, College of Dentistry, Yonsei University, 50-1 Yonseiro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Su-Jung Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
14
Avesta A, Hui Y, Aboian M, Duncan J, Krumholz HM, Aneja S. 3D Capsule Networks for Brain Image Segmentation. AJNR Am J Neuroradiol 2023;44:562-568. [PMID: 37080721; PMCID: PMC10171390; DOI: 10.3174/ajnr.a7845]
Abstract
BACKGROUND AND PURPOSE Current autosegmentation models such as UNets and nnUNets have limitations, including the inability to segment images that are not represented during training and lack of computational efficiency. 3D capsule networks have the potential to address these limitations. MATERIALS AND METHODS We used 3430 brain MRIs, acquired in a multi-institutional study, to train and validate our models. We compared our capsule network with standard alternatives, UNets and nnUNets, on the basis of segmentation efficacy (Dice scores), segmentation performance when the image is not well-represented in the training data, performance when the training data are limited, and computational efficiency including required memory and computational speed. RESULTS The capsule network segmented the third ventricle, thalamus, and hippocampus with Dice scores of 95%, 94%, and 92%, respectively, which were within 1% of the Dice scores of UNets and nnUNets. The capsule network significantly outperformed UNets in segmenting images that were not well-represented in the training data, with Dice scores 30% higher. The computational memory required for the capsule network is less than one-tenth of the memory required for UNets or nnUNets. The capsule network is also >25% faster to train compared with UNet and nnUNet. CONCLUSIONS We developed and validated a capsule network that is effective in segmenting brain images, can segment images that are not well-represented in the training data, and is computationally efficient compared with alternatives.
Affiliation(s)
- A Avesta
- From the Department of Radiology and Biomedical Imaging (A.A., M.A., J.D.)
- Department of Therapeutic Radiology (A.A., Y.H., S.A.)
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- Y Hui
- Department of Therapeutic Radiology (A.A., Y.H., S.A.)
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- M Aboian
- Department of Radiology and Biomedical Imaging (A.A., M.A., J.D.)
- J Duncan
- Department of Radiology and Biomedical Imaging (A.A., M.A., J.D.)
- Departments of Statistics and Data Science (J.D.)
- Biomedical Engineering (J.D., S.A.), Yale University, New Haven, Connecticut
- H M Krumholz
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- Division of Cardiovascular Medicine (H.M.K.), Yale School of Medicine, New Haven, Connecticut
- S Aneja
- Department of Therapeutic Radiology (A.A., Y.H., S.A.)
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- Biomedical Engineering (J.D., S.A.), Yale University, New Haven, Connecticut
15
A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023;115:779-793. [PMID: 36289038; DOI: 10.1016/j.ijrobp.2022.09.068]
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) data set containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent 2 consecutive courses of SRS, treatment planning MRIs from their initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in their second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases based upon their appearance in the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who only underwent a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMS, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79. 
CONCLUSIONS Our CAD model, trained on a novel data set and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs, the lesion types most in need of human performance augmentation.
16
Avesta A, Hossain S, Lin M, Aboian M, Krumholz HM, Aneja S. Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation. Bioengineering (Basel) 2023;10:181. [PMID: 36829675; PMCID: PMC9952534; DOI: 10.3390/bioengineering10020181]
Abstract
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory compared to 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models.
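The three input strategies compared above differ only in how the volume is sliced before it reaches the network. A minimal NumPy sketch of the data-preparation step (the array shapes and slice index are illustrative):

```python
import numpy as np

def make_inputs(volume, z, k=2):
    """Build 2D, 2.5D, and 3D network inputs from a (slices, H, W) volume.

    2D:   one axial slice at index z.
    2.5D: 2k + 1 consecutive slices centered on z (five slices for k=2).
    3D:   the entire volume.
    """
    x2d = volume[z]
    x25d = volume[z - k : z + k + 1]
    x3d = volume
    return x2d, x25d, x3d

# Toy brain MRI volume: 64 axial slices of 128 x 128 pixels.
vol = np.zeros((64, 128, 128), dtype=np.float32)
x2d, x25d, x3d = make_inputs(vol, z=32)
```

The memory trade-off reported in the study follows directly from these shapes: the 3D input carries the whole volume through the network at once, while the 2D and 2.5D inputs process only one or five slices per forward pass.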
Affiliation(s)
- Arman Avesta
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Sajid Hossain
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Visage Imaging, Inc., San Diego, CA 92130, USA
- Mariam Aboian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Harlan M. Krumholz
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Division of Cardiovascular Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06510, USA
- Correspondence: Tel.: +1-203-200-2100; Fax: +1-203-737-1467
17.
Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. [PMID: 36635582 DOI: 10.1007/s11060-022-04234-x]
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomic-based analysis was another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and further investigations are summarized in this article.
18.
Li R, Guo Y, Zhao Z, Chen M, Liu X, Gong G, Wang L. MRI-based two-stage deep learning model for automatic detection and segmentation of brain metastases. Eur Radiol 2023; 33:3521-3531. [PMID: 36695903 DOI: 10.1007/s00330-023-09420-7]
Abstract
OBJECTIVES To develop and validate a two-stage deep learning model for automatic detection and segmentation of brain metastases (BMs) in MRI images. METHODS In this retrospective study, T1-weighted (T1) and T1-weighted contrast-enhanced (T1ce) MRI images of 649 patients who underwent radiotherapy from August 2019 to January 2022 were included. A total of 5163 metastases were manually annotated by neuroradiologists. A two-stage deep learning model was developed for automatic detection and segmentation of BMs, consisting of a lightweight segmentation network for generating metastasis proposals and a multi-scale classification network for false-positive suppression. Its performance was evaluated by sensitivity, precision, F1-score, Dice score, and relative volume difference (RVD). RESULTS Six hundred forty-nine patients were randomly divided into training (n = 295), validation (n = 99), and testing (n = 255) sets. The proposed two-stage model achieved a sensitivity of 90% (1463/1632) and a precision of 56% (1463/2629) on the testing set, outperforming one-stage methods based on a single-shot detector, 3D U-Net, and nnU-Net, whose sensitivities were 78% (1276/1632), 79% (1290/1632), and 87% (1426/1632), and whose precisions were 40% (1276/3222), 51% (1290/2507), and 53% (1426/2688), respectively. Particularly for BMs smaller than 5 mm, the proposed model achieved a sensitivity of 66% (116/177), far superior to the one-stage models (21% (37/177), 36% (64/177), and 53% (93/177)). Furthermore, it also achieved high segmentation performance, with an average Dice score of 81% and an average RVD of 20%. CONCLUSION A two-stage deep learning model can detect and segment BMs with high sensitivity and low volume error. KEY POINTS • A two-stage deep learning model based on triple-channel MRI images identified brain metastases with 90% sensitivity and 56% precision. • For brain metastases smaller than 5 mm, the proposed two-stage model achieved 66% sensitivity and 22% precision.
• For segmentation of brain metastases, the proposed two-stage model achieved a Dice score of 81% and a relative volume difference (RVD) of 20%.
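The lesion-wise figures above follow directly from raw counts; a quick sketch of the arithmetic (the lesion-matching criterion itself is the paper's and is not reproduced here):

```python
def detection_metrics(tp, fp, fn):
    """Lesion-wise sensitivity (recall), precision, and F1 from raw counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

# Counts reported for the proposed two-stage model on the testing set:
# 1463 true positives out of 1632 metastases, 2629 total detections.
sens, prec, f1 = detection_metrics(tp=1463, fp=2629 - 1463, fn=1632 - 1463)
```

Here `sens` rounds to 0.90 and `prec` to 0.56, matching the reported 90% sensitivity and 56% precision.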
Affiliation(s)
- Ruikun Li
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yujie Guo
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Zhongchen Zhao
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Mingming Chen
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Guanzhong Gong
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Lisheng Wang
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
19.
Ottesen JA, Yi D, Tong E, Iv M, Latysheva A, Saxhaug C, Jacobsen KD, Helland Å, Emblem KE, Rubin DL, Bjørnerud A, Zaharchuk G, Grøvik E. 2.5D and 3D segmentation of brain metastases with deep learning on multinational MRI data. Front Neuroinform 2023; 16:1056068. [PMID: 36743439 PMCID: PMC9889663 DOI: 10.3389/fninf.2022.1056068]
Abstract
Introduction Management of patients with brain metastases is often based on manual lesion detection and segmentation by an expert reader. This is a time- and labor-intensive process, and to that end, this work proposes an end-to-end deep learning segmentation network that handles a varying number of available MRI sequences. Methods We adapted and evaluated a 2.5D and a 3D convolutional neural network trained and tested on a retrospective multinational study from two independent centers; in addition, nnU-Net was adapted as a comparative benchmark. Segmentation and detection performance were evaluated by: (1) the Dice similarity coefficient, (2) per-metastasis and average detection sensitivity, and (3) the number of false positives. Results The 2.5D and 3D models achieved similar results, although the 2.5D model had a better detection rate, the 3D model had fewer false-positive predictions, and nnU-Net had the fewest false positives but the lowest detection rate. On MRI data from center 1, the 2.5D, 3D, and nnU-Net models detected 79%, 71%, and 65% of all metastases; had an average per-patient sensitivity of 0.88, 0.84, and 0.76; and had on average 6.2, 3.2, and 1.7 false-positive predictions per patient, respectively. For center 2, the 2.5D, 3D, and nnU-Net models detected 88%, 86%, and 78% of all metastases; had an average per-patient sensitivity of 0.92, 0.91, and 0.85; and had on average 1.0, 0.4, and 0.1 false-positive predictions per patient, respectively. Discussion/Conclusion Our results show that deep learning can yield highly accurate segmentations of brain metastases with few false positives in multinational data, but the accuracy degrades for metastases with an area smaller than 0.4 cm2.
Affiliation(s)
- Jon André Ottesen (correspondence)
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Darvin Yi
- Department of Ophthalmology, University of Illinois, Chicago, IL, United States
- Elizabeth Tong
- Department of Radiology, Stanford University, Stanford, CA, United States
- Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, United States
- Anna Latysheva
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Cathrine Saxhaug
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Åslaug Helland
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Kyrre Eeg Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Daniel L. Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Atle Bjørnerud
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, United States
- Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
20.
Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15:334. [PMID: 36672286 PMCID: PMC9857123 DOI: 10.3390/cancers15020334]
Abstract
Since manual detection of brain metastases (BMs) is time consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted until 30 September 2022. Inclusion criteria were: patients with BMs; deep learning using MRI images was applied to detect the BMs; sufficient data were present in terms of detection performance; original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data in terms of detection performance; machine learning was used to detect BMs; articles not written in English. The Quality Assessment of Diagnostic Accuracy Studies-2 and the Checklist for Artificial Intelligence in Medical Imaging were used to assess quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of patient-wise and lesion-wise detectability was 89%. Articles should adhere to the checklists more strictly. Deep learning algorithms effectively detect BMs. Pooled analysis of false positive rates could not be estimated due to reporting differences.
21.
Tran CBN, Nedelec P, Weiss DA, Rudie JD, Kini L, Sugrue LP, Glenn OA, Hess CP, Rauschecker AM. Development of Gestational Age-Based Fetal Brain and Intracranial Volume Reference Norms Using Deep Learning. AJNR Am J Neuroradiol 2023; 44:82-90. [PMID: 36549845 PMCID: PMC9835919 DOI: 10.3174/ajnr.a7747]
Abstract
BACKGROUND AND PURPOSE Fetal brain MR imaging interpretations are subjective and require subspecialty expertise. We aimed to develop a deep learning algorithm for automatically measuring intracranial and brain volumes of fetal brain MRIs across gestational ages. MATERIALS AND METHODS This retrospective study included 246 patients with singleton pregnancies at 19-38 weeks gestation. A 3D U-Net was trained to segment the intracranial contents of 2D fetal brain MRIs in the axial, coronal, and sagittal planes. An additional 3D U-Net was trained to segment the brain from the output of the first model. Models were tested on MRIs of 10 patients (28 planes) via Dice coefficients and volume comparison with manual reference segmentations. Trained U-Nets were applied to 200 additional MRIs to develop normative reference intracranial and brain volumes across gestational ages and then to 9 pathologic fetal brains. RESULTS Fetal intracranial and brain compartments were automatically segmented in a mean of 6.8 (SD, 1.2) seconds with median Dice scores of 0.95 and 0.90, respectively (interquartile ranges, 0.91-0.96 and 0.89-0.91) on the test set. Correlation with manual volume measurements was high (Pearson r = 0.996, P < .001). Normative samples of intracranial and brain volumes across gestational ages were developed. Eight of 9 pathologic fetal intracranial volumes were automatically predicted to be >2 SDs from this age-specific reference mean. There were no effects of fetal sex, maternal diabetes, or maternal age on intracranial or brain volumes across gestational ages. CONCLUSIONS Deep learning techniques can quickly and accurately quantify intracranial and brain volumes on clinical fetal brain MRIs and identify abnormal volumes on the basis of a normative reference standard.
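Two of the quantities used above, the Dice coefficient and the ">2 SDs from the age-specific reference mean" criterion, are simple to state in code (a generic sketch, not the authors' implementation; function names are assumptions):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def is_abnormal(volume, ref_mean, ref_sd, n_sd=2.0):
    """Flag a volume more than n_sd standard deviations away from the
    gestational-age-specific normative reference mean."""
    return abs(volume - ref_mean) > n_sd * ref_sd
```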
Affiliation(s)
- C B N Tran, P Nedelec, D A Weiss, J D Rudie, L Kini, L P Sugrue, O A Glenn, C P Hess, A M Rauschecker
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, California
22.
Buchner JA, Kofler F, Etzel L, Mayinger M, Christ SM, Brunner TB, Wittig A, Menze B, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus J, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Ferentinos K, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Peeken JC. Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study. Radiother Oncol 2023; 178:109425. [PMID: 36442609 DOI: 10.1016/j.radonc.2022.11.014]
Abstract
BACKGROUND Stereotactic radiotherapy is a standard treatment option for patients with brain metastases. The planning target volume is based on gross tumor volume (GTV) segmentation. The aim of this work is to develop and validate a neural network for automatic GTV segmentation to accelerate daily clinical routine and minimize interobserver variability. METHODS We analyzed MRIs (T1-weighted sequence ± contrast enhancement, T2-weighted sequence, and FLAIR sequence) from 348 patients with at least one brain metastasis from different cancer primaries treated in six centers. To generate reference segmentations, all GTVs and the FLAIR hyperintense edematous regions were segmented manually. A 3D U-Net was trained on a cohort of 260 patients from two centers to segment the GTV and the surrounding FLAIR hyperintense region. During training, varying degrees of data augmentation were applied. Model validation was performed using an independent international multicenter test cohort (n = 88) including four centers. RESULTS Our proposed U-Net reached a mean overall Dice similarity coefficient (DSC) of 0.92 ± 0.08 and a mean individual metastasis-wise DSC of 0.89 ± 0.11 in the external test cohort for GTV segmentation. Data augmentation improved segmentation performance significantly. Detection of brain metastases was effective, with a mean F1-score of 0.93 ± 0.16. Model performance was stable regardless of the center (p = 0.3). There was no correlation between metastasis volume and DSC (Pearson correlation coefficient 0.07). CONCLUSION Reliable automated segmentation of brain metastases with neural networks is possible and may support radiotherapy planning by providing more objective GTV definitions.
Affiliation(s)
- Josef A Buchner
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Florian Kofler
- Department of Informatics, Technical University of Munich, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
- Lucas Etzel
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
- Michael Mayinger
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Sebastian M Christ
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Thomas B Brunner
- Department of Radiation Oncology, University Hospital Magdeburg, Magdeburg, Germany
- Andrea Wittig
- Department of Radiotherapy and Radiation Oncology, University Hospital Jena, Friedrich-Schiller University, Jena, Germany
- Björn Menze
- Department of Informatics, Technical University of Munich, Munich, Germany
- Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Bernhard Meyer
- Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Matthias Guckenberger
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Rami A El Shafie
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany; Department of Radiation Oncology, University Medical Center Göttingen, Göttingen, Germany
- Jürgen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany
- Susanne Rogers
- Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
- Oliver Riesterer
- Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
- Katrin Schulze
- Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
- Horst J Feldmann
- Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
- Oliver Blanck
- Department of Radiation Oncology, University Medical Center Schleswig Holstein, Kiel, Germany
- Constantinos Zamboglou
- Department of Radiation Oncology, University of Freiburg - Medical Center, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany; Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
- Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
- Robert Wolff
- Saphir Radiosurgery Center Frankfurt and Northern Germany, Guestrow, Germany; Department of Neurosurgery, University Hospital Frankfurt, Frankfurt, Germany
- Kerstin A Eitz
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
- Stephanie E Combs
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
- Denise Bernhardt
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Jan C Peeken
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
23.
Chartrand G, Emiliani RD, Pawlowski SA, Markel DA, Bahig H, Cengarle-Samak A, Rajakesari S, Lavoie J, Ducharme S, Roberge D. Automated Detection of Brain Metastases on T1-Weighted MRI Using a Convolutional Neural Network: Impact of Volume Aware Loss and Sampling Strategy. J Magn Reson Imaging 2022; 56:1885-1898. [PMID: 35624544 DOI: 10.1002/jmri.28274]
Abstract
BACKGROUND Detection of brain metastases (BM) and segmentation for treatment planning could be optimized with machine learning methods. Convolutional neural networks (CNNs) are promising, but their trade-offs between sensitivity and precision frequently lead to missing small lesions. HYPOTHESIS Combining a volume-aware (VA) loss function and sampling strategy could improve BM detection sensitivity. STUDY TYPE Retrospective. POPULATION A total of 530 radiation oncology patients (55% women) were split into a training/validation set (433 patients/1460 BM) and an independent test set (97 patients/296 BM). FIELD STRENGTH/SEQUENCE 1.5 T and 3 T, contrast-enhanced three-dimensional (3D) T1-weighted fast gradient echo sequences. ASSESSMENT Ground truth masks were based on radiotherapy treatment planning contours reviewed by experts. A U-Net inspired model was trained. Three loss functions (Dice, Dice + boundary, and VA) and two sampling methods (label and VA) were compared. Results were reported with Dice scores, volumetric error, lesion detection sensitivity, and precision. A detected voxel within the ground truth constituted a true positive. STATISTICAL TESTS McNemar's exact test to compare detected lesions between models. Pearson's correlation coefficient and Bland-Altman analysis to compare volume agreement between predicted and ground truth volumes. Statistical significance was set at P ≤ 0.05. RESULTS Combining VA loss and VA sampling performed best, with an overall sensitivity of 91% and precision of 81%. For BM in the 2.5-6 mm estimated sphere diameter range, VA loss reduced false negatives by 58%, and VA sampling reduced them further by 30%. In the same range, the boundary loss achieved the highest precision at 81%, but a low sensitivity (24%) and a 31% Dice loss. DATA CONCLUSION Considering BM size in the loss and sampling functions of a CNN may increase detection sensitivity for small BM.
Our pipeline, relying on a single contrast-enhanced T1-weighted MRI sequence, could reach a detection sensitivity of 91%, with an average of only 0.66 false positives per scan. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
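The paper's exact VA loss is not reproduced here, but the underlying idea of letting small lesions contribute as much as large ones can be sketched as a per-lesion Dice loss with inverse-volume weights (a plausible illustration; `volume_weighted_dice_loss` and its weighting scheme are assumptions, not the authors' formulation):

```python
import numpy as np

def volume_weighted_dice_loss(pred, target, lesion_labels, eps=1e-6):
    """Soft-Dice loss with each labeled lesion weighted by 1/volume, so
    small metastases are not drowned out by large ones.

    pred          : predicted foreground probabilities (same shape as target)
    target        : binary ground-truth mask
    lesion_labels : integer mask, one id per connected lesion (0 = background)
    """
    losses, weights = [], []
    for lab in np.unique(lesion_labels):
        if lab == 0:
            continue  # skip background
        m = lesion_labels == lab
        inter = float((pred[m] * target[m]).sum())
        d = (2 * inter + eps) / (pred[m].sum() + target[m].sum() + eps)
        losses.append(1.0 - d)
        weights.append(1.0 / m.sum())  # inverse-volume weight
    w = np.asarray(weights) / np.sum(weights)
    return float(np.dot(w, losses))
```

With this weighting, missing a 1-voxel lesion costs as much as missing a 1000-voxel one, which is the behavior the VA strategy targets for the 2.5-6 mm diameter range.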
Affiliation(s)
- Daniel A Markel
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Houda Bahig
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Selvan Rajakesari
- Department of Radiation Oncology, Hopital Charles Lemoyne, Greenfield Park, Québec, Canada
- Simon Ducharme
- AFX Medical Inc., Montréal, Canada
- Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montréal, Canada
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
- David Roberge
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
24.
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022; 32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002]
25.
Chen Z, Ye N, Teng C, Li X. Alternations and Applications of the Structural and Functional Connectome in Gliomas: A Mini-Review. Front Neurosci 2022; 16:856808. [PMID: 35478847 PMCID: PMC9035851 DOI: 10.3389/fnins.2022.856808]
Abstract
In the central nervous system, gliomas are the most common, but complex, primary tumors. Genome-based molecular and clinical studies have revealed different classifications and subtypes of gliomas. Neuroradiological approaches have non-invasively provided a macroscopic view for surgical resection and therapeutic effects. The connectome is a structural map of a physical object, the brain, which raises issues of spatial scale and definition; it is calculated through diffusion magnetic resonance imaging (MRI) and functional MRI. In this study, we reviewed the basic principles and attributes of the structural and functional connectome, followed by the alterations of connectomes and their influences on glioma. To extend the applications of the connectome, we demonstrated that a series of multi-center projects still needs to be conducted to systematically investigate the connectome and the structural-functional coupling of glioma. Additionally, a brain-computer interface based on an accurate connectome could provide more precise structural and functional data, which are significant for surgery and postoperative recovery. Besides, integrating data from different sources, including the connectome and other omics information, and processing them with artificial intelligence, together with validated biological and clinical findings, will be significant for the development of a personalized surgical strategy.
Affiliation(s)
- Ziyan Chen
- Department of Neurosurgery, Xiangya Hospital, Central South University, Hunan, China
- Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Ningrong Ye
- Department of Neurosurgery, Xiangya Hospital, Central South University, Hunan, China
- Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Chubei Teng
- Department of Neurosurgery, Xiangya Hospital, Central South University, Hunan, China
- Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Department of Neurosurgery, The First Affiliated Hospital, University of South China, Hengyang, China
- Xuejun Li
- Department of Neurosurgery, Xiangya Hospital, Central South University, Hunan, China
- Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
26.
Deep-learning 2.5-dimensional single-shot detector improves the performance of automated detection of brain metastases on contrast-enhanced CT. Neuroradiology 2022; 64:1511-1518. [PMID: 35064786 DOI: 10.1007/s00234-022-02902-3]
Abstract
PURPOSE This study aims to develop a 2.5-dimensional (2.5D) deep-learning object detection model for the automated detection of brain metastases, into which three consecutive slices are fed as the input for the prediction in the central slice, and to compare its performance with that of an ordinary 2-dimensional (2D) model. METHODS We analyzed 696 brain metastases on 127 contrast-enhanced computed tomography (CT) scans from 127 patients with brain metastases. The scans were randomly divided into training (n = 79), validation (n = 18), and test (n = 30) datasets. Single-shot detector (SSD) models with a feature fusion module were constructed, trained, and compared using the lesion-based sensitivity, positive predictive value (PPV), and the number of false positives per patient at a confidence threshold of 50%. RESULTS The 2.5D SSD model had a significantly higher PPV (t test, p < 0.001) and a significantly smaller number of false positives (t test, p < 0.001). The sensitivities of the 2D and 2.5D models were 88.1% (95% confidence interval [CI], 86.6-89.6%) and 88.7% (95% CI, 87.3-90.1%), respectively. The corresponding PPVs were 39.0% (95% CI, 36.5-41.4%) and 58.9% (95% CI, 55.2-62.7%), respectively. The numbers of false positives per patient were 11.9 (95% CI, 10.7-13.2) and 4.9 (95% CI, 4.2-5.7), respectively. CONCLUSION Our results indicate that 2.5D deep-learning object detection models, which use information about the continuity between adjacent slices, may reduce false positives and improve the performance of automated detection of brain metastases compared with ordinary 2D models.
27
Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Timmerman R, Dan T, Wardak Z, Lu W, Gu X. Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation. Phys Med Biol 2022; 67:10.1088/1361-6560/ac4667. [PMID: 34952535 PMCID: PMC8858586 DOI: 10.1088/1361-6560/ac4667] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 12/24/2021] [Indexed: 01/21/2023]
Abstract
Stereotactic radiosurgery (SRS) is now the standard of care for patients with brain metastases (BMs). The SRS treatment planning process requires precise target delineation, which, in the clinical workflow for patients with multiple (>4) BMs (mBMs), can become a pronounced time bottleneck. Our group has developed an automated BM segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is affected by false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in the segmentations. The proposed model consists of a Siamese network and a radiomics-based support vector machine (SVM) classifier. The 2D Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier; this architecture is designed to identify inter-class differences. The SVM model, in turn, takes the radiomic features extracted from the 3D segmentation volumes as the input for a twofold classification: either a false-positive segmentation or a true BM. Lastly, the outputs from both models are combined in an ensemble to generate the final label. On the segmented mBM testing dataset, the proposed model reached an accuracy (ACC), sensitivity (SEN), specificity (SPE), and area under the curve (AUC) of 0.91, 0.96, 0.90, and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and false positive over the union (FPoU) were 0.13 and 0.09, respectively, which largely preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the raw BM segmentations, indicating that integrating the proposed ensemble classifier into the BM segmentation platform provides a beneficial tool for mBM SRS management.
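A minimal sketch of the two-model ensemble described above. The abstract states only that "the outputs from both models create an ensemble to generate the final label"; the fusion rule below (averaging the two probabilities) is an assumed illustration, not the authors' documented method:

```python
def ensemble_label(p_siamese, p_svm, threshold=0.5):
    """Fuse the Siamese network's and the radiomics SVM's probabilities
    that a candidate segmentation is a true brain metastasis.
    Averaging is one plausible fusion rule; the paper's actual rule
    is not given in the abstract."""
    p = 0.5 * (p_siamese + p_svm)
    return "true_BM" if p >= threshold else "false_positive"

print(ensemble_label(0.9, 0.7))  # -> true_BM
print(ensemble_label(0.2, 0.4))  # -> false_positive
```

Candidates labeled `false_positive` would be pruned from the raw segmentation output, which is how the platform's FPoU improves while the FNR is largely preserved.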
Affiliation(s)
- Zi Yang: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mingli Chen: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mahdieh Kazemimoghadam: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Lin Ma: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Strahinja Stojadinovic: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zabi Wardak: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Weiguo Lu: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
28
Pflüger I, Wald T, Isensee F, Schell M, Meredig H, Schlamp K, Bernhardt D, Brugnara G, Heußel CP, Debus J, Wick W, Bendszus M, Maier-Hein KH, Vollmuth P. Automated detection and quantification of brain metastases on clinical MRI data using artificial neural networks. Neurooncol Adv 2022; 4:vdac138. [PMID: 36105388 PMCID: PMC9466273 DOI: 10.1093/noajnl/vdac138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Background
Reliable detection and precise volumetric quantification of brain metastases (BM) on MRI are essential for guiding treatment decisions. Here we evaluate the potential of artificial neural networks (ANN) for automated detection and quantification of BM.
Methods
A consecutive series of 308 patients with BM was used for developing an ANN (with a 4:1 split for training/testing) for automated volumetric assessment of contrast-enhancing tumors (CE) and non-enhancing FLAIR signal abnormality including edema (NEE). An independent consecutive series of 30 patients was used for external testing. Performance was assessed case-wise for CE and NEE and lesion-wise for CE using the case-wise/lesion-wise DICE-coefficient (C/L-DICE), positive predictive value (L-PPV) and sensitivity (C/L-Sensitivity).
Results
The performance of detecting CE lesions on the validation dataset was not significantly affected when evaluating different volumetric thresholds (0.001–0.2 cm3; P = .2028). The median L-DICE and median C-DICE for CE lesions were 0.78 (IQR = 0.6–0.91) and 0.90 (IQR = 0.85–0.94) in the institutional test dataset, and 0.79 (IQR = 0.67–0.82) and 0.84 (IQR = 0.76–0.89) in the external test dataset. The corresponding median L-Sensitivity and median L-PPV were 0.81 (IQR = 0.63–0.92) and 0.79 (IQR = 0.63–0.93) in the institutional test dataset, compared to 0.85 (IQR = 0.76–0.94) and 0.76 (IQR = 0.68–0.88) in the external test dataset. The median C-DICE for NEE was 0.96 (IQR = 0.92–0.97) in the institutional test dataset, compared to 0.85 (IQR = 0.72–0.91) in the external test dataset.
Conclusion
The developed ANN-based algorithm (publicly available at www.github.com/NeuroAI-HD/HD-BM) allows reliable detection and precise volumetric quantification of CE and NEE compartments in patients with BM.
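The case-wise DICE coefficient (C-DICE) reported above measures the overlap between a predicted binary mask and the ground truth. A minimal sketch (the helper name and toy masks are illustrative, not from the HD-BM code):

```python
import numpy as np

def dice(pred, truth):
    """DICE coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: prediction overlaps truth on 4 of (4 + 6) foreground pixels
a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # 4 foreground pixels
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # 6 foreground pixels
print(round(dice(a, b), 3))  # -> 0.8
```

The lesion-wise variant (L-DICE) applies the same formula per connected lesion rather than over the whole case, which is why L-DICE values above are lower than C-DICE values.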
Affiliation(s)
- Irada Pflüger: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Tassilo Wald: Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee: Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marianne Schell: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Hagen Meredig: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Kai Schlamp: Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
- Denise Bernhardt: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Gianluca Brugnara: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Claus Peter Heußel: Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany; Member of the German Center for Lung Research (DZL), Translational Lung Research Center (TLRC), Heidelberg, Germany
- Juergen Debus: Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg University Hospital, Heidelberg, Germany; German Cancer Consortium (DKTK), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany; Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Wolfgang Wick: Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany; Clinical Cooperation Unit Neurooncology, German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Bendszus: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Klaus H Maier-Hein: Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Vollmuth: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
29
Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning single-shot detector for automatic detection of brain metastases with the combined use of contrast-enhanced and non-enhanced computed tomography images. Eur J Radiol 2021; 144:110015. [PMID: 34742108 DOI: 10.1016/j.ejrad.2021.110015] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 10/10/2021] [Accepted: 10/27/2021] [Indexed: 11/30/2022]
Abstract
PURPOSE To develop a deep-learning object detection model for the automatic detection of brain metastases that simultaneously uses contrast-enhanced and non-enhanced images as inputs, and to compare its performance with that of a model that uses only contrast-enhanced images. METHOD A total of 116 computed tomography (CT) scans of 116 patients with brain metastases were included in this study. The scans showed a total of 659 metastases, 428 of which were used for training and validation (mean size, 11.3 ± 9.9 mm) and 231 for testing (mean size, 9.0 ± 7.0 mm). Single-shot detector (SSD) models were constructed with a feature fusion module, and their results were compared per lesion at a confidence threshold of 50%. RESULTS The sensitivity was 88.7% for the model that used both contrast-enhanced and non-enhanced CT images (the CE + NECT model) and 87.6% for the model that used only contrast-enhanced CT images (the CECT model). The positive predictive value (PPV) was 44.0% for the CE + NECT model and 37.2% for the CECT model. The number of false positives per patient was 9.9 for the CE + NECT model and 13.6 for the CECT model. The CE + NECT model had a significantly higher PPV (t test, p < 0.001), significantly fewer false positives (t test, p < 0.001), and a tendency toward higher sensitivity (t test, p = 0.14). CONCLUSIONS These results indicate that information about true contrast enhancement, obtained by comparing the contrast-enhanced and non-enhanced images, may prevent the detection of pseudolesions, suppress false positives, and improve the performance of deep-learning object detection models.
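The lesion-wise metrics used throughout these detection studies (sensitivity, PPV, false positives per patient) follow directly from detection counts. A sketch with made-up counts, not the paper's data:

```python
def detection_metrics(tp, fn, fp, n_patients):
    """Lesion-wise detection metrics from raw counts:
    tp = correctly detected lesions, fn = missed lesions,
    fp = spurious detections."""
    sensitivity = tp / (tp + fn)        # fraction of true lesions found
    ppv = tp / (tp + fp)                # fraction of detections that are real
    fps_per_patient = fp / n_patients   # spurious detections per patient
    return sensitivity, ppv, fps_per_patient

# Illustrative counts only
sens, ppv, fpp = detection_metrics(tp=90, fn=10, fp=110, n_patients=10)
print(sens, round(ppv, 3), fpp)  # -> 0.9 0.45 11.0
```

Note how PPV and false positives per patient move together for a fixed sensitivity: cutting `fp` is exactly what both the CE + NECT input and the 2.5D input achieve in these papers.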
Affiliation(s)
- Hidemasa Takao: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shiori Amemiya: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shimpei Kato: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Yamashita: Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa 213-8507, Japan
- Naoya Sakamoto: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
30
Li S, Zheng J, Li D. Precise segmentation of non-enhanced computed tomography in patients with ischemic stroke based on multi-scale U-Net deep network model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106278. [PMID: 34274610 DOI: 10.1016/j.cmpb.2021.106278] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Accepted: 07/04/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Acute ischemic stroke requires timely diagnosis and thrombolytic therapy, but it is difficult to locate and quantify the lesion site manually. The purpose of this study was to explore a more rapid and effective method for automatic image segmentation of acute ischemic stroke. METHODS Lesions from 30 stroke patients were segmented from non-enhanced computed tomography (CT) images using a multi-scale U-Net deep network model. The model was trained with a Dice loss function to counter the class-imbalance problem in the data. Automatic segmentations were compared against manual segmentations. RESULTS The Dice similarity coefficient of the multi-scale convolutional U-Net segmentation was 0.86 ± 0.04, higher than that of the classic U-Net (0.81 ± 0.07, P = 0.001). The lesion contours from automatic segmentation based on the multi-scale U-Net were very close to those of manual segmentation. The error in lesion area was 1.28 ± 0.59 mm2, and the Pearson correlation coefficient was r = 0.986 (P < 0.01). The processing time of automatic segmentation was less than 20 ms. CONCLUSIONS The multi-scale U-Net deep network model can effectively segment ischemic stroke lesions in non-enhanced CT and meets real-time clinical requirements.
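The Dice loss mentioned above counters foreground/background imbalance because it normalizes the overlap by the total foreground size rather than averaging over all pixels. A minimal soft-Dice sketch on probability maps (the function name and toy masks are illustrative; real training code would compute this on framework tensors):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss = 1 - Dice on probability maps.
    Unlike pixel-wise cross-entropy, the loss is dominated by the
    (small) lesion region, not the (large) background."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

t = np.zeros((4, 4)); t[1:3, 1:3] = 1.0     # small lesion in a large image
print(round(soft_dice_loss(t, t), 6))        # perfect prediction -> 0.0
```

Even though the lesion covers only 4 of 16 pixels here, an all-background prediction scores a loss near 1.0, so the optimizer cannot "win" by ignoring the minority class.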
Affiliation(s)
- Shaoquan Li: Department of Neurosurgery, Cangzhou Central Hospital, Hebei 061000, China
- Jianye Zheng: Department of Neurosurgery, Cangzhou Central Hospital, Hebei 061000, China
- Dongjiao Li: Department of Neurosurgery, Cangzhou Central Hospital, Hebei 061000, China
31
Hsu DG, Ballangrud Å, Shamseddine A, Deasy JO, Veeraraghavan H, Cervino L, Beal K, Aristophanous M. Automatic segmentation of brain metastases using T1 magnetic resonance and computed tomography images. Phys Med Biol 2021; 66. [PMID: 34315148 DOI: 10.1088/1361-6560/ac1835] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 07/27/2021] [Indexed: 12/26/2022]
Abstract
An increasing number of patients with multiple brain metastases are being treated with stereotactic radiosurgery (SRS). Manually identifying and contouring all metastatic lesions is difficult and time-consuming, and a potential source of variability. Hence, we developed a 3D deep learning approach for segmenting brain metastases on MR and CT images. Five hundred eleven patients treated with SRS were retrospectively identified for this study. Prior to radiotherapy, the patients were imaged with 3D T1 spoiled-gradient MR post-Gd (T1 + C) and contrast-enhanced CT (CECT), which were co-registered by a treatment planner. The gross tumor volume contours, authored by the attending radiation oncologist, were taken as the ground truth. There were 3 ± 4 metastases per patient, with volumes up to 57 ml. We produced a multi-stage model that automatically performs brain extraction, followed by detection and segmentation of brain metastases using co-registered T1 + C and CECT. Augmented data from 80% of these patients were used to train modified 3D V-Net convolutional neural networks for this task. We combined a normalized boundary loss function with soft Dice loss to improve the model optimization, and employed gradient accumulation to stabilize the training. The average Dice similarity coefficient (DSC) for brain extraction was 0.975 ± 0.002 (95% CI). The detection sensitivity per metastasis was 90% (329/367), with moderate dependence on metastasis size. Averaged across 102 test patients, our approach had a metastasis detection sensitivity of 95 ± 3%, 2.4 ± 0.5 false positives, a DSC of 0.76 ± 0.03, and a 95th-percentile Hausdorff distance of 2.5 ± 0.3 mm (95% CIs). The volumes of automatic and manual segmentations were strongly correlated for metastases of volume up to 20 ml (r = 0.97, p < 0.001). This work presents a fully 3D deep learning approach capable of automatically detecting and segmenting brain metastases using co-registered T1 + C and CECT.
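Gradient accumulation, which the authors employed to stabilize training, sums gradients over several micro-batches before applying a single averaged update, emulating a larger effective batch size when 3D volumes exhaust GPU memory. A framework-agnostic sketch with a toy gradient function (all names and values are illustrative, not the paper's implementation):

```python
import numpy as np

def sgd_accumulated(w, micro_batches, grad_fn, lr=0.1, accum_steps=4):
    """Accumulate gradients over accum_steps micro-batches, then apply
    one averaged SGD update, instead of updating after every batch."""
    g = np.zeros_like(w)
    for i, batch in enumerate(micro_batches, 1):
        g += grad_fn(w, batch)             # accumulate, no update yet
        if i % accum_steps == 0:
            w = w - lr * g / accum_steps   # one averaged update
            g = np.zeros_like(w)
    return w

# Toy loss per batch with gradient (w - batch mean)
grad_fn = lambda w, b: w - np.mean(b)
w0 = np.array([0.0])
batches = [np.array([1.0]), np.array([3.0]), np.array([1.0]), np.array([3.0])]
w1 = sgd_accumulated(w0, batches, grad_fn, lr=0.1, accum_steps=4)
print(w1)  # -> [0.2], a single step toward the overall mean (2.0)
```

The averaged update equals the one a single batch of all four samples would produce, which smooths the noisy per-micro-batch gradients and is what "stabilize the training" refers to here.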
Affiliation(s)
- Dylan G Hsu: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Åse Ballangrud: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Achraf Shamseddine: Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Joseph O Deasy: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Laura Cervino: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Kathryn Beal: Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
- Michalis Aristophanous: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America