1.
Ruffle JK, Mohinta S, Baruteau KP, Rajiah R, Lee F, Brandner S, Nachev P, Hyare H. VASARI-auto: Equitable, efficient, and economical featurisation of glioma MRI. Neuroimage Clin 2024; 44:103668. PMID: 39265321; DOI: 10.1016/j.nicl.2024.103668.
Abstract
The VASARI MRI feature set is a quantitative system designed to standardise glioma imaging descriptions. Though effective, deriving VASARI is time-consuming and seldom used clinically. We sought to resolve this problem with software automation and machine learning. Using glioma data from 1172 patients, we developed VASARI-auto, an automated labelling software applied to open-source lesion masks and an openly available tumour segmentation model. Consultant neuroradiologists independently quantified VASARI features in 100 held-out glioblastoma cases. We evaluated 1) agreement across neuroradiologists and VASARI-auto, 2) software equity, 3) economic workforce implications, and 4) fidelity in predicting survival. Tumour segmentation was comparable with the current state of the art and equally performant regardless of age or sex. A modest inter-rater variability between in-house neuroradiologists was comparable to that between neuroradiologists and VASARI-auto, with far higher agreement between VASARI-auto methods. The time for neuroradiologists to derive VASARI was substantially higher than for VASARI-auto (mean time per case 317 vs. 3 s). A UK hospital workforce analysis forecast that three years of VASARI featurisation would demand 29,777 consultant neuroradiologist workforce hours and >£1.5 ($1.9) million, reducible to 332 hours of computing time (and £146 of power) with VASARI-auto. The best-performing survival model utilised VASARI-auto features rather than those derived by neuroradiologists. VASARI-auto is a highly efficient and equitable automated labelling system with a favourable economic profile if used as a decision support tool, and it offers non-inferior survival prediction. Future work should iterate upon and integrate such tools to enhance patient care.
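The headline efficiency claim is simple throughput arithmetic from the per-case times quoted above (mean 317 s per case for neuroradiologists vs. 3 s for VASARI-auto). A minimal Python sketch; the 100,000-case workload is a hypothetical number for illustration, not a figure from the study:

```python
# Mean featurisation times quoted in the abstract above.
SECONDS_PER_CASE_HUMAN = 317  # consultant neuroradiologist
SECONDS_PER_CASE_AUTO = 3     # VASARI-auto

def featurisation_hours(n_cases: int, seconds_per_case: float) -> float:
    """Total hours needed to derive VASARI features for n_cases."""
    return n_cases * seconds_per_case / 3600.0

# Hypothetical workload of 100,000 cases, for illustration only.
human_hours = featurisation_hours(100_000, SECONDS_PER_CASE_HUMAN)
auto_hours = featurisation_hours(100_000, SECONDS_PER_CASE_AUTO)
print(round(human_hours), round(auto_hours))  # hours under each workflow
print(round(human_hours / auto_hours, 1))     # speed-up factor, i.e. 317/3
```

Whatever the case count, the workforce-hour ratio is fixed at 317/3, roughly a hundredfold reduction, which is the driver of the cost figures reported in the abstract.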
Affiliation(s)
- James K Ruffle
- Queen Square Institute of Neurology, University College London, London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, UK
- Samia Mohinta
- Queen Square Institute of Neurology, University College London, London, UK
- Kelly Pegoretti Baruteau
- Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, UK
- Rebekah Rajiah
- Queen Square Institute of Neurology, University College London, London, UK
- Faith Lee
- Queen Square Institute of Neurology, University College London, London, UK
- Sebastian Brandner
- Division of Neuropathology and Department of Neurodegenerative Disease, Queen Square Institute of Neurology, University College London, London, UK
- Parashkev Nachev
- Queen Square Institute of Neurology, University College London, London, UK
- Harpreet Hyare
- Queen Square Institute of Neurology, University College London, London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, UK
2.
Baker CR, Pease M, Sexton DP, Abumoussa A, Chambless LB. Artificial intelligence innovations in neurosurgical oncology: a narrative review. J Neurooncol 2024; 169:489-496. PMID: 38958849; PMCID: PMC11341589; DOI: 10.1007/s11060-024-04757-5.
Abstract
PURPOSE Artificial intelligence (AI) has become increasingly integrated into clinical practice within neurosurgical oncology. This report reviews the cutting-edge technologies impacting tumor treatment and outcomes. METHODS A rigorous literature search was performed with the aid of a research librarian to identify key articles referencing AI and related topics (machine learning (ML), computer vision (CV), augmented reality (AR), virtual reality (VR), etc.) for neurosurgical care of brain or spinal tumors. RESULTS Treatment of central nervous system (CNS) tumors is being improved through advances across AI, such as ML, CV, and AR/VR. AI-aided diagnostic and prognostication tools can influence the pre-operative patient experience, while automated tumor segmentation and total resection predictions aid surgical planning. Novel intra-operative tools can rapidly provide histopathologic tumor classification to streamline treatment strategies. Post-operative video analysis, paired with rich surgical simulations, can enhance training feedback and regimens. CONCLUSION While limited generalizability, bias, and patient data security are current concerns, the advent of federated learning, along with growing data consortiums, provides an avenue for increasingly safe, powerful, and effective AI platforms in the future.
Affiliation(s)
- Clayton R Baker
- Vanderbilt University School of Medicine, Nashville, TN, USA
- Matthew Pease
- Department of Neurosurgery, Indiana University, Indianapolis, IN, USA
- Daniel P Sexton
- Department of Neurosurgery, Duke University, Durham, NC, USA
- Andrew Abumoussa
- Department of Neurosurgery, University of North Carolina at Chapel Hill Hospitals, Chapel Hill, NC, USA
- Lola B Chambless
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
3.
Knill C, Halford R, Sandhu R, Loughery B, Shamim S, Junn F, Lee K, Almahariq M, Seymour Z. Evaluating stereotactic accuracy with patient-specific MRI distortion corrections for frame-based radiosurgery. J Appl Clin Med Phys 2024:e14472. PMID: 39042450; DOI: 10.1002/acm2.14472.
Abstract
PURPOSE This study examines how MRI distortions affect frame-based SRS treatments and assesses the need for clinical distortion corrections. METHODS The study included 18 patients with 80 total brain targets treated using frame-based radiosurgery. Distortion within patients' MRIs was corrected using Cranial Distortion Correction (CDC) software, which uses the patient's CT to alter planning MRIs and reduce inherent intra-cranial distortion. Distortion was evaluated by comparing the original planning target volumes (PTVORIG) to targets contoured on corrected MRIs (PTVCORR). To provide an internal control, targets were also re-contoured on uncorrected (PTVRECON) MRIs. Additional analysis assessed whether 1 mm expansions of PTVORIG targets would compensate for patient-specific distortions. Changes in target volumes, DICE and JACCARD similarity coefficients, minimum PTV dose (Dmin), dose to 95% of the PTV (D95%), and normal tissue receiving 12 Gy (V12Gy), 10 Gy (V10Gy), and 5 Gy (V5Gy) were calculated and evaluated. Student's t-tests were used to determine whether changes in PTVCORR were significantly different from the intra-observer contouring variability quantified by PTVRECON. RESULTS PTVRECON and PTVCORR relative changes in volume were 6.19% ± 10.95% and 1.48% ± 32.92%. PTVRECON and PTVCORR similarity coefficients were 0.90 ± 0.08 and 0.73 ± 0.16 for DICE and 0.82 ± 0.12 and 0.60 ± 0.18 for JACCARD. PTVRECON and PTVCORR changes in Dmin were -0.88% ± 8.77% and -12.9% ± 17.3%. PTVRECON and PTVCORR changes in D95% were -0.34% ± 5.89% and -8.68% ± 13.21%. The 1 mm expanded PTVORIG targets did not entirely cover 14 of the 80 PTVCORR targets. Normal tissue changes (V12Gy, V10Gy, V5Gy) calculated with PTVRECON were (-0.09% ± 7.39%, -0.38% ± 5.67%, -0.08% ± 2.04%) and with PTVCORR were (-2.14% ± 7.34%, -1.42% ± 5.45%, -0.61% ± 1.93%). Except for V10Gy, all PTVCORR changes were significantly different (p < 0.05) from PTVRECON.
CONCLUSION MRIs used for SRS target delineation exhibit notable geometric distortions that may compromise optimal dosimetric accuracy. A uniform 1 mm expansion may result in geometric misses; however, the CDC algorithm provides a feasible solution for rectifying distortions, thereby enhancing treatment precision.
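The DICE and JACCARD coefficients reported above are standard overlap metrics between two contours; on binary masks both reduce to simple set arithmetic. A minimal NumPy sketch (illustrative only, not the study's code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DICE coefficient: 2|A intersect B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """JACCARD index: |A intersect B| / |A union B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Two toy 1D "contours": overlap of 1 voxel, union of 3 voxels.
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
print(dice(a, b))     # 0.5
print(jaccard(a, b))  # 0.333...
```

The two metrics are related by J = D / (2 - D), which is why the JACCARD values quoted in the abstract are systematically lower than the corresponding DICE values for the same contours.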
Affiliation(s)
- Cory Knill
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Robert Halford
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Raminder Sandhu
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Brian Loughery
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Sharjil Shamim
- William Beaumont School of Medicine, Oakland University, Rochester, Michigan, USA
- Fred Junn
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Kuei Lee
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Muayad Almahariq
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
- Zachary Seymour
- Department of Radiation Oncology, Corewell Health William Beaumont University Hospital, Royal Oak, Michigan, USA
4.
Awuah WA, Adebusoye FT, Wellington J, David L, Salam A, Weng Yee AL, Lansiaux E, Yarlagadda R, Garg T, Abdul-Rahman T, Kalmanovich J, Miteu GD, Kundu M, Mykolaivna NI. Recent Outcomes and Challenges of Artificial Intelligence, Machine Learning, and Deep Learning in Neurosurgery. World Neurosurg X 2024; 23:100301. PMID: 38577317; PMCID: PMC10992893; DOI: 10.1016/j.wnsx.2024.100301.
Abstract
Neurosurgeons receive extensive technical training that equips them with the knowledge and skills to specialise in various fields and to manage the massive amounts of information and decision-making required throughout the stages of neurosurgery, including preoperative, intraoperative, and postoperative care and recovery. Over the past few years, artificial intelligence (AI) has become more useful in neurosurgery. AI has the potential to improve patient outcomes by augmenting the capabilities of neurosurgeons, ultimately improving diagnostic and prognostic accuracy as well as decision-making during surgical procedures. By incorporating AI into both interventional and non-interventional therapies, neurosurgeons may provide the best care for their patients. AI, machine learning (ML), and deep learning (DL) have made significant progress in the field of neurosurgery. These cutting-edge methods have enhanced patient outcomes, reduced complications, and improved surgical planning.
Affiliation(s)
- Jack Wellington
- Cardiff University School of Medicine, Cardiff University, Wales, United Kingdom
- Lian David
- Norwich Medical School, University of East Anglia, United Kingdom
- Abdus Salam
- Department of Surgery, Khyber Teaching Hospital, Peshawar, Pakistan
- Rohan Yarlagadda
- Rowan University School of Osteopathic Medicine, Stratford, NJ, USA
- Tulika Garg
- Government Medical College and Hospital Chandigarh, India
- Mrinmoy Kundu
- Institute of Medical Sciences and SUM Hospital, Bhubaneswar, India
5.
Yun S, Park JE, Kim N, Park SY, Kim HS. Reducing false positives in deep learning-based brain metastasis detection by using both gradient-echo and spin-echo contrast-enhanced MRI: validation in a multi-center diagnostic cohort. Eur Radiol 2024; 34:2873-2884. PMID: 37891415; DOI: 10.1007/s00330-023-10318-7.
Abstract
OBJECTIVES To develop a deep learning (DL) model for detection of brain metastasis (BM) that incorporates both gradient-echo and turbo spin-echo contrast-enhanced MRI (dual-enhanced DL) and to evaluate it in a clinical cohort in comparison with human readers and a DL model using gradient-echo-based imaging only (GRE DL). MATERIALS AND METHODS DL detection was developed using data from 200 patients with BM (training set) and tested in 62 (internal) and 48 (external) consecutive patients who underwent stereotactic radiosurgery, using diagnostic dual-enhanced imaging (dual-enhanced DL) and later GRE guidance imaging (GRE DL). The detection sensitivity and positive predictive value (PPV) were compared between the two DL models. Two neuroradiologists independently analyzed BM, and reference standards for BM were separately drawn by another neuroradiologist. The relative differences (RDs) from the reference standard BM numbers were compared between the DL models and the neuroradiologists. RESULTS Sensitivity was similar between GRE DL (93%, 95% confidence interval [CI]: 90-96%) and dual-enhanced DL (92% [89-94%]). The PPV of dual-enhanced DL was higher (89% [86-92%], p < .001) than that of GRE DL (76% [72-80%]). GRE DL significantly overestimated the number of metastases (false positives; RD: 0.05, 95% CI: 0.00-0.58) compared with the neuroradiologists (RD: 0.00, 95% CI: -0.28 to 0.15, p < .001), whereas dual-enhanced DL (RD: 0.00, 95% CI: 0.00-0.15) did not differ significantly from the neuroradiologists (RD: 0.00, 95% CI: -0.20 to 0.10, p = .913). CONCLUSION Dual-enhanced DL showed improved detection of BM and reduced overestimation compared with GRE DL, achieving performance similar to that of neuroradiologists. CLINICAL RELEVANCE STATEMENT Deep learning-based brain metastasis detection that incorporates turbo spin-echo imaging reduces false-positive detections relative to gradient-echo imaging alone, aiding in the guidance of stereotactic radiosurgery.
KEY POINTS •Deep learning for brain metastasis detection improved by using both gradient- and turbo spin-echo contrast-enhanced MRI (dual-enhanced deep learning). •Dual-enhanced deep learning increased true positive detections and reduced overestimation. •Dual-enhanced deep learning achieved similar performance to neuroradiologists for brain metastasis counts.
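The sensitivity and PPV figures compared above follow directly from the detection counts. A minimal sketch of the two definitions, using illustrative counts rather than the study's data:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true metastases the detector finds: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value, the fraction of detections
    that are true metastases: TP / (TP + FP)."""
    return tp / (tp + fp)

# Illustrative counts only: 93 true detections, 7 missed lesions,
# 11 false-positive detections.
print(sensitivity(93, 7))  # 0.93
print(ppv(93, 11))         # ~0.894
```

The study's key finding maps onto this distinction: adding spin-echo imaging left TP and FN (and hence sensitivity) nearly unchanged while cutting FP, which is what raised the PPV from 76% to 89%.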
Affiliation(s)
- Suyoung Yun
- Department of Radiology, Busan Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
- Seo Young Park
- Department of Statistics and Data Science, Korea National Open University, Seoul, Republic of Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
6.
Cobanaj M, Corti C, Dee EC, McCullum L, Boldrini L, Schlam I, Tolaney SM, Celi LA, Curigliano G, Criscitiello C. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer 2024; 198:113504. PMID: 38141549; PMCID: PMC11362966; DOI: 10.1016/j.ejca.2023.113504.
Abstract
Patient care workflows are highly multimodal and intertwined: the intersection of data outputs provided from different disciplines and in different formats remains one of the main challenges of modern oncology. Artificial Intelligence (AI) has the potential to revolutionize the current clinical practice of oncology owing to advancements in digitalization, database expansion, computational technologies, and algorithmic innovations that facilitate discernment of complex relationships in multimodal data. Within oncology, radiation therapy (RT) represents an increasingly complex working procedure, involving many labor-intensive and operator-dependent tasks. In this context, AI has gained momentum as a powerful tool to standardize treatment performance and reduce inter-observer variability in a time-efficient manner. This review explores the hurdles associated with the development, implementation, and maintenance of AI platforms and highlights current measures in place to address them. In examining AI's role in oncology workflows, we underscore that a thorough and critical consideration of these challenges is the only way to ensure equitable and unbiased care delivery, ultimately serving patients' survival and quality of life.
Affiliation(s)
- Marisa Cobanaj
- National Center for Radiation Research in Oncology, OncoRay, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Chiara Corti
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Edward C Dee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lucas McCullum
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Laura Boldrini
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Ilana Schlam
- Department of Hematology and Oncology, Tufts Medical Center, Boston, MA, USA; Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Sara M Tolaney
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Leo A Celi
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Giuseppe Curigliano
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Carmen Criscitiello
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
7.
Frood R, Willaime JMY, Miles B, Chambers G, Al-Chalabi H, Ali T, Hougham N, Brooks N, Petrides G, Naylor M, Ward D, Sulkin T, Chaytor R, Strouhal P, Patel C, Scarsbrook AF. Comparative effectiveness of standard vs. AI-assisted PET/CT reading workflow for pre-treatment lymphoma staging: a multi-institutional reader study evaluation. Front Nucl Med (Lausanne) 2024; 3:1327186. PMID: 39355039; PMCID: PMC11440880; DOI: 10.3389/fnume.2023.1327186.
Abstract
Background Fluorine-18 fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) is widely used for staging high-grade lymphoma, with the time needed to evaluate such studies varying with the complexity of the case. Integrating artificial intelligence (AI) within the reporting workflow has the potential to improve quality and efficiency. The aims of the present study were to evaluate the influence of an integrated research prototype segmentation tool, implemented within diagnostic PET/CT reading software, on the speed and quality of reporting by readers with varying levels of experience, and to assess the effect of the AI-assisted workflow on reader confidence and reporting behaviour. Methods Nine blinded reporters (three trainees, three junior consultants and three senior consultants) from three UK centres participated in a two-part reader study. A total of 15 lymphoma staging PET/CT scans were evaluated twice: first, using a standard PET/CT reporting workflow; then, after a 6-week gap, with AI assistance incorporating pre-segmentation of disease sites within the reading software. An even split of PET/CT segmentations with gold standard (GS), false-positive (FP) over-contours or false-negative (FN) under-contours was provided. Read duration was calculated using file logs, while report quality was independently assessed by two radiologists with >15 years of experience. Confidence in AI assistance and identification of disease was assessed via online questionnaires for each case. Results There was a significant decrease in read time between non-AI and AI-assisted reads (median 15.0 vs. 13.3 min, p < 0.001). Sub-analysis confirmed this was true for both junior (14.5 vs. 12.7 min, p = 0.03) and senior consultants (15.1 vs. 12.2 min, p = 0.03) but not for trainees (18.1 vs. 18.0 min, p = 0.2). There was no significant difference in report quality between reads.
AI assistance provided a significant increase in confidence of disease identification (p < 0.001). This held true when splitting the data into FN, GS and FP. In 19/88 cases, participants did not identify either FP (31.8%) or FN (11.4%) segmentations. This was significantly greater for trainees (13/30, 43.3%) than for junior (3/28, 10.7%, p = 0.05) and senior consultants (3/30, 10.0%, p = 0.05). Conclusions The study findings indicate that an AI-assisted workflow achieves comparable performance to humans, demonstrating a marginal enhancement in reporting speed. Less experienced readers were more influenced by segmentation errors. An AI-assisted PET/CT reading workflow has the potential to increase reporting efficiency without adversely affecting quality, which could reduce costs and report turnaround times. These preliminary findings need to be confirmed in larger studies.
Affiliation(s)
- Russell Frood
- Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- Leeds Institute of Health Research, University of Leeds, Leeds, United Kingdom
- Brad Miles
- Alliance Medical Ltd., Warwick, United Kingdom
- Greg Chambers
- Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- H’ssein Al-Chalabi
- Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- Department of Radiology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, United Kingdom
- Tamir Ali
- Department of Radiology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, United Kingdom
- Natasha Hougham
- Department of Radiology, Royal Cornwall Hospitals NHS Trust, Truro, United Kingdom
- George Petrides
- Department of Radiology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, United Kingdom
- Matthew Naylor
- Department of Radiology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, United Kingdom
- Daniel Ward
- Department of Radiology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, United Kingdom
- Tom Sulkin
- Department of Radiology, Royal Cornwall Hospitals NHS Trust, Truro, United Kingdom
- Richard Chaytor
- Department of Radiology, Royal Cornwall Hospitals NHS Trust, Truro, United Kingdom
- Chirag Patel
- Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- Andrew F. Scarsbrook
- Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- Leeds Institute of Health Research, University of Leeds, Leeds, United Kingdom
8.
Hoebel KV, Bridge CP, Ahmed S, Akintola O, Chung C, Huang RY, Johnson JM, Kim A, Ly KI, Chang K, Patel J, Pinho M, Batchelor TT, Rosen BR, Gerstner ER, Kalpathy-Cramer J. Expert-centered Evaluation of Deep Learning Algorithms for Brain Tumor Segmentation. Radiol Artif Intell 2024; 6:e220231. PMID: 38197800; PMCID: PMC10831514; DOI: 10.1148/ryai.220231.
Abstract
Purpose To present results from a literature survey on practices in deep learning segmentation algorithm evaluation and to perform a study of expert quality perception of brain tumor segmentation. Materials and Methods A total of 180 articles reporting on brain tumor segmentation algorithms were surveyed for their reported quality evaluation. Additionally, ratings of segmentation quality on a four-point scale were collected from medical professionals for 60 brain tumor segmentation cases. Results Of the surveyed articles, Dice score, sensitivity, and Hausdorff distance were the most popular metrics for reporting segmentation performance. Notably, only 2.8% of the articles included clinical experts' evaluation of segmentation quality. The experimental results revealed low interrater agreement (Krippendorff α, 0.34) in experts' perception of segmentation quality. Furthermore, the correlations between the ratings and commonly used quantitative quality metrics were low (Kendall tau between Dice score and mean rating, 0.23; Kendall tau between Hausdorff distance and mean rating, 0.51), with large variability among the experts. Conclusion The results demonstrate that quality ratings are prone to variability due to the ambiguity of tumor boundaries and individual perceptual differences, and that existing metrics do not capture the clinical perception of segmentation quality. Keywords: Brain Tumor Segmentation, Deep Learning Algorithms, Glioblastoma, Cancer, Machine Learning. Clinical trial registration nos. NCT00756106 and NCT00662506. Supplemental material is available for this article. © RSNA, 2023.
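The Kendall tau values reported above measure rank agreement between expert ratings and quantitative metrics: the excess of concordant over discordant pairs, normalised by the total pair count. A minimal pure-Python sketch of the tau-a variant (no tie correction, unlike library implementations such as scipy.stats.kendalltau, and not necessarily the variant the authors used):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) / total number of pairs."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        # Product of pairwise differences: >0 means the pair is ordered
        # the same way in both sequences, <0 means opposite orders.
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0: identical ranking
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0: reversed ranking
```

On this scale, the reported tau of 0.23 between Dice score and mean expert rating indicates only weak ordinal agreement, which is the paper's central point about overlap metrics and clinical perception.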
Affiliation(s)
- Katharina V. Hoebel
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology (K.V.H., C.P.B., A.K., K.I.L., K.C., J.P., B.R.R., E.R.G., J.K.C.), and Stephen E. and Catherine Pappas Center for Neuro-Oncology (O.A., A.K., K.I.L., E.R.G.), Massachusetts General Hospital, 149 13th St, Charlestown, MA 02129; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Mass (K.V.H., K.C., J.P.); MGH and BWH Center for Clinical Data Science, Boston, Mass (C.P.B., J.K.C.); Department of Radiation Oncology, Division of Radiation Oncology (S.A., C.C.); Department of Diagnostic Radiology, Division of Diagnostic Imaging (C.C.), and Department of Neuroradiology (J.M.J.), Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, Tex; Departments of Radiology (R.Y.H.) and Neurology (T.T.B.), Brigham and Women’s Hospital, Boston, Mass; Department of Radiology and Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, Tex (M.P.); and Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colo (J.K.C.)
| | - Christopher P. Bridge
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology (K.V.H., C.P.B., A.K., K.I.L., K.C., J.P., B.R.R., E.R.G., J.K.C.), and Stephen E. and Catherine Pappas Center for Neuro-Oncology (O.A., A.K., K.I.L., E.R.G.), Massachusetts General Hospital, 149 13th St, Charlestown, MA 02129; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Mass (K.V.H., K.C., J.P.); MGH and BWH Center for Clinical Data Science, Boston, Mass (C.P.B., J.K.C.); Department of Radiation Oncology, Division of Radiation Oncology (S.A., C.C.); Department of Diagnostic Radiology, Division of Diagnostic Imaging (C.C.), and Department of Neuroradiology (J.M.J.), Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, Tex; Departments of Radiology (R.Y.H.) and Neurology (T.T.B.), Brigham and Women’s Hospital, Boston, Mass; Department of Radiology and Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, Tex (M.P.); and Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colo (J.K.C.)
| | - Sara Ahmed
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology (K.V.H., C.P.B., A.K., K.I.L., K.C., J.P., B.R.R., E.R.G., J.K.C.), and Stephen E. and Catherine Pappas Center for Neuro-Oncology (O.A., A.K., K.I.L., E.R.G.), Massachusetts General Hospital, 149 13th St, Charlestown, MA 02129; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Mass (K.V.H., K.C., J.P.); MGH and BWH Center for Clinical Data Science, Boston, Mass (C.P.B., J.K.C.); Department of Radiation Oncology, Division of Radiation Oncology (S.A., C.C.); Department of Diagnostic Radiology, Division of Diagnostic Imaging (C.C.), and Department of Neuroradiology (J.M.J.), Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, Tex; Departments of Radiology (R.Y.H.) and Neurology (T.T.B.), Brigham and Women’s Hospital, Boston, Mass; Department of Radiology and Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, Tex (M.P.); and Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colo (J.K.C.)
- Oluwatosin Akintola
- Caroline Chung
- Raymond Y. Huang
- Jason M. Johnson
- Albert Kim
- K. Ina Ly
- Ken Chang
- Jay Patel
- Marco Pinho
- Tracy T. Batchelor
- Bruce R. Rosen
- Elizabeth R. Gerstner
- Jayashree Kalpathy-Cramer
9
Shanbhag NM, Bin Sumaida A, Binz T, Hasnain SM, El-Koha O, Al Kaabi K, Saleh M, Al Qawasmeh K, Balaraj K. Integrating Artificial Intelligence Into Radiation Oncology: Can Humans Spot AI? Cureus 2023; 15:e50486. [PMID: 38098735 PMCID: PMC10719429 DOI: 10.7759/cureus.50486] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 12/13/2023] [Indexed: 12/17/2023]
Abstract
Introduction Artificial intelligence (AI) is transforming healthcare, particularly in radiation oncology. AI-based contouring tools such as Limbus are designed to delineate organs at risk (OAR) and target volumes quickly. This study evaluates the accuracy and efficiency of AI contouring compared with human radiation oncologists, and the ability of professionals to differentiate between AI-generated and human-generated contours. Methods At a recent AI conference in Abu Dhabi, a blind comparative analysis was performed to assess AI's performance in radiation oncology. Participants included four human radiation oncologists and the Limbus® AI software. They contoured specific regions from CT scans of a breast cancer patient. The audience, consisting of healthcare professionals and AI experts, was challenged to identify the AI-generated contours. The exercise was repeated twice to observe any learning effects. The time taken for contouring and the audience's identification accuracy were recorded. Results Initially, only 28% of the audience correctly identified the AI contours, rising slightly to 31% in the second attempt, indicating the difficulty of distinguishing AI output from human expertise. The AI completed contouring within 60 seconds, significantly faster than the human average of 8 minutes. Discussion The results indicate that AI can perform radiation contouring comparably to human oncologists but much faster. The difficulty professionals faced in identifying AI versus human contours highlights AI's advanced capabilities in medical tasks. Conclusion AI shows promise in enhancing the radiation oncology workflow by reducing contouring time without compromising quality. Further research is needed to confirm the clinical efficacy of AI contouring and its integration into routine practice.
Affiliation(s)
- Nandan M Shanbhag
- Oncology/Palliative Care, Tawam Hospital, Al Ain, ARE
- Oncology/Radiation Oncology, Tawam Hospital, Al Ain, ARE
- Theresa Binz
- Radiotherapy Technology, Tawam Hospital, Al Ain, ARE
- Khalid Balaraj
- Oncology/Radiation Oncology, Tawam Hospital, Al Ain, ARE
10
Heyn C, Moody AR, Tseng CL, Wong E, Kang T, Kapadia A, Howard P, Maralani P, Symons S, Goubran M, Martel A, Chen H, Myrehaug S, Detsky J, Sahgal A, Soliman H. Segmentation of Brain Metastases Using Background Layer Statistics (BLAST). AJNR Am J Neuroradiol 2023; 44:1135-1143. [PMID: 37735088 PMCID: PMC10549939 DOI: 10.3174/ajnr.a7998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/24/2023] [Accepted: 08/16/2023] [Indexed: 09/23/2023]
Abstract
BACKGROUND AND PURPOSE Accurate segmentation of brain metastases is important for treatment planning and evaluating response. The aim of this study was to assess the performance of a semiautomated algorithm for brain metastases segmentation using Background Layer Statistics (BLAST). MATERIALS AND METHODS Nineteen patients with 48 parenchymal and dural brain metastases were included. Segmentation was performed by 4 neuroradiologists and 1 radiation oncologist. K-means clustering was used to identify normal gray and white matter (background layer) in a 2D parameter space of signal intensities from postcontrast T2 FLAIR and T1 MPRAGE sequences. The background layer was subtracted and operator-defined thresholds were applied in parameter space to segment brain metastases. The remaining voxels were back-projected to visualize segmentations in image space and evaluated by the operators. Segmentation performance was measured by calculating the Dice-Sørensen coefficient and Hausdorff distance using ground truth segmentations made by the investigators. Contours derived from the segmentations were evaluated for clinical acceptance using a 5-point Likert scale. RESULTS The median Dice-Sørensen coefficient was 0.82 for all brain metastases and 0.9 for brain metastases of ≥10 mm. The median Hausdorff distance was 1.4 mm. Excellent interreader agreement for brain metastases volumes was found with an intraclass correlation coefficient = 0.9978. The median segmentation time was 2.8 minutes/metastasis. Forty-five contours (94%) had a Likert score of 4 or 5, indicating that the contours were acceptable for treatment, requiring no changes or minor edits. CONCLUSIONS We show accurate and reproducible segmentation of brain metastases using BLAST and demonstrate its potential as a tool for radiation planning and evaluating treatment response.
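The Dice-Sørensen coefficient and Hausdorff distance reported in the abstract above have compact definitions over binary masks and point sets. The following is a minimal illustrative NumPy sketch under our own function names, not the BLAST implementation:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice-Sørensen coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N x D) point sets:
    the largest nearest-neighbour distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy 2D masks: prediction shifted by one voxel relative to ground truth
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True

print(round(dice_coefficient(gt, pred), 3))  # 0.75
print(hausdorff_distance(np.argwhere(gt), np.argwhere(pred)))  # 1.0
```

The pairwise-distance matrix makes the directed/symmetric distinction explicit, at the cost of O(N·M) memory; production code typically uses a distance transform instead.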
Affiliation(s)
- Chris Heyn
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Alan R Moody
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Chia-Lin Tseng
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Erin Wong
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Tony Kang
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Anish Kapadia
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Peter Howard
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Pejman Maralani
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Sean Symons
- From the Department of Medical Imaging (C.H., A.R.M., E.W., T.K., A.K., P.H., P.M., S.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Maged Goubran
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Department of Medical Biophysics (M.G., A.M.), University of Toronto, Toronto, Ontario, Canada
| | - Anne Martel
- Sunnybrook Research Institute (C.H., A.R.M., M.G., A.M.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Department of Medical Biophysics (M.G., A.M.), University of Toronto, Toronto, Ontario, Canada
| | - Hanbo Chen
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Sten Myrehaug
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Jay Detsky
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Arjun Sahgal
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| | - Hany Soliman
- Department of Radiation Oncology (C.-L.T., H.C., S.M., J.D., A.S., H.S.), Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
| |
11
Wang H, Qu T, Bernstein K, Barbee D, Kondziolka D. Automatic segmentation of vestibular schwannomas from T1-weighted MRI with a deep neural network. Radiat Oncol 2023; 18:78. [PMID: 37158968 PMCID: PMC10169364 DOI: 10.1186/s13014-023-02263-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/01/2022] [Accepted: 04/12/2023] [Indexed: 05/10/2023]
Abstract
BACKGROUND Long-term follow-up using volumetric measurement could significantly assist in the management of vestibular schwannomas (VS). Manual segmentation of VS from MRI for treatment planning and follow-up assessment is labor-intensive and time-consuming. This study aims to develop a deep learning technique to segment VS from MRI fully automatically. METHODS This study retrospectively analyzed MRI data of 737 patients who received gamma knife radiosurgery for VS. Treatment-planning T1-weighted isotropic MR images and manually contoured gross tumor volumes (GTV) were used for model development. A 3D convolutional neural network (CNN) was built on ResNet blocks. Spatial attention and deep supervision modules were integrated at each decoder level to enhance training for small tumor volumes on brain MRI. The model was trained and tested on data from 587 and 150 patients, respectively, drawn from this institution (n = 495) and a publicly available dataset (n = 242). Model performance was assessed by the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), average symmetric surface distance (ASSD), and relative absolute volume difference (RAVD) of the model segmentation results against the GTVs. RESULTS Measured on the combined testing data from the two institutions, the proposed method achieved a mean DSC of 0.91 ± 0.08, ASSD of 0.3 ± 0.4 mm, HD95 of 1.3 ± 1.6 mm, and RAVD of 0.09 ± 0.15. The DSCs were 0.91 ± 0.09 and 0.92 ± 0.06 on the 100 testing patients from this institution and the 50 from the public dataset, respectively. CONCLUSIONS A CNN model was developed for fully automated segmentation of VS on T1-weighted isotropic MRI. The model achieved good performance compared with physicians' clinical delineations on a sizeable dataset from two institutions. The proposed method could facilitate the clinical workflow of radiosurgery for VS patient management.
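Two of the metrics in the abstract above, HD95 and RAVD, have simple definitions worth spelling out: HD95 is the 95th percentile of the symmetric nearest-neighbour surface distances (robust to a few outlier voxels, unlike the classical maximum Hausdorff distance), and RAVD compares segmented volumes directly. A hedged NumPy sketch with our own function names, not the authors' code:

```python
import numpy as np

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """95% Hausdorff distance: 95th percentile of the pooled
    nearest-neighbour distances in both directions."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    dists = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(dists, 95))

def ravd(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Relative absolute volume difference: |V_pred - V_gt| / V_gt."""
    v_pred, v_gt = float(pred_mask.sum()), float(gt_mask.sum())
    return abs(v_pred - v_gt) / v_gt

# Toy example: a 4x4 ground-truth square vs. a 4x5 prediction shifted by one column
gt = np.zeros((10, 10), dtype=bool)
gt[3:7, 3:7] = True
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 4:9] = True

print(round(ravd(pred, gt), 2))  # 0.25
```

In practice HD95 is computed on surface voxels only; here whole-mask point sets keep the toy example short.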
Affiliation(s)
- Hesheng Wang
- Department of Radiation Oncology, NYU Grossman School of Medicine, New York, NY, 10016, USA
- Tanxia Qu
- Kenneth Bernstein
- David Barbee
- Douglas Kondziolka
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, NY, 10016, USA
12
Ruffle JK, Mohinta S, Gray R, Hyare H, Nachev P. Brain tumour segmentation with incomplete imaging data. Brain Commun 2023; 5:fcad118. [PMID: 37124946 PMCID: PMC10144694 DOI: 10.1093/braincomms/fcad118] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Received: 01/16/2023] [Revised: 02/22/2023] [Accepted: 04/08/2023] [Indexed: 05/02/2023]
Abstract
Progress in neuro-oncology is increasingly recognized to be obstructed by the marked genetic, pathological, and clinical heterogeneity of brain tumours. If the treatment susceptibilities and outcomes of individual patients differ widely, determined by the interactions of many multimodal characteristics, then large-scale, fully inclusive, richly phenotyped data, including imaging, will be needed to predict them at the individual level. Such data can realistically be acquired only in the routine clinical stream, where its quality is inevitably degraded by the constraints of real-world clinical care. Although contemporary machine learning could theoretically provide a solution to this task, especially in the domain of imaging, its ability to cope with realistic, incomplete, low-quality data is yet to be determined. In the largest and most comprehensive study of its kind, applying state-of-the-art brain tumour segmentation models to large-scale, multi-site MRI data of 1251 individuals, here we quantify the comparative fidelity of automated segmentation models drawn from MR data replicating the various levels of completeness observed in real life. We demonstrate that models trained on incomplete data can segment lesions very well, often equivalently to those trained on the full complement of images, exhibiting Dice coefficients of 0.907 (single sequence) to 0.945 (complete set) for whole tumours and 0.701 (single sequence) to 0.891 (complete set) for component tissue types. This finding opens the door both to the application of segmentation models to large-scale historical data, for the purpose of building treatment and outcome predictive models, and to their application in real-world clinical care. We further ascertain that segmentation models can accurately detect enhancing tumour in the absence of contrast-enhanced imaging, quantifying the burden of enhancing tumour with an R² > 0.97, varying negligibly with lesion morphology.
Such models can quantify enhancing tumour without the administration of intravenous contrast, inviting a revision of the notion of tumour enhancement if the same information can be extracted without contrast-enhanced imaging. Our analysis includes validation on a heterogeneous, real-world sample of 50 patients with brain tumour imaging acquired over the last 15 years at our tertiary centre, demonstrating maintained accuracy even on non-isotropic MRI acquisitions and on complex post-operative imaging with tumour recurrence. This work substantially extends the translational opportunity for quantitative analysis to clinical situations where the full complement of sequences is not available and potentially enables the characterization of contrast-enhanced regions where contrast administration is infeasible or undesirable.
Affiliation(s)
- James K Ruffle
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Samia Mohinta
- Robert Gray
- Harpreet Hyare
- Parashkev Nachev
13
Wang JY, Qu V, Hui C, Sandhu N, Mendoza MG, Panjwani N, Chang YC, Liang CH, Lu JT, Wang L, Kovalchuk N, Gensheimer MF, Soltys SG, Pollom EL. Stratified assessment of an FDA-cleared deep learning algorithm for automated detection and contouring of metastatic brain tumors in stereotactic radiosurgery. Radiat Oncol 2023; 18:61. [PMID: 37016416 PMCID: PMC10074777 DOI: 10.1186/s13014-023-02246-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Received: 09/28/2022] [Accepted: 03/14/2023] [Indexed: 04/06/2023]
Abstract
PURPOSE Artificial intelligence-based tools can be leveraged to improve detection and segmentation of brain metastases for stereotactic radiosurgery (SRS). VBrain by Vysioneer Inc. is a deep learning algorithm with recent FDA clearance to assist in brain tumor contouring. We aimed to assess the performance of this tool across various demographic and clinical characteristics among patients with brain metastases treated with SRS. MATERIALS AND METHODS We randomly selected 100 patients with brain metastases who underwent initial SRS on the CyberKnife from 2017 to 2020 at a single institution. Cases with resection cavities were excluded from the analysis. Computed tomography (CT) and axial T1-weighted post-contrast magnetic resonance (MR) image data were extracted for each patient and uploaded to VBrain. A brain metastasis was considered "detected" when the VBrain-predicted contours overlapped with the corresponding physician contours ("ground-truth" contours). We evaluated performance of VBrain against ground-truth contours using the following metrics: lesion-wise Dice similarity coefficient (DSC), lesion-wise average Hausdorff distance (AVD), false positive count (FP), and lesion-wise sensitivity (%). Kruskal-Wallis tests were performed to assess the relationships between patient characteristics, including sex, race, primary histology, age, and size and number of brain metastases, and performance metrics such as DSC, AVD, FP, and sensitivity. RESULTS We analyzed 100 patients with 435 intact brain metastases treated with SRS. Our cohort consisted of patients with a median number of 2 brain metastases (range: 1 to 52), a median age of 69 (range: 19 to 91), and 50% male and 50% female patients.
The primary site breakdown was 56% lung, 10% melanoma, 9% breast, 8% gynecological, 5% renal, 4% gastrointestinal, 2% sarcoma, and 6% other, while the race breakdown was 60% White, 18% Asian, 3% Black/African American, 2% Native Hawaiian or other Pacific Islander, and 17% other/unknown/not reported. The median tumor size was 0.112 c.c. (range: 0.010-26.475 c.c.). We found the mean lesion-wise DSC to be 0.723, the mean lesion-wise AVD to be 7.34% of lesion size (0.704 mm), the mean FP count to be 0.72 tumors per case, and the lesion-wise sensitivity to be 89.30% for all lesions. Moreover, mean sensitivity was 99.07%, 97.59%, and 96.23% for lesions with diameters of at least 10 mm, 7.5 mm, and 5 mm, respectively. No other significant differences in performance metrics were observed across demographic or clinical characteristic groups. CONCLUSION In this study, a commercial deep learning algorithm showed promising results in segmenting brain metastases, with 96.23% sensitivity for metastases with diameters of 5 mm or greater. As the software is an assistive AI tool, future work on integrating VBrain into the clinical workflow could provide further clinical and research insights.
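The lesion-wise sensitivity reported above follows a simple overlap rule: a ground-truth lesion counts as detected if any predicted voxel touches it. A minimal sketch of that rule (our own function name, not the VBrain evaluation code; `scipy.ndimage.label` separates connected components):

```python
import numpy as np
from scipy import ndimage

def lesion_wise_sensitivity(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Fraction of ground-truth lesions overlapped by at least one
    predicted voxel, mirroring the 'detected if contours overlap' rule."""
    labels, n_lesions = ndimage.label(gt_mask)
    if n_lesions == 0:
        return 1.0
    detected = sum(
        bool(np.logical_and(labels == i, pred_mask).any())
        for i in range(1, n_lesions + 1)
    )
    return detected / n_lesions

# Toy slice: two ground-truth lesions, prediction overlaps only the first
gt = np.zeros((12, 12), dtype=bool)
gt[1:3, 1:3] = True      # lesion 1
gt[8:10, 8:10] = True    # lesion 2
pred = np.zeros((12, 12), dtype=bool)
pred[2:4, 1:3] = True    # touches lesion 1 only

print(lesion_wise_sensitivity(gt, pred))  # 0.5
```

Note the metric is deliberately forgiving: a single overlapping voxel counts as a detection, which is why it is reported alongside DSC and AVD rather than instead of them.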
Affiliation(s)
- Jen-Yeu Wang, Vera Qu, Caressa Hui, Navjot Sandhu, Maria G Mendoza, Neil Panjwani, Lei Wang, Nataliya Kovalchuk, Michael F Gensheimer, Scott G Soltys, Erqi L Pollom: Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
14
Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. [PMID: 35943350 PMCID: PMC10013637 DOI: 10.1093/neuonc/noac192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Indexed: 11/14/2022] Open
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. This ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and 3 experienced radiologists read the MR images without and with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001), rising to 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Small size, greater lesion number, irregular shape, lower signal intensity, and non-brain-surface location were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found among FNs for the BMDS. The FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, radiologists, especially less-experienced ones, should pay attention to FPs and to small lesions with lower enhancement.
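The error analysis above models the probability that a lesion is missed as a function of lesion features via binary logistic regression. A minimal sketch of that idea on synthetic data; the gradient-descent fitter, the single diameter feature, and the generated data are all illustrative, not the study's model:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Binary logistic regression fitted by plain gradient descent.

    X: (n, d) feature matrix, y: (n,) labels in {0, 1}.
    Returns (weights, bias)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(missed)
        w -= lr * (X.T @ (p - y)) / n
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic lesion data: diameter (mm) as the single feature, with small
# lesions more likely to be missed, mirroring the association reported above.
rng = np.random.default_rng(0)
diam = rng.uniform(2, 20, 400)
missed = (rng.random(400) < 1.0 / (1.0 + np.exp(0.8 * (diam - 6)))).astype(float)
X = (diam[:, None] - diam.mean()) / diam.std()   # standardize the feature
w, b = fit_logistic(X, missed)
# w[0] comes out negative: larger diameter, lower odds of being missed.
```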
Affiliation(s)
- Xiao Luo, Yadi Yang, Shaohan Yin, Hui Li, Weijing Zhang, Guixiao Xu, Xiaohua Ban, Jing Li, Shanshan Lian, Cheng Zhang, Lidi Ma, Cuiping Lin, Yingwei Luo, Fan Zhou, Shiyuan Wang, Rong Zhang, Chuanmiao Xie: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, and Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Ying Sun: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weixiong Fan: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Dechun Zheng: Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li: Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China
- Dinggang Shen: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yaozong Gao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou, China
15
A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023; 115:779-793. [PMID: 36289038 DOI: 10.1016/j.ijrobp.2022.09.068] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 08/09/2022] [Accepted: 09/07/2022] [Indexed: 01/19/2023]
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) data set containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent 2 consecutive courses of SRS, treatment planning MRIs from their initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in their second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases based upon their appearance in the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who only underwent a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMs, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79.
CONCLUSIONS Our CAD model, trained on a novel data set and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs, the lesion types most in need of human performance augmentation.
16
Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, El Basha MD, Farhat M, Gay S, Gronberg MP, Gupta AC, Hernandez S, Huang K, Jaffray DA, Lim R, Marquez B, Nealon K, Netherton TJ, Nguyen CM, Reber B, Rhee DJ, Salazar RM, Shanker MD, Sjogreen C, Woodland M, Yang J, Yu C, Zhao Y. Automated Contouring and Planning in Radiation Therapy: What Is 'Clinically Acceptable'? Diagnostics (Basel) 2023; 13:diagnostics13040667. [PMID: 36832155 PMCID: PMC9955359 DOI: 10.3390/diagnostics13040667] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 01/21/2023] [Accepted: 01/30/2023] [Indexed: 02/12/2023] Open
Abstract
Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, all of which have advantages and disadvantages or limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining clinical acceptability of new autocontouring and planning tools.
Affiliation(s)
- Hana Baroudi, Xinru Chen, Mohammad D. El Basha, Skylar Gay, Mary P. Gronberg, Soleil Hernandez, Kai Huang, Rebecca Lim, Barbara Marquez, Kelly Nealon, Cenji Yu, Yao Zhao: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Wenhua Cao, Tucker J. Netherton, Dong Joo Rhee, Ramon M. Salazar, Jinzhong Yang, Laurence E. Court (corresponding author): Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kristy K. Brock, David A. Jaffray: Departments of Radiation Physics and Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Caroline Chung, Maguy Farhat: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Aashish Chandra Gupta: Departments of Radiation Physics and Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Callistus M. Nguyen: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Brandon Reber: The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA; Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mihir D. Shanker: The University of Queensland, Saint Lucia 4072, Australia; The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carlos Sjogreen: Department of Physics, University of Houston, Houston, TX 77004, USA
- McKell Woodland: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Department of Computer Science, Rice University, Houston, TX 77005, USA
17
Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. [PMID: 36635582 DOI: 10.1007/s11060-022-04234-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Accepted: 12/30/2022] [Indexed: 01/14/2023]
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aimed to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI application in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomics-based analysis was another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and directions for further investigation are summarized in this article.
18
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently, in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was given in only 11 (9.4%) papers. A single manual contour as a ground-truth comparator was used in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework to decide the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
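Two of the geometric metrics the review finds most common, the Dice similarity coefficient and the 95th-percentile Hausdorff distance, can be computed directly from binary masks. A simplified sketch: it uses all foreground voxels rather than only surface voxels (which many toolkits use for HD95), and the masks are toy examples:

```python
import numpy as np

def dsc(a, b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between two masks,
    computed over all foreground voxel coordinates (a simplification:
    surface-voxel implementations are more common in practice)."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each voxel of a to its nearest voxel of b
    d_ba = d.min(axis=0)
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

m1 = np.zeros((10, 10), bool); m1[2:6, 2:6] = True   # 4x4 square
m2 = np.zeros((10, 10), bool); m2[3:7, 2:6] = True   # shifted down one row
# dsc(m1, m2) == 0.75; hd95(m1, m2) == 1.0 for these masks
```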
Affiliation(s)
- K Mackay, D Bernstein, A Taylor: The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker: Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas: Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
19
Yu H, Zhang Z, Xia W, Liu Y, Liu L, Luo W, Zhou J, Zhang Y. DeSeg: auto detector-based segmentation for brain metastases. Phys Med Biol 2023; 68. [PMID: 36535028 DOI: 10.1088/1361-6560/acace7] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Delineation of brain metastases (BMs) is a paramount step in stereotactic radiosurgery treatment. Clinical practice places specific expectations on BM auto-delineation: the method should avoid missing small lesions and yield accurate contours for large lesions. In this study, we propose a novel coarse-to-fine framework, named detector-based segmentation (DeSeg), to incorporate object-level detection into pixel-wise segmentation so as to meet this clinical demand. DeSeg consists of three components: a center-point-guided single-shot detector to localize the potential lesion regions, a multi-head U-Net segmentation model to refine contours, and a data cascade unit to connect both tasks smoothly. Performance on tiny lesions is measured by the object-based sensitivity and positive predictive value (PPV), while that on large lesions is quantified by the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD) and 95% Hausdorff distance (HD95). Besides, computational complexity is also considered to study the method's potential for real-time processing. This study retrospectively collected 240 BM patients with gadolinium-injected contrast-enhanced T1-weighted magnetic resonance imaging (T1c-MRI), which were randomly split into training, validating and testing datasets (192, 24 and 24 scans, respectively). The lesions in the testing dataset were further divided into two groups based on volume size (small, S: ≤ 1.5 cc, N = 88; large, L: > 1.5 cc, N = 15). On average, DeSeg yielded a sensitivity of 0.91 and a PPV of 0.77 on the S group, and a DSC of 0.86, an ASSD of 0.76 mm and an HD95 of 2.31 mm on the L group. The results indicated that DeSeg achieved leading sensitivity and PPV for tiny lesions as well as strong segmentation metrics for large ones. After our clinical validation, DeSeg showed competitive segmentation performance while maintaining faster processing speed compared with existing 3D models.
Affiliation(s)
- Hui Yu, Zhongzhou Zhang, Wenjun Xia, Jiliu Zhou: College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yan Liu: College of Electrical Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
- Lunxin Liu: Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, 610044, People's Republic of China
- Wuman Luo: School of Applied Sciences, Macao Polytechnic University, Macao, 999078, People's Republic of China
- Yi Zhang: School of Cyber Science and Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
20
Chartrand G, Emiliani RD, Pawlowski SA, Markel DA, Bahig H, Cengarle-Samak A, Rajakesari S, Lavoie J, Ducharme S, Roberge D. Automated Detection of Brain Metastases on T1-Weighted MRI Using a Convolutional Neural Network: Impact of Volume Aware Loss and Sampling Strategy. J Magn Reson Imaging 2022; 56:1885-1898. [PMID: 35624544 DOI: 10.1002/jmri.28274] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 05/13/2022] [Accepted: 05/13/2022] [Indexed: 01/05/2023] Open
Abstract
BACKGROUND Detection of brain metastases (BM) and segmentation for treatment planning could be optimized with machine learning methods. Convolutional neural networks (CNNs) are promising, but their trade-offs between sensitivity and precision frequently lead to missing small lesions. HYPOTHESIS Combining a volume-aware (VA) loss function and sampling strategy could improve BM detection sensitivity. STUDY TYPE Retrospective. POPULATION A total of 530 radiation oncology patients (55% women) were split into a training/validation set (433 patients/1460 BM) and an independent test set (97 patients/296 BM). FIELD STRENGTH/SEQUENCE 1.5 T and 3 T, contrast-enhanced three-dimensional (3D) T1-weighted fast gradient echo sequences. ASSESSMENT Ground truth masks were based on radiotherapy treatment planning contours reviewed by experts. A U-Net-inspired model was trained. Three loss functions (Dice, Dice + boundary, and VA) and two sampling methods (label and VA) were compared. Results were reported with Dice scores, volumetric error, lesion detection sensitivity, and precision. A detected voxel within the ground truth constituted a true positive. STATISTICAL TESTS McNemar's exact test to compare detected lesions between models. Pearson's correlation coefficient and Bland-Altman analysis to compare agreement between predicted and ground truth volumes. Statistical significance was set at P ≤ 0.05. RESULTS Combining VA loss and VA sampling performed best, with an overall sensitivity of 91% and precision of 81%. For BM in the 2.5-6 mm estimated sphere diameter range, VA loss reduced false negatives by 58% and VA sampling reduced them by a further 30%. In the same range, the boundary loss achieved the highest precision at 81%, but a low sensitivity (24%) and a 31% Dice loss. DATA CONCLUSION Considering BM size in the loss and sampling functions of a CNN may increase detection sensitivity for small BM. Our pipeline, relying on a single contrast-enhanced T1-weighted MRI sequence, could reach a detection sensitivity of 91%, with an average of only 0.66 false positives per scan. EVIDENCE LEVEL 3 TECHNICAL EFFICACY Stage 2.
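One common way to realise a volume-aware Dice loss is to weight each lesion's voxels by the inverse of its volume, in the spirit of generalized Dice weighting, so that small metastases contribute to the loss as much as large ones. The sketch below illustrates that idea only; it is not the authors' exact formulation, and the masks are toy examples:

```python
import numpy as np

def volume_aware_dice_loss(pred, gt_lesions, eps=1e-6):
    """Illustrative volume-aware soft Dice loss.

    Voxels of each lesion are weighted by the inverse of that lesion's
    volume (akin to generalized Dice weighting), so tiny metastases are
    not swamped by large ones. Not the authors' exact loss.

    pred: array of soft predictions in [0, 1].
    gt_lesions: list of boolean masks, one per lesion (same shape as pred)."""
    w = np.zeros(pred.shape)
    for g in gt_lesions:
        w[g] = 1.0 / g.sum()          # inverse-volume weight per lesion
    gt = np.logical_or.reduce(gt_lesions)
    w[~gt] = w[gt].min()              # modest uniform background weight
    num = 2.0 * (w * pred * gt).sum() + eps
    den = (w * pred).sum() + (w * gt).sum() + eps
    return 1.0 - num / den

# A perfect prediction drives the loss toward 0; entirely missing the
# one-voxel lesion costs far more than missing one voxel of the big one.
g_small = np.zeros((12, 12), bool); g_small[1, 1] = True
g_large = np.zeros((12, 12), bool); g_large[5:11, 5:11] = True
perfect = np.logical_or(g_small, g_large).astype(float)
```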
Affiliation(s)
- Daniel A Markel, Houda Bahig, David Roberge: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Selvan Rajakesari: Department of Radiation Oncology, Hopital Charles Lemoyne, Greenfield Park, Québec, Canada
- Simon Ducharme: AFX Medical Inc., Montréal, Canada; Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montréal, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
21
Ann CN, Luo N, Pandit AS. Letter: Image Segmentation in Neurosurgery: An Undervalued Skill Set? Neurosurgery 2022; 91:e31-e32. [PMID: 35471495 DOI: 10.1227/neu.0000000000002018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 03/13/2022] [Indexed: 11/19/2022] Open
Affiliation(s)
- Chu Ning Ann: University College London, Institute of Cognitive Neuroscience, London, UK
- Nianhe Luo: University College London Medical School, Bloomsbury, London, UK
- Anand S Pandit: Victor Horsley Department of Neurosurgery, The National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
22
Machine learning in neuro-oncology: toward novel development fields. J Neurooncol 2022; 159:333-346. [PMID: 35761160 DOI: 10.1007/s11060-022-04068-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 06/11/2022] [Indexed: 10/17/2022]
Abstract
PURPOSE Artificial Intelligence (AI) encompasses a range of techniques capable of processing large amounts of data toward a specific planned outcome. This technology has several possible applications in neuro-oncology. METHODS We reviewed, according to PRISMA guidelines, available studies adopting AI in different fields of neuro-oncology, including neuro-radiology, pathology, surgery, radiation therapy, and systemic treatments. RESULTS Neuro-radiology accounted for the largest number of studies assessing AI. However, the technology is also being successfully tested in other operative settings, including surgery and radiation therapy, where AI has been shown to significantly reduce resources and costs while maintaining a high qualitative standard. Pathological diagnosis and the development of novel systemic treatments are two further fields in which AI has shown promising preliminary data. CONCLUSION AI is likely to be incorporated soon into some aspects of daily clinical practice. The possible applications of these techniques are extensive and cover all aspects of neuro-oncology.
23
Bredfeldt JS, Miao X, Kaza E, Schneider M, Requardt M, Feiweier T, Aizer A, Tanguturi S, Haas-Kogan D, Rahman R, Cagney DN, Sudhyadhom A. Patient specific distortion detection and mitigation in MR images used for stereotactic radiosurgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac508e] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 01/31/2022] [Indexed: 11/12/2022]
Abstract
Objective. In MRI-based radiation therapy planning, mitigating patient-specific distortion with standard high-bandwidth scans can result in unnecessary sacrifices of signal-to-noise ratio. This study investigates a technique for distortion detection and mitigation on a patient-specific basis. Approach. Fast B0 mapping was performed using a previously developed technique for high-resolution, large-dynamic-range field mapping without the need for phase unwrapping algorithms. A phantom study was performed to validate the method. Distortion mitigation was validated by reducing geometric distortion with increased acquisition bandwidth and confirmed by both the B0 mapping technique and manual measurements. Images and contours from 25 brain stereotactic radiosurgery patients and 95 targets were analyzed to estimate the range of geometric distortions expected in the brain and the bandwidth required to keep all treatment targets within the ±0.5 mm iso-distortion contour. Main Results. The phantom study showed that, at 3 T, the technique can measure distortions with a mean absolute error of 0.12 mm (0.18 ppm) and a maximum error of 0.37 mm (0.6 ppm). For image acquisition at 3 T and 1.0 mm resolution, keeping mean absolute distortion under 0.5 mm in patients required bandwidths from 109 to 200 Hz px⁻¹ for the patients with the least and most distortion, respectively; keeping maximum absolute distortion under 0.5 mm required bandwidths from 120 to 390 Hz px⁻¹. Significance. The B0 mapping method was shown to be valid and may be applied to assess distortion clinically. Future work will adapt the readout bandwidth to prospectively mitigate distortion, with the goal of improving radiosurgery treatment outcomes by reducing healthy tissue exposure.
24
Shirokikh B, Dalechina A, Shevtsov A, Krivov E, Kostjuchenko V, Durgaryan A, Galkin M, Golanov A, Belyaev M. Systematic Clinical Evaluation of A Deep Learning Method for Medical Image Segmentation: Radiosurgery Application. IEEE J Biomed Health Inform 2022; 26:3037-3046. [PMID: 35213318 DOI: 10.1109/jbhi.2022.3153394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
We systematically evaluate a deep learning model in a 3D medical image segmentation task. With our model, we address the flaws of manual segmentation: high inter-rater contouring variability and the time consumption of the contouring process. The main extension over existing evaluations is the careful and detailed analysis, which could be further generalized to other medical image segmentation tasks. First, we analyze changes in the inter-rater detection agreement and show that the model reduces the number of detection disagreements by 48% (p < 0.05). Second, we show that the model improves the inter-rater contouring agreement from 0.845 to 0.871 surface Dice score (p < 0.05). Third, we show that the model accelerates the delineation process by a factor of 1.6 to 2.0 (p < 0.05). Finally, we design the setup of the clinical experiment to either exclude or estimate the evaluation biases, thus preserving the significance of the results. Beyond the clinical evaluation, we also share intuitions and practical ideas for building an efficient DL-based model for 3D medical image segmentation.
25
Yin S, Luo X, Yang Y, Shao Y, Ma L, Lin C, Yang Q, Wang D, Luo Y, Mai Z, Fan W, Zheng D, Li J, Cheng F, Zhang Y, Zhong X, Shen F, Shao G, Wu J, Sun Y, Luo H, Li C, Gao Y, Shen D, Zhang R, Xie C. OUP accepted manuscript. Neuro Oncol 2022; 24:1559-1570. [PMID: 35100427 PMCID: PMC9435500 DOI: 10.1093/neuonc/noac025] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Affiliation(s)
- Ying Shao
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Lidi Ma
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Cuiping Lin
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Qiuxia Yang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Deling Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yingwei Luo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhijun Mai
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Weixiong Fan
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Dechun Zheng
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Fengyan Cheng
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Yuhui Zhang
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Xinwei Zhong
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Fangmin Shen
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Guohua Shao
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Jiahao Wu
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Huiyan Luo
- Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Chaofeng Li
- Department of Artificial Intelligence Laboratory, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yaozong Gao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Rong Zhang
- Rong Zhang, PhD, The Department of Radiology, 651 Dongfeng Road East, Yuexiu District, Guangzhou 510060, P.R. China
- Chuanmiao Xie
- Corresponding Author: Chuanmiao Xie, PhD, The Department of Radiology, 651 Dongfeng Road East, Yuexiu District, Guangzhou 510060, P.R. China
26
Deep Learning-Based Segmentation of Various Brain Lesions for Radiosurgery. Appl Sci (Basel) 2021. [DOI: 10.3390/app11199180] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Semantic segmentation of medical images with deep learning models is developing rapidly. In this study, we benchmarked state-of-the-art deep learning segmentation algorithms on our clinical stereotactic radiosurgery dataset. The dataset consists of 1688 patients with various brain lesions (pituitary tumors, meningioma, schwannoma, brain metastases, arteriovenous malformation, and trigeminal neuralgia), which we divided into a training set (1557 patients) and a test set (131 patients). This study demonstrates the strengths and weaknesses of deep learning algorithms in a fairly practical scenario. We compared model performance with respect to sampling method, model architecture, and choice of loss function, identifying suitable settings for their application and shedding light on possible improvements. Evidence from this study led us to conclude that deep learning could be promising in assisting the segmentation of brain lesions even when the training dataset is highly heterogeneous in lesion types and sizes.
27
Kotecha R, Aneja S. Opportunities for Integration of Artificial Intelligence into Stereotactic Radiosurgery Practice. Neuro Oncol 2021; 23:1629-1630. [PMID: 34244803 DOI: 10.1093/neuonc/noab169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Affiliation(s)
- Rupesh Kotecha
- Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, FL, USA; Herbert Wertheim College of Medicine, Florida International University, Miami, FL, USA
| | - Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA; Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT, USA