1
Wu Y, Li Y, Liu Y, Zhu D, Xing S, Lambert N, Weisbecker H, Liu S, Davis B, Zhang L, Wang M, Yuan G, You CZ, Zhang A, Duncan C, Xie W, Wang Y, Wang Y, Kanamurlapudi S, Evert GG, Putcha A, Dickey MD, Huang K, Bai W. Orbit symmetry breaking in MXene implements enhanced soft bioelectronic implants. Sci Adv 2024; 10:eadp8866. [PMID: 39356763 DOI: 10.1126/sciadv.adp8866] [Received: 04/16/2024] [Accepted: 08/28/2024] [Indexed: 10/04/2024]
Abstract
Bioelectronic implants featuring soft mechanics, excellent biocompatibility, and outstanding electrical performance hold promising potential to revolutionize implantable technology. These biomedical implants can record electrophysiological signals and execute direct therapeutic interventions within internal organs, offering transformative potential in the diagnosis, monitoring, and treatment of various pathological conditions. However, challenges remain in reducing the excessive impedance at the bioelectronic-tissue interface and thus improving the efficacy of electrophysiological signaling and intervention. Here, we devise orbit symmetry breaking in MXene (a low-cost, scalable, biocompatible, and conductive two-dimensionally layered material, which we refer to as OBXene), which exhibits low bioelectronic-tissue impedance originating from out-of-plane charge transfer. Furthermore, the Schottky-induced piezoelectricity stemming from the asymmetric orbital configuration of OBXene facilitates interlayer charge transport in the device. We report an OBXene-based cardiac patch applied on the left ventricular epicardium of both rodent and porcine models to enable spatiotemporal epicardium mapping and pacing, while coupling wireless, battery-free operation for long-term real-time recording and closed-loop stimulation.
Affiliation(s)
- Yizhang Wu
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Yuan Li
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
- Yihan Liu
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Dashuai Zhu
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
- Sicheng Xing
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Noah Lambert
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Hannah Weisbecker
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Siyuan Liu
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Brayden Davis
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Lin Zhang
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Meixiang Wang
- Department of Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, NC 27606, USA
- Gongkai Yuan
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Anran Zhang
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Cate Duncan
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Wanrong Xie
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Yihang Wang
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Yong Wang
- Wide Bandgap Semiconductor Technology Disciplines State Key Laboratory, School of Microelectronics, Academy of Advanced Interdisciplinary Research, Xidian University, Xi'an 710071, China
- Sreya Kanamurlapudi
- Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27607, USA
- Garcia-Guzman Evert
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Arjun Putcha
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Michael D Dickey
- Department of Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, NC 27606, USA
- Ke Huang
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
- Wubin Bai
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
2
Yamada A, Hanaoka S, Takenaga T, Miki S, Yoshikawa T, Nomura Y. Investigation of distributed learning for automated lesion detection in head MR images. Radiol Phys Technol 2024; 17:725-738. [PMID: 39048847 PMCID: PMC11341643 DOI: 10.1007/s12194-024-00827-5] [Received: 03/21/2024] [Revised: 06/11/2024] [Accepted: 07/14/2024] [Indexed: 07/27/2024]
Abstract
In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, in the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in contrast-enhanced brain MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning can serve as one strategy for CADe software development using data collected from multiple institutions.
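The two distributed-learning strategies compared in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the model is a plain weight vector and local_update is a hypothetical stand-in for one round of site-local training, so that only weights, never patient data, leave each institution.

```python
# Illustrative sketch of cyclical weight transfer vs. federated averaging.
# "Training" is mocked as a deterministic nudge toward each site's data mean.

def local_update(weights, data):
    # Hypothetical surrogate for local SGD at one institution.
    mean = sum(data) / len(data)
    return [w + 0.1 * (mean - w) for w in weights]

def cyclical_weight_transfer(weights, institutions, cycles=3):
    # The model visits each institution in turn; only weights move.
    for _ in range(cycles):
        for data in institutions:
            weights = local_update(weights, data)
    return weights

def federated_averaging(weights, institutions, rounds=3):
    # Each institution trains a copy in parallel; a server averages the copies.
    for _ in range(rounds):
        locals_ = [local_update(list(weights), data) for data in institutions]
        weights = [sum(ws) / len(ws) for ws in zip(*locals_)]
    return weights

institutions = [[0.2, 0.4], [0.9, 1.1], [0.5, 0.7]]  # toy per-site "datasets"
w_cwt = cyclical_weight_transfer([0.0, 0.0], institutions)
w_fed = federated_averaging([0.0, 0.0], institutions)
```

Which variant performs best is, as the abstract notes, specific to the target CADe task.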
Affiliation(s)
- Aiki Yamada
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan.
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
3
Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024; 116:102401. [PMID: 38795690 DOI: 10.1016/j.compmedimag.2024.102401] [Received: 01/12/2024] [Revised: 05/13/2024] [Accepted: 05/13/2024] [Indexed: 05/28/2024]
Abstract
Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors reveal a wide spectrum of characteristics. These lesions vary in size and number, spanning from tiny nodules to substantial masses visible on MRI, and patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent, and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precisely tracking disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to elaborate training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of tracking disease progression and evaluating treatment efficacy.
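The ensembling idea described in this abstract (combining predictions from heterogeneous networks) can be sketched as soft voting over per-voxel probability maps. Everything here is an illustrative assumption: fake_model stands in for a trained detection or segmentation network, and the 0.5 decision threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_model(volume):
    # Stand-in for one trained network: a sigmoid of intensity plus noise,
    # so each "model" yields a slightly different voxel-probability map.
    noise = 0.1 * rng.standard_normal(volume.shape)
    return 1 / (1 + np.exp(-(volume - 0.5) + noise))

def ensemble_probability(volume, n_models=3):
    # Soft voting: average the probability maps of all ensemble members.
    maps = [fake_model(volume) for _ in range(n_models)]
    return np.mean(maps, axis=0)

volume = rng.random((8, 8, 8))   # toy "MRI" volume
prob = ensemble_probability(volume)
mask = prob > 0.5                # final binary lesion mask
```

In a real pipeline the thresholded mask would then be split into connected components so individual metastases can be tracked across longitudinal studies.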
Affiliation(s)
- Damian Kucharski
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland.
- Bartosz Kokoszka
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland.
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Benjamín Gutiérrez-Becker
- Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland.
- Agata Krason
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
- Jean Tessier
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.
4
Kikuchi K, Togao O, Kikuchi Y, Yamashita K, Momosaka D, Fukasawa K, Nishimura S, Toyoda H, Obara M, Hiwatashi A, Ishigami K. Artificial intelligence-assisted volume isotropic simultaneous interleaved bright- and black-blood examination for brain metastases. Neuroradiology 2024:10.1007/s00234-024-03454-4. [PMID: 39172167 DOI: 10.1007/s00234-024-03454-4] [Received: 06/13/2024] [Accepted: 08/14/2024] [Indexed: 08/23/2024]
Abstract
PURPOSE To verify the effectiveness of artificial intelligence-assisted volume isotropic simultaneous interleaved bright-/black-blood examination (AI-VISIBLE) for detecting brain metastases. METHODS This retrospective study was approved by our institutional review board, and the requirement for written informed consent was waived. Forty patients were included: 20 with brain metastases and 20 without. Seven independent observers (three radiology residents and four neuroradiologists) participated in two reading sessions: in the first, brain metastases were detected using VISIBLE only; in the second, the results of the first session were comprehensively re-evaluated with the addition of AI-VISIBLE information. Sensitivity, diagnostic performance, and false positives per case were evaluated. Diagnostic performance was assessed using a figure of merit (FOM). Sensitivity and false positives per case were evaluated using the McNemar test and paired t-tests, respectively. RESULTS The McNemar test revealed a significant difference between VISIBLE with and without AI information (P < 0.0001). Significantly higher sensitivity (94.9 ± 1.7% vs. 88.3 ± 5.1%, P = 0.0028) and FOM (0.983 ± 0.009 vs. 0.972 ± 0.013, P = 0.0063) were achieved using VISIBLE with AI information than without. No significant difference was observed in false positives per case with and without AI information (0.23 ± 0.19 vs. 0.18 ± 0.15, P = 0.250). With AI assistance, the results of radiology residents became comparable to those of neuroradiologists (sensitivity, FOM: 85.9 ± 3.4% vs. 90.0 ± 5.9%, 0.969 ± 0.016 vs. 0.974 ± 0.012 without AI information; 94.8 ± 1.3% vs. 95.0 ± 2.1%, 0.977 ± 0.010 vs. 0.988 ± 0.005 with AI information, respectively). CONCLUSION AI-VISIBLE improved the sensitivity and diagnostic performance for detecting brain metastases.
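The paired comparison in this abstract rests on the McNemar test, which uses only the discordant counts between the two reading sessions (lesions detected in one session but not the other). A minimal sketch of the exact binomial form, with invented counts:

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value from discordant pair counts:
    b = lesions found only without AI, c = lesions found only with AI.
    Under the null, discordant pairs split 50/50, so this is a doubled
    binomial tail probability with p = 0.5."""
    n = b + c
    k = min(b, c)
    p = sum(comb(n, i) for i in range(k + 1)) * 2 / 2 ** n
    return min(1.0, p)

# Hypothetical discordant counts: 2 lesions seen only without AI,
# 15 seen only with AI.
p_value = mcnemar_exact(2, 15)
```

A small p-value here indicates, as in the study, that detections with and without AI assistance differ systematically rather than by chance.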
Affiliation(s)
- Kazufumi Kikuchi
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Osamu Togao
- Department of Molecular Imaging & Diagnosis, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yoshitomo Kikuchi
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Koji Yamashita
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Daichi Momosaka
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Kazunori Fukasawa
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Shunsuke Nishimura
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Hiroyuki Toyoda
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Makoto Obara
- Philips Japan Ltd., 2-13-37, Konan, Minato-ku, Tokyo, 108-8507, Japan
- Akio Hiwatashi
- Department of Radiology, Graduate School of Medical Sciences, Nagoya City University, 1 Kawasumi, Mizuho-cho, Mizuho-ku, Nagoya, 467-8601, Japan
- Kousei Ishigami
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
5
Chukwujindu E, Faiz H, Al-Douri S, Faiz K, De Sequeira A. Role of artificial intelligence in brain tumour imaging. Eur J Radiol 2024; 176:111509. [PMID: 38788610 DOI: 10.1016/j.ejrad.2024.111509] [Received: 01/24/2024] [Revised: 04/29/2024] [Accepted: 05/13/2024] [Indexed: 05/26/2024]
Abstract
Artificial intelligence (AI) is a rapidly evolving field with many neuro-oncology applications. In this review, we discuss how AI can assist in brain tumour imaging, focusing on machine learning (ML) and deep learning (DL) techniques. We describe how AI can help in lesion detection, differential diagnosis, anatomic segmentation, molecular marker identification, prognostication, and pseudo-progression evaluation. We also cover AI applications in non-glioma brain tumours, such as brain metastases, posterior fossa tumours, and pituitary tumours. We highlight the challenges and limitations of AI implementation in radiology, such as data quality, standardization, and integration. Based on the findings in the aforementioned areas, we conclude that AI can potentially improve the diagnosis and treatment of brain tumours and provide a path towards personalized medicine and better patient outcomes.
Affiliation(s)
- Khunsa Faiz
- McMaster University, Department of Radiology, L8S 4L8, Canada.
6
Moawad AW, Janas A, Baid U, Ramakrishnan D, Saluja R, Ashraf N, Jekel L, Amiruddin R, Adewole M, Albrecht J, Anazodo U, Aneja S, Anwar SM, Bergquist T, Calabrese E, Chiang V, Chung V, Conte GMM, Dako F, Eddy J, Ezhov I, Familiar A, Farahani K, Iglesias JE, Jiang Z, Johanson E, Kazerooni AF, Kofler F, Krantchev K, LaBella D, Van Leemput K, Li HB, Linguraru MG, Link KE, Liu X, Maleki N, Meier Z, Menze BH, Moy H, Osenberg K, Piraud M, Reitman Z, Shinohara RT, Tahon NH, Nada A, Velichko YS, Wang C, Wiestler B, Wiggins W, Shafique U, Willms K, Avesta A, Bousabarah K, Chakrabarty S, Gennaro N, Holler W, Kaur M, LaMontagne P, Lin M, Lost J, Marcus DS, Maresca R, Merkaj S, Nada A, Pedersen GC, von Reppert M, Sotiras A, Teytelboym O, Tillmans N, Westerhoff M, Youssef A, Godfrey D, Floyd S, Rauschecker A, Villanueva-Meyer J, Pflüger I, Cho J, Bendszus M, Brugnara G, Cramer J, Perez-Carillo GJG, Johnson DR, Kam A, Kwan BYM, Lai L, Lall NU, Memon F, Patro SN, Petrovic B, So TY, Thompson G, Wu L, Schrickel EB, Bansal A, Barkhof F, Besada C, Chu S, Druzgal J, Dusoi A, Farage L, Feltrin F, Fong A, Fung SH, Gray RI, Ikuta I, Iv M, Postma AA, Mahajan A, Joyner D, Krumpelman C, Letourneau-Guillon L, Lincoln CM, Maros ME, Miller E, Morón F, Nimchinsky EA, Ozsarlak O, Patel U, Rohatgi S, Saha A, Sayah A, Schwartz ED, Shih R, Shiroishi MS, Small JE, Tanwar M, Valerie J, Weinberg BD, White ML, Young R, Zohrabian VM, Azizova A, Brüßeler MMT, Fehringer P, Ghonim M, Ghonim M, Gkampenis A, Okar A, Pasquini L, Sharifi Y, Singh G, Sollmann N, Soumala T, Taherzadeh M, Yordanov N, Vollmuth P, Foltyn-Dumitru M, Malhotra A, Abayazeed AH, Dellepiane F, Lohmann P, Pérez-García VM, Elhalawani H, Al-Rubaiey S, Armindo RD, Ashraf K, Asla MM, Badawy M, Bisschop J, Lomer NB, Bukatz J, Chen J, Cimflova P, Corr F, Crawley A, Deptula L, Elakhdar T, Shawali IH, Faghani S, Frick A, Gulati V, Haider MA, Hierro F, Dahl RH, Jacobs SM, Hsieh KCJ, Kandemirli SG, Kersting K, Kida L, Kollia S, Koukoulithras I, Li X, Abouelatta A, Mansour A, Maria-Zamfirescu RC, Marsiglia M, Mateo-Camacho YS, McArthur M, McDonnell O, McHugh M, Moassefi M, Morsi SM, Muntenu A, Nandolia KK, Naqvi SR, Nikanpour Y, Alnoury M, Nouh AMA, Pappafava F, Patel MD, Petrucci S, Rawie E, Raymond S, Roohani B, Sabouhi S, Sanchez-Garcia LM, Shaked Z, Suthar PP, Altes T, Isufi E, Dhermesh Y, Gass J, Thacker J, Tarabishy AR, Turner B, Vacca S, Vilanilam GK, Warren D, Weiss D, Willms K, Worede F, Yousry S, Lerebo W, Aristizabal A, Karargyris A, Kassem H, Pati S, Sheller M, Bakas S, Rudie JD, Aboian M. The Brain Tumor Segmentation - Metastases (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI. arXiv 2024:arXiv:2306.00838v2. [PMID: 37396600 PMCID: PMC10312806] [Indexed: 07/04/2023]
Abstract
The translation of AI-generated brain metastases (BM) segmentation into clinical practice relies heavily on diverse, high-quality annotated medical imaging datasets. The BraTS-METS 2023 challenge has gained momentum for testing and benchmarking algorithms using rigorously annotated, internationally compiled real-world datasets. This study presents the results of the segmentation challenge and characterizes the challenging cases that impacted the performance of the winning algorithms. Untreated brain metastases on standard anatomic MRI sequences (T1, T2, FLAIR, T1PG) from eight contributed international datasets were annotated in a stepwise method: published UNet algorithms, a student annotator, a neuroradiologist, and a final-approver neuroradiologist. Segmentations were ranked based on lesion-wise Dice and 95th-percentile Hausdorff distance (HD95) scores. False positives (FP) and false negatives (FN) were rigorously penalized, receiving a score of 0 for Dice and a fixed penalty of 374 for HD95. The mean scores for the teams were calculated. Eight datasets comprising 1303 studies were annotated, with 402 studies (3076 lesions) released on Synapse as publicly available datasets to challenge competitors. Additionally, 31 studies (139 lesions) were held out for validation, and 59 studies (218 lesions) were used for testing. Segmentation accuracy was measured as rank across subjects, with the winning team achieving a LesionWise mean score of 7.9. The Dice score for the winning team was 0.65 ± 0.25. Common errors among the leading teams included false negatives for small lesions and misregistration of masks in space. The Dice scores and lesion detection rates of all algorithms diminished with decreasing tumor size, particularly for tumors smaller than 100 mm3. In conclusion, algorithms for BM segmentation require further refinement to balance high sensitivity in lesion detection with the minimization of false positives and negatives. The BraTS-METS 2023 challenge successfully curated well-annotated, diverse datasets and identified common errors, facilitating the translation of BM segmentation across varied clinical environments and providing personalized volumetric reports to patients undergoing BM treatment.
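The lesion-wise scoring rule stated in this abstract (Dice 0 and a fixed HD95 penalty of 374 for every false positive and false negative) can be sketched as follows. The lesion matching step is assumed to have been done upstream, and the numbers are invented for illustration.

```python
# Sketch of lesion-wise aggregation with FP/FN penalties, per the abstract.
HD95_PENALTY = 374.0  # fixed penalty assigned to each FP and FN

def lesionwise_scores(matched, n_fp, n_fn):
    """matched: list of (dice, hd95) pairs for predicted lesions that were
    matched to ground-truth lesions; n_fp/n_fn: unmatched predictions and
    missed lesions. Returns (mean Dice, mean HD95) over all lesions."""
    dice = [d for d, _ in matched] + [0.0] * (n_fp + n_fn)
    hd95 = [h for _, h in matched] + [HD95_PENALTY] * (n_fp + n_fn)
    n = len(dice)
    return sum(dice) / n, sum(hd95) / n

# Two well-matched lesions, one false positive, one missed small lesion.
mean_dice, mean_hd95 = lesionwise_scores([(0.8, 2.0), (0.7, 3.5)], n_fp=1, n_fn=1)
```

Because each FP and FN drags the mean Dice toward 0 and the mean HD95 toward 374, this rule heavily rewards detection completeness, which is why missed small lesions dominated the leading teams' errors.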
Affiliation(s)
- Anastasia Janas
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Ujjwal Baid
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Divya Ramakrishnan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Rachit Saluja
- Department of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY, USA
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
- Nader Ashraf
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Leon Jekel
- DKFZ Division of Translational Neurooncology at the WTZ, German Cancer Consortium, DKTK Partner Site, University Hospital Essen, Essen, Germany
- Raisa Amiruddin
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Maruf Adewole
- Medical Artificial Intelligence Lab, Crestview Radiology, Lagos, Nigeria
- Udunna Anazodo
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Medical Artificial Intelligence (MAI) lab, Crestview Radiology, Lagos, Nigeria
- Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA
- Syed Muhammad Anwar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, D.C., USA
- Evan Calabrese
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Veronica Chiang
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT, USA
- Farouk Dako
- Center for Global Health, Perelman School of Medicine, University of Pennsylvania, PA, USA
- Ivan Ezhov
- Department of Informatics, Technical University Munich, Germany
- Ariana Familiar
- Children’s Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA, USA
- Keyvan Farahani
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Zhifan Jiang
- Children’s National Hospital, Washington, D.C., USA
- Elaine Johanson
- PrecisionFDA, U.S. Food and Drug Administration, Silver Spring, MD, USA
- Anahita Fathi Kazerooni
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Division of Neurosurgery, The Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Center for Data-Driven Discovery in Biomedicine, The Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Kiril Krantchev
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Dominic LaBella
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Hongwei Bran Li
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, D.C., USA
- Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington, D.C., USA
- Xinyang Liu
- Children’s National Hospital, Washington, D.C., USA
- Nazanin Maleki
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Bjoern H Menze
- Biomedical Image Analysis & Machine Learning, Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Harrison Moy
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Klara Osenberg
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Russel Takeshi Shinohara
- Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia, PA, USA
- Yuri S. Velichko
- Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL, USA
- Chunhao Wang
- Duke University School of Medicine, Durham, NC, USA
- Benedikt Wiestler
- Department of Neuroradiology, Technical University of Munich, Munich, Germany
- Umber Shafique
- Department of Radiology and Imaging Sciences, Indiana University, Indianapolis, IN, USA
- Klara Willms
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Arman Avesta
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Satrajit Chakrabarty
- Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA
- GE HealthCare, San Ramon, CA, USA
- Nicolo Gennaro
- Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL, USA
- Manpreet Kaur
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Pamela LaMontagne
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Jan Lost
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Daniel S. Marcus
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Ryan Maresca
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA
- Sarah Merkaj
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Marc von Reppert
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Aristeidis Sotiras
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Institute for Informatics, Data Science & Biostatistics, Washington University School of Medicine, St. Louis, MO, USA
- Niklas Tillmans
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Scott Floyd
- Duke University Medical Center, Durham, NC, USA
- Andreas Rauschecker
- Department of Radiology and Biomedical Imaging, University of California San Francisco, CA, USA
- Javier Villanueva-Meyer
- Department of Radiology and Biomedical Imaging, University of California San Francisco, CA, USA
- Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Jaeyoung Cho
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Justin Cramer
- Department of Radiology, Mayo Clinic, Phoenix, AZ, USA
- Anthony Kam
- Loyola University Medical Center, Hines, IL, USA
- Lillian Lai
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Fatima Memon
- Carolina Radiology Associates, Myrtle Beach, SC, USA
- McLeod Regional Medical Center, Florence, SC, USA
- Medical University of South Carolina, Charleston, SC, USA
- Tiffany Y. So
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR
- Gerard Thompson
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Department of Clinical Neurosciences, NHS Lothian, Edinburgh, United Kingdom
- Lei Wu
- Department of Radiology, University of Washington, Seattle, WA, USA
- E. Brooke Schrickel
- Department of Radiology, Ohio State University College of Medicine, Columbus, OH, USA
- Anu Bansal
- Albert Einstein Medical Center, Hartford, CT, USA
- Frederik Barkhof
- Amsterdam UMC, location Vrije Universiteit, the Netherlands
- University College London, United Kingdom
- Sammy Chu
- Department of Radiology, University of Washington, Seattle, WA, USA
- Jason Druzgal
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Luciano Farage
- Centro Universitario Euro-Americana (UNIEURO), Brasília, DF, Brazil
- Fabricio Feltrin
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Amy Fong
- Southern District Health Board, Dunedin, New Zealand
- Steve H. Fung
- Department of Radiology, Houston Methodist, Houston, TX, USA
- R. Ian Gray
- University of Tennessee Medical Center, Knoxville, TN, USA
- Ichiro Ikuta
- Mayo Clinic, Department of Radiology, Section of Neuroradiology, Phoenix, AZ, USA
- Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, USA
- Alida A. Postma
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, the Netherlands
- Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, the Netherlands
- Amit Mahajan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- David Joyner
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, USA
- Chase Krumpelman
- Department of Radiology, Northwestern University, Chicago, IL, USA
- Mate E. Maros
- Departments of Neuroradiology & Biomedical Informatics, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Elka Miller
- Department of Diagnostic and Interventional Radiology, SickKids Hospital, University of Toronto, Canada
- Fanny Morón
- Department of Radiology, Baylor College of Medicine, Houston, TX, USA
- Ozkan Ozsarlak
- Department of Radiology, AZ Monica, Antwerp Area, Belgium
- Uresh Patel
- Medicolegal Imaging Experts LLC, Mercer Island, WA, USA
- Saurabh Rohatgi
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Atin Saha
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Weill Cornell Medical College, New York, NY, USA
- Anousheh Sayah
- MedStar Georgetown University Hospital, Washington, D.C., USA
- Eric D. Schwartz
- Department of Radiology, St. Elizabeth’s Medical Center, Boston, MA, USA
- Department of Radiology, Tufts University School of Medicine, Boston, MA, USA
- Robert Shih
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- Jewels Valerie
- Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Brent D. Weinberg
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Robert Young
- George Washington University, Washington, D.C., USA
- Vahe M. Zohrabian
- Northwell Health, Zucker Hofstra School of Medicine at Northwell, North Shore University Hospital, Hempstead, NY, USA
- Aynur Azizova
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Pascal Fehringer
- Faculty of Medicine, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany
- Mohanad Ghonim
- Department of Radiology, Ain Shams University, Cairo, Egypt
- Mohamed Ghonim
- Department of Radiology, Ain Shams University, Cairo, Egypt
- Luca Pasquini
- Radiology Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Gagandeep Singh
- Columbia University Irving Medical Center, New York, NY, USA
- Nico Sollmann
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-Neuroimaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Nikolay Yordanov
- Faculty of Medicine, Medical University - Sofia, Sofia, Bulgaria
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Department of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Ajay Malhotra
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Francesco Dellepiane
- Functional and Interventional Neuroradiology Unit, Bambino Gesù Children’s Hospital, Rome, Italy
- Philipp Lohmann
- Institute of Neuroscience and Medicine (INM-4), Research Center Juelich, Juelich, Germany
- Department of Nuclear Medicine, University Hospital RWTH Aachen, Aachen, Germany
- Víctor M. Pérez-García
- Mathematical Oncology Laboratory & Department of Mathematics, University of Castilla-La Mancha, Spain
- Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Sanaria Al-Rubaiey
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Rui Duarte Armindo
- Department of Neuroradiology, Western Lisbon Hospital Centre (CHLO), Lisbon, Portugal
- Mohamed Badawy
- Diagnostic Radiology Department, Wayne State University, Detroit, MI, USA
- Jeroen Bisschop
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Jan Bukatz
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Jim Chen
- Department of Radiology/Division of Neuroradiology, San Diego Veterans Administration Medical Center/UC San Diego Health System, San Diego, CA, USA
- Petra Cimflova
- Department of Radiology, University of Calgary, Calgary, Canada
- Felix Corr
- EDU Institute of Higher Education, Villa Bighi, Chaplain’s House, Kalkara, Malta
- Lisa Deptula
- Ross University School of Medicine, Bridgetown, Barbados
- Alexandra Frick
- Department of Neurosurgery, Vivantes Klinikum Neukölln, Berlin, Germany
- Fátima Hierro
- Neuroradiology Department, Pedro Hispano Hospital, Matosinhos, Portugal
- Rasmus Holmboe Dahl
- Department of Radiology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Sarah Maria Jacobs
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Sedat G. Kandemirli
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Katharina Kersting
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Laura Kida
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Sofia Kollia
- National and Kapodistrian University of Athens, School of Medicine, Athens, Greece
- Xiao Li
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Ahmed Abouelatta
- Department of Diagnostic and Interventional Radiology, Cairo University, Cairo, Egypt
- Ruxandra-Catrinel Maria-Zamfirescu
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Marcela Marsiglia
- Department of Radiology, Brigham and Women’s Hospital, Massachusetts General Hospital, Boston, MA, USA
- Mark McArthur
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, USA
- Maire McHugh
- Department of Radiology, Manchester NHS Foundation Trust, North West School of Radiology, Manchester, United Kingdom
- Mana Moassefi
- Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Khanak K. Nandolia
- Department of Radiodiagnosis, All India Institute of Medical Sciences Rishikesh, India
- Syed Raza Naqvi
- Windsor Regional Hospital, Western University, Ontario, Canada
- Yalda Nikanpour
- Artificial Intelligence & Informatics, Mayo Clinic, Rochester, MN, USA
- Mostafa Alnoury
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Francesca Pappafava
- Department of Medicine and Surgery, Università degli Studi di Perugia, Perugia, Italy
- Markand D. Patel
- Department of Neuroradiology, Imperial College Healthcare NHS Trust, London, United Kingdom
- Samantha Petrucci
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Eric Rawie
- Department of Radiology, Michigan Medicine, Ann Arbor, MI, USA
- Scott Raymond
- Department of Radiology, University of Vermont Medical Center, Burlington, VT, USA
- Borna Roohani
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Sadeq Sabouhi
- Isfahan University of Medical Sciences, Isfahan, Iran
- Zoe Shaked
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Talissa Altes
- Radiology Department, University of Missouri, Columbia, MO, USA
- Abdul Rahman Tarabishy
- Department of Neuroradiology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, USA
- Sebastiano Vacca
- University of Cagliari, School of Medicine and Surgery, Cagliari, Italy
- George K. Vilanilam
- Department of Radiology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Daniel Warren
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- David Weiss
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Klara Willms
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Fikadu Worede
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Wondwossen Lerebo
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Sarthak Pati
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Center for Federated Learning in Medicine, Indiana University, Indianapolis, IN, USA
- Medical Working Group, MLCommons, San Francisco, CA, USA
- Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Radiology and Imaging Sciences, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Neurological Surgery, School of Medicine, Indiana University, Indianapolis, IN, USA
- Jeffrey D. Rudie
- Department of Radiology, University of California San Diego, CA, USA
- Department of Radiology, Scripps Clinic Medical Group, CA, USA
- Mariam Aboian
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
7
Du S, Gong G, Liu R, Meng K, Yin Y. Advances in determining the gross tumor target volume for radiotherapy of brain metastases. Front Oncol 2024; 14:1338225. [PMID: 38779095 PMCID: PMC11109437 DOI: 10.3389/fonc.2024.1338225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 04/19/2024] [Indexed: 05/25/2024] Open
Abstract
Brain metastases (BMs) are the most prevalent intracranial malignant tumors in adults and are the leading cause of mortality attributed to malignant brain diseases. Radiotherapy (RT) plays a critical role in the treatment of BMs, with local RT techniques such as stereotactic radiosurgery (SRS)/stereotactic body radiotherapy (SBRT) showing remarkable therapeutic effectiveness. The precise determination of gross tumor target volume (GTV) is crucial for ensuring the effectiveness of SRS/SBRT. Multimodal imaging techniques such as CT, MRI, and PET are extensively used for the diagnosis of BMs and GTV determination. With the development of functional imaging and artificial intelligence (AI) technology, there are more innovative ways to determine GTV for BMs, which significantly improve the accuracy and efficiency of the determination. This article provides an overview of the progress in GTV determination for RT in BMs.
Affiliation(s)
- Shanshan Du
- Department of Oncology, Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, China
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Guanzhong Gong
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Rui Liu
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Kangning Meng
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Yin
- Department of Oncology, Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, China
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
8
Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. [PMID: 38790322 PMCID: PMC11117895 DOI: 10.3390/bioengineering11050454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2024] [Revised: 04/26/2024] [Accepted: 04/30/2024] [Indexed: 05/26/2024] Open
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and their predominantly multiple onset, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician's manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
Affiliation(s)
- Matthew Kim
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Weiguo Lu
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Hao Jiang
- NeuralRad LLC, Madison, WI 53717, USA
- Zabi Wardak
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Cynthia Chuang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Gregory Szalkowski
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Erqi Pollom
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Elham Rahimy
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Scott Soltys
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mingli Chen
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
9
Yun S, Park JE, Kim N, Park SY, Kim HS. Reducing false positives in deep learning-based brain metastasis detection by using both gradient-echo and spin-echo contrast-enhanced MRI: validation in a multi-center diagnostic cohort. Eur Radiol 2024; 34:2873-2884. [PMID: 37891415 DOI: 10.1007/s00330-023-10318-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 08/08/2023] [Accepted: 08/18/2023] [Indexed: 10/29/2023]
Abstract
OBJECTIVES To develop a deep learning (DL) model for detection of brain metastasis (BM) that incorporates both gradient- and turbo spin-echo contrast-enhanced MRI (dual-enhanced DL) and evaluate it in a clinical cohort in comparison with human readers and a DL model using gradient-echo-based imaging only (GRE DL). MATERIALS AND METHODS DL detection was developed using data from 200 patients with BM (training set) and tested in 62 (internal) and 48 (external) consecutive patients who underwent stereotactic radiosurgery, with diagnostic dual-enhanced imaging (dual-enhanced DL) and subsequent GRE-based guidance imaging (GRE DL). The detection sensitivity and positive predictive value (PPV) were compared between the two DL models. Two neuroradiologists independently analyzed BMs, and reference standards for BM were drawn separately by another neuroradiologist. The relative differences (RDs) from the reference standard BM numbers were compared between the DL models and the neuroradiologists. RESULTS Sensitivity was similar between GRE DL (93%, 95% confidence interval [CI]: 90-96%) and dual-enhanced DL (92% [89-94%]). The PPV of the dual-enhanced DL was higher (89% [86-92%], p < .001) than that of GRE DL (76% [72-80%]). GRE DL significantly overestimated the number of metastases (false positives; RD: 0.05, 95% CI: 0.00 to 0.58) compared with the neuroradiologists (RD: 0.00, 95% CI: -0.28 to 0.15, p < .001), whereas dual-enhanced DL (RD: 0.00, 95% CI: 0.00 to 0.15) did not differ significantly from the neuroradiologists (RD: 0.00, 95% CI: -0.20 to 0.10, p = .913). CONCLUSION The dual-enhanced DL showed improved detection of BM and reduced overestimation compared with GRE DL, achieving performance similar to that of neuroradiologists. CLINICAL RELEVANCE STATEMENT Deep learning-based brain metastasis detection with turbo spin-echo imaging reduces false positive detections, aiding the guidance of stereotactic radiosurgery when gradient-echo imaging alone is employed.
KEY POINTS • Deep learning for brain metastasis detection was improved by using both gradient- and turbo spin-echo contrast-enhanced MRI (dual-enhanced deep learning). • Dual-enhanced deep learning increased true positive detections and reduced overestimation. • Dual-enhanced deep learning achieved performance similar to neuroradiologists for brain metastasis counts.
Affiliation(s)
- Suyoung Yun
- Department of Radiology, Busan Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea.
- Seo Young Park
- Department of Statistics and Data Science, Korea National Open University, Seoul, Republic of Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
10
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715 PMCID: PMC11023777 DOI: 10.1016/j.ijrobp.2023.10.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 09/16/2023] [Accepted: 10/14/2023] [Indexed: 11/19/2023]
Abstract
Deep learning neural networks (DLNN) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based models for auto-segmentation have shown high accuracy in early studies in research settings and controlled environments (single institution). Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as stand-alone tools that provide a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges arise when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Contouring nomenclature and guideline standardization has been a main task undertaken by NRG Oncology. AI auto-segmentation holds the potential to help clinical trial participants reduce interobserver variations, nomenclature non-compliance, and contouring guideline deviations; meanwhile, trial reviewers could use AI tools to verify the contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of these commercial AI auto-segmentation tools, NRG Oncology has formed a working group to evaluate them. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made for the implementation of these commercial AI models, along with precautions regarding their challenges and limitations.
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
- Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
- Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
- Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
- Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
- Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
- X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
11
Park YW, Park JE, Ahn SS, Han K, Kim N, Oh JY, Lee DH, Won SY, Shin I, Kim HS, Lee SK. Deep learning-based metastasis detection in patients with lung cancer to enhance reproducibility and reduce workload in brain metastasis screening with MRI: a multi-center study. Cancer Imaging 2024; 24:32. [PMID: 38429843 PMCID: PMC10905821 DOI: 10.1186/s40644-024-00669-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2023] [Accepted: 01/29/2024] [Indexed: 03/03/2024] Open
Abstract
OBJECTIVES To assess whether a deep learning-based system (DLS) with black-blood imaging for brain metastasis (BM) improves the diagnostic workflow in a multi-center setting. MATERIALS AND METHODS In this retrospective study, a DLS was developed in 101 patients and validated in 264 consecutive patients with lung cancer and newly developed BM from two tertiary university hospitals that performed black-blood imaging between January 2020 and April 2021. Four neuroradiologists independently evaluated BM either with segmented masks and BM counts provided (with DLS) or not provided (without DLS) on a clinical trial imaging management system (CTIMS). To assess reading reproducibility, BM count agreement between the readers and the reference standard was calculated using limits of agreement (LoA). Readers' workload was assessed with reading time, which was automatically measured on CTIMS and compared between readings with and without the DLS using linear mixed models accounting for the imaging center. RESULTS In the validation cohort, the detection sensitivity and positive predictive value of the DLS were 90.2% (95% confidence interval [CI]: 88.1-92.2) and 88.2% (95% CI: 85.7-90.4), respectively. The difference between the reader and reference counts was larger without DLS (LoA: -0.281, 95% CI: -2.888 to 2.325) than with DLS (LoA: -0.163, 95% CI: -2.692 to 2.367). The reading time was reduced from a mean of 66.9 s (interquartile range: 43.2-90.6) to 57.3 s (interquartile range: 33.6-81.0) (P < .001) with the DLS, regardless of the imaging center. CONCLUSION Deep learning-based BM detection and counting with black-blood imaging improved reproducibility and reduced reading time in multi-center validation.
Affiliation(s)
- Yae Won Park
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea.
- Sung Soo Ahn
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea.
- Kyunghwa Han
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Joo Young Oh
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Da Hyun Lee
- Department of Radiology, Ajou University Medical Center, Suwon, Korea
- So Yeon Won
- Department of Radiology, Samsung Seoul Hospital, Seoul, Korea
- Ilah Shin
- Department of Radiology, The Catholic University of Korea, Seoul St. Mary's Hospital, Seoul, Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Seung-Koo Lee
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
12
Jeong H, Park JE, Kim N, Yoon SK, Kim HS. Deep learning-based detection and quantification of brain metastases on black-blood imaging can provide treatment suggestions: a clinical cohort study. Eur Radiol 2024; 34:2062-2071. [PMID: 37658885 PMCID: PMC10873231 DOI: 10.1007/s00330-023-10120-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 06/25/2023] [Accepted: 07/01/2023] [Indexed: 09/05/2023]
Abstract
OBJECTIVES We aimed to evaluate whether deep learning-based detection and quantification of brain metastasis (BM) may suggest treatment options for patients with BMs. METHODS The deep learning system (DLS) for detection and quantification of BM was developed in 193 patients and applied to 112 patients whose BMs were newly detected on black-blood contrast-enhanced T1-weighted imaging. Patients were assigned to one of 3 treatment suggestion groups according to the European Association of Neuro-Oncology (EANO)-European Society for Medical Oncology (ESMO) recommendations using the number and volume of the BMs detected by the DLS: short-term imaging follow-up without treatment (group A), surgery or stereotactic radiosurgery (limited BM, group B), or whole-brain radiotherapy or systemic chemotherapy (extensive BM, group C). The concordance between the DLS-based groups and clinical decisions was analyzed with or without consideration of targeted agents. The performance of distinguishing the high-risk group (B + C) was calculated. RESULTS Among 112 patients (mean age 64.3 years, 63 men), group C had the largest number and volume of BM, followed by group B (4.4 and 851.6 mm³) and group A (1.5 and 15.5 mm³). The DLS-based groups were concordant with the actual clinical decisions, with an accuracy of 76.8% (86 of 112). Modified accuracy considering targeted agents was 81.3% (91 of 112). The DLS showed 95% (82/86) sensitivity and 81% (21/26) specificity for distinguishing the high-risk group. CONCLUSION DLS-based detection and quantification of BM have the potential to help determine treatment options for both the low- and high-risk groups of limited and extensive BMs. CLINICAL RELEVANCE STATEMENT For patients with newly diagnosed brain metastasis, deep learning-based detection and quantification may be used in clinical settings where prompt and accurate treatment decisions are required, which can lead to better patient outcomes.
KEY POINTS • Deep learning-based brain metastasis detection and quantification showed excellent agreement with ground-truth classifications. • By setting an algorithm to suggest treatment based on the number and volume of brain metastases detected by the deep learning system, the concordance was 81.3%. • When dividing patients into low- and high-risk groups, the sensitivity for detecting the latter was 95%.
Affiliation(s)
- Hana Jeong
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea.
- Shin-Kyo Yoon
- Department of Oncology, Asan Medical Center, Seoul, South Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
13
Fairchild A, Salama JK, Godfrey D, Wiggins WF, Ackerson BG, Oyekunle T, Niedzwiecki D, Fecci PE, Kirkpatrick JP, Floyd SR. Incidence and imaging characteristics of difficult to detect retrospectively identified brain metastases in patients receiving repeat courses of stereotactic radiosurgery. J Neurooncol 2024:10.1007/s11060-024-04594-6. [PMID: 38340295 DOI: 10.1007/s11060-024-04594-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Accepted: 01/30/2024] [Indexed: 02/12/2024]
Abstract
PURPOSE During stereotactic radiosurgery (SRS) planning for brain metastases (BM), brain MRIs are reviewed to select appropriate targets based on radiographic characteristics. Some BM are difficult to detect and/or definitively identify and may go untreated initially, only to become apparent on future imaging. We hypothesized that in patients receiving multiple courses of SRS, reviewing the initial planning MRI would reveal early evidence of lesions that developed into metastases requiring SRS. METHODS Patients undergoing two or more courses of SRS to BM within 6 months between 2016 and 2018 were included in this single-institution, retrospective study. Brain MRIs from the initial course were reviewed for lesions at the same location as subsequently treated metastases; if present, such a lesion was classified as a "retrospectively identified metastasis" (RIM). RIMs were subcategorized as meeting or not meeting diagnostic imaging criteria for BM (+DC or -DC, respectively). RESULTS Among 683 patients undergoing 923 SRS courses, 98 patients met inclusion criteria. There were 115 repeat courses of SRS, with 345 treated metastases in the subsequent course, 128 of which were associated with RIMs found on a prior MRI. Of the RIMs, 58% were +DC, and 17 (15%) of subsequent courses consisted solely of metastases associated with +DC RIMs. CONCLUSION Radiographic evidence of brain metastases requiring future treatment was occasionally present on brain MRIs from prior SRS treatments. Most RIMs were +DC, and some subsequent SRS courses treated only +DC RIMs. These findings suggest that enhanced BM detection might enable earlier treatment and reduce the need for additional SRS.
Affiliation(s)
- Andrew Fairchild
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Piedmont Radiation Oncology, 3333 Silas Creek Parkway, Winston Salem, NC, 27103, USA
- Joseph K Salama
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Radiation Oncology Service, Durham VA Medical Center, Durham, NC, USA
- Devon Godfrey
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Walter F Wiggins
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Bradley G Ackerson
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Taofik Oyekunle
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Donna Niedzwiecki
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Peter E Fecci
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- John P Kirkpatrick
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Scott R Floyd
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA

14
Son S, Joo B, Park M, Suh SH, Oh HS, Kim JW, Lee S, Ahn SJ, Lee JM. Development of RLK-Unet: a clinically favorable deep learning algorithm for brain metastasis detection and treatment response assessment. Front Oncol 2024; 13:1273013. [PMID: 38288101 PMCID: PMC10823345 DOI: 10.3389/fonc.2023.1273013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Accepted: 12/27/2023] [Indexed: 01/31/2024] Open
Abstract
Purpose/objectives Previous deep learning (DL) algorithms for brain metastasis (BM) detection and segmentation have not been commonly used in clinics because they produce false-positive findings, require multiple sequences, and do not reflect physiological properties such as necrosis. The aim of this study was to develop a more clinically favorable DL algorithm (RLK-Unet) using a single sequence reflecting necrosis and apply it to automated treatment response assessment. Methods and materials A total of 128 patients with 1339 BMs, who underwent BM magnetic resonance imaging using the contrast-enhanced 3D T1-weighted (T1WI) turbo spin-echo black blood sequence, were included in the development of the DL algorithm. Fifty-eight patients with 629 BMs were assessed for treatment response. The detection sensitivity, precision, Dice similarity coefficient (DSC), and agreement of treatment response assessments between neuroradiologists and RLK-Unet were assessed. Results RLK-Unet demonstrated a sensitivity of 86.9% and a precision of 79.6% for BMs and had a DSC of 0.663. Segmentation performance was better in the subgroup with larger BMs (DSC, 0.843). The agreement in response assessment for BMs between the radiologists and RLK-Unet was excellent (intraclass correlation, 0.84). Conclusion RLK-Unet yielded accurate detection and segmentation of BM and could assist clinicians in treatment response assessment.
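The sensitivity, precision, and Dice similarity coefficient reported here are standard voxel-wise metrics for binary masks. A minimal sketch of their computation follows; this is illustrative only, not the authors' evaluation code.

```python
import numpy as np

def detection_and_overlap_metrics(pred: np.ndarray, gt: np.ndarray):
    """Voxel-wise sensitivity (recall), precision, and Dice for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true-positive voxels
    fp = np.logical_and(pred, ~gt).sum()   # false-positive voxels
    fn = np.logical_and(~pred, gt).sum()   # false-negative voxels
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return sensitivity, precision, dice
```

In per-lesion (rather than per-voxel) evaluation, the same formulas apply with lesion-level TP/FP/FN counts.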
Affiliation(s)
- Seungyeon Son
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
- Bio Joo
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Mina Park
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sang Hyun Suh
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hee Sang Oh
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jun Won Kim
- Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Seoyoung Lee
- Division of Medical Oncology, Department of Internal Medicine, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sung Jun Ahn
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jong-Min Lee
- Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea

15
Chen J, Meng L, Bu C, Zhang C, Wu P. Feature pyramid network-based computer-aided detection and monitoring treatment response of brain metastases on contrast-enhanced MRI. Clin Radiol 2023; 78:e808-e814. [PMID: 37573242 DOI: 10.1016/j.crad.2023.07.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 07/06/2023] [Accepted: 07/12/2023] [Indexed: 08/14/2023]
Abstract
AIM To investigate the value of feature pyramid network (FPN)-based computer-aided detection (CAD) of brain metastases (BMs) before and after non-surgical treatment, and to evaluate its performance in monitoring treatment response of BM on contrast-enhanced (CE) magnetic resonance imaging (MRI). MATERIAL AND METHODS Eighty-five cancer patients newly diagnosed with BM who had undergone initial and follow-up three-dimensional (3D) CE MRI at Liaocheng People's Hospital were included retrospectively in this study. Manual detection (MD) was performed by reviewer 1. CAD was performed by reviewer 2 using uAI Discover-BMs software. Treatment response was assessed by the two reviewers for each patient separately. A paired chi-square test was used to compare the differences in detection of BM between MD and CAD. Agreement between MD and CAD in monitoring treatment response was assessed with the kappa test. RESULTS The sensitivities of MD and CAD on initial 3D CE MRI were 78.65% and 99.13%, respectively. The sensitivities of MD and CAD on follow-up 3D CE MRI were 76.32% and 98.24%, respectively. There was very good agreement between reviewer 1 and reviewer 2 in evaluating the treatment response of BM. CONCLUSION FPN-based CAD achieved a sensitivity close to 100%, with fewer false negatives (FNs) than MD, for BM detection. Although CAD had a few shortcomings in reflecting changes in BMs after treatment, it performed well in monitoring treatment response of BM on CE MRI.
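Agreement between the two reviewers' categorical response assessments is quantified here with the kappa statistic. A minimal self-contained sketch of Cohen's kappa for two raters follows; it is illustrative, not the statistical software used in the study, and the response labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical labels (e.g., PR/SD/PD).

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each rater's marginal frequencies.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)
```

By convention, kappa above about 0.8 is read as "very good" agreement, matching the abstract's wording.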
Affiliation(s)
- J Chen
- Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- L Meng
- Department of Radiotherapy, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Bu
- Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Zhang
- Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- P Wu
- Philips Healthcare, Shanghai, 200072, China

16
Sun H, Yang S, Chen L, Liao P, Liu X, Liu Y, Wang N. Brain tumor image segmentation based on improved FPN. BMC Med Imaging 2023; 23:172. [PMID: 37904116 PMCID: PMC10617057 DOI: 10.1186/s12880-023-01131-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 10/19/2023] [Indexed: 11/01/2023] Open
Abstract
PURPOSE Automatic segmentation of brain tumors by deep learning algorithms is a research hotspot in medical image segmentation. An improved FPN network is proposed to improve brain tumor segmentation. MATERIALS AND METHODS To address the weak processing ability of the traditional fully convolutional network (FCN), which leads to loss of detail in tumor segmentation, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN). The FPN structure is introduced into the U-Net architecture to capture multi-scale context, combining the different-scale information in the U-Net model with the multi-receptive-field high-level features of the FPN and improving the model's adaptability to features at different scales. RESULTS The proposed improved FPN model achieves 99.1% accuracy, a 92% Dice score, and an 86% Jaccard index, outperforming the other segmentation models on every metric. Qualitatively, its segmentation results are closer to the ground truth and recover more brain tumor detail, whereas the results of the other algorithms are smoother. CONCLUSIONS The experimental results show that this method effectively segments brain tumor regions, generalizes reasonably well, and outperforms the other networks evaluated, which is of practical value for the clinical diagnosis of brain tumors.
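The core FPN idea this entry builds on is a top-down pathway that upsamples coarse, semantically strong feature maps and merges them into finer levels by addition. A toy sketch follows; the lateral 1x1 convolutions and learned weights of a real FPN are omitted, so this is purely illustrative of the merge pattern, not the authors' network.

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x upsampling of a (H, W) feature map."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fpn_top_down(features):
    """Toy FPN top-down pass: start from the coarsest map and repeatedly
    upsample-and-add into the next finer level.

    `features` is a list of (H, W) maps ordered fine -> coarse, each level
    half the resolution of the previous one.
    """
    merged = [features[-1]]
    for finer in reversed(features[:-1]):
        merged.append(finer + upsample2x(merged[-1]))
    return list(reversed(merged))  # fine -> coarse, like the input
```

In the paper's hybrid, these merged multi-scale maps would feed the U-Net decoder rather than separate detection heads.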
Affiliation(s)
- Haitao Sun
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Shuai Yang
- Department of Radiotherapy and Minimally Invasive Surgery, The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519020, China
- Lijuan Chen
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Pingyan Liao
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Xiangping Liu
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Ying Liu
- Department of Radiotherapy, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510060, China
- Ning Wang
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China

17
Qu J, Zhang W, Shu X, Wang Y, Wang L, Xu M, Yao L, Hu N, Tang B, Zhang L, Lui S. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol 2023; 33:6648-6658. [PMID: 37186214 DOI: 10.1007/s00330-023-09648-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 01/23/2023] [Accepted: 02/08/2023] [Indexed: 05/17/2023]
Abstract
OBJECTIVES To construct and evaluate a gated high-resolution convolutional neural network for detecting and segmenting brain metastasis (BM). METHODS This retrospective study included craniocerebral MRI scans of 1392 patients with 14,542 BMs and 200 patients with no BM between January 2012 and April 2022. A primary dataset including 1000 cases with 11,686 BMs was employed to construct the model, while an independent dataset including 100 cases with 1069 BMs from other hospitals was used to examine generalizability. The potential of the model for clinical use was also evaluated by comparing its performance in BM detection and segmentation to that of radiologists, and by comparing radiologists' lesion detection performance with and without model assistance. RESULTS The model yielded a recall of 0.88, a Dice similarity coefficient (DSC) of 0.90, a positive predictive value (PPV) of 0.93, and 1.01 false positives per patient (FP) in the test set, and a recall of 0.85, a DSC of 0.89, a PPV of 0.93, and an FP of 1.07 in the dataset from other hospitals. With the model's assistance, the BM detection rates of 4 radiologists improved significantly (by 5.2 to 15.1%, all p < 0.001), as did their detection of small BMs with diameter ≤ 5 mm (by 7.2 to 27.0%, all p < 0.001). CONCLUSIONS The proposed model enables accurate BM detection and segmentation with higher sensitivity and less time consumption, showing the potential to augment radiologists' performance in detecting BM. CLINICAL RELEVANCE STATEMENT This study offers a promising computer-aided tool to assist brain metastasis detection and segmentation in routine clinical practice for cancer patients. KEY POINTS • The GHR-CNN could accurately detect and segment BM on contrast-enhanced 3D-T1W images. • The GHR-CNN improved the BM detection rate of radiologists, including the detection of small lesions. • The GHR-CNN enabled automated segmentation of BM in a very short time.
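The lesion-level recall, PPV, and false positives per patient reported above can be aggregated from per-patient counts. A minimal sketch follows (illustrative aggregation only, not the authors' pipeline; the example counts are hypothetical).

```python
def lesion_level_summary(per_patient_counts):
    """Aggregate lesion-level recall, PPV, and FPs per patient.

    `per_patient_counts` is a list of (tp, fp, fn) tuples, one per patient:
    true-positive, false-positive, and missed lesions for that exam.
    """
    tp = sum(c[0] for c in per_patient_counts)
    fp = sum(c[1] for c in per_patient_counts)
    fn = sum(c[2] for c in per_patient_counts)
    recall = tp / (tp + fn)
    ppv = tp / (tp + fp)
    fp_per_patient = fp / len(per_patient_counts)
    return recall, ppv, fp_per_patient
```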
Affiliation(s)
- Jiao Qu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Wenjing Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ying Wang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Department of Nuclear Medicine, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Mengyuan Xu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Li Yao
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Na Hu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Biqiu Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Su Lui
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China

18
Zhou Z, Qiu Q, Liu H, Ge X, Li T, Xing L, Yang R, Yin Y. Automatic Detection of Brain Metastases in T1-Weighted Construct-Enhanced MRI Using Deep Learning Model. Cancers (Basel) 2023; 15:4443. [PMID: 37760413 PMCID: PMC10526374 DOI: 10.3390/cancers15184443] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 09/03/2023] [Accepted: 09/04/2023] [Indexed: 09/29/2023] Open
Abstract
As a complication of malignant tumors, brain metastasis (BM) seriously threatens patients' survival and quality of life. Accurate detection of BM before determining radiation therapy plans is a paramount task. Due to the small size and variable number of BMs, manual diagnosis faces enormous challenges. Thus, MRI-based artificial intelligence-assisted BM diagnosis is significant. Most existing deep learning (DL) methods for automatic BM detection try to ensure a good trade-off between precision and recall. However, due to objective factors of the models, higher recall is often accompanied by a higher number of false positives. In real clinical auxiliary diagnosis, radiation oncologists must spend considerable effort reviewing these false positives. To reduce false positives while retaining high accuracy, a modified YOLOv5 algorithm is proposed in this paper. First, to focus on the important channels of the feature map, a convolutional block attention module is added to the neck structure. Furthermore, an additional prediction head is introduced for detecting small-size BMs. Finally, to distinguish between cerebral vessels and small-size BMs, a Swin transformer block is embedded into the smallest prediction head. With the introduction of the F2-score index to determine the most appropriate confidence threshold, the proposed method achieves a precision of 0.612 and recall of 0.904. Compared with existing methods, the proposed method shows superior performance with fewer false positives. It is anticipated that the proposed method could reduce the workload of radiation oncologists in real clinical auxiliary diagnosis.
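Choosing a confidence threshold by the F2-score, as this abstract describes, weights recall more heavily than precision, which suits screening tasks where misses are costlier than false positives. A minimal sketch follows; the operating points in the usage example are hypothetical, with the study's final precision/recall pair included for scale.

```python
def fbeta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta=2 (the F2-score) weights recall over precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def best_confidence_threshold(curve):
    """Pick the confidence threshold maximizing F2 from a list of
    (threshold, precision, recall) operating points."""
    return max(curve, key=lambda t: fbeta(t[1], t[2]))[0]
```

For example, at precision 0.612 and recall 0.904 (the values reported above), the F2-score is about 0.83.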
Affiliation(s)
- Zichun Zhou
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Qingtao Qiu
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
- Huiling Liu
- Department of Oncology, Binzhou People’s Hospital, Binzhou 256610, China
- Third Clinical Medical College, Xinjiang Medical University, Urumqi 830011, China
- Xuanchu Ge
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Tengxiang Li
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Ligang Xing
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Runtao Yang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Yong Yin
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China

19
Oh JH, Kim HG, Lee KM. Developing and Evaluating Deep Learning Algorithms for Object Detection: Key Points for Achieving Superior Model Performance. Korean J Radiol 2023; 24:698-714. [PMID: 37404112 DOI: 10.3348/kjr.2022.0765] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 04/29/2023] [Accepted: 05/16/2023] [Indexed: 07/06/2023] Open
Abstract
In recent years, artificial intelligence, especially object detection-based deep learning in computer vision, has made significant advancements, driven by the development of computing power and the widespread use of graphics processing units. Object detection-based deep learning techniques have been applied in various fields, including the medical imaging domain, where remarkable achievements have been reported in disease detection. However, the application of deep learning does not always guarantee satisfactory performance, and researchers have been employing trial-and-error to identify the factors contributing to performance degradation and enhance their models. Moreover, due to the black-box problem, the intermediate processes of a deep learning network cannot be comprehended by humans; as a result, identifying problems in a deep learning model that exhibits poor performance can be challenging. This article highlights potential issues that may cause performance degradation at each deep learning step in the medical imaging domain and discusses factors that must be considered to improve the performance of deep learning models. Researchers who wish to begin deep learning research can reduce the required amount of trial-and-error by understanding the issues discussed in this study.
Affiliation(s)
- Jang-Hoon Oh
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Korea
- Hyug-Gi Kim
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Korea
- Kyung Mi Lee
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Korea

20
Dikici E, Nguyen XV, Takacs N, Prevedello LM. Prediction of model generalizability for unseen data: Methodology and case study in brain metastases detection in T1-Weighted contrast-enhanced 3D MRI. Comput Biol Med 2023; 159:106901. [PMID: 37068317 DOI: 10.1016/j.compbiomed.2023.106901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/08/2023] [Accepted: 04/09/2023] [Indexed: 04/19/2023]
Abstract
BACKGROUND AND PURPOSE A medical AI system's generalizability describes the continuity of its performance acquired from varying geographic, historical, and methodologic settings. Previous literature on this topic has mostly focused on "how" to achieve high generalizability (e.g., via larger datasets, transfer learning, data augmentation, model regularization schemes), with limited success. Instead, we aim to understand "when" the generalizability is achieved: Our study presents a medical AI system that could estimate its generalizability status for unseen data on-the-fly. MATERIALS AND METHODS We introduce a latent space mapping (LSM) approach utilizing Fréchet distance loss to force the underlying training data distribution into a multivariate normal distribution. During the deployment, a given test data's LSM distribution is processed to detect its deviation from the forced distribution; hence, the AI system could predict its generalizability status for any previously unseen data set. If low model generalizability is detected, then the user is informed by a warning message integrated into a sample deployment workflow. While the approach is applicable for most classification deep neural networks (DNNs), we demonstrate its application to a brain metastases (BM) detector for T1-weighted contrast-enhanced (T1c) 3D MRI. The BM detection model was trained using 175 T1c studies acquired internally (from the authors' institution) and tested using (1) 42 internally acquired exams and (2) 72 externally acquired exams from the publicly distributed Brain Mets dataset provided by the Stanford University School of Medicine. Generalizability scores, false positive (FP) rates, and sensitivities of the BM detector were computed for the test datasets. 
RESULTS AND CONCLUSION The model predicted its generalizability to be low for 31% of the testing data (two of the internally and 33 of the externally acquired exams). The detector produced ∼13.5 FPs at 76.1% BM detection sensitivity for the low-generalizability group and ∼10.5 FPs at 89.2% sensitivity for the high-generalizability group. These results suggest that the proposed formulation enables a model to predict its generalizability for unseen data.
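When the latent covariances are diagonal, the Fréchet distance between the forced training distribution and a test distribution reduces to a closed form, avoiding a matrix square root. A minimal sketch of such a deviation check follows; the function names and the warning threshold are hypothetical placeholders, not the authors' calibration.

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Squared Frechet distance between Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum((sqrt(var1) - sqrt(var2))^2)."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2))

def generalizability_warning(d2: float, threshold: float) -> bool:
    """Flag low generalizability when the test embedding distribution
    drifts too far from the (forced) standard-normal training distribution."""
    return d2 > threshold
```

In a deployment workflow like the one described, exceeding the calibrated threshold would trigger the warning message shown to the user.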
Affiliation(s)
- Engin Dikici
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Xuan V Nguyen
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Noah Takacs
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Luciano M Prevedello
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA

21
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA

22
Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. [PMID: 35943350 PMCID: PMC10013637 DOI: 10.1093/neuonc/noac192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Indexed: 11/14/2022] Open
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. This ad hoc secondary analysis was restricted to the prospective participants (148 with 1066 brain metastases and 152 normal controls). Three trainees and 3 experienced radiologists read the MRI images without and with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001), rising to 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Small size, greater lesion number, irregular shape, lower signal intensity, and a nonbrain-surface location were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found among FNs for the BMDS. The FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, radiologists, especially less-experienced ones, should pay attention to FPs and to small lesions with lower enhancement.
Collapse
Affiliation(s)
- Xiao Luo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yadi Yang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Shaohan Yin
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Hui Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Weijing Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Guixiao Xu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Weixiong Fan
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Dechun Zheng
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China
- Dinggang Shen
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yaozong Gao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Xiaohua Ban
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Jing Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Shanshan Lian
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Cheng Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Lidi Ma
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Cuiping Lin
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yingwei Luo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Fan Zhou
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Shiyuan Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Ying Sun
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Rong Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Chuanmiao Xie
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
23
Chakrabarty N, Mahajan A, Patil V, Noronha V, Prabhash K. Imaging of brain metastasis in non-small-cell lung cancer: indications, protocols, diagnosis, post-therapy imaging, and implications regarding management. Clin Radiol 2023; 78:175-186. [PMID: 36503631] [DOI: 10.1016/j.crad.2022.09.134]
Abstract
Increased survival (due to the use of targeted therapies based on genomic profiling) has increased the incidence of brain metastasis during the course of disease, making it essential to have proper imaging guidelines in place for brain metastasis from non-small-cell lung cancer (NSCLC). Brain parenchymal metastases can have varied imaging appearances, and it is pertinent to be aware of the various molecular risk factors for brain metastasis from NSCLC, along with their suggestive imaging appearances, so as to identify them early. Leptomeningeal metastasis requires additional imaging of the spine and early cerebrospinal fluid (CSF) analysis. Differentiating post-therapy change from recurrence on imaging has a bearing on management, hence the need for its awareness. This article provides an in-depth literature review of the epidemiology, aetiopathogenesis, screening, detection, diagnosis, post-therapy imaging, and management implications of brain metastasis from NSCLC. In addition, we briefly highlight the role of artificial intelligence (AI) in brain metastasis screening.
Affiliation(s)
- N Chakrabarty
- Department of Radiodiagnosis, Tata Memorial Hospital, Tata Memorial Centre, Homi Bhabha National Institute (HBNI), Mumbai, 400 012, Maharashtra, India
- A Mahajan
- Department of Radiodiagnosis, Tata Memorial Hospital, Tata Memorial Centre, Homi Bhabha National Institute (HBNI), Mumbai, 400 012, Maharashtra, India
- V Patil
- Department of Medical Oncology, Tata Memorial Hospital, Tata Memorial Centre, Homi Bhabha National Institute (HBNI), Mumbai, 400 012, Maharashtra, India
- V Noronha
- Department of Medical Oncology, Tata Memorial Hospital, Tata Memorial Centre, Homi Bhabha National Institute (HBNI), Mumbai, 400 012, Maharashtra, India
- K Prabhash
- Department of Medical Oncology, Tata Memorial Hospital, Tata Memorial Centre, Homi Bhabha National Institute (HBNI), Mumbai, 400 012, Maharashtra, India
24
A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023; 115:779-793. [PMID: 36289038] [DOI: 10.1016/j.ijrobp.2022.09.068]
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small, inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) data set containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent 2 consecutive courses of SRS, treatment planning MRIs from their initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in their second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases based upon their appearance in the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who only underwent a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMs, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79. CONCLUSIONS Our CAD model, trained on a novel data set and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs - the lesion types most in need of human performance augmentation.
25
Li R, Guo Y, Zhao Z, Chen M, Liu X, Gong G, Wang L. MRI-based two-stage deep learning model for automatic detection and segmentation of brain metastases. Eur Radiol 2023; 33:3521-3531. [PMID: 36695903] [DOI: 10.1007/s00330-023-09420-7]
Abstract
OBJECTIVES To develop and validate a two-stage deep learning model for automatic detection and segmentation of brain metastases (BMs) in MRI images. METHODS In this retrospective study, T1-weighted (T1) and T1-weighted contrast-enhanced (T1ce) MRI images of 649 patients who underwent radiotherapy from August 2019 to January 2022 were included. A total of 5163 metastases were manually annotated by neuroradiologists. A two-stage deep learning model was developed for automatic detection and segmentation of BMs, consisting of a lightweight segmentation network for generating metastasis proposals and a multi-scale classification network for false-positive suppression. Its performance was evaluated by sensitivity, precision, F1-score, Dice, and relative volume difference (RVD). RESULTS Six hundred forty-nine patients were randomly divided into training (n = 295), validation (n = 99), and testing (n = 255) sets. The proposed two-stage model achieved a sensitivity of 90% (1463/1632) and a precision of 56% (1463/2629) on the testing set, outperforming one-stage methods based on a single-shot detector, 3D U-Net, and nnU-Net, whose sensitivities were 78% (1276/1632), 79% (1290/1632), and 87% (1426/1632), and whose precisions were 40% (1276/3222), 51% (1290/2507), and 53% (1426/2688), respectively. Particularly for BMs smaller than 5 mm, the proposed model achieved a sensitivity of 66% (116/177), far superior to the one-stage models (21% (37/177), 36% (64/177), and 53% (93/177)). Furthermore, it also achieved high segmentation performance with an average Dice of 81% and an average RVD of 20%. CONCLUSION A two-stage deep learning model can detect and segment BMs with high sensitivity and low volume error. KEY POINTS • A two-stage deep learning model based on triple-channel MRI images identified brain metastases with 90% sensitivity and 56% precision. • For brain metastases smaller than 5 mm, the proposed two-stage model achieved 66% sensitivity and 22% precision. • For segmentation of brain metastases, the proposed two-stage model achieved a Dice of 81% and a relative volume difference (RVD) of 20%.
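The headline sensitivity and precision percentages follow directly from the lesion counts given in parentheses; a minimal sketch of the lesion-level detection metrics (counts taken from the testing-set results above; the F1 value is our own derivation, not a figure reported in the abstract):

```python
def detection_metrics(tp: int, total_lesions: int, total_detections: int):
    """Lesion-level sensitivity (recall), precision, and F1 from raw counts."""
    sensitivity = tp / total_lesions      # detected / annotated metastases
    precision = tp / total_detections     # detected / proposed detections
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

# Two-stage model on the testing set: 1463 true positives out of
# 1632 annotated metastases and 2629 proposed detections
sens, prec, f1 = detection_metrics(1463, 1632, 2629)
print(f"sensitivity={sens:.0%} precision={prec:.0%} f1={f1:.2f}")
# sensitivity=90% precision=56% f1=0.69
```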
Affiliation(s)
- Ruikun Li
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yujie Guo
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Zhongchen Zhao
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Mingming Chen
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Guanzhong Gong
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China; Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Lisheng Wang
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
26
Ottesen JA, Yi D, Tong E, Iv M, Latysheva A, Saxhaug C, Jacobsen KD, Helland Å, Emblem KE, Rubin DL, Bjørnerud A, Zaharchuk G, Grøvik E. 2.5D and 3D segmentation of brain metastases with deep learning on multinational MRI data. Front Neuroinform 2023; 16:1056068. [PMID: 36743439] [PMCID: PMC9889663] [DOI: 10.3389/fninf.2022.1056068]
Abstract
Introduction Management of patients with brain metastases is often based on manual lesion detection and segmentation by an expert reader. This is a time- and labor-intensive process; to that end, this work proposes an end-to-end deep learning segmentation network for a varying number of available MRI sequences. Methods We adapted and evaluated a 2.5D and a 3D convolutional neural network, trained and tested on a retrospective multinational study from two independent centers; in addition, nnU-Net was adapted as a comparative benchmark. Segmentation and detection performance was evaluated by: (1) the Dice similarity coefficient, (2) per-metastasis and average detection sensitivity, and (3) the number of false positives. Results The 2.5D and 3D models achieved similar results, albeit the 2.5D model had a better detection rate, whereas the 3D model had fewer false positive predictions; nnU-Net had the fewest false positives but the lowest detection rate. On MRI data from center 1, the 2.5D, 3D, and nnU-Net models detected 79%, 71%, and 65% of all metastases; had an average per-patient sensitivity of 0.88, 0.84, and 0.76; and had on average 6.2, 3.2, and 1.7 false positive predictions per patient, respectively. For center 2, the 2.5D, 3D, and nnU-Net models detected 88%, 86%, and 78% of all metastases; had an average per-patient sensitivity of 0.92, 0.91, and 0.85; and had on average 1.0, 0.4, and 0.1 false positive predictions per patient, respectively. Discussion/Conclusion Our results show that deep learning can yield highly accurate segmentations of brain metastases with few false positives in multinational data, but accuracy degrades for metastases with an area smaller than 0.4 cm2.
Affiliation(s)
- Jon André Ottesen
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway (corresponding author)
- Darvin Yi
- Department of Ophthalmology, University of Illinois, Chicago, IL, United States
- Elizabeth Tong
- Department of Radiology, Stanford University, Stanford, CA, United States
- Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, United States
- Anna Latysheva
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Cathrine Saxhaug
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Åslaug Helland
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Kyrre Eeg Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Daniel L. Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Atle Bjørnerud
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, United States
- Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway; Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
27
[Robotics and computer-assisted procedures in cranial neurosurgery]. Chirurgie (Heidelberg, Germany) 2023; 94:299-306. [PMID: 36629923] [DOI: 10.1007/s00104-022-01783-9]
Abstract
BACKGROUND The medical technical innovations of the last decade have made operations in highly sensitive regions of the brain much safer. OBJECTIVE To present how far computer assistance and robotics have become incorporated into clinical neurosurgery. MATERIAL AND METHOD Evaluation of the scientific literature and analysis of the certification status of the corresponding medical devices. RESULTS The rapid development of computer technology and the switch to digital imaging have led to the widespread introduction of neurosurgical planning software and intraoperative neuronavigation. In the field of robotics, penetration into clinical neurosurgery is currently still largely limited to the automatic setting of trajectories. CONCLUSION The digitalization of imaging has fundamentally transformed neurosurgery. In cranial neurosurgery, computer-assisted procedures can now be distinguished from noncomputer-assisted procedures in only a handful of cases. In the coming years, important innovations for clinical implementation can be expected in the field of robotics.
28
Yu H, Zhang Z, Xia W, Liu Y, Liu L, Luo W, Zhou J, Zhang Y. DeSeg: auto detector-based segmentation for brain metastases. Phys Med Biol 2023; 68. [PMID: 36535028] [DOI: 10.1088/1361-6560/acace7]
Abstract
Delineation of brain metastases (BMs) is a paramount step in stereotactic radiosurgery treatment. Clinical practice has specific expectations of BM auto-delineation: the method is supposed to avoid missing small lesions and to yield accurate contours for large lesions. In this study, we propose a novel coarse-to-fine framework, named detector-based segmentation (DeSeg), to incorporate object-level detection into pixel-wise segmentation so as to meet this clinical demand. DeSeg consists of three components: a center-point-guided single-shot detector to localize potential lesion regions, a multi-head U-Net segmentation model to refine contours, and a data cascade unit to connect both tasks smoothly. Performance on tiny lesions is measured by object-based sensitivity and positive predictive value (PPV), while that on large lesions is quantified by Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95). Besides, computational complexity is also considered to study the method's potential for real-time processing. This study retrospectively collected 240 BM patients with gadolinium contrast-enhanced T1-weighted magnetic resonance imaging (T1c-MRI), randomly split into training, validation, and testing datasets (192, 24, and 24 scans, respectively). Lesions in the testing dataset were further divided into two groups based on volume (small, S: ≤1.5 cc, N = 88; large, L: >1.5 cc, N = 15). On average, DeSeg yielded a sensitivity of 0.91 and a PPV of 0.77 on the S group, and a DSC of 0.86, an ASSD of 0.76 mm, and an HD95 of 2.31 mm on the L group. The results indicated that DeSeg achieved leading sensitivity and PPV for tiny lesions as well as leading segmentation metrics for large ones. After clinical validation, DeSeg showed competitive segmentation performance while maintaining faster processing speed compared with existing 3D models.
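The Dice similarity coefficient (DSC) used throughout these studies is straightforward to compute from binary masks; a minimal NumPy sketch (toy masks for illustration, not data from the study):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 2D masks standing in for a predicted and a reference contour
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
ref  = np.array([[1, 0, 0],
                 [0, 1, 1]])
print(dice(pred, ref))  # 2*2/(3+3) ≈ 0.667
```

The same formula extends unchanged to 3D volumes, which is how the volumetric DSC values above are obtained.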
Affiliation(s)
- Hui Yu
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Zhongzhou Zhang
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Wenjun Xia
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yan Liu
- College of Electrical Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
- Lunxin Liu
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, 610044, People's Republic of China
- Wuman Luo
- School of Applied Sciences, Macao Polytechnic University, Macao, 999078, People's Republic of China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yi Zhang
- School of Cyber Science and Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
29
Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15:cancers15020334. [PMID: 36672286] [PMCID: PMC9857123] [DOI: 10.3390/cancers15020334]
Abstract
Since manual detection of brain metastases (BMs) is time consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted up to 30 September 2022. Inclusion criteria were: patients with BMs; deep learning applied to MRI images to detect BMs; sufficient data on detection performance; original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data on detection performance; conventional machine learning (rather than deep learning) used to detect BMs; articles not written in English. The Quality Assessment of Diagnostic Accuracy Studies-2 and the Checklist for Artificial Intelligence in Medical Imaging were used to assess quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of patient-wise and lesion-wise detectability was 89%. Articles should adhere to the checklists more strictly. Deep learning algorithms effectively detect BMs. A pooled analysis of false positive rates could not be estimated due to reporting differences.
30
Spiking Neural P System with Synaptic Vesicles and Applications in Multiple Brain Metastasis Segmentation. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.01.016]
31
Yoon BC, Pomerantz SR, Mercaldo ND, Goyal S, L’Italien EM, Lev MH, Buch KA, Buchbinder BR, Chen JW, Conklin J, Gupta R, Hunter GJ, Kamalian SC, Kelly HR, Rapalino O, Rincon SP, Romero JM, He J, Schaefer PW, Do S, González RG. Incorporating algorithmic uncertainty into a clinical machine deep learning algorithm for urgent head CTs. PLoS One 2023; 18:e0281900. [PMID: 36913348] [PMCID: PMC10010506] [DOI: 10.1371/journal.pone.0281900]
Abstract
Machine learning (ML) algorithms to detect critical findings on head CTs may expedite patient management. Most ML algorithms for diagnostic imaging analysis use dichotomous classifications to determine whether a specific abnormality is present. However, imaging findings may be indeterminate, and algorithmic inferences may carry substantial uncertainty. We incorporated awareness of uncertainty into an ML algorithm that detects intracranial hemorrhage or other urgent intracranial abnormalities and evaluated 1000 prospectively identified, consecutive noncontrast head CTs assigned to Emergency Department Neuroradiology for interpretation. The algorithm classified the scans into high (IC+) and low (IC-) probabilities of intracranial hemorrhage or other urgent abnormalities; all other cases were designated No Prediction (NP) by the algorithm. The positive predictive value for IC+ cases (N = 103) was 0.91 (CI: 0.84-0.96), and the negative predictive value for IC- cases (N = 729) was 0.94 (0.91-0.96). Admission, neurosurgical intervention, and 30-day mortality rates for IC+ were 75% (63-84), 35% (24-47), and 10% (4-20), compared to 43% (40-47), 4% (3-6), and 3% (2-5) for IC-. There were 168 NP cases, of which 32% had intracranial hemorrhage or other urgent abnormalities, 31% had artifacts and postoperative changes, and 29% had no abnormalities. An ML algorithm incorporating uncertainty classified most head CTs into clinically relevant groups with high predictive values and may help accelerate the management of patients with intracranial hemorrhage or other urgent intracranial abnormalities.
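The quoted confidence intervals can be approximately reproduced from the group sizes; a sketch using a standard Wilson score interval, assuming 94 of the 103 IC+ cases were true positives (0.91 × 103, rounded) — the authors' exact interval method is not stated, so small differences at the bounds are expected:

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    center = (k + z**2 / 2) / (n + z**2)
    half = z * sqrt(k * (n - k) / n + z**2 / 4) / (n + z**2)
    return center - half, center + half

# PPV for IC+ cases: assumed 94 true positives out of N = 103
lo, hi = wilson_ci(94, 103)
print(f"PPV 95% CI ≈ ({lo:.2f}, {hi:.2f})")  # ≈ (0.84, 0.95)
```

This is close to the reported 0.84-0.96; the slightly wider reported upper bound suggests a different interval method (e.g., exact Clopper-Pearson) or unrounded counts.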
Affiliation(s)
- Byung C. Yoon
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Stuart R. Pomerantz
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Nathaniel D. Mercaldo
- Massachusetts General Hospital Institute for Technology Assessment, Boston, MA, United States of America
- Swati Goyal
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Department of Radiology/Information Systems, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Eric M. L’Italien
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Department of Radiology/Information Systems, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Michael H. Lev
- Emergency Radiology & Neuroradiology Divisions, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Karen A. Buch
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Bradley R. Buchbinder
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- John W. Chen
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Massachusetts General Hospital Center for Systems Biology (CSB), Boston, MA, United States of America
- John Conklin
- Emergency Radiology & Neuroradiology Divisions, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Rajiv Gupta
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Massachusetts General Hospital Consortia for Integration of Medicine and Innovative Technologies (CIMIT), Boston, MA, United States of America
- Massachusetts General Hospital CT Innovation and Advanced X-ray Imaging Science (AXIS) Center, Boston, MA, United States of America
- George J. Hunter
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Shahmir C. Kamalian
- Emergency Radiology & Neuroradiology Divisions, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Hillary R. Kelly
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Radiology, Massachusetts Eye and Ear Institute, Harvard Medical School, Boston, MA, United States of America
- Otto Rapalino
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Sandra P. Rincon
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Javier M. Romero
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Julian He
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Pamela W. Schaefer
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Mass General Brigham Enterprise Radiology, Boston, MA, United States of America
- Synho Do
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Ramon Gilberto González
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Massachusetts General Hospital Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, United States of America
32
Buchner JA, Kofler F, Etzel L, Mayinger M, Christ SM, Brunner TB, Wittig A, Menze B, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus J, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Ferentinos K, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Peeken JC. Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study. Radiother Oncol 2023; 178:109425. [PMID: 36442609] [DOI: 10.1016/j.radonc.2022.11.014]
Abstract
BACKGROUND Stereotactic radiotherapy is a standard treatment option for patients with brain metastases. The planning target volume is based on gross tumor volume (GTV) segmentation. The aim of this work is to develop and validate a neural network for automatic GTV segmentation to accelerate daily clinical routine practice and minimize interobserver variability. METHODS We analyzed MRIs (T1-weighted sequence ± contrast enhancement, T2-weighted sequence, and FLAIR sequence) from 348 patients with at least one brain metastasis from different cancer primaries treated in six centers. To generate reference segmentations, all GTVs and the FLAIR-hyperintense edematous regions were segmented manually. A 3D U-Net was trained on a cohort of 260 patients from two centers to segment the GTV and the surrounding FLAIR-hyperintense region. During training, varying degrees of data augmentation were applied. Model validation was performed on an independent international multicenter test cohort (n = 88) comprising four centers. RESULTS Our proposed U-Net reached a mean overall Dice similarity coefficient (DSC) of 0.92 ± 0.08 and a mean individual metastasis-wise DSC of 0.89 ± 0.11 in the external test cohort for GTV segmentation. Data augmentation improved segmentation performance significantly. Detection of brain metastases was effective, with a mean F1 score of 0.93 ± 0.16. Model performance was stable independent of the center (p = 0.3). There was no correlation between metastasis volume and DSC (Pearson correlation coefficient 0.07). CONCLUSION Reliable automated segmentation of brain metastases with neural networks is possible and may support radiotherapy planning by providing more objective GTV definitions.
Affiliation(s)
- Josef A Buchner: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Florian Kofler: Department of Informatics, Technical University of Munich, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
- Lucas Etzel: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
- Michael Mayinger: Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Sebastian M Christ: Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Thomas B Brunner: Department of Radiation Oncology, University Hospital Magdeburg, Magdeburg, Germany
- Andrea Wittig: Department of Radiotherapy and Radiation Oncology, University Hospital Jena, Friedrich-Schiller University, Jena, Germany
- Björn Menze: Department of Informatics, Technical University of Munich, Munich, Germany
- Claus Zimmer: Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Bernhard Meyer: Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Matthias Guckenberger: Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Nicolaus Andratschke: Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Rami A El Shafie: Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany; Department of Radiation Oncology, University Medical Center Göttingen, Göttingen, Germany
- Jürgen Debus: Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany
- Susanne Rogers: Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
- Oliver Riesterer: Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
- Katrin Schulze: Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
- Horst J Feldmann: Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
- Oliver Blanck: Department of Radiation Oncology, University Medical Center Schleswig Holstein, Kiel, Germany
- Constantinos Zamboglou: Department of Radiation Oncology, University of Freiburg - Medical Center, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany; Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
- Konstantinos Ferentinos: Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
- Robert Wolff: Saphir Radiosurgery Center Frankfurt and Northern Germany, Guestrow, Germany; Department of Neurosurgery, University Hospital Frankfurt, Frankfurt, Germany
- Kerstin A Eitz: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
- Stephanie E Combs: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
- Denise Bernhardt: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
- Benedikt Wiestler: Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Jan C Peeken: Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
33
Chartrand G, Emiliani RD, Pawlowski SA, Markel DA, Bahig H, Cengarle-Samak A, Rajakesari S, Lavoie J, Ducharme S, Roberge D. Automated Detection of Brain Metastases on T1-Weighted MRI Using a Convolutional Neural Network: Impact of Volume Aware Loss and Sampling Strategy. J Magn Reson Imaging 2022;56:1885-1898. [PMID: 35624544] [DOI: 10.1002/jmri.28274]
Abstract
BACKGROUND: Detection of brain metastases (BM) and their segmentation for treatment planning could be optimized with machine learning methods. Convolutional neural networks (CNNs) are promising, but their trade-off between sensitivity and precision frequently leads to missed small lesions.
HYPOTHESIS: Combining a volume-aware (VA) loss function and sampling strategy could improve BM detection sensitivity.
STUDY TYPE: Retrospective.
POPULATION: A total of 530 radiation oncology patients (55% women), split into a training/validation set (433 patients/1460 BM) and an independent test set (97 patients/296 BM).
FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T, contrast-enhanced three-dimensional (3D) T1-weighted fast gradient echo sequences.
ASSESSMENT: Ground truth masks were based on radiotherapy treatment planning contours reviewed by experts. A U-Net-inspired model was trained. Three loss functions (Dice, Dice + boundary, and VA) and two sampling methods (label and VA) were compared. Results were reported with Dice scores, volumetric error, lesion detection sensitivity, and precision. A detected voxel within the ground truth constituted a true positive.
STATISTICAL TESTS: McNemar's exact test to compare detected lesions between models; Pearson's correlation coefficient and Bland-Altman analysis to compare agreement between predicted and ground truth volumes. Statistical significance was set at P ≤ 0.05.
RESULTS: Combining VA loss and VA sampling performed best, with an overall sensitivity of 91% and precision of 81%. For BM in the 2.5-6 mm estimated sphere diameter range, VA loss reduced false negatives by 58%, and VA sampling reduced them by a further 30%. In the same range, the boundary loss achieved the highest precision (81%) but a low sensitivity (24%) and a Dice score of 31%.
DATA CONCLUSION: Accounting for BM size in the loss and sampling functions of a CNN may increase detection sensitivity for small BM. Our pipeline, relying on a single contrast-enhanced T1-weighted MRI sequence, reached a detection sensitivity of 91% with an average of only 0.66 false positives per scan.
EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
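The study's detection criterion (a detected voxel within the ground truth counts as a true positive) implies lesion-wise sensitivity and precision that can be sketched as follows. This is an illustrative reading of that criterion, not the authors' code; the mask representation and function name are ours:

```python
import numpy as np

def lesion_detection_metrics(pred_lesions, true_lesions):
    """Lesion-wise sensitivity and precision under an overlap criterion:
    a predicted lesion is a true positive if any of its voxels falls
    inside some ground-truth lesion, and a ground-truth lesion counts as
    detected if any predicted lesion touches it. Each lesion is a binary
    mask (np.ndarray)."""
    detected = sum(
        any(np.logical_and(t, p).any() for p in pred_lesions)
        for t in true_lesions
    )
    true_preds = sum(
        any(np.logical_and(p, t).any() for t in true_lesions)
        for p in pred_lesions
    )
    sensitivity = detected / len(true_lesions) if true_lesions else 1.0
    precision = true_preds / len(pred_lesions) if pred_lesions else 1.0
    return sensitivity, precision
```

With this counting, the reported 91% sensitivity / 81% precision means roughly 9 in 10 true lesions were touched by a prediction, while about 1 in 5 predicted lesions touched no true lesion.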
Affiliation(s)
- Daniel A Markel: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Houda Bahig: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Selvan Rajakesari: Department of Radiation Oncology, Hopital Charles Lemoyne, Greenfield Park, Québec, Canada
- Simon Ducharme: AFX Medical Inc., Montréal, Canada; Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montréal, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
- David Roberge: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
34
Kato S, Amemiya S, Takao H, Yamashita H, Sakamoto N, Miki S, Watanabe Y, Suzuki F, Fujimoto K, Mizuki M, Abe O. Computer-aided detection improves brain metastasis identification on non-enhanced CT in less experienced radiologists. Acta Radiol 2022;64:1958-1965. [PMID: 36426577] [DOI: 10.1177/02841851221139124]
Abstract
Background: Brain metastases (BMs) are the most common intracranial tumors, causing neurological complications associated with significant morbidity and mortality.
Purpose: To evaluate the effect of computer-aided detection (CAD) on the performance of observers in detecting BMs on non-enhanced computed tomography (NECT).
Material and Methods: Three less experienced and three experienced radiologists interpreted 30 NECT scans with 89 BMs in 25 cases, with and without the assistance of CAD. The observers' sensitivity, number of false positives (FPs), positive predictive value (PPV), and reading time with and without CAD were compared using paired t-tests. The sensitivity of CAD and that of the observers were compared using a one-sample t-test.
Results: With CAD, less experienced radiologists' sensitivity increased significantly from 27.7% ± 4.6% to 32.6% ± 4.8% (P = 0.007), while the experienced radiologists' sensitivity did not change significantly (from 33.3% ± 3.5% to 31.9% ± 3.7%; P = 0.54). There was no significant difference between reading with and without CAD for FPs (less experienced radiologists: 23.0 ± 10.4 and 25.0 ± 9.3, P = 0.32; experienced radiologists: 18.3 ± 7.4 and 17.3 ± 6.7, P = 0.76) or PPVs (less experienced radiologists: 57.9% ± 8.3% and 50.9% ± 7.0%, P = 0.14; experienced radiologists: 61.8% ± 12.7% and 64.0% ± 12.1%, P = 0.69). There were no significant differences in reading time with and without CAD (85.0 ± 45.6 s and 73.7 ± 36.7 s; P = 0.09). The sensitivity of CAD was 47.2% (with a PPV of 8.9%), significantly higher than that of any radiologist (P < 0.001).
Conclusion: CAD improved BM detection sensitivity on NECT without increasing FPs or reading time among less experienced radiologists, but not among experienced radiologists.
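The paired t-tests behind these with/without-CAD comparisons reduce to a simple statistic on paired differences. An illustrative sketch (our own, not the study's code; significance would then be read from a t distribution with n - 1 degrees of freedom):

```python
import math

def paired_t_statistic(a, b):
    """Paired t statistic for matched measurements (e.g., each reader's
    sensitivity with and without CAD): t = mean(d) / (sd(d) / sqrt(n)),
    where d are the pairwise differences and sd uses n - 1 denominator."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

In practice one would call a vetted routine such as `scipy.stats.ttest_rel`; the sketch only shows what that routine computes.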
Affiliation(s)
- Shimpei Kato: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Shiori Amemiya: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidemasa Takao: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroshi Yamashita: Department of Radiology, Teikyo University Hospital, Kawasaki, Kanagawa, Japan
- Naoya Sakamoto: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Soichiro Miki: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Yusuke Watanabe: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Fumio Suzuki: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Kotaro Fujimoto: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Masumi Mizuki: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
35
Deep learning-based detection algorithm for brain metastases on black blood imaging. Sci Rep 2022;12:19503. [PMID: 36376364] [PMCID: PMC9663732] [DOI: 10.1038/s41598-022-23687-8]
Abstract
Brain metastases (BM) are the most common intracranial tumors, and their prevalence is increasing. High-resolution black-blood (BB) imaging is used to complement conventional contrast-enhanced 3D gradient-echo imaging in detecting BM. In this study, we propose an efficient deep learning algorithm (DLA) for BM detection on contrast-enhanced BB imaging and assess its efficacy. A total of 113 participants with 585 metastases were included in the training cohort for five-fold cross-validation. The You Only Look Once (YOLO) V2 network was trained on 3D BB sampling perfection with application-optimized contrasts using different flip angle evolution (SPACE) images. For observer performance, two board-certified radiologists and two second-year radiology residents detected the BM and recorded their reading time. Across the five-fold cross-validation, the overall performance was a sensitivity of 87.95%, a precision of 24.82%, an F1 score of 19.35%, and false positive averages of 14.48 on the BM dataset and 18.40 on the normal-individual dataset. With the DLA, the average reading time was reduced by 20.86% (range, 15.22%-25.77%). The proposed method has the potential to detect BM with high sensitivity and a limited number of false positives using BB imaging.
36
Liang Y, Lee K, Bovi JA, Palmer JD, Brown PD, Gondi V, Tomé WA, Benzinger TLS, Mehta MP, Li XA. Deep Learning-Based Automatic Detection of Brain Metastases in Heterogenous Multi-Institutional Magnetic Resonance Imaging Sets: An Exploratory Analysis of NRG-CC001. Int J Radiat Oncol Biol Phys 2022;114:529-536. [PMID: 35787927] [PMCID: PMC9641965] [DOI: 10.1016/j.ijrobp.2022.06.081]
Abstract
PURPOSE: Deep learning-based algorithms have been shown to automatically detect and segment brain metastases (BMs) in magnetic resonance imaging, mostly based on single-institutional data sets. This work investigated the use of deep convolutional neural networks (DCNN) for BM detection and segmentation on a highly heterogeneous multi-institutional data set.
METHODS AND MATERIALS: A total of 407 patients from 98 institutions were randomly split into 326 patients from 78 institutions for training/validation and 81 patients from 20 institutions for unbiased testing. The data set contained T1-weighted gadolinium and T2-weighted fluid-attenuated inversion recovery magnetic resonance imaging acquired on diverse scanners using different pulse sequences and various acquisition parameters. Several variants of 3-dimensional U-Net-based DCNN models were trained and tuned using 5-fold cross-validation on the training set. Performances of different models were compared based on the Dice similarity coefficient for segmentation, and on sensitivity and false positive rate (FPR) for detection. The best-performing model was evaluated on the test set.
RESULTS: A DCNN with an input size of 64 × 64 × 64 and an equal number of 128 kernels for all convolutional layers using instance normalization was identified as the best-performing model (Dice similarity coefficient 0.73, sensitivity 0.86, and FPR 1.9) in the 5-fold cross-validation experiments. The best-performing model behaved consistently on the test set (Dice similarity coefficient 0.73, sensitivity 0.91, and FPR 1.7) and detected 7 BMs (out of 327) that had been missed during manual delineation. For large BMs with diameters greater than 12 mm, the sensitivity and FPR improved to 0.98 and 0.3, respectively.
CONCLUSIONS: The developed DCNN model can automatically detect and segment brain metastases with reasonable accuracy, high sensitivity, and low FPR on a multi-institutional data set with nonprespecified and highly variable magnetic resonance imaging sequences. For large BMs, the model achieved clinically relevant results. The model is robust and may potentially be used in real-world settings.
Affiliation(s)
- Ying Liang: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Karen Lee: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Joseph A Bovi: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Joshua D Palmer: Department of Radiation Oncology, The James Cancer Hospital and Solove Research Institute at the Ohio State University, Columbus, Ohio
- Paul D Brown: Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota
- Vinai Gondi: Department of Radiation Oncology, Northwestern Medicine Cancer Center and Proton Center, Warrenville, Illinois
- Wolfgang A Tomé: Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York
- Tammie L S Benzinger: Department of Radiology, Washington University School of Medicine, St Louis, Missouri
- X Allen Li: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
37
Liew A, Lee CC, Subramaniam V, Lan BL, Tan M. Gradual Self-Training via Confidence and Volume Based Domain Adaptation for Multi Dataset Deep Learning-Based Brain Metastases Detection Using Nonlocal Networks on MRI Images. J Magn Reson Imaging 2022;57:1728-1740. [PMID: 36208095] [DOI: 10.1002/jmri.28456]
Abstract
BACKGROUND: Research suggests that treatment of multiple brain metastases (BMs) with stereotactic radiosurgery improves when metastases are detected early, providing a case for BM detection capabilities on small lesions.
PURPOSE: To demonstrate automatic detection of BM on three MRI datasets using a deep learning-based approach. To improve performance, the network is iteratively co-trained with datasets from different domains, and a systematic approach is proposed to prevent catastrophic forgetting during co-training.
STUDY TYPE: Retrospective.
POPULATION: A total of 156 patients (105 ground truth and 51 pseudo labels) with 1502 BM (BrainMetShare); 121 patients with 722 BM (local); 400 patients with 447 primary gliomas (BrATS). Training/pseudo labels/validation data were distributed 84/51/21 (BrainMetShare); training/validation data were split 121/23 (local) and 375/25 (BrATS).
FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T/T1 spin-echo postcontrast (T1-gradient echo) (BrainMetShare); 3 T/T1 magnetization-prepared rapid acquisition gradient echo postcontrast (T1-MPRAGE) (local); 0.5 T, 1 T, and 1.16 T/T1-weighted fluid-attenuated inversion recovery (T1-FLAIR) (BrATS).
ASSESSMENT: The ground truth was manually segmented by two (BrainMetShare) and four (BrATS) radiologists and manually annotated by one (local) radiologist. The confidence and volume based domain adaptation (CAVEAT) method of co-training the three datasets on a 3D nonlocal convolutional neural network (CNN) architecture was implemented to detect BM.
STATISTICAL TESTS: Performance was evaluated using sensitivity, false positives per patient (FP/patient), and free-response receiver operating characteristic (FROC) analysis at seven predefined FP rates per scan (1/8, 1/4, 1/2, 1, 2, 4, and 8).
RESULTS: On a held-out set, the CAVEAT approach registered a sensitivity of 0.811 at 2.952 FP/patient (BrainMetShare), 0.74 at 3.130 (local), and 0.723 at 2.240 (BrATS), with lesions as small as 1 mm being detected.
DATA CONCLUSION: Improved sensitivity at lower FP rates can be achieved by co-training datasets via the CAVEAT paradigm to address the problem of data sparsity.
LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
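The FROC analysis used here pairs lesion sensitivity against false positives per scan as the candidate-score threshold is lowered. A minimal sketch of how such operating points are derived (our own illustration, assuming one confidence score per detection; not the study's code):

```python
import numpy as np

def froc_points(scores, is_tp, n_scans, n_lesions):
    """Free-response ROC operating points: rank detections by confidence,
    then report, at each rank, false positives per scan (x-axis) and the
    fraction of true lesions recovered so far (y-axis)."""
    is_tp = np.asarray(is_tp, dtype=float)
    order = np.argsort(scores)[::-1]  # highest confidence first
    tps = np.cumsum(is_tp[order])
    fps = np.cumsum(1.0 - is_tp[order])
    return fps / n_scans, tps / n_lesions
```

Sensitivity at a predefined FP rate (e.g., 1/2 FP per scan) is then read off, or interpolated, from these points.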
Affiliation(s)
- Andrea Liew: Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Bandar Sunway, Malaysia
- Chun Cheng Lee: Radiology Department, Sunway Medical Centre, Bandar Sunway, Malaysia
- Boon Leong Lan: Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Bandar Sunway, Malaysia; Advanced Engineering Platform, School of Engineering, Monash University Malaysia, Bandar Sunway, Malaysia
- Maxine Tan: Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Bandar Sunway, Malaysia; School of Electrical and Computer Engineering, The University of Oklahoma, Norman, Oklahoma, USA
38
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022;32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002]
39
Chen MM, Terzic A, Becker AS, Johnson JM, Wu CC, Wintermark M, Wald C, Wu J. Artificial intelligence in oncologic imaging. Eur J Radiol Open 2022;9:100441. [PMID: 36193451] [PMCID: PMC9525817] [DOI: 10.1016/j.ejro.2022.100441]
Abstract
Radiology is integral to cancer care. Compared with molecular assays, imaging has distinct advantages: as a noninvasive tool, it can assess a tumor in its entirety, unbiased by sampling error, and it is routinely acquired at multiple time points in oncological practice. Imaging data can also be digitally post-processed for quantitative assessment. The ever-increasing application of artificial intelligence (AI) to clinical imaging is challenging radiology to become a discipline with competence in data science, which plays an important role in modern oncology. Beyond streamlining certain clinical tasks, the power of AI lies in its ability to reveal previously undetected or even imperceptible radiographic patterns that may be difficult to ascertain by the human sensory system. Here, we provide a narrative review of emerging AI applications across the oncological imaging spectrum and elaborate on emerging paradigms and opportunities. We envision that these technical advances will change radiology in the coming years, leading to the optimization of image acquisition and the discovery of clinically relevant biomarkers for cancer diagnosis, staging, and treatment monitoring. Together, they pave the road for future clinical translation in precision oncology.
Affiliation(s)
- Melissa M. Chen: Department of Neuroradiology, MD Anderson Cancer Center, Houston, TX, USA
- Admir Terzic: Department of Radiology, Dom Zdravlja Odzak, Odzak, Bosnia and Herzegovina
- Anton S. Becker: Department of Radiology, Memorial Sloan Kettering, New York, NY, USA
- Jason M. Johnson: Department of Neuroradiology, MD Anderson Cancer Center, Houston, TX, USA
- Carol C. Wu: Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- Max Wintermark: Department of Neuroradiology, MD Anderson Cancer Center, Houston, TX, USA
- Christoph Wald: Department of Radiology, Lahey Hospital and Medical Center, Burlington, MA, USA
- Jia Wu: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
40
Dikici E, Nguyen XV, Bigelow M, Ryu JL, Prevedello LM. Advancing Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI Using Noisy Student-Based Training. Diagnostics (Basel) 2022;12:2023. [PMID: 36010373] [PMCID: PMC9407228] [DOI: 10.3390/diagnostics12082023]
Abstract
The detection of brain metastases (BM) in their early stages could have a positive impact on the outcome of cancer patients. The authors previously developed a framework for detecting small BM (with diameters of <15 mm) in T1-weighted contrast-enhanced 3D magnetic resonance images (T1c). This study aimed to advance the framework with a noisy-student-based self-training strategy to use a large corpus of unlabeled T1c data. Accordingly, a sensitivity-based noisy-student learning approach was formulated to provide high BM detection sensitivity with a reduced count of false positives. This paper (1) proposes student/teacher convolutional neural network architectures, (2) presents data and model noising mechanisms, and (3) introduces a novel pseudo-labeling strategy factoring in the sensitivity constraint. The evaluation was performed using 217 labeled and 1247 unlabeled exams via two-fold cross-validation. The framework utilizing only the labeled exams produced 9.23 false positives for 90% BM detection sensitivity, whereas the one using the introduced learning strategy led to ~9% reduction in false detections (i.e., 8.44). Significant reductions in false positives (>10%) were also observed in reduced labeled data scenarios (using 50% and 75% of labeled data). The results suggest that the introduced strategy could be utilized in existing medical detection applications with access to unlabeled datasets to elevate their performances.
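The sensitivity constraint factored into the pseudo-labeling strategy can be illustrated by choosing a teacher-confidence threshold that meets a target lesion sensitivity on labeled validation data. This sketch is our own simplification (it assumes each true lesion is hit by at most one detection and that all scores are distinct), not the paper's algorithm:

```python
def threshold_for_sensitivity(val_scores, val_is_tp, target_sensitivity):
    """Return the highest detection-score threshold whose lesion
    sensitivity on a labeled validation set reaches the target; candidates
    on unlabeled exams scoring at or above it would then be kept as
    pseudo-labels for student training."""
    n_lesions = sum(val_is_tp)
    pairs = sorted(zip(val_scores, val_is_tp), reverse=True)
    hits = 0
    # lower the threshold one detection at a time until enough true
    # lesions are recovered
    for score, tp in pairs:
        hits += tp
        if hits / n_lesions >= target_sensitivity:
            return score
    return min(val_scores)
```

This captures the trade-off the abstract describes: a stricter sensitivity target forces a lower threshold, admitting more pseudo-labels but also more noise.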
Affiliation(s)
- Engin Dikici: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Xuan V. Nguyen: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Matthew Bigelow: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Luciano M. Prevedello: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
41
Huang Y, Bert C, Sommer P, Frey B, Gaipl U, Distel LV, Weissmann T, Uder M, Schmidt MA, Dörfler A, Maier A, Fietkau R, Putz F. Deep learning for brain metastasis detection and segmentation in longitudinal MRI data. Med Phys 2022;49:5773-5786. [PMID: 35833351] [DOI: 10.1002/mp.15863]
Abstract
PURPOSE: Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. Due to their small size and relatively low contrast, small brain metastases are very difficult to detect manually. With the recent development of deep learning technologies, several researchers have reported promising results in automated brain metastasis detection. However, the detection sensitivity is still not high enough for tiny brain metastases, and integration into clinical practice, in particular differentiating true metastases from false positives, remains challenging.
METHODS: The DeepMedic network with the binary cross-entropy (BCE) loss is used as the baseline method. To improve brain metastasis detection performance, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates metastasis detection sensitivity and specificity at a (sub-)volume level. As sensitivity and precision are always a trade-off, either a high sensitivity or a high precision can be achieved by adjusting the weights in the VSS loss, without decline in the Dice coefficient for segmented metastases. To reduce metastasis-like structures being detected as false positives, a temporal prior volume is proposed as an additional input of DeepMedic; the modified network is called DeepMedic+ for distinction. By combining a high-sensitivity VSS loss and a high-specificity loss for DeepMedic+, the majority of true positive metastases are confirmed with high specificity, while additional metastasis candidates in each patient are marked with high sensitivity for detailed expert evaluation.
RESULTS: The proposed VSS loss improves the sensitivity of brain metastasis detection from 85.3% for DeepMedic with BCE to 97.5% for DeepMedic with VSS. Alternatively, the precision is improved from 69.1% for DeepMedic with BCE to 98.7% for DeepMedic with VSS. Comparing DeepMedic+ with DeepMedic under the same VSS loss, 44.4% of the false positive metastases are removed in the high-sensitivity model, and the precision reaches 99.6% for the high-specificity model. The mean Dice coefficient for all metastases is about 0.81. With the ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false positive metastases per patient need further checking, while the majority of true positive metastases are confirmed.
CONCLUSIONS: The proposed VSS loss and temporal prior improve brain metastasis detection sensitivity and precision. The ensemble is able to distinguish high-confidence true positive metastases from candidates that require expert review or further follow-up, fitting the requirements of expert support in real clinical practice. This facilitates metastasis detection and segmentation for neuroradiologists in diagnostic applications and for radiation oncologists in therapeutic applications.
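The VSS idea of trading sensitivity against specificity through loss weights can be sketched with a simplified, volume-level stand-in. This is not the paper's VSS implementation; the weighting parameter, smoothing constant, and function name are ours:

```python
import numpy as np

def sensitivity_specificity_loss(pred, target, w_sens=0.9, eps=1e-6):
    """Illustrative volume-level sensitivity-specificity loss on soft
    predictions in [0, 1]: weighting w_sens toward the sensitivity term
    penalizes missed metastases (high-sensitivity model), while weighting
    toward the specificity term penalizes false positives
    (high-specificity model)."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    sens = (pred * target).sum() / (target.sum() + eps)
    spec = ((1 - pred) * (1 - target)).sum() / ((1 - target).sum() + eps)
    return w_sens * (1 - sens) + (1 - w_sens) * (1 - spec)
```

Training one model with w_sens near 1 and another with w_sens near 0, then ensembling them, mirrors the high-sensitivity/high-specificity pairing described above.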
Affiliation(s)
- Yixing Huang: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Christoph Bert: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Philipp Sommer: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Benjamin Frey: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Udo Gaipl: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Luitpold V Distel: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Thomas Weissmann: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Michael Uder: Institute of Radiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Manuel A Schmidt: Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Arnd Dörfler: Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Rainer Fietkau: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Florian Putz: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
|
42
|
Min X, Feng Z, Gao J, Chen S, Zhang P, Fu T, Shen H, Wang N. InterNet: Detection of Active Abdominal Arterial Bleeding Using Emergency Digital Subtraction Angiography Imaging With Two-Stage Deep Learning. Front Med (Lausanne) 2022; 9:762091. [PMID: 35847818 PMCID: PMC9276930 DOI: 10.3389/fmed.2022.762091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 05/25/2022] [Indexed: 11/16/2022] Open
Abstract
Objective Active abdominal arterial bleeding is a medical emergency. Herein, we present our two-stage InterNet model for the detection of active abdominal arterial bleeding on emergency DSA imaging. Methods First, 450 patients who underwent abdominal DSA procedures were randomly selected for development of the region localization stage (RLS). Second, 160 consecutive patients with active abdominal arterial bleeding were included for development of the bleeding site detection stage (BSDS) and InterNet (a cascade of the RLS and BSDS). Another 50 patients in whom active abdominal arterial bleeding was ruled out served as negative samples for evaluating InterNet's performance. We evaluated the model's efficacy using the precision-recall (PR) curve. The classification performance of a doctor with and without InterNet was evaluated using receiver operating characteristic (ROC) curve analysis. Results The AP, precision, and recall of the RLS were 0.99, 0.95, and 0.99 in the validation dataset, respectively. At a recall of 0.70, InterNet's precision for detecting bleeding sites was 53% in the evaluation set. The AUCs of doctors with and without InterNet were 0.803 and 0.759, respectively. In addition, InterNet assistance significantly reduced the time needed to interpret each DSA sequence, from 84.88 to 43.78 s. Conclusion Our InterNet system could help interventional radiologists identify bleeding foci quickly and may move the DSA workflow toward a more real-time procedure.
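The precision-recall evaluation used in this study can be reproduced in outline (a generic sketch with made-up detection scores, not the study's evaluation code): sweep a confidence threshold from high to low over scored detections, counting true and false positives to trace the PR curve.

```python
def pr_curve(scored_dets, num_gt):
    """scored_dets: list of (score, is_true_positive) pairs.
    Returns (recall, precision) points as the score threshold sweeps
    from high to low -- the usual construction of a PR curve."""
    pts = []
    tp = fp = 0
    for score, is_tp in sorted(scored_dets, reverse=True):
        if is_tp:
            tp += 1
        else:
            fp += 1
        pts.append((tp / num_gt, tp / (tp + fp)))
    return pts

# Toy example: 4 scored detections against 3 ground-truth bleeding sites.
dets = [(0.9, True), (0.8, False), (0.7, True), (0.4, True)]
points = pr_curve(dets, num_gt=3)
```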
Affiliation(s)
- Xiangde Min
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Zhaoyan Feng
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Junfeng Gao
- College of Biomedical Engineering, South-Central of University for Nationalities, Wuhan, China
| | - Shu Chen
- United Imaging Intelligence, Shanghai, China
| | - Peipei Zhang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Tianyu Fu
- United Imaging Intelligence, Shanghai, China
| | - Hong Shen
- United Imaging Intelligence, Shanghai, China
| | - Nan Wang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- *Correspondence: Nan Wang
| |
|
43
|
The Usefulness of Computer-Aided Detection of Brain Metastases on Contrast-Enhanced Computed Tomography Using Single-Shot Multibox Detector: Observer Performance Study. J Comput Assist Tomogr 2022; 46:786-791. [PMID: 35819922 DOI: 10.1097/rct.0000000000001339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE This study aimed to test the usefulness of computer-aided detection (CAD) for the detection of brain metastasis (BM) on contrast-enhanced computed tomography. METHODS The test data set included whole-brain axial contrast-enhanced computed tomography images of 25 cases with 62 BMs and 5 cases without BM. Six radiologists from 3 institutions with 2 to 4 years of experience independently reviewed the cases, both with and without CAD assistance. Sensitivity, positive predictive value, number of false positives, and reading time were compared between the conditions using paired t tests. Subanalysis was also performed for groups of lesions divided according to size. A P value <0.05 was considered statistically significant. RESULTS With CAD, sensitivity significantly increased from 80.4% to 83.9% (P = 0.04), whereas positive predictive value significantly decreased from 88.7% to 84.8% (P = 0.03). Reading time with and without CAD was 112 and 107 seconds, respectively (P = 0.38), and the number of false positives was 10.5 with CAD and 7.0 without CAD (P = 0.053). Sensitivity significantly improved for 6- to 12-mm lesions, from 71.2% without CAD to 80.3% with CAD (P = 0.02). The sensitivity of the CAD (95.2%) was significantly higher than that of any reader (with CAD: P = 0.01; without CAD: P = 0.005). CONCLUSIONS Computer-aided detection significantly improved BM detection sensitivity without prolonging reading time, while marginally increasing the number of false positives.
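The paired t tests used in this reader study compare each radiologist's metric with and without CAD against themselves. A minimal sketch with invented per-reader sensitivities (not the study's data) computes the paired t statistic and degrees of freedom; the p-value would then be read off a t distribution.

```python
import math

def paired_t(xs, ys):
    """Paired t statistic for per-reader metrics measured under two
    conditions (e.g. with vs without CAD). Returns (t, degrees of
    freedom); look up the p-value in a t table or CDF."""
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Six hypothetical readers' sensitivities with and without CAD.
with_cad = [0.85, 0.82, 0.86, 0.83, 0.84, 0.83]
without_cad = [0.81, 0.79, 0.83, 0.80, 0.80, 0.79]
t, dof = paired_t(with_cad, without_cad)
```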
|
44
|
Park JE. Artificial Intelligence in Neuro-Oncologic Imaging: A Brief Review for Clinical Use Cases and Future Perspectives. Brain Tumor Res Treat 2022; 10:69-75. [PMID: 35545825 PMCID: PMC9098975 DOI: 10.14791/btrt.2021.0031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Revised: 03/24/2022] [Accepted: 04/18/2022] [Indexed: 11/20/2022] Open
Abstract
Artificial intelligence (AI) techniques, both end-to-end deep learning approaches and radiomics with machine learning, have been developed for various imaging-based tasks in neuro-oncology. In this brief review, use cases of AI in neuro-oncologic imaging are summarized: image quality improvement, metastasis detection, radiogenomics, and treatment response monitoring. We then give a brief overview of generative adversarial networks and the potential utility of synthetic images, which are becoming a new data input for deep learning algorithms in imaging-based and image-translation tasks. Lastly, we highlight the importance of cohorts and clinical trials as the true validation of the clinical utility of AI in neuro-oncologic imaging.
Affiliation(s)
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
| |
|
45
|
Kikuchi Y, Togao O, Kikuchi K, Momosaka D, Obara M, Van Cauteren M, Fischer A, Ishigami K, Hiwatashi A. A deep convolutional neural network-based automatic detection of brain metastases with and without blood vessel suppression. Eur Radiol 2022; 32:2998-3005. [PMID: 34993572 DOI: 10.1007/s00330-021-08427-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2021] [Revised: 10/12/2021] [Accepted: 10/18/2021] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To develop an automated model to detect brain metastases using a convolutional neural network (CNN) and volume isotropic simultaneous interleaved bright-blood and black-blood examination (VISIBLE) and to compare its diagnostic performance with the observer test. METHODS This retrospective study included patients with clinical suspicion of brain metastases imaged with VISIBLE from March 2016 to July 2019 to create a model. Images with and without blood vessel suppression were used for training an existing CNN (DeepMedic). Diagnostic performance was evaluated using sensitivity and false-positive results per case (FPs/case). We compared the diagnostic performance of the CNN model with that of the twelve radiologists. RESULTS Fifty patients (30 males and 20 females; age range 29-86 years; mean 63.3 ± 12.8 years; a total of 165 metastases) who were clinically diagnosed with brain metastasis on follow-up were used for the training. The sensitivity of our model was 91.7%, which was higher than that of the observer test (mean ± standard deviation; 88.7 ± 3.7%). The number of FPs/case in our model was 1.5, which was greater than that by the observer test (0.17 ± 0.09). CONCLUSIONS Compared to radiologists, our model created by VISIBLE and CNN to diagnose brain metastases showed higher sensitivity. The number of FPs/case by our model was greater than that by the observer test of radiologists; however, it was less than that in most of the previous studies with deep learning. KEY POINTS • Our convolutional neural network based on bright-blood and black-blood examination to diagnose brain metastases showed a higher sensitivity than that by the observer test. • The number of false-positives/case by our model was greater than that by the previous observer test; however, it was less than those from most previous studies. • In our model, false-positives were found in the vessels, choroid plexus, and image noise or unknown causes.
Affiliation(s)
- Yoshitomo Kikuchi
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
| | - Osamu Togao
- Department of Molecular Imaging and Diagnosis, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
| | - Kazufumi Kikuchi
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
| | - Daichi Momosaka
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
| | - Makoto Obara
- MR Clinical Science, Philips Japan Ltd, Tokyo, Japan
| | | | | | - Kousei Ishigami
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
| | - Akio Hiwatashi
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
| |
|
46
|
Convolutional Neural Networks to Detect Vestibular Schwannomas on Single MRI Slices: A Feasibility Study. Cancers (Basel) 2022; 14:cancers14092069. [PMID: 35565199 PMCID: PMC9104481 DOI: 10.3390/cancers14092069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 03/30/2022] [Accepted: 04/19/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Because they take inter-slice information into account, 3D and 2.5D convolutional neural networks (CNNs) potentially perform better in tumor detection tasks than 2D-CNNs. However, this potential benefit comes at the expense of increased computational power and the need for segmentations as input. Therefore, in this study we aimed to detect vestibular schwannomas (VSs) in individual magnetic resonance imaging (MRI) slices by using a 2D-CNN. We retrained (539 patients) and internally validated (94 patients) a pretrained CNN using contrast-enhanced MRI slices from one institution, and externally validated the CNN using contrast-enhanced MRI slices from another institution. This resulted in an accuracy of 0.949 (95% CI 0.935–0.963) and 0.912 (95% CI 0.866–0.958) for the internal and external validation, respectively. Our findings indicate that 2D-CNNs might be a promising alternative to 2.5D/3D-CNNs for certain tasks, thanks to the decreased requirement for computational power and the fact that there is no need for segmentations. Abstract In this study, we aimed to detect vestibular schwannomas (VSs) in individual magnetic resonance imaging (MRI) slices by using a 2D-CNN. A pretrained CNN (ResNet-34) was retrained and internally validated using contrast-enhanced T1-weighted (T1c) MRI slices from one institution. In a second step, the model was externally validated using T1c- and T1-weighted (T1) slices from a different institution. As a substitute, bisected slices with and without tumors were used, originating from whole transversal slices that contained part of the unilateral VS. The model predictions were assessed based on categorical accuracy and confusion matrices. A total of 539, 94, and 74 patients were included for training, internal validation, and external T1c validation, respectively.
This resulted in an accuracy of 0.949 (95% CI 0.935–0.963) for the internal validation and 0.912 (95% CI 0.866–0.958) for the external T1c validation. We suggest that 2D-CNNs might be a promising alternative to 2.5-/3D-CNNs for certain tasks thanks to the decreased demand for computational power and the fact that there is no need for segmentations. However, further research is needed on the difference between 2D-CNNs and more complex architectures.
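Confidence intervals for a categorical accuracy like those quoted above can be obtained with a normal-approximation (Wald) interval for a proportion. A minimal sketch with illustrative counts follows; the authors may have used a different interval estimator.

```python
import math

def wald_ci(correct, total, z=1.96):
    """Normal-approximation 95% CI for a proportion such as
    classification accuracy. Illustrative counts, not the study's data."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - z * se, p + z * se)

# e.g. 912 of 1000 slices classified correctly:
p, (lo, hi) = wald_ci(correct=912, total=1000)
```

Note that the Wald interval is a rough approximation for proportions near 0 or 1; Wilson or Clopper-Pearson intervals are common alternatives.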
|
47
|
Swinburne NC, Yadav V, Kim J, Choi YR, Gutman DC, Yang JT, Moss N, Stone J, Tisnado J, Hatzoglou V, Haque SS, Karimi S, Lyo J, Juluru K, Pichotta K, Gao J, Shah SP, Holodny AI, Young RJ. Semisupervised Training of a Brain MRI Tumor Detection Model Using Mined Annotations. Radiology 2022; 303:80-89. [PMID: 35040676 PMCID: PMC8962822 DOI: 10.1148/radiol.210817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 10/12/2021] [Accepted: 11/03/2021] [Indexed: 11/11/2022]
Abstract
Background Artificial intelligence (AI) applications for cancer imaging conceptually begin with automated tumor detection, which can provide the foundation for downstream AI tasks. However, supervised training requires many image annotations, and performing dedicated post hoc image labeling is burdensome and costly. Purpose To investigate whether clinically generated image annotations can be data mined from the picture archiving and communication system (PACS), automatically curated, and used for semisupervised training of a brain MRI tumor detection model. Materials and Methods In this retrospective study, the cancer center PACS was mined for brain MRI scans acquired between January 2012 and December 2017 and included all annotated axial T1 postcontrast images. Line annotations were converted to boxes, excluding boxes shorter than 1 cm or longer than 7 cm. The resulting boxes were used for supervised training of object detection models using RetinaNet and Mask region-based convolutional neural network (R-CNN) architectures. The best-performing model trained from the mined data set was used to detect unannotated tumors on training images themselves (self-labeling), automatically correcting many of the missing labels. After self-labeling, new models were trained using this expanded data set. Models were scored for precision, recall, and F1 using a held-out test data set comprising 754 manually labeled images from 100 patients (403 intra-axial and 56 extra-axial enhancing tumors). Model F1 scores were compared using bootstrap resampling. Results The PACS query extracted 31,150 line annotations, yielding 11,880 boxes that met inclusion criteria. This mined data set was used to train models, yielding F1 scores of 0.886 for RetinaNet and 0.908 for Mask R-CNN. Self-labeling added 18,562 training boxes, improving model F1 scores to 0.935 (P < .001) and 0.954 (P < .001), respectively.
Conclusion The application of semisupervised learning to mined image annotations significantly improved tumor detection performance, achieving an excellent F1 score of 0.954. This development pipeline can be extended for other imaging modalities, repurposing unused data silos to potentially enable automated tumor detection across radiologic modalities. © RSNA, 2022 Online supplemental material is available for this article.
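The bootstrap comparison of F1 scores described above can be sketched as follows (synthetic per-image counts, not the study's pipeline): resample test images with replacement, recompute each model's F1 on every resample, and inspect the distribution of F1 differences.

```python
import random

def f1(tp, fp, fn):
    # Standard F1 from true-positive, false-positive, false-negative counts.
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def bootstrap_f1_gain(per_image, n_boot=2000, seed=0):
    """Bootstrap-resample test images and collect the F1 difference
    between two models. per_image: list of per-image count tuples
    (tp_a, fp_a, fn_a, tp_b, fp_b, fn_b). Sketch with synthetic data."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        sample = [rng.choice(per_image) for _ in per_image]
        ta = [sum(s[i] for s in sample) for i in range(3)]
        tb = [sum(s[i + 3] for s in sample) for i in range(3)]
        diffs.append(f1(*tb) - f1(*ta))
    return diffs

# Synthetic cohort: model B (after self-labeling) finds one extra tumor
# on some images while keeping false positives unchanged.
images = [(2, 1, 1, 3, 1, 0)] * 20 + [(1, 0, 0, 1, 0, 0)] * 30
diffs = bootstrap_f1_gain(images)
frac_positive = sum(d > 0 for d in diffs) / len(diffs)
```

A P value can then be derived from how often the resampled difference crosses zero.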
Affiliation(s)
| | | | - Julie Kim
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Ye R. Choi
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - David C. Gutman
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Jonathan T. Yang
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Nelson Moss
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Jacqueline Stone
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Jamie Tisnado
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Vaios Hatzoglou
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Sofia S. Haque
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Sasan Karimi
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - John Lyo
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Krishna Juluru
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Karl Pichotta
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Jianjiong Gao
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Sohrab P. Shah
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Andrei I. Holodny
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | - Robert J. Young
- From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G.,
J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology
(J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and
Biostatistics, Division of Computational Oncology, (K.P., J.G., S.P.S.),
Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill
Cornell Medical College, New York, NY (J.K.)
| | | |
|
48
|
Dikici E, Nguyen XV, Bigelow M, Prevedello LM. Augmented Networks for Faster Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI. Comput Med Imaging Graph 2022; 98:102059. [DOI: 10.1016/j.compmedimag.2022.102059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 01/21/2022] [Accepted: 03/17/2022] [Indexed: 10/18/2022]
|
49
|
Braeker N, Schmitz C, Wagner N, Stanicki BJ, Schröder C, Ehret F, Fürweger C, Zwahlen DR, Förster R, Muacevic A, Windisch P. Classifying the Acquisition Sequence for Brain MRIs Using Neural Networks on Single Slices. Cureus 2022; 14:e22435. [PMID: 35345703 PMCID: PMC8941825 DOI: 10.7759/cureus.22435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/21/2022] [Indexed: 11/13/2022] Open
Abstract
Background Neural networks for analyzing MRIs are oftentimes trained on particular combinations of perspectives and acquisition sequences. Since real-world data are less structured and do not follow a standard denomination of acquisition sequences, this impedes the transition from deep learning research to clinical application. The purpose of this study is therefore to assess the feasibility of classifying the acquisition sequence from a single MRI slice using convolutional neural networks. Methods A total of 113 MRI slices from 52 patients were used in a transfer learning approach to train three convolutional neural networks of different complexities to predict the acquisition sequence, while 27 slices were used for internal validation. The model then underwent external validation on 600 slices from 273 patients belonging to one of four classes (T1-weighted without contrast enhancement, T1-weighted with contrast enhancement, T2-weighted, and diffusion-weighted). Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results The neural networks achieved a categorical accuracy of 0.79, 0.81, and 0.84 on the external validation data. The implementation of Grad-CAM showed no clear pattern of focus except for T2-weighted slices, where the network focused on areas containing cerebrospinal fluid. Conclusion Automatically classifying the acquisition sequence using neural networks seems feasible and could be used to facilitate the automatic labelling of MRI data.
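The categorical accuracy and confusion matrices reported in this study can be computed as below (a generic sketch with toy labels, not the authors' code), using the four acquisition-sequence classes from the abstract.

```python
from collections import Counter

CLASSES = ["T1", "T1c", "T2", "DWI"]  # without/with contrast, T2, diffusion

def confusion_matrix(y_true, y_pred):
    """4x4 confusion matrix (rows = true class, columns = predicted class)
    and categorical accuracy for the acquisition-sequence classes."""
    counts = Counter(zip(y_true, y_pred))
    matrix = [[counts[(t, p)] for p in CLASSES] for t in CLASSES]
    acc = sum(counts[(c, c)] for c in CLASSES) / len(y_true)
    return matrix, acc

# Toy labels: one T2 slice misclassified as contrast-enhanced T1.
y_true = ["T1", "T1c", "T2", "DWI", "T2", "T1c"]
y_pred = ["T1", "T1c", "T2", "DWI", "T1c", "T1c"]
matrix, acc = confusion_matrix(y_true, y_pred)
```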
|
50
|
Omari EA, Zhang Y, Ahunbay E, Paulson E, Amjad A, Chen X, Liang Y, Li XA. Multi parametric magnetic resonance imaging for radiation treatment planning. Med Phys 2022; 49:2836-2845. [PMID: 35170769 DOI: 10.1002/mp.15534] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Revised: 10/05/2021] [Accepted: 01/03/2022] [Indexed: 11/09/2022] Open
Abstract
In recent years, multi-parametric magnetic resonance imaging (MpMRI) has played a major role in radiation therapy treatment planning. Its superior soft tissue contrast, functional or physiological imaging capabilities, and the flexibility of site-specific image sequence development have placed MpMRI at the forefront. In this article, the present status of MpMRI for external beam radiation therapy planning is reviewed. Common MpMRI sequences, preprocessing, and QA strategies are briefly discussed, and various image registration techniques and strategies are addressed. Image segmentation methods, including automatic segmentation and deep learning techniques for organs at risk and target delineation, are reviewed. Given the advances in MRI-guided online adaptive radiotherapy, treatment planning considerations for MRI-only planning are also discussed. This article is protected by copyright. All rights reserved.
Affiliation(s)
- Eenas A Omari
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Ergun Ahunbay
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Xinfeng Chen
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Ying Liang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| |
|