1
Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024; 116:102401. [PMID: 38795690 DOI: 10.1016/j.compmedimag.2024.102401]
Abstract
Metastatic brain cancer is a condition in which cancer cells migrate to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals with metastatic brain tumors reveal a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses visible on MRI. Patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and - importantly - it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precise tracking of disease progression. To objectively quantify the generalizability of the models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to elaborate training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms.
Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of tracking disease progression and evaluating treatment efficacy.
Affiliation(s)
- Damian Kucharski
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland.
- Bartosz Kokoszka
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland.
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Benjamín Gutiérrez-Becker
- Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland.
- Agata Krason
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
- Jean Tessier
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.
2
Bhattacharya K, Mahajan A, Mynalli S. Imaging Recommendations for Diagnosis, Staging, and Management of Central Nervous System Neoplasms in Adults: CNS Metastases. Cancers (Basel) 2024; 16:2667. [PMID: 39123394 PMCID: PMC11311790 DOI: 10.3390/cancers16152667]
Abstract
Brain metastases (BMs) are the most common central nervous system (CNS) neoplasms, with an increasing incidence that is due in part to an overall increase in primary cancers, improved neuroimaging modalities leading to increased detection, better systemic therapies, and longer patient survival. OBJECTIVE To identify cancer patients at a higher risk of developing CNS metastases and to evaluate associated prognostic factors. METHODS Review of imaging referral guidelines, response criteria, interval imaging assessment, and modality of choice, as well as the association of clinical, serological, and imaging findings, as per various cancer societies. RESULTS Quantitative response assessment of target and non-target brain metastases, as well as an interval imaging protocol based on primary histological diagnosis and therapy status, are discussed as per various cancer societies and imaging programs. CONCLUSION Predictive factors in the primary tumor, as well as independent variables of brain metastases such as size, number, and response to therapy, are necessary in management. The location of CNS metastases, symptomatic disease, and follow-up imaging findings form a skeletal plan for prognosticating the disease, keeping in mind the available advanced therapy options (surgery, radiation, and immunotherapy) that significantly improve patient outcomes.
Affiliation(s)
- Kajari Bhattacharya
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, India; (K.B.); (S.M.)
- Abhishek Mahajan
- Department of Imaging, The Clatterbridge Cancer Centre NHS Foundation Trust, 65 Pembroke Place, Liverpool L7 8YA, UK
- Faculty of Health and Life Sciences, University of Liverpool, Liverpool L69 3BX, UK
- Soujanya Mynalli
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, India; (K.B.); (S.M.)
3
Cho SJ, Cho W, Choi D, Sim G, Jeong SY, Baik SH, Bae YJ, Choi BS, Kim JH, Yoo S, Han JH, Kim CY, Choo J, Sunwoo L. Prediction of treatment response after stereotactic radiosurgery of brain metastasis using deep learning and radiomics on longitudinal MRI data. Sci Rep 2024; 14:11085. [PMID: 38750084 PMCID: PMC11096355 DOI: 10.1038/s41598-024-60781-5]
Abstract
We developed artificial intelligence models to predict the brain metastasis (BM) treatment response after stereotactic radiosurgery (SRS) using longitudinal magnetic resonance imaging (MRI) data, and evaluated how prediction accuracy changes with the number of sequential MRI scans. We included four sequential MRI scans for 194 patients with BM and 369 target lesions in the development dataset. The data were randomly split (8:2 ratio) for training and testing. For external validation, 172 MRI scans from 43 patients with BM and 62 target lesions were additionally enrolled. The maximum axial diameter (Dmax), radiomics, and deep learning (DL) models were generated for comparison. In the DL arm, we evaluated a simple convolutional neural network (CNN) model and a convolutional gated recurrent unit (Conv-GRU)-based CNN model. The Conv-GRU model outperformed the simple CNN models. For both datasets, the area under the curve (AUC) was significantly higher for the two-dimensional (2D) Conv-GRU model than for the 3D Conv-GRU, Dmax, and radiomics models. The accuracy of the 2D Conv-GRU model increased with the number of follow-up studies. In conclusion, using longitudinal MRI data, the 2D Conv-GRU model outperformed all other models in predicting the treatment response after SRS of BM.
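The AUC comparisons reported in this abstract can be made concrete with a small sketch. As a minimal illustration (not the authors' code, and using hypothetical scores), the AUC of a binary predictor equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, i.e. the normalized Mann-Whitney statistic:

```python
import numpy as np

def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie), over all pos/neg pairs."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]  # column vector
    neg = np.asarray(neg_scores, dtype=float)[None, :]  # row vector
    # Broadcasting compares every positive score against every negative score.
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Hypothetical model scores for responding vs. non-responding lesions:
print(auc_mann_whitney([0.9, 0.8, 0.4], [0.3, 0.5]))  # 5 of 6 pairs ordered correctly -> 0.8333...
```

This pairwise definition is equivalent to the area under the ROC curve and is how nonparametric AUC comparisons between models are usually grounded.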
Affiliation(s)
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Wonwoo Cho
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- Dongmin Choi
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- Gyuhyeon Sim
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- So Yeong Jeong
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Sooyoung Yoo
- Office of eHealth Research and Business, Seoul National University Bundang Hospital, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Jung Ho Han
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Chae-Yong Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Jaegul Choo
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea.
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea.
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea.
- Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea.
4
Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. [PMID: 38790322 PMCID: PMC11117895 DOI: 10.3390/bioengineering11050454]
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and their predominantly multiple presentation, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates clinicians' manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
Affiliation(s)
- Matthew Kim
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Weiguo Lu
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Hao Jiang
- NeuralRad LLC, Madison, WI 53717, USA
- Zabi Wardak
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Cynthia Chuang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Gregory Szalkowski
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Erqi Pollom
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Elham Rahimy
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Scott Soltys
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mingli Chen
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
5
Park YW, Park JE, Ahn SS, Han K, Kim N, Oh JY, Lee DH, Won SY, Shin I, Kim HS, Lee SK. Deep learning-based metastasis detection in patients with lung cancer to enhance reproducibility and reduce workload in brain metastasis screening with MRI: a multi-center study. Cancer Imaging 2024; 24:32. [PMID: 38429843 PMCID: PMC10905821 DOI: 10.1186/s40644-024-00669-9]
Abstract
OBJECTIVES To assess whether a deep learning-based system (DLS) with black-blood imaging for brain metastasis (BM) improves the diagnostic workflow in a multi-center setting. MATERIALS AND METHODS In this retrospective study, a DLS was developed in 101 patients and validated on 264 consecutive patients (with lung cancer) having newly developed BM from two tertiary university hospitals, which performed black-blood imaging between January 2020 and April 2021. Four neuroradiologists independently evaluated BM either with segmented masks and BM counts provided (with DLS) or not provided (without DLS) on a clinical trial imaging management system (CTIMS). To assess reading reproducibility, BM count agreement between the readers and the reference standard was calculated using limits of agreement (LoA). Readers' workload was assessed with reading time, which was automatically measured on CTIMS, and was compared between reading with and without the DLS using linear mixed models accounting for the imaging center. RESULTS In the validation cohort, the detection sensitivity and positive predictive value of the DLS were 90.2% (95% confidence interval [CI]: 88.1-92.2) and 88.2% (95% CI: 85.7-90.4), respectively. The difference between the readers and the reference counts was larger without DLS (LoA: -0.281, 95% CI: -2.888, 2.325) than with DLS (LoA: -0.163, 95% CI: -2.692, 2.367). The reading time was reduced from a mean of 66.9 s (interquartile range: 43.2-90.6) to 57.3 s (interquartile range: 33.6-81.0) (P < .001) with the DLS, regardless of the imaging center. CONCLUSION Deep learning-based BM detection and counting with black-blood imaging improved reproducibility and reduced reading time in multi-center validation.
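The limits-of-agreement figures quoted in this abstract follow the standard Bland-Altman construction on paired count differences. A minimal sketch, using hypothetical per-patient BM counts rather than the study data:

```python
import numpy as np

def limits_of_agreement(reader_counts, reference_counts):
    """Bland-Altman 95% limits of agreement for paired counts.

    Returns (bias, lower, upper): the mean reader-minus-reference difference
    and the interval bias +/- 1.96 * SD of the differences.
    """
    diff = np.asarray(reader_counts, dtype=float) - np.asarray(reference_counts, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # sample SD of the differences
    return bias, bias - half_width, bias + half_width

# Hypothetical lesion counts for one reader vs. the reference standard:
bias, lower, upper = limits_of_agreement([3, 1, 4, 2, 5], [3, 2, 4, 2, 6])
```

A narrower interval around a bias near zero indicates better count agreement, which is the sense in which the study reports reproducibility improving with DLS assistance.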
Affiliation(s)
- Yae Won Park
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea.
- Sung Soo Ahn
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea.
- Kyunghwa Han
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Joo Young Oh
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Da Hyun Lee
- Department of Radiology, Ajou University Medical Center, Suwon, Korea
- So Yeon Won
- Department of Radiology, Samsung Seoul Hospital, Seoul, Korea
- Ilah Shin
- Department of Radiology, The Catholic University of Korea, Seoul St. Mary's hospital, Seoul, Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Seung-Koo Lee
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
6
Fairchild A, Salama JK, Godfrey D, Wiggins WF, Ackerson BG, Oyekunle T, Niedzwiecki D, Fecci PE, Kirkpatrick JP, Floyd SR. Incidence and imaging characteristics of difficult to detect retrospectively identified brain metastases in patients receiving repeat courses of stereotactic radiosurgery. J Neurooncol 2024. [PMID: 38340295 DOI: 10.1007/s11060-024-04594-6]
Abstract
PURPOSE During stereotactic radiosurgery (SRS) planning for brain metastases (BM), brain MRIs are reviewed to select appropriate targets based on radiographic characteristics. Some BM are difficult to detect and/or definitively identify and may go untreated initially, only to become apparent on future imaging. We hypothesized that in patients receiving multiple courses of SRS, reviewing the initial planning MRI would reveal early evidence of lesions that developed into metastases requiring SRS. METHODS Patients undergoing two or more courses of SRS to BM within 6 months between 2016 and 2018 were included in this single-institution, retrospective study. Brain MRIs from the initial course were reviewed for lesions at the same location as subsequently treated metastases; if present, such a lesion was classified as a "retrospectively identified metastasis" (RIM). RIMs were subcategorized as meeting or not meeting diagnostic imaging criteria for BM (+DC or -DC, respectively). RESULTS Among 683 patients undergoing 923 SRS courses, 98 patients met inclusion criteria. There were 115 repeat courses of SRS, with 345 treated metastases in the subsequent course, 128 of which were associated with RIMs found on a prior MRI. Of the RIMs, 58% were +DC, and 17 (15%) of subsequent courses consisted solely of metastases associated with +DC RIMs. CONCLUSION Radiographic evidence of brain metastases requiring future treatment was occasionally present on brain MRIs from prior SRS treatments. Most RIMs were +DC, and some subsequent SRS courses treated only +DC RIMs. These findings suggest enhanced BM detection might enable earlier treatment and reduce the need for additional SRS.
Affiliation(s)
- Andrew Fairchild
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA.
- Piedmont Radiation Oncology, 3333 Silas Creek Parkway, Winston Salem, NC, 27103, USA.
- Joseph K Salama
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Radiation Oncology Service, Durham VA Medical Center, Durham, NC, USA
- Devon Godfrey
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Walter F Wiggins
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Bradley G Ackerson
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Taofik Oyekunle
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Donna Niedzwiecki
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Peter E Fecci
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- John P Kirkpatrick
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Scott R Floyd
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
7
Qu J, Zhang W, Shu X, Wang Y, Wang L, Xu M, Yao L, Hu N, Tang B, Zhang L, Lui S. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol 2023; 33:6648-6658. [PMID: 37186214 DOI: 10.1007/s00330-023-09648-3]
Abstract
OBJECTIVES To construct and evaluate a gated high-resolution convolutional neural network for detecting and segmenting brain metastasis (BM). METHODS This retrospective study included craniocerebral MRI scans of 1392 patients with 14,542 BMs and 200 patients with no BM between January 2012 and April 2022. A primary dataset including 1000 cases with 11,686 BMs was employed to construct the model, while an independent dataset including 100 cases with 1069 BMs from other hospitals was used to examine the generalizability. The potential of the model for clinical use was also evaluated by comparing its performance in BM detection and segmentation to that of radiologists, and by comparing radiologists' lesion detection performance with and without model assistance. RESULTS Our model yielded a recall of 0.88, a dice similarity coefficient (DSC) of 0.90, a positive predictive value (PPV) of 0.93, and 1.01 false positives per patient (FP) in the test set, and a recall of 0.85, a DSC of 0.89, a PPV of 0.93, and an FP of 1.07 in the dataset from other hospitals. With the model's assistance, the BM detection rates of 4 radiologists improved significantly, by 5.2 to 15.1% (all p < 0.001), as did their rates for detecting small BMs with diameter ≤ 5 mm (by 7.2 to 27.0%, all p < 0.001). CONCLUSIONS The proposed model enables accurate BM detection and segmentation with higher sensitivity and less time consumption, showing the potential to augment radiologists' performance in detecting BM. CLINICAL RELEVANCE STATEMENT This study offers a promising computer-aided tool to assist brain metastasis detection and segmentation in routine clinical practice for cancer patients. KEY POINTS • The GHR-CNN accurately detected and segmented BM on contrast-enhanced 3D-T1W images. • The GHR-CNN improved the BM detection rate of radiologists, including the detection of small lesions. • The GHR-CNN enabled automated segmentation of BM in a very short time.
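The dice similarity coefficient (DSC) reported in this abstract is the standard voxel-overlap measure between a predicted and a ground-truth segmentation mask. A minimal sketch with tiny illustrative masks (not the study's implementation):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Tiny illustrative masks: 2 overlapping voxels, 3 voxels in each mask.
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A DSC of 1.0 means exact overlap and 0.0 no overlap, so the reported 0.89-0.90 indicates near-complete voxel agreement with the reference annotations.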
Affiliation(s)
- Jiao Qu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Wenjing Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ying Wang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Department of Nuclear Medicine, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Mengyuan Xu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Li Yao
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Na Hu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Biqiu Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Su Lui
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China.
8
Lv B, Wang K, Wei N, Yu F, Tao T, Shi Y. Diagnostic value of deep learning-assisted endoscopic ultrasound for pancreatic tumors: a systematic review and meta-analysis. Front Oncol 2023; 13:1191008. [PMID: 37576885 PMCID: PMC10414790 DOI: 10.3389/fonc.2023.1191008]
Abstract
Background and aims Endoscopic ultrasonography (EUS) is commonly utilized in the diagnosis of pancreatic tumors, although because this modality relies primarily on the practitioner's visual judgment, it is prone to missed diagnosis or misdiagnosis due to inexperience, fatigue, or distraction. Deep learning (DL) techniques, which can automatically extract detailed imaging features from images, have been increasingly beneficial in the field of medical image-based assisted diagnosis. The present systematic review included a meta-analysis aimed at evaluating the accuracy of DL-assisted EUS for the diagnosis of pancreatic tumors. Methods We performed a comprehensive search for all studies relevant to EUS and DL in the following four databases, from their inception through February 2023: PubMed, Embase, Web of Science, and the Cochrane Library. Target studies were strictly screened based on specific inclusion and exclusion criteria, after which we performed a meta-analysis using Stata 16.0 to assess the diagnostic ability of DL and compare it with that of EUS practitioners. Any sources of heterogeneity were explored using subgroup and meta-regression analyses. Results A total of 10 studies, involving 3,529 patients and 34,773 training images, were included in the present meta-analysis. The pooled sensitivity was 93% (95% confidence interval [CI], 87-96%), the pooled specificity was 95% (95% CI, 89-98%), and the area under the summary receiver operating characteristic curve (AUC) was 0.98 (95% CI, 0.96-0.99). Conclusion DL-assisted EUS has high accuracy and clinical applicability for diagnosing pancreatic tumors. Systematic review registration https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023391853, identifier CRD42023391853.
Affiliation(s)
- Bing Lv
- School of Computer Science and Technology, Shandong University of Technology, Zibo, Shandong, China
- Kunhong Wang
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Ning Wei
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Feng Yu
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Tao Tao
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Yanting Shi
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
9
Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. [PMID: 35943350 PMCID: PMC10013637 DOI: 10.1093/neuonc/noac192]
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. Ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and 3 experienced radiologists read the MRI images without and with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001), but 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Lesions with small size, greater number, irregular shape, lower signal intensity, and nonbrain-surface location were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found in FNs for the BMDS. The FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, attention should be paid to FPs and to small lesions with lower enhancement, especially for less-experienced radiologists.
Affiliation(s)
- Xiao Luo, Yadi Yang, Shaohan Yin, Hui Li, Weijing Zhang, Guixiao Xu, Xiaohua Ban, Jing Li, Shanshan Lian, Cheng Zhang, Lidi Ma, Cuiping Lin, Yingwei Luo, Fan Zhou, Shiyuan Wang, Rong Zhang, Chuanmiao Xie: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Ying Sun: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weixiong Fan: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Dechun Zheng: Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li: Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China
- Dinggang Shen: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yaozong Gao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
10
A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023; 115:779-793. [PMID: 36289038 DOI: 10.1016/j.ijrobp.2022.09.068] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 08/09/2022] [Accepted: 09/07/2022] [Indexed: 01/19/2023]
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) data set containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent 2 consecutive courses of SRS, treatment planning MRIs from their initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in their second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases based upon their appearance in the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who only underwent a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMs, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79.
CONCLUSIONS Our CAD model, trained on a novel data set and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs, the lesion types most in need of human performance augmentation.
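The Dice coefficient of 0.79 cited above is the standard voxel-overlap measure between predicted and reference lesion masks; a minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |P intersect T| / (|P| + |T|) over binary voxel masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A value of 1.0 means perfect voxel overlap and 0.0 means none; segmentation studies in this list typically report values around 0.8.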
11
Chartrand G, Emiliani RD, Pawlowski SA, Markel DA, Bahig H, Cengarle-Samak A, Rajakesari S, Lavoie J, Ducharme S, Roberge D. Automated Detection of Brain Metastases on T1-Weighted MRI Using a Convolutional Neural Network: Impact of Volume Aware Loss and Sampling Strategy. J Magn Reson Imaging 2022; 56:1885-1898. [PMID: 35624544 DOI: 10.1002/jmri.28274] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 05/13/2022] [Accepted: 05/13/2022] [Indexed: 01/05/2023] Open
Abstract
BACKGROUND Detection of brain metastases (BM) and their segmentation for treatment planning could be optimized with machine learning methods. Convolutional neural networks (CNNs) are promising, but their trade-offs between sensitivity and precision frequently lead to missed small lesions. HYPOTHESIS Combining a volume-aware (VA) loss function and a VA sampling strategy could improve BM detection sensitivity. STUDY TYPE Retrospective. POPULATION A total of 530 radiation oncology patients (55% women) were split into a training/validation set (433 patients/1460 BM) and an independent test set (97 patients/296 BM). FIELD STRENGTH/SEQUENCE 1.5 T and 3 T, contrast-enhanced three-dimensional (3D) T1-weighted fast gradient echo sequences. ASSESSMENT Ground truth masks were based on radiotherapy treatment planning contours reviewed by experts. A U-Net inspired model was trained. Three loss functions (Dice, Dice + boundary, and VA) and two sampling methods (label and VA) were compared. Results were reported with Dice scores, volumetric error, lesion detection sensitivity, and precision. A detected voxel within the ground truth constituted a true positive. STATISTICAL TESTS McNemar's exact test to compare detected lesions between models; Pearson's correlation coefficient and Bland-Altman analysis to compare agreement between predicted and ground truth volumes. Statistical significance was set at P ≤ 0.05. RESULTS Combining VA loss and VA sampling performed best, with an overall sensitivity of 91% and precision of 81%. For BM in the 2.5-6 mm estimated sphere diameter range, the VA loss reduced false negatives by 58%, and VA sampling reduced them by a further 30%. In the same range, the boundary loss achieved the highest precision at 81%, but a low sensitivity (24%) and a 31% Dice loss. DATA CONCLUSION Accounting for BM size in the loss and sampling functions of a CNN may increase detection sensitivity for small BM.
Our pipeline, relying on a single contrast-enhanced T1-weighted MRI sequence, reached a detection sensitivity of 91% with an average of only 0.66 false positives per scan. EVIDENCE LEVEL 3 TECHNICAL EFFICACY: Stage 2.
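The abstract does not give the exact form of the volume-aware loss. One plausible sketch weights each lesion's soft-Dice term inversely by its volume, so that small metastases are not drowned out by large ones; this is an illustrative assumption, not the paper's definition:

```python
import numpy as np


def volume_aware_loss(probs, labels, lesion_ids, eps=1e-6):
    """Sketch of a volume-aware segmentation loss: per-lesion soft-Dice
    terms weighted inversely by lesion volume. Illustrative only; the
    cited paper's VA loss may differ in form.

    probs:      predicted foreground probabilities, flat array
    labels:     binary ground truth, flat array
    lesion_ids: integer lesion label per voxel (0 = background)
    """
    losses, weights = [], []
    for lid in np.unique(lesion_ids):
        if lid == 0:
            continue
        mask = lesion_ids == lid
        inter = (probs[mask] * labels[mask]).sum()
        denom = probs[mask].sum() + labels[mask].sum() + eps
        dice = 2.0 * inter / denom
        vol = labels[mask].sum()
        losses.append(1.0 - dice)
        weights.append(1.0 / (vol + eps))   # small lesions weigh more
    if not losses:
        return 0.0
    losses, weights = np.array(losses), np.array(weights)
    return float((weights * losses).sum() / weights.sum())
```

With plain Dice loss, a single large metastasis can dominate the gradient; the inverse-volume weighting makes a missed 3 mm lesion cost as much as a missed 30 mm one, which is the intuition behind volume-aware training.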
Affiliation(s)
- Daniel A Markel, Houda Bahig, David Roberge: Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Selvan Rajakesari: Department of Radiation Oncology, Hopital Charles Lemoyne, Greenfield Park, Québec, Canada
- Simon Ducharme: AFX Medical Inc., Montréal, Canada; Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montréal, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
12
Kato S, Amemiya S, Takao H, Yamashita H, Sakamoto N, Miki S, Watanabe Y, Suzuki F, Fujimoto K, Mizuki M, Abe O. Computer-aided detection improves brain metastasis identification on non-enhanced CT in less experienced radiologists. Acta Radiol 2022; 64:1958-1965. [PMID: 36426577 DOI: 10.1177/02841851221139124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Background Brain metastases (BMs) are the most common intracranial tumors causing neurological complications associated with significant morbidity and mortality. Purpose To evaluate the effect of computer-aided detection (CAD) on the performance of observers in detecting BMs on non-enhanced computed tomography (NECT). Material and Methods Three less experienced and three experienced radiologists interpreted 30 NECT scans with 89 BMs in 25 cases to detect BMs with and without the assistance of CAD. The observers' sensitivity, number of false positives (FPs), positive predictive value (PPV), and reading time with and without CAD were compared using paired t-tests. The sensitivities of CAD and the observers were compared using a one-sample t-test. Results With CAD, less experienced radiologists' sensitivity significantly increased from 27.7% ± 4.6% to 32.6% ± 4.8% (P = 0.007), while the experienced radiologists' sensitivity did not show a significant difference (from 33.3% ± 3.5% to 31.9% ± 3.7%; P = 0.54). There was no significant difference between conditions with and without CAD for FPs (less experienced radiologists: 23.0 ± 10.4 and 25.0 ± 9.3; P = 0.32; experienced radiologists: 18.3 ± 7.4 and 17.3 ± 6.7; P = 0.76) or PPVs (less experienced radiologists: 57.9% ± 8.3% and 50.9% ± 7.0%; P = 0.14; experienced radiologists: 61.8% ± 12.7% and 64.0% ± 12.1%; P = 0.69). There were no significant differences in reading time with and without CAD (85.0 ± 45.6 s and 73.7 ± 36.7 s; P = 0.09). The sensitivity of CAD was 47.2% (with a PPV of 8.9%), which was significantly higher than that of any radiologist (P < 0.001). Conclusion CAD improved BM detection sensitivity on NECT without increasing FPs or reading time among less experienced radiologists, but not among experienced radiologists.
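The with/without-CAD reader comparisons above rely on paired t-tests, since the same readers are measured under both conditions. A minimal SciPy sketch with made-up per-reader sensitivities (illustrative values only, not the study's data):

```python
from scipy import stats

# Hypothetical sensitivities (%) for three less experienced readers,
# without and with CAD assistance. These are invented numbers chosen
# only to illustrate the test; the study reports group means of
# 27.7% and 32.6%.
without_cad = [25.0, 28.2, 29.9]
with_cad = [31.1, 32.4, 34.3]

# Paired t-test: operates on the per-reader differences, which is why
# it is appropriate when each reader serves as their own control.
t_stat, p_value = stats.ttest_rel(with_cad, without_cad)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The pairing removes between-reader variability from the comparison, giving more power than an unpaired test for the same small number of readers.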
Affiliation(s)
- Shimpei Kato, Shiori Amemiya, Hidemasa Takao, Naoya Sakamoto, Soichiro Miki, Yusuke Watanabe, Fumio Suzuki, Kotaro Fujimoto, Masumi Mizuki, Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroshi Yamashita: Department of Radiology, Teikyo University Hospital, Kawasaki, Kanagawa, Japan
13
Huang Y, Bert C, Sommer P, Frey B, Gaipl U, Distel LV, Weissmann T, Uder M, Schmidt MA, Dörfler A, Maier A, Fietkau R, Putz F. Deep learning for brain metastasis detection and segmentation in longitudinal MRI data. Med Phys 2022; 49:5773-5786. [PMID: 35833351 DOI: 10.1002/mp.15863] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 06/22/2022] [Accepted: 06/28/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. Due to their tiny size and relatively low contrast, small brain metastases are very difficult to detect manually. With the recent development of deep learning technologies, several researchers have reported promising results in automated brain metastasis detection. However, detection sensitivity is still not high enough for tiny brain metastases, and integration into clinical practice is challenging with regard to differentiating true metastases from false positives. METHODS The DeepMedic network with the binary cross-entropy (BCE) loss is used as our baseline method. To improve brain metastasis detection performance, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates metastasis detection sensitivity and specificity at a (sub-)volume level. As sensitivity and precision are always a trade-off, either a high sensitivity or a high precision can be achieved for brain metastasis detection by adjusting the weights in the VSS loss, without a decline in the Dice score for segmented metastases. To reduce metastasis-like structures being detected as false positives, a temporal prior volume is proposed as an additional input of DeepMedic; the modified network is called DeepMedic+ for distinction. By combining a high-sensitivity VSS loss and a high-specificity loss for DeepMedic+, the majority of true positive metastases are confirmed with high specificity, while additional metastasis candidates in each patient are marked with high sensitivity for detailed expert evaluation. RESULTS Our proposed VSS loss improves the sensitivity of brain metastasis detection from 85.3% for DeepMedic with BCE to 97.5% for DeepMedic with VSS.
Alternatively, the precision is improved from 69.1% for DeepMedic with BCE to 98.7% for DeepMedic with VSS. Comparing DeepMedic+ with DeepMedic under the same VSS loss, 44.4% of the false positive metastases are eliminated in the high-sensitivity model, and the precision reaches 99.6% for the high-specificity model. The mean Dice coefficient for all metastases is about 0.81. With the ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false positive metastases per patient need further review, while the majority of true positive metastases are confirmed. CONCLUSIONS Our proposed VSS loss and temporal prior improve brain metastasis detection sensitivity and precision. The ensemble is able to distinguish high-confidence true positive metastases from candidates that require expert review or further follow-up, fitting the requirements of expert support in real clinical practice. This facilitates metastasis detection and segmentation for neuroradiologists in diagnostic applications and for radiation oncologists in therapeutic applications.
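The sensitivity/specificity weighting idea can be illustrated with a simple per-voxel loss in the spirit of classic sensitivity-specificity losses; note that the actual VSS loss scores errors at a (sub-)volume level, which this sketch does not reproduce:

```python
import numpy as np


def sens_spec_loss(probs, labels, r=0.9):
    """Sketch of a sensitivity-specificity trade-off loss.

    r weights the false-negative (sensitivity) term and 1 - r weights
    the false-positive (specificity) term: tuning r toward 1 favours
    high sensitivity, toward 0 high precision. Illustrative only.
    """
    labels = labels.astype(float)
    fg = labels.sum() + 1e-6          # foreground voxel count
    bg = (1.0 - labels).sum() + 1e-6  # background voxel count
    # squared error on lesion voxels -> penalises missed metastases
    sens_term = (((probs - labels) ** 2) * labels).sum() / fg
    # squared error on background voxels -> penalises false alarms
    spec_term = (((probs - labels) ** 2) * (1.0 - labels)).sum() / bg
    return r * sens_term + (1.0 - r) * spec_term
```

Training one model with r near 1 and another with r near 0, then ensembling them, mirrors the high-sensitivity/high-specificity pairing described above: lesions found by both models are high-confidence, while sensitivity-only detections become candidates for expert review.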
Affiliation(s)
- Yixing Huang, Christoph Bert, Philipp Sommer, Benjamin Frey, Udo Gaipl, Luitpold V Distel, Thomas Weissmann, Rainer Fietkau, Florian Putz: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Michael Uder: Institute of Radiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Manuel A Schmidt, Arnd Dörfler: Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
14
The Usefulness of Computer-Aided Detection of Brain Metastases on Contrast-Enhanced Computed Tomography Using Single-Shot Multibox Detector: Observer Performance Study. J Comput Assist Tomogr 2022; 46:786-791. [PMID: 35819922 DOI: 10.1097/rct.0000000000001339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE This study aimed to test the usefulness of computer-aided detection (CAD) for the detection of brain metastasis (BM) on contrast-enhanced computed tomography. METHODS The test data set included whole-brain axial contrast-enhanced computed tomography images of 25 cases with 62 BMs and 5 cases without BM. Six radiologists from 3 institutions with 2 to 4 years of experience independently reviewed the cases, both with and without CAD assistance. Sensitivity, positive predictive value, number of false positives, and reading time were compared between the conditions using paired t tests. Subanalysis was also performed for groups of lesions divided according to size. A P value <0.05 was considered statistically significant. RESULTS With CAD, sensitivity significantly increased from 80.4% to 83.9% (P = 0.04), whereas positive predictive value significantly decreased from 88.7% to 84.8% (P = 0.03). Reading time with and without CAD was 112 and 107 seconds, respectively (P = 0.38), and the number of false positives was 10.5 with CAD and 7.0 without CAD (P = 0.053). Sensitivity significantly improved for 6- to 12-mm lesions, from 71.2% without CAD to 80.3% with CAD (P = 0.02). The sensitivity of the CAD system (95.2%) was significantly higher than that of any reader (with CAD: P = 0.01; without CAD: P = 0.005). CONCLUSIONS Computer-aided detection significantly improved BM detection sensitivity without prolonging reading time, while only marginally increasing false positives.
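The 6- to 12-mm subanalysis above stratifies sensitivity by lesion size; a minimal sketch of such a stratification (hypothetical helper, not the study's code):

```python
def sensitivity_by_size(lesions, bins=((0, 6), (6, 12), (12, float("inf")))):
    """Group lesions by diameter and report per-bin detection sensitivity.

    lesions: list of (diameter_mm, detected) pairs
    bins:    half-open (lower, upper) diameter ranges, lower <= d < upper
    """
    out = {}
    for lo, hi in bins:
        group = [item for item in lesions if lo <= item[0] < hi]
        if group:  # skip empty bins rather than divide by zero
            out[(lo, hi)] = sum(1 for _, det in group if det) / len(group)
    return out
```

Reporting sensitivity per size bin, rather than one pooled figure, is what reveals effects like the improvement for 6- to 12-mm lesions here.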
15
Kikuchi Y, Togao O, Kikuchi K, Momosaka D, Obara M, Van Cauteren M, Fischer A, Ishigami K, Hiwatashi A. A deep convolutional neural network-based automatic detection of brain metastases with and without blood vessel suppression. Eur Radiol 2022; 32:2998-3005. [PMID: 34993572 DOI: 10.1007/s00330-021-08427-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2021] [Revised: 10/12/2021] [Accepted: 10/18/2021] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To develop an automated model to detect brain metastases using a convolutional neural network (CNN) and volume isotropic simultaneous interleaved bright-blood and black-blood examination (VISIBLE), and to compare its diagnostic performance with an observer test. METHODS This retrospective study included patients with clinical suspicion of brain metastases imaged with VISIBLE from March 2016 to July 2019. Images with and without blood vessel suppression were used to train an existing CNN (DeepMedic). Diagnostic performance was evaluated using sensitivity and false-positive results per case (FPs/case). We compared the diagnostic performance of the CNN model with that of twelve radiologists. RESULTS Fifty patients (30 males and 20 females; age range 29-86 years; mean 63.3 ± 12.8 years; a total of 165 metastases) who were clinically diagnosed with brain metastasis on follow-up were used for training. The sensitivity of our model was 91.7%, higher than that of the observer test (mean ± standard deviation, 88.7 ± 3.7%). The number of FPs/case in our model was 1.5, greater than that of the observer test (0.17 ± 0.09). CONCLUSIONS Our model, combining VISIBLE and a CNN to diagnose brain metastases, showed higher sensitivity than the radiologists. The number of FPs/case from our model was greater than that of the observer test, but lower than in most previous deep learning studies. KEY POINTS • Our convolutional neural network based on bright-blood and black-blood examination showed higher sensitivity for brain metastases than the observer test. • The number of false positives per case from our model was greater than in the observer test, but lower than in most previous studies. • False positives from our model occurred in vessels, the choroid plexus, and image noise, or from unknown causes.
Affiliation(s)
- Yoshitomo Kikuchi, Kazufumi Kikuchi, Daichi Momosaka, Kousei Ishigami, Akio Hiwatashi: Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Osamu Togao: Department of Molecular Imaging and Diagnosis, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Makoto Obara: MR Clinical Science, Philips Japan Ltd, Tokyo, Japan
16
Krauze AV, Zhuge Y, Zhao R, Tasci E, Camphausen K. AI-Driven Image Analysis in Central Nervous System Tumors-Traditional Machine Learning, Deep Learning and Hybrid Models. JOURNAL OF BIOTECHNOLOGY AND BIOMEDICINE 2022; 5:1-19. [PMID: 35106480 PMCID: PMC8802234 DOI: 10.26502/jbb.2642-91280046] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
The interpretation of imaging in medicine in general, and in oncology specifically, remains problematic due to several limitations: the need to incorporate detailed clinical history, patient- and disease-specific history, clinical exam features, and previous and ongoing treatment, and the dependency on reproducible human interpretation of multiple factors with incomplete data linkage. To standardize reporting, minimize bias, expedite management, and improve outcomes, the use of Artificial Intelligence (AI) has gained significant prominence in imaging analysis. In oncology, AI methods have accordingly been explored in most cancer types, with ongoing progress in employing AI for oncology treatment, assessing treatment response, and understanding and communicating prognosis. Challenges remain: available data sets are limited, imaging changes vary over time, and analysis approaches are increasingly heterogeneous. We review the imaging analysis workflow and examine how hand-crafted features, also referred to as traditional Machine Learning (ML), Deep Learning (DL) approaches, and hybrid analyses are being employed in AI-driven imaging analysis of central nervous system tumors. ML, DL, and hybrid approaches coexist, and their combination may produce superior results, although data in this space are still novel and conclusions and pitfalls have yet to be fully explored. We note growing technical complexity that may become increasingly separated from the clinic, and stress the acute need for clinician engagement to guide progress and ensure that conclusions derived from AI-driven imaging analysis receive the same level of scrutiny lent to other avenues of clinical research.
Affiliation(s)
- A V Krauze, Y Zhuge, E Tasci, K Camphausen: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- R Zhao: University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, Canada
17
Yin S, Luo X, Yang Y, Shao Y, Ma L, Lin C, Yang Q, Wang D, Luo Y, Mai Z, Fan W, Zheng D, Li J, Cheng F, Zhang Y, Zhong X, Shen F, Shao G, Wu J, Sun Y, Luo H, Li C, Gao Y, Shen D, Zhang R, Xie C. OUP accepted manuscript. Neuro Oncol 2022; 24:1559-1570. [PMID: 35100427 PMCID: PMC9435500 DOI: 10.1093/neuonc/noac025] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Affiliation(s)
- Ying Shao
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Lidi Ma
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Cuiping Lin
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Qiuxia Yang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Deling Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yingwei Luo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhijun Mai
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Weixiong Fan
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Dechun Zheng
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Fengyan Cheng
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Yuhui Zhang
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Xinwei Zhong
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, China
- Fangmin Shen
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Guohua Shao
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Jiahao Wu
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Huiyan Luo
- Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Chaofeng Li
- Artificial Intelligence Laboratory, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yaozong Gao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Rong Zhang
- Rong Zhang, PhD, Department of Radiology, 651 Dongfeng Road East, Yuexiu District, Guangzhou 510060, P.R. China
- Chuanmiao Xie
- Corresponding Author: Chuanmiao Xie, PhD, Department of Radiology, 651 Dongfeng Road East, Yuexiu District, Guangzhou 510060, P.R. China
18
Cho J, Kim YJ, Sunwoo L, Lee GP, Nguyen TQ, Cho SJ, Baik SH, Bae YJ, Choi BS, Jung C, Sohn CH, Han JH, Kim CY, Kim KG, Kim JH. Deep Learning-Based Computer-Aided Detection System for Automated Treatment Response Assessment of Brain Metastases on 3D MRI. Front Oncol 2021; 11:739639. [PMID: 34778056 PMCID: PMC8579083 DOI: 10.3389/fonc.2021.739639] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 09/30/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Although accurate treatment response assessment for brain metastases (BMs) is crucial, it is highly labor intensive. This retrospective study aimed to develop a computer-aided detection (CAD) system for automated BM detection and treatment response evaluation using deep learning. METHODS We included 214 consecutive MRI examinations of 147 patients with BM obtained between January 2015 and August 2016. These were divided into the training (174 MR images from 127 patients) and test datasets according to temporal separation (temporal test set #1; 40 MR images from 20 patients). For external validation, 24 patients with BM and 11 patients without BM from other institutions were included (geographic test set). In addition, we included 12 MRIs from BM patients obtained between August 2017 and March 2020 (temporal test set #2). Detection sensitivity, Dice similarity coefficient (DSC) for segmentation, and agreement in one-dimensional and volumetric Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) criteria between CAD and radiologists were assessed. RESULTS In temporal test set #1, the sensitivity was 75.1% (95% confidence interval [CI]: 69.6%, 79.9%), mean DSC was 0.69 ± 0.22, and the false-positive (FP) rate per scan was 0.8 for BM ≥ 5 mm. Agreement in the RANO-BM criteria was moderate (κ, 0.52) for one-dimensional and substantial (κ, 0.68) for volumetric measurements. In the geographic test set, sensitivity was 87.7% (95% CI: 77.2%, 94.5%), mean DSC was 0.68 ± 0.20, and the FP rate per scan was 1.9 for BM ≥ 5 mm. In temporal test set #2, sensitivity was 94.7% (95% CI: 74.0%, 99.9%), mean DSC was 0.82 ± 0.20, and the FP rate per scan was 0.5 (6/12) for BM ≥ 5 mm. CONCLUSIONS Our CAD showed potential for automated treatment response assessment of BM ≥ 5 mm.
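The Dice similarity coefficient (DSC) reported above measures voxel-wise overlap between a CAD segmentation and the reference annotation. As an illustration only (not the authors' code), a minimal sketch:

```python
# Illustrative sketch of the Dice similarity coefficient (DSC):
# DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical).

def dice_coefficient(pred_voxels, ref_voxels):
    """DSC between two collections of segmented voxel coordinates."""
    pred, ref = set(pred_voxels), set(ref_voxels)
    if not pred and not ref:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(pred & ref) / (len(pred) + len(ref))

# Toy 2D example: two 3-voxel lesion masks sharing 2 voxels.
pred = [(0, 0), (0, 1), (1, 1)]
ref = [(0, 1), (1, 1), (2, 1)]
print(dice_coefficient(pred, ref))  # 2*2/(3+3) ≈ 0.667
```

In practice the masks are 3D binary arrays, but the set formulation above is equivalent for the overlap computation.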
Affiliation(s)
- Jungheum Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, South Korea
- Gi Pyo Lee
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
- Toan Quang Nguyen
- Department of Radiology, Vietnam National Cancer Hospital, Hanoi, Vietnam
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Cheolkyu Jung
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Chul-Ho Sohn
- Department of Radiology, Seoul National University Hospital, Seoul, South Korea
- Jung-Ho Han
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea
- Chae-Yong Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
- Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
19
Williams S, Layard Horsfall H, Funnell JP, Hanrahan JG, Khan DZ, Muirhead W, Stoyanov D, Marcus HJ. Artificial Intelligence in Brain Tumour Surgery-An Emerging Paradigm. Cancers (Basel) 2021; 13:5010. [PMID: 34638495 PMCID: PMC8508169 DOI: 10.3390/cancers13195010] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/02/2021] [Accepted: 10/03/2021] [Indexed: 01/01/2023] Open
Abstract
Artificial intelligence (AI) platforms have the potential to cause a paradigm shift in brain tumour surgery. Brain tumour surgery augmented with AI can result in safer and more effective treatment. In this review article, we explore the current and future role of AI for patients undergoing brain tumour surgery, including aiding diagnosis, optimising the surgical plan, providing support during the operation, and better predicting prognosis. Finally, we discuss barriers to successful clinical implementation and the ethical concerns, and provide our perspective on how the field could be advanced.
Affiliation(s)
- Simon Williams
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Hugo Layard Horsfall
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Jonathan P. Funnell
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- John G. Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danyal Z. Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- William Muirhead
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Hani J. Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
20
Amemiya S, Takao H, Kato S, Yamashita H, Sakamoto N, Abe O. Feature-fusion improves MRI single-shot deep learning detection of small brain metastases. J Neuroimaging 2021; 32:111-119. [PMID: 34388855 DOI: 10.1111/jon.12916] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 07/25/2021] [Accepted: 07/26/2021] [Indexed: 12/01/2022] Open
Abstract
BACKGROUND AND PURPOSE To examine whether a feature-fusion (FF) method improves a single-shot detector's (SSD's) detection of small brain metastases on contrast-enhanced (CE) T1-weighted MRI. METHODS The study included 234 MRI scans from 234 patients (64.3 ± 12.0 years; 126 men). The ground-truth annotation was performed semiautomatically. SSDs with and without an FF module were developed and trained using 178 scans. The detection performance was evaluated at the SSDs' 50% confidence threshold on the remaining 56 scans using sensitivity, positive-predictive value (PPV), and false positives (FP) per scan. RESULTS FF-SSD achieved an overall sensitivity of 86.0% (95% confidence interval [CI]: [83.0%, 85.6%]; 196/228) and 46.8% PPV (95% CI: [42.0%, 46.3%]; 196/434), with 4.3 FP (95% CI: [4.3, 4.9]). Lesions smaller than 3 mm had 45.8% sensitivity (95% CI: [36.1%, 45.5%]; 22/48) with 2.0 FP (95% CI: [1.9, 2.1]). Lesions measuring 3-6 mm had 92.3% sensitivity (95% CI: [86.5%, 92.0%]; 48/52) with 1.8 FP (95% CI: [1.7, 2.2]). Lesions larger than 6 mm had 98.4% sensitivity (95% CI: [97.8%, 99.4%]; 126/128) with 0.5 FP (95% CI: [0.5, 0.8]) per scan. FF-SSD had a significantly higher sensitivity for lesions < 3 mm (p = 0.008, t = 3.53) than the baseline SSD, while the overall PPV was similar (p = 0.06, t = -2.16). A similar trend was observed even when the detector's confidence threshold was lowered to 0.2, at which FF-SSD's sensitivity was 91.2% and the FP was 9.5. CONCLUSIONS The FF-SSD algorithm identified brain metastases on CE T1-weighted MRI with high accuracy.
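The three detection metrics quoted throughout this abstract (sensitivity, PPV, and false positives per scan) derive directly from lesion-level true-positive, false-positive, and false-negative counts; a minimal sketch with invented counts (not the study's data):

```python
# Illustrative sketch: sensitivity, positive-predictive value (PPV), and
# false positives (FP) per scan from lesion-level detection counts.
# The counts below are made up for demonstration, not the study's data.

def detection_metrics(tp, fp, fn, n_scans):
    sensitivity = tp / (tp + fn)   # fraction of true lesions that were found
    ppv = tp / (tp + fp)           # fraction of detections that are real lesions
    fp_per_scan = fp / n_scans     # false-alarm burden per examination
    return sensitivity, ppv, fp_per_scan

sens, ppv, fpr = detection_metrics(tp=90, fp=30, fn=10, n_scans=50)
print(f"sensitivity={sens:.3f}, PPV={ppv:.3f}, FP/scan={fpr:.2f}")
```

Note the trade-off visible in the reported results: lowering the detector's confidence threshold raises sensitivity but inflates FP per scan.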
Affiliation(s)
- Shiori Amemiya
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidemasa Takao
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Shimpei Kato
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, Kanagawa, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
21
Contrast-Enhanced Black Blood MRI Sequence Is Superior to Conventional T1 Sequence in Automated Detection of Brain Metastases by Convolutional Neural Networks. Diagnostics (Basel) 2021; 11:1016. [PMID: 34206103 PMCID: PMC8230135 DOI: 10.3390/diagnostics11061016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 05/18/2021] [Accepted: 05/28/2021] [Indexed: 12/11/2022] Open
Abstract
Background: In magnetic resonance imaging (MRI), automated detection of brain metastases with convolutional neural networks (CNNs) represents an extraordinary challenge, because small lesions can resemble brain vessels and other confounders. Literature reporting high false-positive rates with conventional contrast-enhanced (CE) T1 sequences calls their usefulness in clinical routine into question. CE black blood (BB) sequences may overcome these limitations by suppressing contrast-enhanced structures, thus facilitating lesion detection. This study compared CNN performance on conventional CE T1 and BB sequences and tested for objective improvement of brain lesion detection. Methods: We included a subgroup of 127 consecutive patients, receiving both CE T1 and BB sequences, referred for MRI concerning metastatic spread to the brain. A pretrained CNN was retrained with a customized monolayer classifier using either T1 or BB scans of brain lesions. Results: CE T1 imaging-based training resulted in an internal validation accuracy of 85.5% vs. 92.3% for BB imaging (p < 0.01). In holdout validation analysis, T1 image-based prediction presented poor specificity and sensitivity, with an AUC of 0.53 compared to 0.87 for BB-imaging-based prediction. Conclusions: For detection of brain lesions with a CNN, BB MRI represents a highly effective input type compared to conventional CE T1 MRI. Use of BB MRI can overcome current limitations of automated brain lesion detection, and the objectively excellent performance of our CNN suggests routine use of BB sequences for radiological analysis.
22
Deike-Hofmann K, Dancs D, Paech D, Schlemmer HP, Maier-Hein K, Bäumer P, Radbruch A, Götz M. Pre-examinations Improve Automated Metastases Detection on Cranial MRI. Invest Radiol 2021; 56:320-327. [PMID: 33259442 DOI: 10.1097/rli.0000000000000745] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
MATERIALS AND METHODS Our local ethics committee approved this retrospective monocenter study. First, a dual-time approach was assessed, for which the CNN was provided sequences of the MRI that initially depicted new MM (diagnosis MRI) as well as of a prediagnosis MRI: inclusion of only contrast-enhanced T1-weighted images (CNNdual_ce) was compared with inclusion of also the native T1-weighted images, T2-weighted images, and FLAIR sequences of both time points (CNNdual_all). Second, results were compared with the corresponding single-time approaches, in which the CNN was provided exclusively the respective sequences of the diagnosis MRI. Casewise diagnostic performance parameters were calculated from 5-fold cross-validation. RESULTS In total, 94 cases with 494 MMs were included. Overall, the highest diagnostic performance was achieved by inclusion of only the contrast-enhanced T1-weighted images of the diagnosis and of a prediagnosis MRI (CNNdual_ce: sensitivity = 73%, PPV = 25%, F1-score = 36%). Using exclusively contrast-enhanced T1-weighted images as input resulted in significantly fewer false positives (FPs) compared with inclusion of further sequences beyond contrast-enhanced T1-weighted images (FPs = 5/7 for CNNdual_ce/CNNdual_all, P < 1e-5). Comparison of the contrast-enhanced dual- and mono-time approaches revealed that exclusion of the prediagnosis MRI significantly increased FPs (FPs = 5/10 for CNNdual_ce/CNNce, P < 1e-9). Approaches with only native sequences were clearly inferior to CNNs that were provided contrast-enhanced sequences. CONCLUSIONS Automated MM detection on contrast-enhanced T1-weighted images performed with high sensitivity. Frequent FPs due to artifacts and vessels were significantly reduced by additional inclusion of a prediagnosis MRI, but not by inclusion of further sequences beyond contrast-enhanced T1-weighted images. Future studies might investigate different change-detection architectures for computer-aided detection.
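The casewise F1-score reported above is the harmonic mean of sensitivity (recall) and PPV (precision); because the published values are rounded, recomputing from them differs slightly from the reported 36%. A minimal sketch, with values used for illustration only:

```python
# Illustrative sketch: F1-score as the harmonic mean of sensitivity (recall)
# and positive-predictive value (precision). Inputs are examples only.

def f1_score(sensitivity, ppv):
    if sensitivity + ppv == 0:
        return 0.0  # degenerate case: no correct detections at all
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# A detector with high sensitivity but many false positives still scores low,
# because the harmonic mean is dominated by the smaller of the two values.
print(round(f1_score(0.73, 0.25), 2))  # prints 0.37
```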
Affiliation(s)
- Dorottya Dancs
- From the Department of Radiology, German Cancer Research Center, Heidelberg
- Daniel Paech
- From the Department of Radiology, German Cancer Research Center, Heidelberg
- Klaus Maier-Hein
- Department for Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Philipp Bäumer
- From the Department of Radiology, German Cancer Research Center, Heidelberg
- Michael Götz
- Department for Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
23
Yoo Y, Ceccaldi P, Liu S, Re TJ, Cao Y, Balter JM, Gibson E. Evaluating deep learning methods in detecting and segmenting different sizes of brain metastases on 3D post-contrast T1-weighted images. J Med Imaging (Bellingham) 2021; 8:037001. [PMID: 34041305 PMCID: PMC8140611 DOI: 10.1117/1.jmi.8.3.037001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 04/28/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: We investigate the impact of various deep-learning-based methods for detecting and segmenting metastases of different lesion volumes on 3D brain MR images. Approach: A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak-learner fusion of the prediction features generated by the 2.5D and the 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performances were analyzed based on lesion volume, total metastatic volume per patient, and number of lesions per patient. Results: The detection performance of the 2.5D and 3D U-Net methods had recall of > 0.83 and precision of > 0.44 for lesion volumes > 0.3 cm³ but deteriorated as metastasis size decreased below 0.3 cm³, to 0.58-0.74 in recall and 0.16-0.25 in precision. Comparing the two U-Nets' detection capability, high precision was achieved by the 2.5D network, but high recall was achieved by the 3D network for all lesion sizes. The weak-learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; in particular, it increased precision to 0.83 for lesion volumes of 0.1-0.3 cm³ but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either the small or large metastases, presumably because of the limited data size. Conclusions: Our study relates the performance of four deep learning methods to lesion size, total metastasis volume, and number of lesions per patient, providing insight into further development of deep learning networks.
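The size-stratified analysis above amounts to binning lesions by volume and computing recall within each bin; a minimal sketch with invented lesions (not the study's data):

```python
# Illustrative sketch: lesion-level recall stratified by lesion volume.
# Each lesion is a (volume_cm3, detected) pair; both values are invented.

def recall_by_volume(lesions, bins):
    """lesions: list of (volume_cm3, detected: bool); bins: list of (lo, hi)."""
    out = {}
    for lo, hi in bins:
        hits = [detected for volume, detected in lesions if lo <= volume < hi]
        # Recall within the bin, or None when the bin holds no lesions.
        out[(lo, hi)] = sum(hits) / len(hits) if hits else None
    return out

lesions = [(0.05, False), (0.2, True), (0.25, True), (0.9, True), (2.0, True)]
bins = [(0.0, 0.1), (0.1, 0.3), (0.3, float("inf"))]
print(recall_by_volume(lesions, bins))
# {(0.0, 0.1): 0.0, (0.1, 0.3): 1.0, (0.3, inf): 1.0}
```

Stratifying this way exposes the pattern the abstract describes: overall recall can look strong while the smallest-volume bin lags far behind.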
Affiliation(s)
- Youngjin Yoo
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Pascal Ceccaldi
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Siqi Liu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Thomas J. Re
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Yue Cao
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- University of Michigan, Department of Biomedical Engineering, Ann Arbor, Michigan, United States
- James M. Balter
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- University of Michigan, Department of Biomedical Engineering, Ann Arbor, Michigan, United States
- Eli Gibson
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
24
Robust performance of deep learning for automatic detection and segmentation of brain metastases using three-dimensional black-blood and three-dimensional gradient echo imaging. Eur Radiol 2021; 31:6686-6695. [PMID: 33738598 DOI: 10.1007/s00330-021-07783-3] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 12/22/2020] [Accepted: 02/12/2021] [Indexed: 10/21/2022]
Abstract
OBJECTIVES To evaluate whether a deep learning (DL) model using both three-dimensional (3D) black-blood (BB) imaging and 3D gradient echo (GRE) imaging may improve the detection and segmentation performance for brain metastases compared to that using only 3D GRE imaging. METHODS A total of 188 patients with brain metastases (917 lesions) who underwent a brain metastasis MRI protocol including contrast-enhanced 3D BB and 3D GRE were included in the training set. DL models based on 3D U-Net were constructed. The models were validated in a test set consisting of 45 patients with brain metastases (203 lesions) and 49 patients without brain metastases. RESULTS The combined 3D BB and 3D GRE model yielded better performance than the 3D GRE model (sensitivities of 93.1% vs 76.8%, p < 0.001), and this effect was significantly stronger in subgroups with small metastases (p for interaction < 0.001). For metastases < 3 mm, ≥ 3 mm and < 10 mm, and ≥ 10 mm, the sensitivities were 82.4%, 93.2%, and 100%, respectively. The combined 3D BB and 3D GRE model showed a false-positive rate per case of 0.59 in the test set. The combined 3D BB and 3D GRE model showed a Dice coefficient of 0.822, while the 3D GRE model showed a lower Dice coefficient of 0.756. CONCLUSIONS The combined 3D BB and 3D GRE DL model may improve the detection and segmentation of brain metastases, especially small metastases. KEY POINTS • The combined 3D BB and 3D GRE model yielded better performance for the detection of brain metastases than the 3D GRE model (p < 0.001), with sensitivities of 93.1% and 76.8%, respectively. • The combined 3D BB and 3D GRE model showed a false-positive rate per case of 0.59 in the test set. • The combined 3D BB and 3D GRE model showed a Dice coefficient of 0.822, while the 3D GRE model showed a lower Dice coefficient of 0.756.
25
Vasey B, Ursprung S, Beddoe B, Taylor EH, Marlow N, Bilbro N, Watkinson P, McCulloch P. Association of Clinician Diagnostic Performance With Machine Learning-Based Decision Support Systems: A Systematic Review. JAMA Netw Open 2021; 4:e211276. [PMID: 33704476 PMCID: PMC7953308 DOI: 10.1001/jamanetworkopen.2021.1276] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
IMPORTANCE An increasing number of machine learning (ML)-based clinical decision support systems (CDSSs) are described in the medical literature, but this research focuses almost entirely on comparing CDSSs directly with clinicians (human vs computer). Little is known about the outcomes of these systems when used as adjuncts to human decision-making (human vs human with computer). OBJECTIVES To conduct a systematic review to investigate the association between the interactive use of ML-based diagnostic CDSSs and clinician performance, and to examine the extent of the CDSSs' human factors evaluation. EVIDENCE REVIEW A search of MEDLINE, Embase, PsycINFO, and grey literature was conducted for the period between January 1, 2010, and May 31, 2019. Peer-reviewed studies published in English comparing human clinician performance with and without interactive use of an ML-based diagnostic CDSS were included. All metrics used to assess human performance were considered as outcomes. The risk of bias was assessed using Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) and Risk of Bias in Non-Randomised Studies-Intervention (ROBINS-I). Narrative summaries were produced for the main outcomes. Given the heterogeneity of medical conditions, outcomes of interest, and evaluation metrics, no meta-analysis was performed. FINDINGS A total of 8112 studies were initially retrieved and 5154 abstracts were screened; of these, 37 studies met the inclusion criteria. The median number of participating clinicians was 4 (interquartile range, 3-8). Of the 107 results that reported statistical significance, 54 (50%) were increased by the use of CDSSs, 4 (4%) were decreased, and 49 (46%) showed no change or an unclear change. In the subgroup of studies carried out in representative clinical settings, no association between the use of ML-based diagnostic CDSSs and improved clinician performance could be observed. Interobserver agreement was a commonly reported outcome whose change was most strongly associated with CDSS use. Four studies (11%) reported on user feedback, and, in all but 1 case, clinicians decided to override at least some of the algorithms' recommendations. Twenty-eight studies (76%) were rated as having a high risk of bias in at least 1 of the 4 QUADAS-2 core domains, and 6 studies (16%) were considered to be at serious or critical risk of bias using ROBINS-I. CONCLUSIONS AND RELEVANCE This systematic review found only sparse evidence that the use of ML-based CDSSs is associated with improved clinician diagnostic performance. Most studies had a low number of participants, were at high or unclear risk of bias, and showed little or no consideration for human factors. Caution should be exercised when estimating the current potential of ML to improve human diagnostic performance, and more comprehensive evaluation should be conducted before deploying ML-based CDSSs in clinical settings. The results highlight the importance of considering supported human decisions as end points rather than merely the stand-alone CDSS outputs.
Affiliation(s)
- Baptiste Vasey
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
- Stephan Ursprung
- Department of Radiology, University of Cambridge, Cambridge, United Kingdom
- Benjamin Beddoe
- Faculty of Medicine, Imperial College London, London, United Kingdom
- Elliott H. Taylor
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
- Neale Marlow
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
- Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Nicole Bilbro
- Department of Surgery, Maimonides Medical Center, Brooklyn, New York
- Peter Watkinson
- Critical Care Research Group, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Peter McCulloch
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
26
Cho SJ, Sunwoo L, Baik SH, Bae YJ, Choi BS, Kim JH. Brain metastasis detection using machine learning: a systematic review and meta-analysis. Neuro Oncol 2021; 23:214-225. [PMID: 33075135 PMCID: PMC7906058 DOI: 10.1093/neuonc/noaa232] [Citation(s) in RCA: 61] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Accurate detection of brain metastasis (BM) is important for cancer patients. We aimed to systematically review the performance and quality of machine-learning-based BM detection on MRI in the relevant literature. METHODS A systematic literature search was performed for relevant studies reported before April 27, 2020. We assessed the quality of the studies using modified tailored questionnaires of the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria and the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Pooled detectability was calculated using an inverse-variance weighting model. RESULTS A total of 12 studies were included, which showed a clear transition from classical machine learning (cML) to deep learning (DL) after 2018. The studies on DL used a larger sample size than those on cML. The cML and DL groups also differed in the composition of the dataset, and technical details such as data augmentation. The pooled proportions of detectability of BM were 88.7% (95% CI, 84-93%) and 90.1% (95% CI, 84-95%) in the cML and DL groups, respectively. The false-positive rate per person was lower in the DL group than the cML group (10 vs 135, P < 0.001). In the patient selection domain of QUADAS-2, three studies (25%) were designated as high risk due to non-consecutive enrollment and arbitrary exclusion of nodules. CONCLUSION A comparable detectability of BM with a low false-positive rate per person was found in the DL group compared with the cML group. Improvements are required in terms of quality and study design.
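The pooled detectability above was computed with an inverse-variance weighting model, in which each study's proportion is weighted by the reciprocal of its estimated variance. A minimal sketch with invented study counts (a generic fixed-effect pooling, not necessarily the authors' exact implementation):

```python
# Illustrative sketch of inverse-variance weighted pooling of proportions.
# Each study contributes (detected lesions, total lesions); its proportion is
# weighted by 1 / var, with var = p(1-p)/n. Study counts below are invented.
import math

def pooled_proportion(studies):
    """studies: list of (detected, total); returns (pooled estimate, 95% CI)."""
    weights, props = [], []
    for detected, total in studies:
        p = detected / total
        var = p * (1 - p) / total  # binomial variance of the proportion
        weights.append(1 / var)    # note: breaks down for p = 0 or p = 1
        props.append(p)
    pooled = sum(w * p for w, p in zip(weights, props)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, ci = pooled_proportion([(85, 100), (180, 200), (45, 50)])
print(f"pooled detectability = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Larger, more precise studies thus dominate the pooled estimate, which is the intended behavior of the inverse-variance scheme.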
Affiliation(s)
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
- Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
- Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
- Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
- Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
27
Amemiya S, Takao H, Kato S, Yamashita H, Sakamoto N, Abe O. Automatic detection of brain metastases on contrast-enhanced CT with deep-learning feature-fused single-shot detectors. Eur J Radiol 2021; 136:109577. [PMID: 33550213 DOI: 10.1016/j.ejrad.2021.109577] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 01/03/2021] [Accepted: 01/27/2021] [Indexed: 12/24/2022]
Abstract
PURPOSE Despite the potential usefulness, no automatic detector is available for brain metastases on contrast-enhanced CT (CECT). The study aims to develop and investigate deep learning-based detectors for brain metastases detection on CECT. METHOD The study included 127 CECTs from 127 patients (65.5 years ± 11.1; 87 men). The ground-truth annotation was performed semi-automatically by three radiologists, by applying connected-component analysis to the binarized dataset. Single-shot detector (SSD) algorithms, with and without a feature-fusion module, were developed and trained using 97 scans. The performance was evaluated at the detector's 50% confidence threshold on the remaining 30 scans using sensitivity, positive predictive value (PPV), and the false-positive rate per scan (FPR). RESULTS Feature-fused SSD achieved an overall sensitivity of 88.1% (95% confidence interval [CI]: [85.2%, 88.6%]; 214/243) and PPV of 36.0% (95% CI: [33.7%, 37.1%]; 233/648), with an FPR of 13.8 (95% CI: [12.7, 15.0]). Lesions < 3 mm had a sensitivity of 23.1% (95% CI: [21.2%, 40.0%]; 3/13), with an FPR of 0.2 (95% CI: [0.23, 0.65]). Lesions measuring 3-6 mm had a sensitivity of 80.0% (95% CI: [76.0%, 79.8%]; 60/75), with an FPR of 5.8 (95% CI: [5.0, 6.2]). Lesions > 6 mm had a sensitivity of 97.4% (95% CI: [94.1%, 97.4%]; 151/155), with an FPR of 7.9 (95% CI: [7.2, 8.5]). Feature-fused SSD had a significantly higher overall sensitivity (p = 0.03, t = 2.75) and sensitivity for lesions < 3 mm (p = 0.002, t = 4.49) than baseline SSD, while the overall PPV was similar (p = 0.96, t = -0.02). CONCLUSIONS The SSD algorithm identified brain metastases on CECT with reasonable accuracy for lesions > 3 mm without pre-/post-processing.
Affiliation(s)
- Shiori Amemiya, Hidemasa Takao, Hiroshi Yamashita, Naoya Sakamoto, Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shimpei Kato: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan; Department of Radiology, Juntendo University Hospital, Japan
28
Álvarez-Machancoses Ó, DeAndrés Galiana EJ, Cernea A, Fernández de la Viña J, Fernández-Martínez JL. On the Role of Artificial Intelligence in Genomics to Enhance Precision Medicine. Pharmgenomics Pers Med 2020; 13:105-119. [PMID: 32256101 PMCID: PMC7090191 DOI: 10.2147/pgpm.s205082] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 02/17/2020] [Indexed: 12/21/2022]
Abstract
The complexity of orphan diseases (those lacking an effective treatment), together with the high dimensionality of the genetic data used in their analysis and the high degree of uncertainty about the mechanisms and genetic pathways involved in their development, motivates the use of advanced artificial intelligence techniques combined with in-depth knowledge of molecular biology, which is crucial for finding plausible solutions in drug design, including drug repositioning. In particular, we show that robust deep sampling of the altered genetics yields meaningful results and dramatically decreases the cost of research and development in drug design, with a very positive influence on the use of precision medicine and on patient outcomes. The target-centric approach, and the use of strong prior hypotheses that are not matched against reality (disease genetic data), are undoubtedly the cause of the high number of drug-design failures and attrition rates. Sampling and prediction under uncertain conditions cannot be avoided in the development of precision medicine.
Affiliation(s)
- Óscar Álvarez-Machancoses: Group of Inverse Problems, Optimization and Machine Learning, Department of Mathematics, University of Oviedo, Oviedo 33007, Spain; DeepBiosInsights, NETGEV (Maof Tech), Dimona 8610902, Israel
- Enrique J DeAndrés Galiana, Ana Cernea, J Fernández de la Viña: Group of Inverse Problems, Optimization and Machine Learning, Department of Mathematics, University of Oviedo, Oviedo 33007, Spain
29
Zhang M, Young GS, Chen H, Li J, Qin L, McFaline-Figueroa JR, Reardon DA, Cao X, Wu X, Xu X. Deep-Learning Detection of Cancer Metastases to the Brain on MRI. J Magn Reson Imaging 2020; 52:1227-1236. [PMID: 32167652 DOI: 10.1002/jmri.27129] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 02/27/2020] [Accepted: 02/27/2020] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Approximately one-fourth of all cancer metastases are found in the brain. MRI is the primary technique for the detection of brain metastasis, planning of radiotherapy, and monitoring of treatment response. Progress in tumor treatment now requires detection of new or growing metastases at the small, subcentimeter size, when these therapies are most effective. PURPOSE To develop a deep-learning-based approach for finding brain metastasis on MRI. STUDY TYPE Retrospective. SEQUENCE Axial postcontrast 3D T1-weighted imaging. FIELD STRENGTH 1.5T and 3T. POPULATION A total of 361 scans of 121 patients were used to train and test the Faster region-based convolutional neural network (Faster R-CNN): 1565 lesions in 270 scans of 73 patients for training; 488 lesions in 91 scans of 48 patients for testing. From the Faster R-CNN outputs for the 48 test patients, 212 lesions in 46 scans of 18 patients were used for training the RUSBoost algorithm (MATLAB) and 276 lesions in 45 scans of 30 patients for testing. ASSESSMENT Two radiologists diagnosed and supervised annotation of metastases on brain MRI as ground truth. These data were used to produce a two-step pipeline consisting of a Faster R-CNN for detecting abnormal hyperintensity that may represent brain metastasis and a RUSBoost classifier to reduce the number of false-positive foci detected. STATISTICAL TESTS The performance of the algorithm was evaluated using sensitivity, false-positive rate, and receiver operating characteristic (ROC) curves. Detection performance was assessed both per-metastasis and per-slice. RESULTS Testing on held-out brain MRI data demonstrated 96% sensitivity and 20 false-positive metastases per scan. The results showed an 87.1% sensitivity and 0.24 false-positive metastases per slice. The area under the ROC curve was 0.79.
CONCLUSION Our results showed that deep-learning-based computer-aided detection (CAD) has the potential to detect brain metastases with high sensitivity and reasonable specificity. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY Stage 2. J Magn Reson Imaging 2020;52:1227-1236.
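A per-metastasis evaluation of the kind reported above boils down to matching detector outputs against ground-truth lesions and counting hits and false positives. The sketch below is a generic illustration, not the paper's actual protocol: the boxes, the greedy matching, and the 0.5 IoU threshold are all assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def evaluate(preds, truths, thr=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes.

    preds: boxes sorted by descending confidence; truths: ground-truth boxes.
    Returns (per-lesion sensitivity, number of false positives).
    """
    matched = set()
    tp = fp = 0
    for p in preds:
        best, best_iou = None, thr
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= best_iou:
                best, best_iou = i, iou(p, t)
        if best is None:
            fp += 1          # no sufficiently overlapping unmatched lesion
        else:
            matched.add(best)
            tp += 1
    sensitivity = tp / len(truths) if truths else 0.0
    return sensitivity, fp

truths = [(0, 0, 10, 10), (20, 20, 30, 30)]   # ground-truth lesion boxes
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]    # detector output, by confidence
print(evaluate(preds, truths))                 # one hit, one false positive
```

Averaging the false-positive count over all test scans gives the false positives per scan reported in such studies.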
Affiliation(s)
- Min Zhang, Geoffrey S Young, Xiaoyin Xu: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Huai Chen: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong, China
- Jing Li: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Radiology, The Affiliated Hospital of Zhengzhou University (Henan Cancer Hospital), Zhengzhou, Henan, China
- Lei Qin, David A Reardon: Department of Radiology, Dana Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Xinhua Cao: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Xian Wu: Department of Computer Science and Technology, Tsinghua University, Beijing, China
30
Sun H. Editorial for "Deep-Learning Detection of Cancer Metastasis to the Brain on MRI". J Magn Reson Imaging 2020; 52:1237-1238. [PMID: 32154967 DOI: 10.1002/jmri.27131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Accepted: 02/25/2020] [Indexed: 11/10/2022] Open
Abstract
LEVEL OF EVIDENCE 5 TECHNICAL EFFICACY: Stage 1 J. Magn. Reson. Imaging 2020;52:1237-1238.
Affiliation(s)
- Hongfu Sun: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
31
Podnar S, Kukar M, Gunčar G, Notar M, Gošnjak N, Notar M. Diagnosing brain tumours by routine blood tests using machine learning. Sci Rep 2019; 9:14481. [PMID: 31597942 PMCID: PMC6785553 DOI: 10.1038/s41598-019-51147-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2019] [Accepted: 09/25/2019] [Indexed: 12/25/2022] Open
Abstract
Routine blood test results are assumed to contain much more information than is usually recognised even by the most experienced clinicians. Using routine blood tests from 15,176 neurological patients we built a machine learning predictive model for the diagnosis of brain tumours. We validated the model by retrospective analysis of 68 consecutive brain tumour and 215 control patients presenting to the neurological emergency service. Only patients with head imaging and routine blood test data were included in the validation sample. The sensitivity and specificity of the adapted tumour model in the validation group were 96% and 74%, respectively. Our data demonstrate the feasibility of brain tumour diagnosis from routine blood tests using machine learning. The reported diagnostic accuracy is comparable and possibly complementary to that of imaging studies. The presented machine learning approach opens a completely new avenue in the diagnosis of these grave neurological diseases and demonstrates the utility of valuable information obtained from routine blood tests.
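The sensitivity and specificity reported above follow directly from the validation confusion matrix. The counts in this sketch are hypothetical, chosen only to reproduce the same rounded percentages from the stated cohort sizes (68 tumour patients, 215 controls):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 65 of 68 tumours flagged, 159 of 215 controls cleared
sens, spec = sensitivity_specificity(tp=65, fn=3, tn=159, fp=56)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 96%, specificity 74%
```

With this trade-off the model misses few tumours at the cost of a sizeable false-positive fraction among controls, which is the usual operating point for a screening-style test.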
Affiliation(s)
- Simon Podnar, Nina Gošnjak: Division of Neurology, University Medical Centre Ljubljana, Ljubljana, Slovenia
- Matjaž Kukar: Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia; Smart Blood Analytics Swiss SA, Chur, Switzerland
- Mateja Notar, Marko Notar: Smart Blood Analytics Swiss SA, Chur, Switzerland
32
Becker A. Artificial intelligence in medicine: What is it doing for us today? HEALTH POLICY AND TECHNOLOGY 2019. [DOI: 10.1016/j.hlpt.2019.03.004] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
33
Grøvik E, Yi D, Iv M, Tong E, Rubin D, Zaharchuk G. Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. J Magn Reson Imaging 2019; 51:175-182. [PMID: 31050074 DOI: 10.1002/jmri.26766] [Citation(s) in RCA: 131] [Impact Index Per Article: 26.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Revised: 04/18/2019] [Accepted: 04/18/2019] [Indexed: 01/16/2023] Open
Abstract
BACKGROUND Detecting and segmenting brain metastases is a tedious and time-consuming task for many radiologists, particularly with the growing use of multisequence 3D imaging. PURPOSE To demonstrate automated detection and segmentation of brain metastases on multisequence MRI using a deep-learning approach based on a fully convolutional neural network (CNN). STUDY TYPE Retrospective. POPULATION In all, 156 patients with brain metastases from several primary cancers were included. FIELD STRENGTH 1.5T and 3T. [Correction added on May 24, 2019, after first online publication: In the preceding sentence, the first field strength listed was corrected.] SEQUENCE Pretherapy MR images included pre- and postgadolinium T1-weighted 3D fast spin echo (CUBE), postgadolinium T1-weighted 3D axial IR-prepped FSPGR (BRAVO), and 3D CUBE fluid-attenuated inversion recovery (FLAIR). ASSESSMENT The ground truth was established by manual delineation by two experienced neuroradiologists. CNN training/development was performed using 100 and 5 patients, respectively, with a 2.5D network based on a GoogLeNet architecture. The results were evaluated in 51 patients, equally separated into those with few (1-3), multiple (4-10), and many (>10) lesions. STATISTICAL TESTS Network performance was evaluated using precision, recall, Dice/F1 score, and receiver operating characteristic (ROC) curve statistics. For an optimal probability threshold, detection and segmentation performance was assessed on a per-metastasis basis. The Wilcoxon rank sum test was used to test the differences between patient subgroups. RESULTS The area under the ROC curve (AUC), averaged across all patients, was 0.98 ± 0.04. The AUC in the subgroups was 0.99 ± 0.01, 0.97 ± 0.05, and 0.97 ± 0.03 for patients having 1-3, 4-10, and >10 metastases, respectively.
Using an average optimal probability threshold determined by the development set, precision, recall, and Dice score were 0.79 ± 0.20, 0.53 ± 0.22, and 0.79 ± 0.12, respectively. At the same probability threshold, the network showed an average false-positive rate of 8.3/patient (no lesion-size limit) and 3.4/patient (10 mm³ lesion-size limit). DATA CONCLUSION A deep-learning approach using multisequence MRI can automatically detect and segment brain metastases with high accuracy. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY Stage 2. J Magn Reson Imaging 2020;51:175-182.
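Per-metastasis precision, recall, and Dice/F1 statistics of the kind reported above can be computed directly from overlap counts between predicted and ground-truth masks. This is a generic sketch, not the study's evaluation code; the toy masks are made up:

```python
def mask_metrics(pred, truth):
    """Precision, recall, and Dice score for two same-shape binary masks.

    pred, truth: nested lists of 0/1 values (predicted and reference masks).
    """
    flatten = lambda m: [v for row in m for v in row]
    p, t = flatten(pred), flatten(truth)
    tp = sum(1 for a, b in zip(p, t) if a == 1 and b == 1)  # overlap voxels
    fp = sum(1 for a, b in zip(p, t) if a == 1 and b == 0)  # over-segmented
    fn = sum(1 for a, b in zip(p, t) if a == 0 and b == 1)  # missed voxels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, dice

pred  = [[0, 1, 1],
         [0, 1, 0]]
truth = [[0, 1, 0],
         [1, 1, 0]]
print(mask_metrics(pred, truth))
```

Note that the Dice score is algebraically identical to the F1 score (the harmonic mean of precision and recall), which is why the abstract reports them as a single metric.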
Affiliation(s)
- Endre Grøvik: Department of Radiology, Stanford University, Stanford, California, USA; Department for Diagnostic Physics, Oslo University Hospital, Oslo, Norway
- Darvin Yi, Daniel Rubin: Department of Biomedical Data Science, Stanford University, Stanford, California, USA
- Michael Iv, Elizabeth Tong, Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, California, USA
34
Sakai K, Yamada K. Machine learning studies on major brain diseases: 5-year trends of 2014–2018. Jpn J Radiol 2018; 37:34-72. [DOI: 10.1007/s11604-018-0794-4] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2018] [Accepted: 11/14/2018] [Indexed: 12/17/2022]
35
Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors. Sci Rep 2018; 8:9450. [PMID: 29930257 PMCID: PMC6013490 DOI: 10.1038/s41598-018-27742-1] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Accepted: 06/05/2018] [Indexed: 11/16/2022] Open
Abstract
Black-blood (BB) imaging is used to complement contrast-enhanced 3D gradient-echo (CE 3D-GRE) imaging for detecting brain metastases, but it requires additional scan time. In this study, we proposed deep-learned 3D BB imaging, using an auto-labelling technique and 3D convolutional neural networks, for brain metastases detection without an additional BB scan. Patients were randomly selected for training (29 sets) and testing (36 sets). Two neuroradiologists independently evaluated deep-learned and original BB images, assessing the degree of blood-vessel suppression and lesion conspicuity. Vessel signals were effectively suppressed in all patients. The figures of merit, which indicate the diagnostic performance of the radiologists, were 0.9708 with deep-learned and 0.9437 with original BB imaging, suggesting that deep-learned BB imaging is highly comparable to the original (the difference was not significant; p = 0.2142). In per-patient analysis, sensitivities were 100% for both deep-learned and original BB imaging; however, the original BB imaging yielded false-positive results for two patients. In per-lesion analysis, sensitivities were 90.3% for deep-learned and 100% for original BB images. There were eight false-positive lesions on the original BB imaging but only one on the deep-learned BB imaging. Deep-learned 3D BB imaging can be effective for brain metastases detection.
36
Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940 DOI: 10.1016/j.compbiomed.2018.05.018] [Citation(s) in RCA: 162] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2018] [Revised: 05/15/2018] [Accepted: 05/15/2018] [Indexed: 12/17/2022]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but it can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully applied in recent years to many domains, including medicine. In this article, we first explain the concept of deep learning, placing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then review the published work on deep learning methods applicable to radiotherapy, classified into seven categories related to the patient workflow, and offer some insight into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between them to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer: Department of Medical Physics, Paul Strauss Center, Strasbourg, France
37
Charron O, Lallement A, Jarnet D, Noblet V, Clavier JB, Meyer P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput Biol Med 2018; 95:43-54. [PMID: 29455079 DOI: 10.1016/j.compbiomed.2018.02.004] [Citation(s) in RCA: 148] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Revised: 02/06/2018] [Accepted: 02/07/2018] [Indexed: 02/04/2023]
Abstract
Stereotactic treatments are today the reference techniques for the irradiation of brain metastases in radiotherapy. The dose per fraction is very high and is delivered in small volumes (diameter < 1 cm). As part of these treatments, effective detection and precise segmentation of lesions are imperative. Many deep-learning approaches have been developed for the automatic segmentation of gliomas, but very few for that of brain metastases. We adapted an existing 3D convolutional neural network (DeepMedic) to detect and segment brain metastases on MRI. We first adapted the network parameters to brain metastases. We then explored the single or combined use of different MRI modalities, evaluating network performance in terms of detection and segmentation. We also studied the value of augmenting the database with virtual patients, and of using an additional database in which the active parts of the metastases are separated from the necrotic parts. Our results indicate that a deep-network approach is promising for the detection and segmentation of brain metastases on multimodal MRI.
Affiliation(s)
- Odelin Charron, Delphine Jarnet: Department of Medical Physics, Paul Strauss Center, Strasbourg, France
- Philippe Meyer: Department of Medical Physics, Paul Strauss Center, Strasbourg, France; ICube-UMR 7357, Strasbourg, France