1
Mandal S, Chakraborty S, Tariq MA, Ali K, Elavia Z, Khan MK, Garcia DB, Ali S, Al Hooti J, Kumar DV. Artificial Intelligence and Deep Learning in Revolutionizing Brain Tumor Diagnosis and Treatment: A Narrative Review. Cureus 2024; 16:e66157. [PMID: 39233936; PMCID: PMC11372433; DOI: 10.7759/cureus.66157]
Abstract
The emergence of artificial intelligence (AI) in the medical field holds promise for improving medical management, particularly in personalized strategies for the diagnosis and treatment of brain tumors. However, integrating AI into clinical practice has proven to be a challenge. Deep learning (DL) is well suited to extracting relevant information from the growing volumes of medical history and imaging records, shortening diagnosis times that would otherwise overwhelm manual methods. In addition, DL aids in automated tumor segmentation, classification, and diagnosis. DL models such as the Brain Tumor Classification Model and Inception-ResNet V2, or hybrid techniques that combine DL networks with support vector machine and k-nearest neighbors classifiers, identify tumor phenotypes and brain metastases, allowing real-time decision-making and enhancing preoperative planning. AI algorithms and DL development facilitate radiological diagnostics such as computed tomography, positron emission tomography scans, and magnetic resonance imaging (MRI) by integrating two-dimensional and three-dimensional MRI using DenseNet and 3D convolutional neural network architectures, which enable precise tumor delineation. DL offers benefits in neuro-interventional procedures, and the shift toward computer-assisted interventions acknowledges the need for more accurate and efficient image analysis methods. Further research is needed to realize the potential impact of DL in improving these outcomes.
Affiliation(s)
- Shobha Mandal
- Internal Medicine, Guthrie Robert Packer Hospital, Sayre, USA
- Subhadeep Chakraborty
- Electronics and Communication, Maulana Abul Kalam Azad University of Technology, West Bengal, IND
- Kamran Ali
- Internal Medicine, United Medical and Dental College, Karachi, PAK
- Zenia Elavia
- Medical School, Dr. D. Y. Patil Medical College, Hospital & Research Centre, Pune, IND
- Misbah Kamal Khan
- Internal Medicine, Peoples University of Medical and Health Sciences, Nawabshah, PAK
- Sofia Ali
- Medical School, Peninsula Medical School, Plymouth, GBR
- Divyanshi Vijay Kumar
- Internal Medicine, Smt. Nathiba Hargovandas Lakhmichand Municipal Medical College, Ahmedabad, IND
2
Awuah WA, Adebusoye FT, Wellington J, David L, Salam A, Weng Yee AL, Lansiaux E, Yarlagadda R, Garg T, Abdul-Rahman T, Kalmanovich J, Miteu GD, Kundu M, Mykolaivna NI. Recent Outcomes and Challenges of Artificial Intelligence, Machine Learning, and Deep Learning in Neurosurgery. World Neurosurg X 2024; 23:100301. [PMID: 38577317; PMCID: PMC10992893; DOI: 10.1016/j.wnsx.2024.100301]
Abstract
Neurosurgeons receive extensive technical training that equips them to specialise in various fields and to manage the massive amounts of information and decision-making required throughout the stages of neurosurgery, including preoperative, intraoperative, and postoperative care and recovery. Over the past few years, artificial intelligence (AI) has become increasingly useful in neurosurgery. AI has the potential to improve patient outcomes by augmenting the capabilities of neurosurgeons, ultimately improving diagnostic and prognostic outcomes as well as decision-making during surgical procedures. By incorporating AI into both interventional and non-interventional therapies, neurosurgeons may provide the best care for their patients. AI, machine learning (ML), and deep learning (DL) have made significant progress in the field of neurosurgery. These cutting-edge methods have enhanced patient outcomes, reduced complications, and improved surgical planning.
Affiliation(s)
- Jack Wellington
- Cardiff University School of Medicine, Cardiff University, Wales, United Kingdom
- Lian David
- Norwich Medical School, University of East Anglia, United Kingdom
- Abdus Salam
- Department of Surgery, Khyber Teaching Hospital, Peshawar, Pakistan
- Rohan Yarlagadda
- Rowan University School of Osteopathic Medicine, Stratford, NJ, USA
- Tulika Garg
- Government Medical College and Hospital Chandigarh, India
- Mrinmoy Kundu
- Institute of Medical Sciences and SUM Hospital, Bhubaneswar, India
3
Yu Z, Li X, Li J, Chen W, Tang Z, Geng D. HSA-net with a novel CAD pipeline boosts both clinical brain tumor MR image classification and segmentation. Comput Biol Med 2024; 170:108039. [PMID: 38308874; DOI: 10.1016/j.compbiomed.2024.108039]
Abstract
Brain tumors are among the most prevalent neoplasms in current medical studies. Accurately distinguishing and classifying brain tumor types is crucial for patient treatment and survival in clinical practice. However, existing computer-aided diagnostic pipelines are inadequate for practical medical use due to tumor complexity. In this study, we curated a multi-centre brain tumor dataset that includes various clinical brain tumor data types, including segmentation and classification annotations, surpassing previous efforts. To enhance brain tumor segmentation accuracy, we propose a new segmentation method, HSA-Net. This method utilizes the Shared Weight Dilated Convolution module (SWDC) and Hybrid Dense Dilated Convolution module (HDense) to capture multi-scale information while minimizing parameter count. The Effective Multi-Dimensional Attention (EMA) and Important Feature Attention (IFA) modules effectively aggregate task-related information. We introduce a novel clinical brain tumor computer-aided diagnosis (CAD) pipeline that combines HSA-Net with pipeline modification. This approach not only improves segmentation accuracy but also uses the segmentation mask as an additional channel feature to enhance brain tumor classification. Our experimental evaluation on 3327 real clinical cases demonstrates the effectiveness of the proposed method, achieving an average Dice coefficient of 86.85% for segmentation and a classification accuracy of 95.35%. We also validated the effectiveness of our proposed method on the publicly available BraTS dataset.
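The average Dice coefficient reported above (86.85%) measures voxel overlap between a predicted segmentation mask and the ground-truth annotation. A minimal sketch of the metric, independent of any particular model (function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / total

# Toy 2x3 masks: 2 overlapping voxels, 3 foreground voxels in each mask
pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```

The same formula, 2|A∩B| / (|A|+|B|), underlies the Dice scores quoted throughout the entries below.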
Affiliation(s)
- Zekuan Yu
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China.
- Xiang Li
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China; School of Safety Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
- Jiaxin Li
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, 730000, China
- Weiqiang Chen
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, 730000, China
- Zhiri Tang
- School of Intelligent Systems Science and Engineering, Jinan University, Zhuhai, China
- Daoying Geng
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China; Huashan Hospital, Fudan University, Shanghai, 200040, China.
4
Hammer Y, Najjar W, Kahanov L, Joskowicz L, Shoshan Y. Two is better than one: longitudinal detection and volumetric evaluation of brain metastases after Stereotactic Radiosurgery with a deep learning pipeline. J Neurooncol 2024; 166:547-555. [PMID: 38300389; PMCID: PMC10876809; DOI: 10.1007/s11060-024-04580-y]
Abstract
PURPOSE Close MRI surveillance of patients with brain metastases following Stereotactic Radiosurgery (SRS) treatment is essential for assessing treatment response and the current disease status in the brain. This follow-up necessitates the comparison of target lesion sizes in pre-SRS (prior) and post-SRS (current) T1W-Gad MRI scans. Our aim was to evaluate SimU-Net, a novel deep-learning model for the detection and volumetric analysis of brain metastases and their temporal changes in paired prior and current scans. METHODS SimU-Net is a simultaneous multi-channel 3D U-Net model trained on pairs of registered prior and current scans of a patient. We evaluated its performance on 271 pairs of T1W-Gad MRI scans from 226 patients who underwent SRS. An expert oncological neurosurgeon manually delineated 1,889 brain metastases in all the MRI scans (1,368 with diameters > 5 mm, 834 > 10 mm). The SimU-Net model was trained/validated on 205 pairs from 169 patients (1,360 metastases) and tested on 66 pairs from 57 patients (529 metastases). The results were then compared to the ground truth delineations. RESULTS SimU-Net yielded a mean (std) detection precision and recall of 1.00±0.00 and 0.99±0.06 for metastases > 10 mm, 0.90±0.22 and 0.97±0.12 for metastases > 5 mm, and 0.76±0.27 and 0.94±0.16 for metastases of all sizes. It improves lesion detection precision by 8% for all metastasis sizes and by 12.5% for metastases < 10 mm with respect to a standalone 3D U-Net. The segmentation Dice scores were 0.90±0.10, 0.89±0.10 and 0.89±0.10 for the above metastasis sizes, all above the observer variability of 0.80±0.13. CONCLUSION Automated detection and volumetric quantification of brain metastases following SRS have the potential to enhance the assessment of treatment response and alleviate the clinician workload.
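The detection precision and recall figures quoted here come from lesion-wise matching of detections against ground-truth delineations. A simplified sketch of how matched counts turn into those metrics (function name and counts are illustrative; the matching of predicted to ground-truth lesions is a separate step not shown):

```python
def detection_metrics(true_positives, false_positives, false_negatives):
    """Lesion-wise precision and recall from detection match counts."""
    detected = true_positives + false_positives    # everything the model flagged
    annotated = true_positives + false_negatives   # everything the expert delineated
    precision = true_positives / detected if detected else 1.0
    recall = true_positives / annotated if annotated else 1.0
    return precision, recall

# Toy example: 90 matched lesions, 10 spurious detections, 5 missed lesions
precision, recall = detection_metrics(90, 10, 5)
print(precision, round(recall, 3))  # → 0.9 0.947
```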
Affiliation(s)
- Yonny Hammer
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond. J. Safra Campus, Givat Ram, 9190401, Jerusalem, Israel
- Wenad Najjar
- Department of Neurosurgery, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Lea Kahanov
- Department of Neurosurgery, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond. J. Safra Campus, Givat Ram, 9190401, Jerusalem, Israel.
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel.
- Yigal Shoshan
- Department of Neurosurgery, Hadassah Hebrew University Medical Center, Jerusalem, Israel
5
Wang TW, Hsu MS, Lee WK, Pan HC, Yang HC, Lee CC, Wu YT. Brain metastasis tumor segmentation and detection using deep learning algorithms: A systematic review and meta-analysis. Radiother Oncol 2024; 190:110007. [PMID: 37967585; DOI: 10.1016/j.radonc.2023.110007]
Abstract
BACKGROUND Manual detection of brain metastases is both laborious and inconsistent, driving the need for more efficient solutions. Accordingly, our systematic review and meta-analysis assessed the efficacy of deep learning algorithms in detecting and segmenting brain metastases from various primary origins in MRI images. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to May 24, 2023, which yielded 42 relevant studies for our analysis. We assessed the quality of these studies using the QUADAS-2 and CLAIM tools. Using a random-effects model, we calculated the pooled lesion-wise Dice score as well as patient-wise and lesion-wise sensitivity. We performed subgroup analyses to investigate the influence of factors such as publication year, study design, training center of the model, validation methods, slice thickness, model input dimensions, MRI sequences fed to the model, and the specific deep learning algorithms employed. Additionally, meta-regression analyses were carried out considering the number of patients in the studies, count of MRI manufacturers, count of MRI models, training sample size, and lesion number. RESULTS Our analysis highlighted that deep learning models, particularly the U-Net and its variants, demonstrated superior segmentation accuracy. Enhanced detection sensitivity was observed with an increased diversity in MRI hardware, both in terms of manufacturer and model variety. Furthermore, slice thickness was identified as a significant factor influencing lesion-wise detection sensitivity. Overall, the pooled results indicated a lesion-wise Dice score of 79%, with patient-wise and lesion-wise sensitivities at 86% and 87%, respectively. CONCLUSIONS The study underscores the potential of deep learning in improving brain metastasis diagnostics and treatment planning. Still, more extensive cohorts and larger meta-analyses are needed for more practical and generalizable algorithms. Future research should prioritize these areas to advance the field. This study was funded by the Gen. & Mrs. M.C. Peng Fellowship and registered under PROSPERO (CRD42023427776).
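The pooled estimates above come from a random-effects model, which weights each study by the inverse of its within-study variance plus an estimated between-study variance τ². A sketch of the standard DerSimonian-Laird estimator commonly used for such pooling (the inputs are hypothetical per-study sensitivities, not the review's actual data):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / v for v in variances]                             # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                 # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Three hypothetical study-level sensitivities with their variances
pooled, se, tau2 = dersimonian_laird([0.82, 0.88, 0.90], [0.004, 0.002, 0.003])
```

When the studies are homogeneous (Q below its degrees of freedom), τ² truncates to zero and the estimate reduces to the fixed-effect inverse-variance average.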
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ming-Sheng Hsu
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Kai Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
- Hung-Chuan Pan
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan
- Huai-Che Yang
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Cheng-Chia Lee
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan; National Yang Ming Chiao Tung University, Brain Research Center, Taiwan; National Yang Ming Chiao Tung University, College Medical Device Innovation and Translation Center, Taiwan.
6
Chou CJ, Yang HC, Chang PY, Chen CJ, Wu HM, Lin CF, Lai IC, Peng SJ. Automated identification and quantification of metastatic brain tumors and perilesional edema based on a deep learning neural network. J Neurooncol 2024; 166:167-174. [PMID: 38133789; DOI: 10.1007/s11060-023-04540-y]
Abstract
PURPOSE This paper presents a deep learning model for use in the automated segmentation of metastatic brain tumors and associated perilesional edema. METHODS The model was trained using Gamma Knife surgical data (90 MRI sets from 46 patients), including the initial treatment plan and follow-up images (T1-weighted contrast-enhanced images (T1cWI) and T2-weighted images (T2WI)) manually annotated by neurosurgeons to indicate the target tumor and edema regions. A mask region-based convolutional neural network was used to extract brain parenchyma, after which the DeepMedic 3D convolutional neural network was used to segment tumors and edema. RESULTS Five-fold cross-validation demonstrated the efficacy of the brain parenchyma extraction model, achieving a Dice similarity coefficient of 96.4%. The segmentation models used for metastatic tumors and brain edema achieved Dice similarity coefficients of 71.6% and 85.1%, respectively. This study also presents an intuitive graphical user interface to facilitate the use of these models in clinical analysis. CONCLUSION This paper introduces a deep learning model for the automated segmentation and quantification of metastatic brain tumors and perilesional edema trained using only T1cWI and T2WI. This technique could facilitate further research on metastatic tumors and perilesional edema as well as other intracranial lesions.
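The five-fold cross-validation used for evaluation partitions the data into five folds, training on four and validating on the held-out fold in turn. A generic sketch of the splitting step (patient IDs and the round-robin assignment are illustrative; splitting at the patient level keeps one patient's scans from leaking across folds):

```python
def k_fold_splits(items, k=5):
    """Yield (train, validation) partitions for k-fold cross-validation."""
    folds = [items[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

patients = [f"patient_{n:02d}" for n in range(10)]
for train, val in k_fold_splits(patients, k=5):
    assert not set(train) & set(val)  # train and validation folds are disjoint
print(len(list(k_fold_splits(patients, k=5))))  # → 5
```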
Affiliation(s)
- Chi-Jen Chou
- Division of Neurosurgery, Department of Surgery, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- Huai-Che Yang
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Po-Yao Chang
- Department of Electrical Engineering, National Central University, Taoyuan, Taiwan
- Ching-Jen Chen
- Department of Neurological Surgery, University of Virginia Health System, Charlottesville, VA, 22903, USA
- Hsiu-Mei Wu
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Chun-Fu Lin
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- I-Chun Lai
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Department of Heavy Particles & Radiation Oncology, Taipei Veterans General Hospital, Taipei, Taiwan
- Syu-Jyun Peng
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, No.250, Wuxing St., Xinyi Dist., Taipei City, 110, Taiwan.
- Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei Medical University, Taipei, Taiwan.
7
Wang J, Sun J, Xu J, Lu S, Wang H, Huang C, Zhang F, Yu Y, Gao X, Wang M, Wang Y, Ruan X, Pan Y. Detection of Intracranial Aneurysms Using Multiphase CT Angiography with a Deep Learning Model. Acad Radiol 2023; 30:2477-2486. [PMID: 36737273; DOI: 10.1016/j.acra.2022.12.043]
Abstract
RATIONALE AND OBJECTIVES To determine the effect of a multiphase fusion deep-learning model with automatic phase selection on the detection of intracranial aneurysms (IAs) in computed tomography angiography (CTA) images. MATERIALS AND METHODS CTA images of intracranial arteries from patients at Ningbo First Hospital were retrospectively analyzed. Images were randomly classified as training data, internal validation data, or test data. CTA images from cases examined by digital subtraction angiography (DSA) were used for independent validation. A deep-learning model was constructed with automatic phase selection of multiphase fusion and compared to the single-phase algorithm to evaluate algorithm sensitivity. RESULTS We analyzed 1110 patients (1493 aneurysms) as training data, 139 patients (174 aneurysms) as internal validation data, and 134 patients (175 aneurysms) as test data. The sensitivity of the multiphase analysis on the internal validation data, test data, and independent validation data was greater than that of the single-phase analysis. The recall of multiphase selection was greater than or equal to that of single-phase selection across aneurysm position, shape, size, and rupture status. On the test data, the recall of multiphase selection for ruptured and unruptured aneurysms was 94.8% and 87.6%, respectively; both values were greater than those from single-phase selection (89.6% and 79.4%). CONCLUSION A multiphase fusion deep learning model with automatic phase selection provided automated detection of IAs with high sensitivity.
Affiliation(s)
- Jinglu Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
- Jie Sun
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
- Jingxu Xu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
- Shiyu Lu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
- Hao Wang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
- Chencui Huang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
- Fandong Zhang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
- Yizhou Yu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
- Xiang Gao
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
- Ming Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
- Yu Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
- Xinzhong Ruan
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
- Yuning Pan
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China; Key Laboratory of Precision Medicine for Atherosclerotic Diseases of Zhejiang Province, People's Republic of China.
8
Qu J, Zhang W, Shu X, Wang Y, Wang L, Xu M, Yao L, Hu N, Tang B, Zhang L, Lui S. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol 2023; 33:6648-6658. [PMID: 37186214; DOI: 10.1007/s00330-023-09648-3]
Abstract
OBJECTIVES To construct and evaluate a gated high-resolution convolutional neural network for detecting and segmenting brain metastasis (BM). METHODS This retrospective study included craniocerebral MRI scans of 1392 patients with 14,542 BMs and 200 patients with no BM between January 2012 and April 2022. A primary dataset including 1000 cases with 11,686 BMs was employed to construct the model, while an independent dataset including 100 cases with 1069 BMs from other hospitals was used to examine the generalizability. The potential of the model for clinical use was also evaluated by comparing its performance in BM detection and segmentation to that of radiologists, and by comparing radiologists' lesion detection performance with and without model assistance. RESULTS Our model yielded a recall of 0.88, a Dice similarity coefficient (DSC) of 0.90, a positive predictive value (PPV) of 0.93, and an average of 1.01 false positives per patient (FP) in the test set, and a recall of 0.85, a DSC of 0.89, a PPV of 0.93, and an FP of 1.07 in the dataset from other hospitals. With the model's assistance, the BM detection rates of 4 radiologists improved significantly, ranging from 5.2 to 15.1% (all p < 0.001), including for small BMs with diameter ≤ 5 mm (7.2 to 27.0%, all p < 0.001). CONCLUSIONS The proposed model enables accurate BM detection and segmentation with higher sensitivity and less time consumption, showing the potential to augment radiologists' performance in detecting BM. CLINICAL RELEVANCE STATEMENT This study offers a promising computer-aided tool to assist brain metastasis detection and segmentation in routine clinical practice for cancer patients. KEY POINTS • The GHR-CNN could accurately detect and segment BM on contrast-enhanced 3D-T1W images. • The GHR-CNN improved the BM detection rate of radiologists, including the detection of small lesions. • The GHR-CNN enabled automated segmentation of BM in a very short time.
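Recall, PPV, and false positives per patient as reported in this entry can all be aggregated from per-patient lesion counts. A minimal sketch of that aggregation (function name and toy counts are illustrative, not the study's data):

```python
def detection_summary(per_patient_counts):
    """Aggregate lesion-wise recall, PPV, and mean FPs per patient.

    per_patient_counts: list of (tp, fp, fn) lesion counts, one tuple per scan.
    """
    tp = sum(c[0] for c in per_patient_counts)  # correctly detected lesions
    fp = sum(c[1] for c in per_patient_counts)  # spurious detections
    fn = sum(c[2] for c in per_patient_counts)  # missed lesions
    recall = tp / (tp + fn)
    ppv = tp / (tp + fp)
    fp_per_patient = fp / len(per_patient_counts)
    return recall, ppv, fp_per_patient

# Two toy patients: (true positives, false positives, missed lesions)
recall, ppv, fppp = detection_summary([(9, 1, 1), (8, 1, 2)])
print(round(recall, 2), round(ppv, 2), fppp)  # → 0.85 0.89 1.0
```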
Affiliation(s)
- Jiao Qu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Wenjing Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ying Wang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Department of Nuclear Medicine, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Mengyuan Xu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Li Yao
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Na Hu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Biqiu Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Su Lui
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China.
9
Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. [PMID: 35943350; PMCID: PMC10013637; DOI: 10.1093/neuonc/noac192]
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. Ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and 3 experienced radiologists read the MRI images without and with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001), rising to 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Small size, greater lesion number, irregular shape, lower signal intensity, and nonbrain-surface location were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found among FNs for the BMDS. The FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, attention should be paid to FPs and to small lesions with lower enhancement, especially for less-experienced radiologists.
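The association analysis above used binary logistic regression; the univariable version of the same idea reduces to an odds ratio from a 2×2 table of missed versus detected lesions for a given feature. A simplified sketch with hypothetical counts (the study's multivariable model is not reproduced here):

```python
import math

def odds_ratio_ci(missed_a, found_a, missed_b, found_b, z=1.96):
    """Odds ratio (group A vs group B) of a lesion being missed, with a Wald 95% CI."""
    or_ = (missed_a / found_a) / (missed_b / found_b)
    # Standard error of log(OR) from the 2x2 cell counts
    se_log = math.sqrt(1 / missed_a + 1 / found_a + 1 / missed_b + 1 / found_b)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, (lower, upper)

# Hypothetical counts: small lesions missed 20/100, large lesions missed 5/100
or_, ci = odds_ratio_ci(20, 80, 5, 95)
print(round(or_, 2))  # → 4.75
```

An OR above 1 with a confidence interval excluding 1 indicates the feature (here, small size) is associated with being missed, which is the pattern the abstract reports for readers.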
Affiliation(s)
- Xiao Luo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yadi Yang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Shaohan Yin
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Hui Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Weijing Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Guixiao Xu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Weixiong Fan
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Dechun Zheng
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China
- Dinggang Shen
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yaozong Gao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Xiaohua Ban
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Jing Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Shanshan Lian
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Cheng Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Lidi Ma
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Cuiping Lin
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yingwei Luo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Fan Zhou
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Shiyuan Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Ying Sun
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Rong Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Chuanmiao Xie
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China.,Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| |
|
10
|
Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. [PMID: 36635582 DOI: 10.1007/s11060-022-04234-x] [Received: 12/09/2022] [Accepted: 12/30/2022] [Indexed: 01/14/2023]
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the most promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomics-based analysis is another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and directions for further investigation are summarized in this article.
|
11
|
Priya S, Ward C, Bathla G. Letter to editor regarding article "fully automated radiomics-based machine learning models for multiclass classification of single brain tumors: Glioblastoma, lymphoma, and metastasis". J Neuroradiol 2023; 50:40-41. [PMID: 36610935 DOI: 10.1016/j.neurad.2022.12.006] [Received: 12/07/2022] [Accepted: 12/25/2022] [Indexed: 01/07/2023]
Affiliation(s)
- Sarv Priya
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA.
- Caitlin Ward
- Division of Biostatistics, School of Public Health, University of Minnesota, USA
- Girish Bathla
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
|
12
|
Ottesen JA, Yi D, Tong E, Iv M, Latysheva A, Saxhaug C, Jacobsen KD, Helland Å, Emblem KE, Rubin DL, Bjørnerud A, Zaharchuk G, Grøvik E. 2.5D and 3D segmentation of brain metastases with deep learning on multinational MRI data. Front Neuroinform 2023; 16:1056068. [PMID: 36743439 PMCID: PMC9889663 DOI: 10.3389/fninf.2022.1056068] [Received: 09/28/2022] [Accepted: 12/26/2022] [Indexed: 01/20/2023]
Abstract
Introduction Management of patients with brain metastases is often based on manual lesion detection and segmentation by an expert reader. This is a time- and labor-intensive process, and to that end, this work proposes an end-to-end deep learning segmentation network that accommodates a varying number of available MRI sequences. Methods We adapted and evaluated a 2.5D and a 3D convolutional neural network, trained and tested on a retrospective multinational study from two independent centers; in addition, nnU-Net was adapted as a comparative benchmark. Segmentation and detection performance was evaluated by: (1) the Dice similarity coefficient, (2) per-metastasis and average detection sensitivity, and (3) the number of false positives. Results The 2.5D and 3D models achieved similar results, although the 2.5D model had a better detection rate, the 3D model had fewer false positive predictions, and nnU-Net had the fewest false positives but the lowest detection rate. On MRI data from center 1, the 2.5D, 3D, and nnU-Net models detected 79%, 71%, and 65% of all metastases; had an average per-patient sensitivity of 0.88, 0.84, and 0.76; and produced on average 6.2, 3.2, and 1.7 false positive predictions per patient, respectively. For center 2, the 2.5D, 3D, and nnU-Net models detected 88%, 86%, and 78% of all metastases; had an average per-patient sensitivity of 0.92, 0.91, and 0.85; and produced on average 1.0, 0.4, and 0.1 false positive predictions per patient, respectively. Discussion/Conclusion Our results show that deep learning can yield highly accurate segmentations of brain metastases with few false positives in multinational data, but accuracy degrades for metastases with an area smaller than 0.4 cm2.
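The voxel-wise Dice similarity coefficient reported by segmentation studies like the one above can be sketched as follows (an illustrative toy example on binary masks, not the authors' code; `dice_coefficient` is a name chosen here):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient for two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D volumes standing in for a predicted and a reference segmentation
truth = np.zeros((4, 4, 4), dtype=np.uint8)
truth[1:3, 1:3, 1:3] = 1              # 8-voxel "metastasis"
pred = truth.copy()
pred[1, 1, 1] = 0                     # one missed voxel
print(dice_coefficient(pred, truth))  # 2*7/(7+8) ≈ 0.933
```

A Dice of 1.0 means the predicted and reference masks coincide voxel for voxel; the metric is insensitive to true negatives, which is why segmentation papers report it alongside detection sensitivity and false-positive counts.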
Collapse
Affiliation(s)
- Jon André Ottesen
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway. Correspondence: Jon André Ottesen
- Darvin Yi
- Department of Ophthalmology, University of Illinois, Chicago, IL, United States
- Elizabeth Tong
- Department of Radiology, Stanford University, Stanford, CA, United States
- Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, United States
- Anna Latysheva
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Cathrine Saxhaug
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Åslaug Helland
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Kyrre Eeg Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Daniel L. Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Atle Bjørnerud
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, United States
- Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway; Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
|
13
|
Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15:334. [PMID: 36672286 PMCID: PMC9857123 DOI: 10.3390/cancers15020334] [Received: 12/11/2022] [Revised: 12/31/2022] [Accepted: 12/31/2022] [Indexed: 01/06/2023]
Abstract
Since manual detection of brain metastases (BMs) is time consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted up to 30 September 2022. Inclusion criteria were: patients with BMs; deep learning applied to MRI images to detect BMs; sufficient data on detection performance; and original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data on detection performance; classical machine learning (rather than deep learning) used to detect BMs; and articles not written in English. The Quality Assessment of Diagnostic Accuracy Studies-2 tool and the Checklist for Artificial Intelligence in Medical Imaging were used to assess quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of both patient-wise and lesion-wise detectability was 89%. A pooled analysis of false positive rates could not be estimated due to reporting differences. Deep learning algorithms effectively detect BMs, but articles should adhere to the reporting checklists more strictly.
|
14
|
Spiking Neural P System with Synaptic Vesicles and Applications in Multiple Brain Metastasis Segmentation. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.01.016] [Indexed: 01/07/2023]
|
15
|
Buchner JA, Kofler F, Etzel L, Mayinger M, Christ SM, Brunner TB, Wittig A, Menze B, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus J, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Ferentinos K, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Peeken JC. Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study. Radiother Oncol 2023; 178:109425. [PMID: 36442609 DOI: 10.1016/j.radonc.2022.11.014] [Received: 09/29/2022] [Revised: 11/17/2022] [Accepted: 11/18/2022] [Indexed: 11/27/2022]
Abstract
BACKGROUND Stereotactic radiotherapy is a standard treatment option for patients with brain metastases. The planning target volume is based on gross tumor volume (GTV) segmentation. The aim of this work was to develop and validate a neural network for automatic GTV segmentation, to accelerate daily clinical routine and minimize interobserver variability. METHODS We analyzed MRIs (T1-weighted sequence ± contrast-enhancement, T2-weighted sequence, and FLAIR sequence) from 348 patients with at least one brain metastasis from different cancer primaries, treated in six centers. To generate reference segmentations, all GTVs and the FLAIR hyperintense edematous regions were segmented manually. A 3D U-Net was trained on a cohort of 260 patients from two centers to segment the GTV and the surrounding FLAIR hyperintense region. During training, varying degrees of data augmentation were applied. Model validation was performed using an independent international multicenter test cohort (n = 88) covering four centers. RESULTS Our proposed U-Net reached a mean overall Dice similarity coefficient (DSC) of 0.92 ± 0.08 and a mean individual metastasis-wise DSC of 0.89 ± 0.11 in the external test cohort for GTV segmentation. Data augmentation improved the segmentation performance significantly. Detection of brain metastases was effective, with a mean F1-score of 0.93 ± 0.16. The model performance was stable independent of the center (p = 0.3). There was no correlation between metastasis volume and DSC (Pearson correlation coefficient 0.07). CONCLUSION Reliable automated segmentation of brain metastases with neural networks is possible and may support radiotherapy planning by providing more objective GTV definitions.
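The geometric data augmentation the authors report as significantly improving segmentation can be illustrated as follows (a minimal sketch with random flips and axial 90-degree rotations; the study's actual augmentation pipeline is not specified here, and `augment` is a name chosen for illustration):

```python
import numpy as np

def augment(volume: np.ndarray, mask: np.ndarray, rng) -> tuple:
    """Apply the SAME random flips and axial 90-degree rotation to image and
    mask, so the segmentation labels stay aligned with the voxels."""
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
            mask = np.flip(mask, axis=axis)
    k = int(rng.integers(0, 4))                 # rotate 0/90/180/270 degrees
    volume = np.rot90(volume, k=k, axes=(1, 2))
    mask = np.rot90(mask, k=k, axes=(1, 2))
    return volume.copy(), mask.copy()

rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 8))                # toy MRI volume
seg = (vol > 1.0).astype(np.uint8)              # toy segmentation mask
aug_vol, aug_seg = augment(vol, seg, rng)
# The identical transform on image and mask preserves their voxel-wise
# correspondence, so the thresholding relation still holds afterwards.
assert np.array_equal((aug_vol > 1.0).astype(np.uint8), aug_seg)
```

The key design point is that every spatial transform must be applied jointly to image and label; intensity-only transforms (noise, bias fields) would instead be applied to the image alone.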
Affiliation(s)
- Josef A Buchner
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany.
- Florian Kofler
- Department of Informatics, Technical University of Munich, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
- Lucas Etzel
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
- Michael Mayinger
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Sebastian M Christ
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Thomas B Brunner
- Department of Radiation Oncology, University Hospital Magdeburg, Magdeburg, Germany
- Andrea Wittig
- Department of Radiotherapy and Radiation Oncology, University Hospital Jena, Friedrich-Schiller University, Jena, Germany
- Björn Menze
- Department of Informatics, Technical University of Munich, Munich, Germany
- Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Bernhard Meyer
- Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Matthias Guckenberger
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Rami A El Shafie
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany; Department of Radiation Oncology, University Medical Center Göttingen, Göttingen, Germany
- Jürgen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany
- Susanne Rogers
- Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
- Oliver Riesterer
- Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
- Katrin Schulze
- Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
- Horst J Feldmann
- Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
- Oliver Blanck
- Department of Radiation Oncology, University Medical Center Schleswig Holstein, Kiel, Germany
- Constantinos Zamboglou
- Department of Radiation Oncology, University of Freiburg - Medical Center, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany; Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
- Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
- Robert Wolff
- Saphir Radiosurgery Center Frankfurt and Northern Germany, Guestrow, Germany; Department of Neurosurgery, University Hospital Frankfurt, Frankfurt, Germany
- Kerstin A Eitz
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
- Stephanie E Combs
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
- Denise Bernhardt
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Jan C Peeken
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
|
16
|
Chartrand G, Emiliani RD, Pawlowski SA, Markel DA, Bahig H, Cengarle-Samak A, Rajakesari S, Lavoie J, Ducharme S, Roberge D. Automated Detection of Brain Metastases on T1-Weighted MRI Using a Convolutional Neural Network: Impact of Volume Aware Loss and Sampling Strategy. J Magn Reson Imaging 2022; 56:1885-1898. [PMID: 35624544 DOI: 10.1002/jmri.28274] [Received: 02/10/2022] [Revised: 05/13/2022] [Accepted: 05/13/2022] [Indexed: 01/05/2023]
Abstract
BACKGROUND Detection of brain metastases (BM) and their segmentation for treatment planning could be optimized with machine learning methods. Convolutional neural networks (CNNs) are promising, but their trade-offs between sensitivity and precision frequently lead to missing small lesions. HYPOTHESIS Combining a volume aware (VA) loss function and sampling strategy could improve BM detection sensitivity. STUDY TYPE Retrospective. POPULATION A total of 530 radiation oncology patients (55% women) were split into a training/validation set (433 patients/1460 BM) and an independent test set (97 patients/296 BM). FIELD STRENGTH/SEQUENCE 1.5 T and 3 T, contrast-enhanced three-dimensional (3D) T1-weighted fast gradient echo sequences. ASSESSMENT Ground truth masks were based on radiotherapy treatment planning contours reviewed by experts. A U-Net inspired model was trained. Three loss functions (Dice, Dice + boundary, and VA) and two sampling methods (label and VA) were compared. Results were reported with Dice scores, volumetric error, lesion detection sensitivity, and precision. A detected voxel within the ground truth constituted a true positive. STATISTICAL TESTS McNemar's exact test to compare detected lesions between models; Pearson's correlation coefficient and Bland-Altman analysis to compare agreement between predicted and ground truth volumes. Statistical significance was set at P ≤ 0.05. RESULTS Combining VA loss and VA sampling performed best, with an overall sensitivity of 91% and precision of 81%. For BM in the 2.5-6 mm estimated sphere diameter range, VA loss reduced false negatives by 58% and VA sampling reduced them by a further 30%. In the same range, the boundary loss achieved the highest precision at 81%, but a low sensitivity (24%) and a 31% Dice loss. DATA CONCLUSION Accounting for BM size in the loss and sampling functions of a CNN may increase detection sensitivity for small BM. Our pipeline, relying on a single contrast-enhanced T1-weighted MRI sequence, could reach a detection sensitivity of 91% with an average of only 0.66 false positives per scan. EVIDENCE LEVEL 3 TECHNICAL EFFICACY: Stage 2.
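One plausible reading of a "volume aware" weighting, in which every lesion contributes equally to the loss regardless of its size, can be sketched as follows (illustrative only; the paper's exact VA loss and sampling formulation may differ, and `volume_aware_weights` is a hypothetical helper):

```python
import numpy as np
from scipy import ndimage

def volume_aware_weights(truth: np.ndarray, exponent: float = 1.0) -> np.ndarray:
    """Per-voxel weight map that up-weights small lesions: each connected
    component gets total weight 1 (every voxel receives 1 / volume**exponent),
    so a 5-voxel metastasis counts as much as a 500-voxel one in a weighted
    loss. Illustrative reading of a 'volume aware' loss term, not the paper's
    published formulation."""
    labels, n_lesions = ndimage.label(truth)
    weights = np.zeros(truth.shape, dtype=float)
    for lesion_id in range(1, n_lesions + 1):
        component = labels == lesion_id
        weights[component] = 1.0 / component.sum() ** exponent
    return weights

truth = np.zeros((6, 6, 6), dtype=np.uint8)
truth[0, 0, 0] = 1            # tiny 1-voxel metastasis
truth[3:5, 3:5, 3:5] = 1      # larger 8-voxel metastasis
w = volume_aware_weights(truth)
labels, _ = ndimage.label(truth)
print(w[labels == 1].sum(), w[labels == 2].sum())  # each lesion totals 1.0
```

In an unweighted Dice loss, the 8-voxel lesion would dominate the gradient eight to one; the weight map equalizes the contributions, which is one way to make small metastases harder to ignore.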
Affiliation(s)
- Daniel A Markel
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Houda Bahig
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Selvan Rajakesari
- Department of Radiation Oncology, Hopital Charles Lemoyne, Greenfield Park, Québec, Canada
- Simon Ducharme
- AFX Medical Inc., Montréal, Canada; Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montréal, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
- David Roberge
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
|
17
|
Deep learning-based detection algorithm for brain metastases on black blood imaging. Sci Rep 2022; 12:19503. [PMID: 36376364 PMCID: PMC9663732 DOI: 10.1038/s41598-022-23687-8] [Received: 04/19/2022] [Accepted: 11/03/2022] [Indexed: 11/16/2022]
Abstract
Brain metastases (BM) are the most common intracranial tumors, and their prevalence is increasing. High-resolution black-blood (BB) imaging is used to complement conventional contrast-enhanced 3D gradient-echo imaging for detecting BM. In this study, we propose an efficient deep learning algorithm (DLA) for BM detection on contrast-enhanced BB imaging and assess the efficacy of this automatic detection algorithm. A total of 113 participants with 585 metastases were included in the training cohort for five-fold cross-validation. The You Only Look Once (YOLO) V2 network was trained on 3D BB sampling perfection with application-optimized contrasts using different flip angle evolution (SPACE) images to investigate BM detection. For the observer performance study, two board-certified radiologists and two second-year radiology residents detected the BM and recorded their reading times. For the training cohort, the overall five-fold cross-validation performance was 87.95% sensitivity, 24.82% precision, and a 19.35% F1-score, with an average of 14.48 false positives per scan in the BM dataset and 18.40 in the normal-individual dataset. With the DLA, the average reading time was reduced by 20.86% (range, 15.22-25.77%). The proposed method has the potential to detect BM with high sensitivity and a limited number of false positives using BB imaging.
|
18
|
Abstract
Melanoma is one of the most dangerous types of skin cancer and, if not diagnosed early, may lead to death. An accurate diagnosis is therefore needed to detect melanoma. Traditionally, a dermatologist inspects a biopsy under a microscope and then provides a diagnostic report; however, this diagnostic process is not easy and requires experience. Hence, there is a need to facilitate the diagnosis process while still yielding an accurate diagnosis. For this purpose, artificial intelligence techniques can assist the dermatologist in carrying out a diagnosis. In this study, we considered the detection of melanoma through deep learning based on cutaneous image processing. We tested several convolutional neural network (CNN) architectures, including DenseNet201, MobileNetV2, ResNet50V2, ResNet152V2, Xception, VGG16, VGG19, and GoogleNet, and evaluated the associated deep learning models on graphics processing units (GPUs). A dataset consisting of 7146 images was processed using these models, and we compared the obtained results. The experimental results showed that GoogleNet obtained the highest accuracy on both the training and test sets (74.91% and 76.08%, respectively).
|
19
|
Deep-learning 2.5-dimensional single-shot detector improves the performance of automated detection of brain metastases on contrast-enhanced CT. Neuroradiology 2022; 64:1511-1518. [PMID: 35064786 DOI: 10.1007/s00234-022-02902-3] [Received: 11/04/2021] [Accepted: 01/15/2022] [Indexed: 10/19/2022]
Abstract
PURPOSE This study aimed to develop a 2.5-dimensional (2.5D) deep-learning object detection model for the automated detection of brain metastases, in which three consecutive slices are fed as the input for prediction on the central slice, and to compare its performance with that of an ordinary 2-dimensional (2D) model. METHODS We analyzed 696 brain metastases on 127 contrast-enhanced computed tomography (CT) scans from 127 patients with brain metastases. The scans were randomly divided into training (n = 79), validation (n = 18), and test (n = 30) datasets. Single-shot detector (SSD) models with a feature fusion module were constructed, trained, and compared using lesion-based sensitivity, positive predictive value (PPV), and the number of false positives per patient at a confidence threshold of 50%. RESULTS The 2.5D SSD model had a significantly higher PPV (t test, p < 0.001) and a significantly smaller number of false positives (t test, p < 0.001). The sensitivities of the 2D and 2.5D models were 88.1% (95% confidence interval [CI], 86.6-89.6%) and 88.7% (95% CI, 87.3-90.1%), respectively. The corresponding PPVs were 39.0% (95% CI, 36.5-41.4%) and 58.9% (95% CI, 55.2-62.7%), respectively. The numbers of false positives per patient were 11.9 (95% CI, 10.7-13.2) and 4.9 (95% CI, 4.2-5.7), respectively. CONCLUSION Our results indicate that 2.5D deep-learning object detection models, which use information about the continuity between adjacent slices, may reduce false positives and improve the performance of automated detection of brain metastases compared with ordinary 2D models.
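The 2.5D input construction described above, feeding three consecutive slices for a prediction on the central slice, can be sketched as follows (a minimal illustration; `make_25d_inputs` is a name chosen here, and repeating the edge slices as padding is an assumption about how boundary slices are handled):

```python
import numpy as np

def make_25d_inputs(volume: np.ndarray) -> np.ndarray:
    """Turn a (slices, H, W) CT volume into 2.5D inputs: each slice is stacked
    with its neighbours above and below as a 3-channel image, and the model
    predicts lesions on the central channel. Edge slices are padded by
    repetition (an assumption for this sketch)."""
    padded = np.pad(volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    # Result shape (slices, 3, H, W): channels = [previous, current, next]
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)

vol = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)  # toy 5-slice volume
inputs = make_25d_inputs(vol)
print(inputs.shape)                            # (5, 3, 4, 4)
assert np.array_equal(inputs[2, 1], vol[2])    # central channel is the slice itself
assert np.array_equal(inputs[0, 0], vol[0])    # first slice: "previous" repeats it
```

This keeps the 2D detector architecture unchanged (it still sees a 3-channel image) while giving it the through-plane context that, per the study above, helps suppress false positives.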
|
20
|
Pflüger I, Wald T, Isensee F, Schell M, Meredig H, Schlamp K, Bernhardt D, Brugnara G, Heußel CP, Debus J, Wick W, Bendszus M, Maier-Hein KH, Vollmuth P. Automated detection and quantification of brain metastases on clinical MRI data using artificial neural networks. Neurooncol Adv 2022; 4:vdac138. [PMID: 36105388 PMCID: PMC9466273 DOI: 10.1093/noajnl/vdac138] [Indexed: 11/14/2022]
Abstract
Background
Reliable detection and precise volumetric quantification of brain metastases (BM) on MRI are essential for guiding treatment decisions. Here we evaluate the potential of artificial neural networks (ANN) for automated detection and quantification of BM.
Methods
A consecutive series of 308 patients with BM was used for developing an ANN (with a 4:1 split for training/testing) for automated volumetric assessment of contrast-enhancing tumors (CE) and non-enhancing FLAIR signal abnormality including edema (NEE). An independent consecutive series of 30 patients was used for external testing. Performance was assessed case-wise for CE and NEE and lesion-wise for CE using the case-wise/lesion-wise DICE-coefficient (C/L-DICE), positive predictive value (L-PPV) and sensitivity (C/L-Sensitivity).
Results
The performance of detecting CE lesions on the validation dataset was not significantly affected when evaluating different volumetric thresholds (0.001–0.2 cm3; P = .2028). The median L-DICE and median C-DICE for CE lesions were 0.78 (IQR = 0.6–0.91) and 0.90 (IQR = 0.85–0.94) in the institutional as well as 0.79 (IQR = 0.67–0.82) and 0.84 (IQR = 0.76–0.89) in the external test dataset. The corresponding median L-Sensitivity and median L-PPV were 0.81 (IQR = 0.63–0.92) and 0.79 (IQR = 0.63–0.93) in the institutional test dataset, as compared to 0.85 (IQR = 0.76–0.94) and 0.76 (IQR = 0.68–0.88) in the external test dataset. The median C-DICE for NEE was 0.96 (IQR = 0.92–0.97) in the institutional test dataset as compared to 0.85 (IQR = 0.72–0.91) in the external test dataset.
Conclusion
The developed ANN-based algorithm (publicly available at www.github.com/NeuroAI-HD/HD-BM) allows reliable detection and precise volumetric quantification of CE and NEE compartments in patients with BM.
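The lesion-wise sensitivity and PPV used in evaluations like the one above can be sketched with connected-component labeling (one common overlap convention, where any voxel overlap counts as a detection; exact criteria vary between papers, and `lesion_wise_detection` is a hypothetical helper, not the HD-BM code):

```python
import numpy as np
from scipy import ndimage

def lesion_wise_detection(pred: np.ndarray, truth: np.ndarray):
    """Lesion-wise sensitivity and positive predictive value: a reference
    lesion counts as detected if the prediction overlaps it in at least one
    voxel, and a predicted component counts as a true positive if it overlaps
    any reference lesion."""
    t_labels, n_true = ndimage.label(truth)
    p_labels, n_pred = ndimage.label(pred)
    detected = sum(1 for i in range(1, n_true + 1) if pred[t_labels == i].any())
    true_pos = sum(1 for j in range(1, n_pred + 1) if truth[p_labels == j].any())
    sensitivity = detected / n_true if n_true else 1.0
    ppv = true_pos / n_pred if n_pred else 1.0
    return sensitivity, ppv

truth = np.zeros((8, 8), dtype=np.uint8)
truth[1:3, 1:3] = 1          # reference lesion A
truth[5:7, 5:7] = 1          # reference lesion B
pred = np.zeros_like(truth)
pred[1:3, 1:3] = 1           # prediction hits lesion A
pred[5, 0] = 1               # isolated false positive
print(lesion_wise_detection(pred, truth))  # (0.5, 0.5)
```

Reporting these lesion-wise numbers alongside case-wise Dice matters clinically: a model can score a high overall Dice while still missing small metastases entirely, which only the lesion-wise sensitivity exposes.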
Affiliation(s)
- Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Tassilo Wald
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marianne Schell
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Hagen Meredig
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Kai Schlamp
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
- Denise Bernhardt
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Claus Peter Heußel
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
- Member of the German Center for Lung Research (DZL), Translational Lung Research Center (TLRC), Heidelberg, Germany
- Juergen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg University Hospital, Heidelberg, Germany
- German Cancer Consortium (DKTK), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Wolfgang Wick
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany
- Clinical Cooperation Unit Neurooncology, German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Klaus H Maier-Hein
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
21
Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning single-shot detector for automatic detection of brain metastases with the combined use of contrast-enhanced and non-enhanced computed tomography images. Eur J Radiol 2021; 144:110015. [PMID: 34742108 DOI: 10.1016/j.ejrad.2021.110015] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 10/10/2021] [Accepted: 10/27/2021] [Indexed: 11/30/2022]
Abstract
PURPOSE To develop a deep-learning object detection model for automatic detection of brain metastases that simultaneously uses contrast-enhanced and non-enhanced images as inputs, and to compare its performance with that of a model that uses only contrast-enhanced images. METHOD A total of 116 computed tomography (CT) scans of 116 patients with brain metastases were included in this study. The scans showed a total of 659 metastases, 428 of which were used for training and validation (mean size, 11.3 ± 9.9 mm) and 231 for testing (mean size, 9.0 ± 7.0 mm). Single-shot detector (SSD) models were constructed with a feature fusion module, and their results were compared per lesion at a confidence threshold of 50%. RESULTS The sensitivity was 88.7% for the model that used both contrast-enhanced and non-enhanced CT images (the CE + NECT model) and 87.6% for the model that used only contrast-enhanced CT images (the CECT model). The positive predictive value (PPV) was 44.0% for the CE + NECT model and 37.2% for the CECT model. The number of false positives per patient was 9.9 for the CE + NECT model and 13.6 for the CECT model. The CE + NECT model had a significantly higher PPV (t test, p < 0.001), significantly fewer false positives (t test, p < 0.001), and a tendency to be more sensitive (t test, p = 0.14). CONCLUSIONS The results indicate that the information on true contrast enhancement obtained by comparing the contrast-enhanced and non-enhanced images may prevent the detection of pseudolesions, suppress false positives, and improve the performance of deep-learning object detection models.
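The per-lesion sensitivity, PPV, and false positives per patient reported for the SSD models can be computed from matched detections kept above a confidence threshold. A minimal sketch with hypothetical detections (`Detection`, `lesion_metrics`, and all numbers below are illustrative, not the authors' code):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    confidence: float      # model score in [0, 1]
    is_true_lesion: bool   # matched to an annotated metastasis

def lesion_metrics(dets, n_lesions, n_patients, threshold=0.5):
    """Per-lesion sensitivity, PPV, and false positives per patient
    at a given confidence threshold."""
    kept = [d for d in dets if d.confidence >= threshold]
    tp = sum(d.is_true_lesion for d in kept)
    fp = len(kept) - tp
    sensitivity = tp / n_lesions
    ppv = tp / len(kept) if kept else 0.0
    return sensitivity, ppv, fp / n_patients

# Toy example: 5 detections over 4 annotated lesions in 2 patients.
dets = [Detection(0.9, True), Detection(0.8, True), Detection(0.7, False),
        Detection(0.6, True), Detection(0.3, False)]
sens, ppv, fp_per_pt = lesion_metrics(dets, n_lesions=4, n_patients=2)
print(sens, ppv, fp_per_pt)  # 0.75 0.75 0.5
```

The detection at confidence 0.3 falls below the 50% threshold and is discarded before counting.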
Affiliation(s)
- Hidemasa Takao
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shiori Amemiya
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shimpei Kato
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa 213-8507, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
22
Deep Learning-Based Segmentation of Various Brain Lesions for Radiosurgery. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11199180] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Semantic segmentation of medical images with deep learning models is developing rapidly. In this study, we benchmarked state-of-the-art deep learning segmentation algorithms on our clinical stereotactic radiosurgery dataset. The dataset consists of 1688 patients with various brain lesions (pituitary tumors, meningioma, schwannoma, brain metastases, arteriovenous malformation, and trigeminal neuralgia), and we divided it into a training set (1557 patients) and a test set (131 patients). This study demonstrates the strengths and weaknesses of deep-learning algorithms in a fairly practical scenario. We compared model performance with respect to sampling method, model architecture, and choice of loss function, identifying suitable settings for their application and shedding light on possible improvements. Evidence from this study led us to conclude that deep learning is promising for assisting the segmentation of brain lesions even when the training dataset is highly heterogeneous in lesion types and sizes.
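One of the settings this benchmark compares is the choice of loss function. A toy sketch (not the authors' implementation) of why a soft Dice loss is often preferred over voxel-averaged cross-entropy when lesions occupy a tiny fraction of the volume:

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - Dice between predicted
    probabilities and the binary target."""
    inter = (probs * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps))

def bce_loss(probs, target, eps=1e-7):
    """Voxel-averaged binary cross-entropy."""
    p = np.clip(probs, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

# A tiny lesion (2 foreground voxels in a 1000-voxel volume): missing it
# drives the Dice loss near 1, while voxel-averaged BCE barely moves
# because the background voxels dominate the average.
target = np.zeros(1000)
target[:2] = 1.0
miss = np.full(1000, 0.01)  # model predicts background everywhere
print(round(soft_dice_loss(miss, target), 3))
print(round(bce_loss(miss, target), 3))
```

This imbalance sensitivity is one reason Dice-based losses are common for small-lesion segmentation.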
23
Jünger ST, Hoyer UCI, Schaufler D, Laukamp KR, Goertz L, Thiele F, Grunz JP, Schlamann M, Perkuhn M, Kabbasch C, Persigehl T, Grau S, Borggrefe J, Scheffler M, Shahzad R, Pennig L. Fully Automated MR Detection and Segmentation of Brain Metastases in Non-small Cell Lung Cancer Using Deep Learning. J Magn Reson Imaging 2021; 54:1608-1622. [PMID: 34032344 DOI: 10.1002/jmri.27741] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 05/12/2021] [Accepted: 05/12/2021] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Non-small cell lung cancer (NSCLC) is the most common tumor entity spreading to the brain and up to 50% of patients develop brain metastases (BMs). Detection of BMs on MRI is challenging with an inherent risk of missed diagnosis. PURPOSE To train and evaluate a deep learning model (DLM) for fully automated detection and 3D segmentation of BMs in NSCLC on clinical routine MRI. STUDY TYPE Retrospective. POPULATION Ninety-eight NSCLC patients with 315 BMs on pretreatment MRI, divided into training (66 patients, 248 BMs) and independent test (17 patients, 67 BMs) and control (15 patients, 0 BMs) cohorts. FIELD STRENGTH/SEQUENCE T1-/T2-weighted, T1-weighted contrast-enhanced (T1CE; gradient-echo and spin-echo sequences), and FLAIR at 1.0, 1.5, and 3.0 T from various vendors and study centers. ASSESSMENT A 3D convolutional neural network (DeepMedic) was trained on the training cohort using 5-fold cross-validation and evaluated on the independent test and control sets. Three-dimensional voxel-wise manual segmentations of BMs by a neurosurgeon and a radiologist on T1CE served as the reference standard. STATISTICAL TESTS Sensitivity (recall) and false positive (FP) findings per scan, dice similarity coefficient (DSC) to compare the spatial overlap between manual and automated segmentations, Pearson's correlation coefficient (r) to evaluate the relationship between quantitative volumetric measurements of segmentations, and Wilcoxon rank-sum test to compare the volumes of BMs. A P value <0.05 was considered statistically significant. RESULTS In the test set, the DLM detected 57 of the 67 BMs (mean volume: 0.99 ± 4.24 cm3), resulting in a sensitivity of 85.1%, while FP findings of 1.5 per scan were observed. Missed BMs had a significantly smaller volume (0.05 ± 0.04 cm3) than detected BMs (0.96 ± 2.4 cm3). Compared with the reference standard, automated segmentations achieved a median DSC of 0.72 and a good volumetric correlation (r = 0.95). In the control set, 1.8 FPs/scan were observed. DATA CONCLUSION Deep learning provided a high detection sensitivity and good segmentation performance for BMs in NSCLC on heterogeneous scanner data while yielding a low number of FP findings. Level of Evidence 3 Technical Efficacy Stage 2.
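The volumetric agreement summarized above as Pearson's r can be illustrated on hypothetical paired lesion volumes (the numbers below are invented for illustration, not study data):

```python
import numpy as np

# Hypothetical paired lesion volumes (cm^3): manual reference vs. automated
# segmentation of the same lesions.
manual = np.array([0.12, 0.45, 1.30, 2.80, 5.10])
auto = np.array([0.10, 0.50, 1.10, 2.95, 4.80])

# Pearson's correlation coefficient between the two volume series.
r = float(np.corrcoef(manual, auto)[0, 1])
print(round(r, 3))
```

A value near 1 indicates the automated volumes scale linearly with the manual ones, though a high r alone does not rule out systematic over- or under-segmentation.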
Affiliation(s)
- Stephanie T Jünger
- Department of General Neurosurgery, Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Centre for Integrated Oncology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ulrike Cornelia Isabel Hoyer
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Diana Schaufler
- Department of Internal Medicine, Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf, Network Genomic Medicine, Lung Cancer Group Cologne, Faculty of Medicine and University Hospital of Cologne, University of Cologne, Cologne, Germany
- Kai Roman Laukamp
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Lukas Goertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Frank Thiele
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Philips GmbH Innovative Technologies, Aachen, Germany
- Jan-Peter Grunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Marc Schlamann
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Michael Perkuhn
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Philips GmbH Innovative Technologies, Aachen, Germany
- Christoph Kabbasch
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Stefan Grau
- Department of General Neurosurgery, Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Centre for Integrated Oncology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jan Borggrefe
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Matthias Scheffler
- Department of Internal Medicine, Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf, Network Genomic Medicine, Lung Cancer Group Cologne, Faculty of Medicine and University Hospital of Cologne, University of Cologne, Cologne, Germany
- Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Philips GmbH Innovative Technologies, Aachen, Germany
- Lenhard Pennig
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
24
Pennig L, Hoyer UCI, Krauskopf A, Shahzad R, Jünger ST, Thiele F, Laukamp KR, Grunz JP, Perkuhn M, Schlamann M, Kabbasch C, Borggrefe J, Goertz L. Deep learning assistance increases the detection sensitivity of radiologists for secondary intracranial aneurysms in subarachnoid hemorrhage. Neuroradiology 2021; 63:1985-1994. [PMID: 33837806 PMCID: PMC8589782 DOI: 10.1007/s00234-021-02697-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 03/21/2021] [Indexed: 12/03/2022]
Abstract
Purpose To evaluate whether a deep learning model (DLM) could increase the detection sensitivity of radiologists for intracranial aneurysms on CT angiography (CTA) in aneurysmal subarachnoid hemorrhage (aSAH). Methods Three different DLMs were trained on CTA datasets of 68 aSAH patients with 79 aneurysms with their outputs being combined applying ensemble learning (DLM-Ens). The DLM-Ens was evaluated on an independent test set of 104 aSAH patients with 126 aneurysms (mean volume 129.2 ± 185.4 mm3, 13.0% at the posterior circulation), which were determined by two radiologists and one neurosurgeon in consensus using CTA and digital subtraction angiography scans. CTA scans of the test set were then presented to three blinded radiologists (reader 1: 13, reader 2: 4, and reader 3: 3 years of experience in diagnostic neuroradiology), who assessed them individually for aneurysms. Detection sensitivities for aneurysms of the readers with and without the assistance of the DLM were compared. Results In the test set, the detection sensitivity of the DLM-Ens (85.7%) was comparable to the radiologists (reader 1: 91.2%, reader 2: 86.5%, and reader 3: 86.5%; Fleiss κ of 0.502). DLM-assistance significantly increased the detection sensitivity (reader 1: 97.6%, reader 2: 97.6%, and reader 3: 96.0%; overall P = .024; Fleiss κ of 0.878), especially for secondary aneurysms (88.2% of the additional aneurysms provided by the DLM). Conclusion Deep learning significantly improved the detection sensitivity of radiologists for aneurysms in aSAH, especially for secondary aneurysms. It therefore represents a valuable adjunct for physicians to establish an accurate diagnosis in order to optimize patient treatment.
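The study combines the outputs of three DLMs via ensemble learning. One common scheme is averaging the models' probability maps before thresholding; the sketch below assumes that scheme (the abstract does not specify which combination rule was used), with toy one-dimensional outputs:

```python
import numpy as np

def ensemble_mean(prob_maps, threshold=0.5):
    """Combine per-model probability maps by voxel-wise averaging,
    then threshold to a binary detection mask (one simple ensembling scheme)."""
    return np.mean(prob_maps, axis=0) >= threshold

# Toy outputs of three models for a 5-voxel strip; the middle voxels are
# flagged by two or three of the models, the edges by none.
m1 = np.array([0.1, 0.7, 0.9, 0.6, 0.2])
m2 = np.array([0.2, 0.4, 0.8, 0.7, 0.1])
m3 = np.array([0.0, 0.8, 0.9, 0.3, 0.1])
print(ensemble_mean([m1, m2, m3]).astype(int))  # [0 1 1 1 0]
```

Averaging tends to suppress false positives that only a single model produces while retaining findings supported by the majority, which matches the motivation for ensembling weaker individual detectors.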
Affiliation(s)
- Lenhard Pennig
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Ulrike Cornelia Isabel Hoyer
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Alexandra Krauskopf
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Düsseldorf, Düsseldorf, Germany
- Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Innovative Technologies, Philips Healthcare, Aachen, Germany
- Stephanie T Jünger
- Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany
- Frank Thiele
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Innovative Technologies, Philips Healthcare, Aachen, Germany
- Kai Roman Laukamp
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Jan-Peter Grunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Michael Perkuhn
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Innovative Technologies, Philips Healthcare, Aachen, Germany
- Marc Schlamann
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Christoph Kabbasch
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Jan Borggrefe
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Lukas Goertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany