1
Yun S, Park JE, Kim N, Park SY, Kim HS. Reducing false positives in deep learning-based brain metastasis detection by using both gradient-echo and spin-echo contrast-enhanced MRI: validation in a multi-center diagnostic cohort. Eur Radiol 2024; 34:2873-2884. [PMID: 37891415] [DOI: 10.1007/s00330-023-10318-7]
Abstract
OBJECTIVES: To develop a deep learning (DL) model for detection of brain metastasis (BM) that incorporates both gradient-echo and turbo spin-echo contrast-enhanced MRI (dual-enhanced DL), and to evaluate it in a clinical cohort against human readers and a DL model using gradient-echo-based imaging only (GRE DL).
MATERIALS AND METHODS: The DL detector was developed using data from 200 patients with BM (training set) and tested in 62 (internal) and 48 (external) consecutive patients who underwent stereotactic radiosurgery, with diagnostic dual-enhanced imaging (dual-enhanced DL) and subsequent treatment-guiding GRE imaging (GRE DL). Detection sensitivity and positive predictive value (PPV) were compared between the two DL models. Two neuroradiologists independently analyzed BM, and reference-standard BM annotations were drawn separately by another neuroradiologist. Relative differences (RDs) from the reference-standard BM counts were compared between the DL models and the neuroradiologists.
RESULTS: Sensitivity was similar between GRE DL (93%, 95% confidence interval [CI]: 90-96%) and dual-enhanced DL (92% [89-94%]). The PPV of dual-enhanced DL was higher (89% [86-92%]) than that of GRE DL (76% [72-80%], p < .001). GRE DL significantly overestimated the number of metastases (false positives; RD: 0.05, 95% CI: 0.00 to 0.58) compared with the neuroradiologists (RD: 0.00, 95% CI: -0.28 to 0.15; p < .001), whereas dual-enhanced DL (RD: 0.00, 95% CI: 0.00 to 0.15) did not differ significantly from the neuroradiologists (RD: 0.00, 95% CI: -0.20 to 0.10; p = .913).
CONCLUSION: Dual-enhanced DL showed improved detection of BM and reduced overestimation compared with GRE DL, achieving performance similar to that of neuroradiologists.
CLINICAL RELEVANCE STATEMENT: Deep learning-based brain metastasis detection that also uses turbo spin-echo imaging reduces false-positive detections relative to gradient-echo imaging alone, aiding the guidance of stereotactic radiosurgery.
KEY POINTS
• Deep learning for brain metastasis detection improved by using both gradient-echo and turbo spin-echo contrast-enhanced MRI (dual-enhanced deep learning).
• Dual-enhanced deep learning increased true-positive detections and reduced overestimation.
• Dual-enhanced deep learning achieved performance similar to neuroradiologists for brain metastasis counts.
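The sensitivity and PPV figures above reduce to simple ratios over per-lesion match counts. A minimal sketch (not the paper's code; the counts below are hypothetical stand-ins, not the study's data):

```python
# Illustrative per-lesion detection metrics: each predicted lesion is either
# matched to a reference lesion (TP) or unmatched (FP); unmatched reference
# lesions are misses (FN). Counts here are hypothetical.

def detection_metrics(tp, fp, fn):
    """Sensitivity (recall) and positive predictive value from lesion counts."""
    sensitivity = tp / (tp + fn)   # fraction of reference lesions found
    ppv = tp / (tp + fp)           # fraction of detections that are real
    return sensitivity, ppv

# Hypothetical example: 92 matched detections, 11 false alarms, 8 misses
sens, ppv = detection_metrics(tp=92, fp=11, fn=8)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")  # sensitivity=0.92, PPV=0.89
```

A false-positive-prone detector lowers PPV while leaving sensitivity untouched, which is exactly the gap the abstract reports between GRE DL and dual-enhanced DL.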
Affiliation(s)
- Suyoung Yun
- Department of Radiology, Busan Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
- Seo Young Park
- Department of Statistics and Data Science, Korea National Open University, Seoul, Republic of Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
2
Hong JS, You WC, Sun MH, Pan HC, Lin YH, Lu YF, Chen KM, Huang TH, Lee WK, Wu YT. Deep Learning Detection and Segmentation of Brain Arteriovenous Malformation on Magnetic Resonance Angiography. J Magn Reson Imaging 2024; 59:587-598. [PMID: 37220191] [DOI: 10.1002/jmri.28795]
Abstract
BACKGROUND: The delineation of brain arteriovenous malformations (bAVMs) is crucial for treatment planning, but manual segmentation is time-consuming and labor-intensive. Applying deep learning to automatically detect and segment bAVMs may improve clinical efficiency.
PURPOSE: To develop an approach for detecting bAVMs and segmenting the nidus on time-of-flight magnetic resonance angiography using deep learning.
STUDY TYPE: Retrospective.
SUBJECTS: 221 bAVM patients aged 7-79 years who underwent radiosurgery from 2003 to 2020, split into 177 training, 22 validation, and 22 test cases.
FIELD STRENGTH/SEQUENCE: 1.5 T; time-of-flight magnetic resonance angiography based on 3D gradient echo.
ASSESSMENT: The YOLOv5 and YOLOv8 algorithms were used to detect bAVM lesions, and U-Net and U-Net++ models to segment the nidus from the detected bounding boxes. Mean average precision, F1, precision, and recall assessed detection performance; the Dice coefficient and balanced average Hausdorff distance (rbAHD) assessed nidus segmentation.
STATISTICAL TESTS: Student's t-test was used on the cross-validation results (P < 0.05), and the Wilcoxon rank test to compare medians between the reference values and model inference results (P < 0.05).
RESULTS: Detection was best with pretraining and augmentation. U-Net++ with a random dilation mechanism yielded higher Dice and lower rbAHD than without it, across varying dilated-bounding-box conditions (P < 0.05). When detection and segmentation were combined, Dice and rbAHD differed statistically from the references computed with the detected bounding boxes (P < 0.05). For detected lesions in the test dataset, the best Dice was 0.82 and the lowest rbAHD 5.3%.
DATA CONCLUSION: Pretraining and data augmentation improved YOLO detection performance, and properly limiting the lesion range allowed adequate bAVM segmentation.
LEVEL OF EVIDENCE: 4.
TECHNICAL EFFICACY: Stage 1.
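The Dice coefficient used here to score nidus segmentation has a one-line definition. A minimal numpy sketch (not the study's code; the masks below are toy arrays, not bAVM data):

```python
# Dice coefficient between two binary segmentation masks:
# Dice = 2|A∩B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical).
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy masks: two 4x4 squares offset by one row -> 12 of 16 voxels overlap
ref = np.zeros((8, 8), dtype=bool);  ref[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True
print(round(dice(pred, ref), 2))  # 0.75
```

Because Dice is area-normalized, cropping both masks to a detected bounding box (as in the detection-then-segmentation pipeline above) changes the denominator only through the voxels actually inside the box.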
Affiliation(s)
- Jia-Sheng Hong
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Weir-Chiang You
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Ming-Hsi Sun
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Hung-Chuan Pan
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Yi-Hui Lin
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Yung-Fa Lu
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Kuan-Ming Chen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Tzu-Hsuan Huang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Wei-Kai Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
3
Cheng Z, Wang L. Dynamic hierarchical multi-scale fusion network with axial MLP for medical image segmentation. Sci Rep 2023; 13:6342. [PMID: 37072483] [PMCID: PMC10113245] [DOI: 10.1038/s41598-023-32813-z]
Abstract
Medical image segmentation provides effective methods for accurate and robust organ segmentation, lesion detection, and classification. Medical images have fixed structures, simple semantics, and diverse details, so fusing rich multi-scale features can improve segmentation accuracy. Because the density of diseased tissue may be comparable to that of surrounding normal tissue, both global and local information are critical to segmentation quality. Considering the importance of multi-scale, global, and local information, this paper proposes the dynamic hierarchical multi-scale fusion network with axial MLP (multilayer perceptron), DHMF-MLP, which integrates the proposed hierarchical multi-scale fusion (HMSF) module. HMSF reduces the loss of detail by integrating the features of each encoder stage and provides different receptive fields, improving segmentation of small lesions and multi-lesion regions. Within HMSF, an adaptive attention mechanism (ASAM) is proposed to adaptively resolve the semantic conflicts arising during fusion, and Axial-MLP is introduced to improve the global modeling capability of the network. Extensive experiments on public datasets confirm the strong performance of DHMF-MLP: on the BUSI, ISIC 2018, and GlaS datasets, IoU reaches 70.65%, 83.46%, and 87.04%, respectively.
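The IoU figures reported for BUSI, ISIC 2018, and GlaS follow the standard intersection-over-union definition. A minimal numpy sketch (an illustration only; the masks below are toy arrays, not data from those benchmarks):

```python
# IoU (Jaccard index) between two binary masks: |A∩B| / |A∪B|.
import numpy as np

def iou(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    return 1.0 if union == 0 else np.logical_and(pred, ref).sum() / union

# Toy masks: two 4x4 squares offset by one row -> intersection 12, union 20
ref = np.zeros((8, 8), dtype=bool);  ref[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True
print(round(iou(pred, ref), 2))  # 0.6
```

IoU is always less than or equal to Dice for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is worth remembering when comparing papers that report different overlap metrics.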
Affiliation(s)
- Zhikun Cheng
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Liejun Wang
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
4
Yacob YM, Alquran H, Mustafa WA, Alsalatie M, Sakim HAM, Lola MS. H. pylori Related Atrophic Gastritis Detection Using Enhanced Convolution Neural Network (CNN) Learner. Diagnostics (Basel) 2023; 13:336. [PMID: 36766441] [PMCID: PMC9914156] [DOI: 10.3390/diagnostics13030336]
Abstract
Atrophic gastritis (AG) is commonly caused by infection with the Helicobacter pylori (H. pylori) bacterium. If untreated, AG may develop into a chronic condition leading to gastric cancer, the third leading cause of cancer-related deaths worldwide, so early detection of AG is crucial. This work focuses on H. pylori-associated infection of the gastric antrum, with binary classification of normal versus atrophic gastritis. Prior work used a pre-trained 22-layer GoogLeNet deep convolutional neural network (DCNN); another study combined GoogLeNet's Inception module with fast and robust fuzzy C-means (FRFCM) and simple linear iterative clustering (SLIC) superpixel algorithms to identify gastric disease; GoogLeNet with the Caffe framework and ResNet-50 have also been used to detect H. pylori infection. Nonetheless, accuracy may degrade as network depth increases, so an upgrade to current methods is needed to avoid untreated or inaccurate diagnoses that may lead to chronic AG. The proposed work builds on a pooled pre-trained DCNN and uses channel shuffle to move information across feature channels, easing the training of deeper networks. In addition, Canonical Correlation Analysis (CCA) feature fusion and ReliefF feature selection refine the combined representation: CCA models the relationship between the two sets of significant features generated by the pre-trained ShuffleNet, and ReliefF reduces and selects essential features from the CCA output, which are classified using a Generalized Additive Model (GAM). The extended approach achieves a testing accuracy of 98.2%, providing an accurate diagnosis of normal versus atrophic gastritis.
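CCA-style feature fusion projects two feature sets onto directions that are maximally correlated and concatenates the projections. A hedged numpy sketch of this idea (an assumption about the general technique, not the paper's implementation; the random matrices below stand in for ShuffleNet feature sets):

```python
# Classical CCA via whitening + SVD: whiten each view, take the SVD of the
# whitened cross-covariance, and project each view onto the top-k canonical
# directions. The fused representation concatenates both projections.
import numpy as np

def cca_fuse(X, Y, k=2, eps=1e-8):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + eps * np.eye(X.shape[1])   # view-1 covariance (regularized)
    Syy = Y.T @ Y / n + eps * np.eye(Y.shape[1])   # view-2 covariance (regularized)
    Sxy = X.T @ Y / n                              # cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))    # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy.T)      # singular values = canonical corrs
    A = Wx.T @ U[:, :k]                            # canonical directions, view 1
    B = Wy.T @ Vt.T[:, :k]                         # canonical directions, view 2
    return np.hstack([X @ A, Y @ B])               # fused features, shape (n, 2k)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))   # stand-in for one pre-trained feature set
Y = rng.normal(size=(50, 4))   # stand-in for the other
fused = cca_fuse(X, Y, k=2)
print(fused.shape)  # (50, 4)
```

The fused matrix would then feed a downstream selector and classifier (ReliefF and a GAM in the abstract's pipeline).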
Affiliation(s)
- Yasmin Mohd Yacob
- Faculty of Electronic Engineering & Technology, Pauh Putra Campus, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
- Centre of Excellence for Advanced Computing, Pauh Putra Campus, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
- Hiam Alquran
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Department of Biomedical Engineering, Jordan University of Science and Technology, Irbid 22110, Jordan
- Wan Azani Mustafa
- Centre of Excellence for Advanced Computing, Pauh Putra Campus, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
- Faculty of Electrical Engineering & Technology, Pauh Putra Campus, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
- Mohammed Alsalatie
- King Hussein Medical Center, Royal Jordanian Medical Service, The Institute of Biomedical Technology, Amman 11855, Jordan
- Harsa Amylia Mat Sakim
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 11800, Penang, Malaysia
- Muhamad Safiih Lola
- Faculty of Ocean Engineering Technology and Informatics, Universiti Malaysia Terengganu, Kuala Terengganu 21030, Terengganu, Malaysia
5
Abstract
In magnetic resonance imaging (MRI) segmentation, conventional approaches use U-Net models with encoder–decoder structures, segmentation models based on vision transformers, or combinations of a vision transformer with an encoder–decoder model. However, conventional models are large and slow to compute, and in vision transformer models the computation grows sharply with image size. To overcome these problems, this paper proposes a model that combines Swin transformer blocks with a lightweight U-Net-type model whose encoder–decoder structure is built from HarDNet blocks. To preserve the hierarchical-transformer and shifted-windows features of the Swin transformer, the Swin transformer is used in the first skip-connection layer of the encoder rather than in the encoder–decoder bottleneck. The proposed model, called STHarDNet, was evaluated by splitting the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset, which comprises 229 T1-weighted MRI images, into training and validation sets. It achieved Dice, IoU, precision, and recall values of 0.5547, 0.4185, 0.6764, and 0.5286, respectively, outperforming the state-of-the-art models U-Net, SegNet, PSPNet, FCHarDNet, TransHarDNet, Swin Transformer, Swin UNet, X-Net, and D-UNet. Thus, STHarDNet improves the accuracy and speed of MRI-based stroke diagnosis.
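The shifted-windows mechanism the abstract refers to starts from a simple operation: partitioning a feature map into non-overlapping windows, and rolling the map before partitioning to obtain the shifted configuration. A toy numpy illustration of that partitioning step (an assumption about the general Swin mechanism, not the STHarDNet code):

```python
# Partition an (H, W, C) feature map into non-overlapping w x w windows,
# as in Swin-style window attention. Rolling the map by w//2 before
# partitioning yields the "shifted window" configuration, so successive
# layers exchange information across window boundaries.
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) map into (num_windows, w*w, C) patches."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, w * w, C)

x = np.arange(8 * 8 * 1).reshape(8, 8, 1)
windows = window_partition(x, w=4)                                  # regular windows
shifted = window_partition(np.roll(x, shift=(-2, -2), axis=(0, 1)), w=4)  # shifted
print(windows.shape)  # (4, 16, 1)
```

Attention is then computed independently within each window, which is what keeps the cost linear in image size rather than quadratic, the property the abstract contrasts with plain vision transformers.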