1
Yu W, Liu Y, Zhao Y, Huang H, Liu J, Yao X, Li J, Xie Z, Jiang L, Wu H, Cao X, Zhou J, Guo Y, Li G, Ren MX, Quan Y, Mu T, Izquierdo GA, Zhang G, Zhao R, Zhao D, Yan J, Zhang H, Lv J, Yao Q, Duan Y, Zhou H, Liu T, He Y, Bian T, Dai W, Huai J, Wang X, He Q, Gao Y, Ren W, Niu G, Zhao G. Deep Learning-Based Classification of Cancer Cell in Leptomeningeal Metastasis on Cytomorphologic Features of Cerebrospinal Fluid. Front Oncol 2022; 12:821594. [PMID: 35273914 PMCID: PMC8904144 DOI: 10.3389/fonc.2022.821594] [Received: 11/24/2021] [Accepted: 01/18/2022]
Abstract
Background: Diagnosing leptomeningeal metastasis (LM) is a critical challenge, given its technical difficulty and the lack of typical symptoms. The current gold standard for diagnosing LM is positive cerebrospinal fluid (CSF) cytology, which requires time-consuming classification of cells under a microscope. Objective: This study aims to establish a deep learning model to classify cancer cells in CSF, helping doctors reach an accurate and fast diagnosis of LM at an early stage. Method: The cerebrospinal fluid laboratory of Xijing Hospital provided 53,255 cells from 90 LM patients for this research. We used two deep convolutional neural network (CNN) models to classify cells in the CSF: a five-way cell classification model (CNN1) covering lymphocytes, monocytes, neutrophils, erythrocytes, and cancer cells, and a four-way cancer cell classification model (CNN2) covering lung, gastric, breast, and pancreatic cancer cells. Both CNN models were built on Inception-ResNet-V2. We evaluated the performance of the proposed models on two external datasets and compared them, in human-machine tests, with the results from 42 doctors of various levels of experience. Furthermore, we developed computer-aided diagnosis (CAD) software that rapidly generates cytology diagnosis reports. Results: On the validation set, the mean average precision (mAP) of CNN1 is over 95% and that of CNN2 is close to 80%; the proposed deep learning models thus effectively classify cells in CSF to facilitate the screening of cancer cells. In the human-machine tests, the accuracy of CNN1 is similar to that of experts and higher than that of doctors at other levels. Moreover, the overall accuracy of CNN2 is 10% higher than that of experts, with a time consumption of only one-third of that of an expert. Using the CAD software saves 90% of cytologists' working time.
Conclusion: A deep learning method has been developed to assist LM diagnosis effectively, with high accuracy and low time consumption. Thanks to labeled data and step-by-step training, the proposed method successfully classifies cancer cells in the CSF to assist early LM diagnosis. In addition, this research can uniquely predict the primary source of LM cancer cells, relying on cytomorphologic features alone without immunohistochemistry. Our results show that deep learning can be widely applied to medical images to classify cerebrospinal fluid cells. For the complex cancer classification task, the accuracy of the proposed method is significantly higher than that of specialist doctors, and its performance exceeds that of junior doctors and interns. The application of CNNs and CAD software may ultimately expedite diagnosis and offset the shortage of experienced cytologists, thereby facilitating earlier treatment and improving the prognosis of LM.
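The two-stage design described in this abstract (CNN1 screens each cell into one of five types; only cells flagged as cancer are passed to CNN2 for a four-way primary-source prediction) can be sketched as simple routing logic. This is a minimal illustration with stand-in scorers: the class lists come from the abstract, while `classify_cells`, `fake_cnn1`, and `fake_cnn2` are hypothetical names, not the authors' code.

```python
import numpy as np

# Class lists taken from the abstract; everything else here is a stand-in.
CNN1_CLASSES = ["lymphocyte", "monocyte", "neutrophil", "erythrocyte", "cancer"]
CNN2_CLASSES = ["lung", "gastric", "breast", "pancreatic"]

def classify_cells(cells, cnn1, cnn2):
    """Return a (cell_type, subtype_or_None) pair for each cell image."""
    results = []
    for cell in cells:
        # Stage 1: five-way screening.
        cell_type = CNN1_CLASSES[int(np.argmax(cnn1(cell)))]
        subtype = None
        # Stage 2: subtype only the cells flagged as cancer.
        if cell_type == "cancer":
            subtype = CNN2_CLASSES[int(np.argmax(cnn2(cell)))]
        results.append((cell_type, subtype))
    return results

# Toy models returning fixed softmax-like scores, purely for illustration.
fake_cnn1 = lambda c: [0.05, 0.05, 0.05, 0.05, 0.80] if c["suspicious"] else [0.70, 0.10, 0.10, 0.05, 0.05]
fake_cnn2 = lambda c: [0.60, 0.20, 0.10, 0.10]

cells = [{"suspicious": False}, {"suspicious": True}]
print(classify_cells(cells, fake_cnn1, fake_cnn2))
# [('lymphocyte', None), ('cancer', 'lung')]
```

The cascade mirrors the reported evaluation: CNN1 alone suffices for screening, and CNN2 is only consulted for the (rarer) cancer-positive cells.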
Affiliation(s)
- Wenjin Yu
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- The College of Life Sciences and Medicine, Northwest University, Xi'an, China
- Yangyang Liu
- Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education & International Center for Dielectric Research, School of Electronic Science and Engineering & The International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, China
- Yunsong Zhao
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Haofan Huang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Jiahao Liu
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Xiaofeng Yao
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Jingwen Li
- The College of Medicine, Xiamen University, Xiamen, China
- Zhen Xie
- The College of Life Sciences and Medicine, Northwest University, Xi'an, China
- Luyue Jiang
- Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education & International Center for Dielectric Research, School of Electronic Science and Engineering & The International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, China
- Heping Wu
- Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education & International Center for Dielectric Research, School of Electronic Science and Engineering & The International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, China
- Xinhao Cao
- Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education & International Center for Dielectric Research, School of Electronic Science and Engineering & The International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, China
- Jiaming Zhou
- Ophthalmology, Department of Clinical Science, Lund University, Lund, Sweden
- Yuting Guo
- Institute of Fluid Science, Tohoku University, Sendai, Japan
- Gaoyang Li
- Institute of Fluid Science, Tohoku University, Sendai, Japan
- Matthew Xinhu Ren
- Biology Program, Faculty of Science, The University of British Columbia, Vancouver, BC, Canada
- Yi Quan
- School of Microelectronics, Xidian University, Xi'an, China
- Tingmin Mu
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Guoxun Zhang
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Multiple Sclerosis Unit, Neurology Service, Vithas Nisa Hospital, Seville, Spain
- Runze Zhao
- Department of Ophthalmology, Eye Institute of PLA, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Di Zhao
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Jiangyun Yan
- Department of Neurology, Xiji County People's Hospital, Ningxia, China
- Haijun Zhang
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Junchao Lv
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Qian Yao
- The College of Life Sciences and Medicine, Northwest University, Xi'an, China
- Yan Duan
- The College of Life Sciences and Medicine, Northwest University, Xi'an, China
- Huimin Zhou
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Tingting Liu
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Ying He
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Ting Bian
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Wen Dai
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- Jiahui Huai
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Xiyuan Wang
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Qian He
- Department of Neurology, Yan'an University Medical College No. 3 Affiliated Hospital, Xianyang, China
- Yi Gao
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen, Guangzhou, China
- Marshall Laboratory of Biomedical Engineering, Shenzhen, China
- Peng Cheng Laboratory, Shenzhen, China
- Wei Ren
- Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education & International Center for Dielectric Research, School of Electronic Science and Engineering & The International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, China
- Gang Niu
- Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education & International Center for Dielectric Research, School of Electronic Science and Engineering & The International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, China
- Gang Zhao
- Department of Neurology, Xijing Hospital, the Fourth Military Medical University, Xi'an, China
- The College of Life Sciences and Medicine, Northwest University, Xi'an, China
3
Lin WW, Juang C, Yueh MH, Huang TM, Li T, Wang S, Yau ST. 3D brain tumor segmentation using a two-stage optimal mass transport algorithm. Sci Rep 2021; 11:14686. [PMID: 34376714 PMCID: PMC8355223 DOI: 10.1038/s41598-021-94071-1] [Received: 02/08/2021] [Accepted: 06/30/2021]
Abstract
Optimal mass transport (OMT) theory, the goal of which is to move any irregular 3D object (i.e., the brain) without causing significant distortion, is used to preprocess brain tumor datasets for the first time in this paper. The first stage of a two-stage OMT (TSOMT) procedure transforms the brain into a unit solid ball. The second stage transforms the unit ball into a cube, as it is easier to apply a 3D convolutional neural network to rectangular coordinates. Small variations in the local mass-measure stretch ratio among all the brain tumor datasets confirm the robustness of the transform. Additionally, the distortion is kept at a minimum with a reasonable transport cost. The original [Formula: see text] dataset is thus reduced to a cube of [Formula: see text], which is a 76.6% reduction in the total number of voxels, without losing much detail. Three typical U-Nets are trained separately to predict the whole tumor (WT), tumor core (TC), and enhanced tumor (ET) from the cube. An impressive training accuracy of 0.9822 in the WT cube is achieved at 400 epochs. An inverse TSOMT method is applied to the predicted cube to obtain the brain results. The conversion loss from the TSOMT method to the inverse TSOMT method is found to be less than one percent. For training, good Dice scores (0.9781 for the WT, 0.9637 for the TC, and 0.9305 for the ET) can be obtained. Significant improvements in brain tumor detection and the segmentation accuracy are achieved. For testing, postprocessing (rotation) is added to the TSOMT, U-Net prediction, and inverse TSOMT methods for an accuracy improvement of one to two percent. It takes 200 seconds to complete the whole segmentation process on each new brain tumor dataset.
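The Dice scores reported above (0.9781 WT, 0.9637 TC, 0.9305 ET) measure volumetric overlap between predicted and ground-truth segmentation masks. A minimal reference implementation of the metric on binary masks (not the authors' code; the small epsilon guarding against empty masks is a common convention, not taken from the paper):

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    # eps avoids division by zero when both masks are empty.
    return (2.0 * inter) / (pred.sum() + truth.sum() + eps)

pred  = np.array([[1, 1, 0, 0]])
truth = np.array([[1, 0, 0, 0]])
print(round(dice(pred, truth), 4))  # 0.6667
```

The same formula applies unchanged to full 3D volumes, since the sums run over all voxels regardless of array shape.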
Affiliation(s)
- Wen-Wei Lin
- Department of Applied Mathematics, National Yang Ming Chiao Tung University, Hsinchu, 300, Taiwan
- Cheng Juang
- Electronics Department, Ming Hsin University of Science and Technology, Hsinchu, 304, Taiwan
- Mei-Heng Yueh
- Department of Mathematics, National Taiwan Normal University, Taipei, 116, Taiwan
- Tsung-Ming Huang
- Department of Mathematics, National Taiwan Normal University, Taipei, 116, Taiwan
- Tiexiang Li
- School of Mathematics, Southeast University, Nanjing, 211189, People's Republic of China
- Nanjing Center for Applied Mathematics, Nanjing, 211135, People's Republic of China
- Sheng Wang
- Department of Applied Mathematics, National Yang Ming Chiao Tung University, Hsinchu, 300, Taiwan
- Shing-Tung Yau
- Department of Mathematics, Harvard University, Cambridge, USA
5
Tampu IE, Haj-Hosseini N, Eklund A. Does Anatomical Contextual Information Improve 3D U-Net-Based Brain Tumor Segmentation? Diagnostics (Basel) 2021; 11:1159. [PMID: 34201964 PMCID: PMC8306843 DOI: 10.3390/diagnostics11071159] [Received: 05/24/2021] [Revised: 06/22/2021] [Accepted: 06/23/2021]
Abstract
Effective, robust, and automatic tools for brain tumor segmentation are needed to extract information useful for treatment planning. Recently, convolutional neural networks have shown remarkable performance in identifying tumor regions in magnetic resonance (MR) images. Context-aware artificial intelligence is an emerging concept in deep learning for computer-aided medical image analysis, and a large portion of current research develops new network architectures that improve segmentation accuracy through context-aware mechanisms. This work investigates whether adding contextual information about the brain anatomy, in the form of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks and probability maps, improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used to train and test two standard 3D U-Net (nnU-Net) models that, in addition to the conventional MR image modalities, used the anatomical contextual information as extra channels in the form of binary masks (CIM) or probability maps (CIP). For comparison, a baseline model (BLM) using only the conventional MR image modalities was also trained. The impact of adding contextual information was assessed in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available per subject. Median (mean) Dice scores of 90.2 (81.9), 90.2 (81.9), and 90.0 (82.1) were obtained on the official BraTS2020 validation dataset (125 subjects) for BLM, CIM, and CIP, respectively. There is no statistically significant difference in Dice scores between the baseline model and the contextual information models (p > 0.05), even when high- and low-grade tumors are compared independently. In the few low-grade cases where improvement was seen, the number of false positives was reduced. Moreover, no improvements were found in model training time or domain generalization. Only when compensating for fewer available MR modalities per subject did the addition of anatomical contextual information significantly improve (p < 0.05) segmentation of the whole tumor. In conclusion, there is no overall significant improvement in segmentation performance from using anatomical contextual information as extra channels, whether as binary WM, GM, and CSF masks or as probability maps.
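The CIM/CIP setup described in this abstract amounts to widening the network input from four MR channels to seven by stacking the anatomical maps as extra channels, leaving the architecture itself unchanged. A toy sketch of that input construction (array shapes only; `D` and the random volumes are placeholders, not BraTS data):

```python
import numpy as np

D = (8, 8, 8)                        # toy volume size; BraTS volumes are 240x240x155
modalities = np.random.rand(4, *D)   # four conventional MR modalities as channels
anatomy    = np.random.rand(3, *D)   # WM, GM, CSF probability maps (the CIP variant)

# Baseline model (BLM) sees only the MR modalities; the contextual models
# simply receive a wider input tensor with the anatomy stacked on.
baseline_input   = modalities                                      # 4 channels
contextual_input = np.concatenate([modalities, anatomy], axis=0)   # 7 channels
print(baseline_input.shape[0], contextual_input.shape[0])  # 4 7
```

For the CIM variant, the anatomy channels would be thresholded to binary masks before concatenation; the shape arithmetic is identical.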
Affiliation(s)
- Iulian Emil Tampu
- Department of Biomedical Engineering, Linköping University, 581 83 Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, 581 83 Linköping, Sweden
- Neda Haj-Hosseini
- Department of Biomedical Engineering, Linköping University, 581 83 Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, 581 83 Linköping, Sweden
- Anders Eklund
- Department of Biomedical Engineering, Linköping University, 581 83 Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, 581 83 Linköping, Sweden
- Department of Computer and Information Science, Linköping University, 581 83 Linköping, Sweden
6
Khan MA, Ashraf I, Alhaisoni M, Damaševičius R, Scherer R, Rehman A, Bukhari SAC. Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists. Diagnostics (Basel) 2020; 10:565. [PMID: 32781795 PMCID: PMC7459797 DOI: 10.3390/diagnostics10080565] [Received: 06/18/2020] [Revised: 08/01/2020] [Accepted: 08/04/2020]
Abstract
Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively straightforward, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. First, linear contrast stretching is performed using edge-based histogram equalization and the discrete cosine transform (DCT). Second, deep learning features are extracted: using transfer learning, two pre-trained convolutional neural network (CNN) models, VGG16 and VGG19, serve as feature extractors. Third, a correntropy-based joint learning approach is combined with an extreme learning machine (ELM) to select the best features. Fourth, the partial least squares (PLS)-based robust covariant features are fused into one matrix. Finally, the combined matrix is fed to the ELM for classification. The proposed method was validated on the BraTS datasets, achieving accuracies of 97.8%, 96.9%, and 92.5% on BraTS2015, BraTS2017, and BraTS2018, respectively.
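The fusion step in this abstract, where features from two backbones are combined into one matrix before the final classifier, can be illustrated with stand-ins. Here `extract_features` is a hypothetical placeholder for a pretrained CNN's penultimate-layer activations, random vectors replace real activations, and the paper's correntropy-based selection and ELM stages are omitted; only the concatenation itself is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, dim):
    # Placeholder for a pretrained backbone; real code would run a forward
    # pass through VGG16/VGG19 and collect penultimate-layer activations.
    return rng.standard_normal((len(images), dim))

images = [object()] * 10              # 10 dummy "images"
f16 = extract_features(images, 4096)  # VGG16-style feature vectors
f19 = extract_features(images, 4096)  # VGG19-style feature vectors

# Fuse the two feature sets into a single matrix, one row per image,
# ready to hand to a downstream classifier.
fused = np.concatenate([f16, f19], axis=1)
print(fused.shape)  # (10, 8192)
```

The row-wise layout matters: each image's two feature vectors end up side by side in one row, so the classifier sees both backbones' views of the same image at once.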
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Museum Road, Taxila 47080, Pakistan
- Imran Ashraf
- Department of Computer Engineering, HITEC University, Museum Road, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 81451, Saudi Arabia
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
- Rafal Scherer
- Department of Intelligent Computer Systems, Czestochowa University of Technology, 42-200 Czestochowa, Poland
- Amjad Rehman
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Syed Ahmad Chan Bukhari
- Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John’s University, New York, NY 11439, USA