1
Metin SZ, Uyulan Ç, Farhad S, Ergüzel TT, Türk Ö, Metin B, Çerezci Ö, Tarhan N. Deep Learning-Based Artificial Intelligence Can Differentiate Treatment-Resistant and Responsive Depression Cases with High Accuracy. Clin EEG Neurosci 2024:15500594241273181. [PMID: 39251228; DOI: 10.1177/15500594241273181]
Abstract
Background: Although there are many treatment options available for depression, a large proportion of patients with depression are diagnosed with treatment-resistant depression (TRD), which is characterized by an inadequate response to antidepressant treatment. Identifying the TRD population is crucial for saving time and resources in depression treatment. Recently, several studies have applied various methods to EEG datasets for automatic depression detection or treatment outcome prediction. However, no previous study has used a deep learning (DL) approach on EEG signals to detect treatment resistance. Method: 77 patients with TRD, 43 patients with non-TRD, and 40 healthy controls were compared by applying the GoogleNet convolutional neural network to EEG data. Additionally, Class Activation Maps (CAMs) acquired from the TRD and non-TRD groups were used to identify the regions that drive classification. Results: GoogleNet classified the healthy controls and the non-TRD group with 88.43% accuracy, the healthy controls and TRD subjects with 89.73% accuracy, and the TRD and non-TRD groups with 90.05% accuracy. The external validation accuracy for the TRD versus non-TRD classification was 73.33%. Finally, the CAM analysis revealed that, across almost all electrodes, the TRD group carried the features that dominated the network's class decisions. Limitations: Our study is limited by the moderate sample size of the clinical groups and its retrospective design. Conclusion: These findings suggest that EEG-based deep learning can classify treatment resistance in depression and may in the future prove a useful tool in psychiatric practice for identifying patients who need more vigorous intervention.
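The class activation mapping step mentioned in the Method can be sketched as follows: a CAM for a given class is the weighted sum of the network's final convolutional feature maps, using that class's classification-layer weights. The 2x2 feature maps and weights below are made-up toy values, not outputs of the paper's trained GoogleNet:

```python
# Minimal sketch of a Class Activation Map (CAM): the map for one class is a
# weighted sum of the final convolutional feature maps, weighted by that
# class's weights in the classification layer. Toy values for illustration.

def class_activation_map(feature_maps, class_weights):
    """feature_maps: list of K HxW grids; class_weights: K weights for one class."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, weight in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return cam

# Two toy 2x2 feature maps; weights 0.5 and 2.0 for the class of interest.
maps = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 1.0], [1.0, 0.0]]]
cam = class_activation_map(maps, [0.5, 2.0])
print(cam)  # [[0.5, 2.0], [2.0, 0.5]]
```

High values in the resulting grid mark the input regions (here, electrode/time positions) that contributed most to the class decision.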
Affiliation(s)
- Çağlar Uyulan: Department of Mechanical Engineering, Katip Çelebi University, İzmir, Turkey
- Shams Farhad: Department of Neuroscience, Uskudar University, Istanbul, Turkey
- Türker Tekin Ergüzel: Department of Software Engineering, Faculty of Engineering and Natural Sciences, Uskudar University, Istanbul, Turkey
- Ömer Türk: Department of Computer Technologies, Artuklu University, Mardin, Turkey
- Barış Metin: Neurology Department, Medical Faculty, Uskudar University, Istanbul, Turkey
- Önder Çerezci: Department of Physiotherapy and Rehabilitation, Faculty of Health Sciences, Uskudar University, Istanbul, Turkey
- Nevzat Tarhan: Department of Psychiatry, Uskudar University, Istanbul, Turkey
2
Pande SD, Ahammad SH, Madhav BTP, Ramya KR, Smirani LK, Hossain MA, Rashed ANZ. Assessment of brain tumor detection techniques and recommendation of neural network. BIOMED ENG-BIOMED TE 2024; 69:395-406. [PMID: 38285486; DOI: 10.1515/bmt-2022-0336]
Abstract
OBJECTIVES Brain tumor classification is among the most complex and challenging tasks in the computing domain. The latest advances in brain tumor detection systems (BTDS) are presented, as they can inspire new researchers to deliver new architectures for effective and efficient tumor detection. Here, data from the multi-modal brain tumor segmentation task are employed; the volumes have been registered and skull-stripped, and histogram matching is conducted against a high-contrast volume. METHODS This research further configures a capsule network (CapsNet) for brain tumor classification. Results of the latest deep neural network (NN) architectures for tumor detection are compared and presented. The VGG16 and CapsNet architectures yield the highest f1-score and precision values, followed by VGG19. Overall, ResNet152, MobileNet, and MobileNetV2 yield the lowest f1-scores. RESULTS The VGG16 and CapsNet have produced outstanding results. However, VGG16 and VGG19 are deeper architectures, resulting in slower computation. The research then recommends the latest suitable NN for effective brain tumor detection. CONCLUSIONS Finally, the work concludes with future directions and potential new architectures for tumor detection.
Affiliation(s)
- Shaik Hasane Ahammad: Department of ECE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
- Kalangi Ruth Ramya: Department of Computer Engineering, Indira College of Engineering and Management, Pune, MH, India
- Lassaad K Smirani: Deanship of Information Technology, Umm Al-Qura University, Makkah, Saudi Arabia
- Md Amzad Hossain: Department of Electrical and Electronic Engineering, Jashore University of Science and Technology, Jashore, Bangladesh
- Ahmed Nabih Zaki Rashed: Electronics and Electrical Communications Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt; Department of VLSI Microelectronics, Institute of Electronics and Communication Engineering, Saveetha School of Engineering, SIMATS, Chennai, Tamilnadu, India
3
Gou F, Liu J, Xiao C, Wu J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics (Basel) 2024; 14:1472. [PMID: 39061610; PMCID: PMC11275417; DOI: 10.3390/diagnostics14141472]
Abstract
With improving economic conditions and rising living standards, people are paying ever more attention to their health. They are beginning to place their hopes on machines, expecting artificial intelligence (AI) to provide a more humanized medical environment and personalized services, thus greatly expanding supply and bridging the gap between resource supply and demand. The development of IoT technology, the arrival of the 5G and 6G communication era, and, in particular, the enhancement of computing capabilities have further promoted the development and application of AI-assisted healthcare. Currently, research on and application of artificial intelligence in medical assistance continue to deepen and expand. AI holds immense economic value and has many potential applications for medical institutions, patients, and healthcare professionals. It can enhance medical efficiency, reduce healthcare costs, improve the quality of healthcare services, and provide a more intelligent and humanized service experience for healthcare professionals and patients. This study elaborates on the history and timeline of AI development in the medical field, the types of AI technologies in healthcare informatics, the applications of AI in medicine, and the opportunities and challenges of AI in medicine. The combination of healthcare and artificial intelligence has a profound impact on human life, improving health levels and quality of life and changing lifestyles.
Affiliation(s)
- Fangfang Gou: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu: The Second People's Hospital of Huaihua, Huaihua 418000, China
- Chunwen Xiao: The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jia Wu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
4
Zhong LW, Chen KS, Yang HB, Liu SD, Zong ZT, Zhang XQ. Exploring machine learning applications in Meningioma Research (2004-2023). Heliyon 2024; 10:e32596. [PMID: 38975185; PMCID: PMC11225743; DOI: 10.1016/j.heliyon.2024.e32596]
Abstract
Objective This study aims to examine the trends in machine learning application to meningiomas between 2004 and 2023. Methods Publication data were extracted from the Science Citation Index Expanded (SCI-E) within the Web of Science Core Collection (WOSCC). Using CiteSpace 6.2.R6, a comprehensive analysis of publications, authors, cited authors, countries, institutions, cited journals, references, and keywords was conducted on December 1, 2023. Results The analysis included a total of 342 articles. Prior to 2007, no publications existed in this field, and the number remained modest until 2017. A significant increase occurred in publications from 2018 onwards. The majority of the top 10 authors hailed from Germany and China, with the USA also exerting substantial international influence, particularly in academic institutions. Journals from the IEEE series contributed significantly to the publications. "Deep learning," "brain tumor," and "classification" emerged as the primary keywords of focus among researchers. The developmental pattern in this field primarily involved a combination of interdisciplinary integration and the refinement of major disciplinary branches. Conclusion Machine learning has demonstrated significant value in predicting early meningiomas and tailoring treatment plans. Key research focuses involve optimizing detection indicators and selecting superior machine learning algorithms. Future efforts should aim to develop high-performance algorithms to drive further innovation in this field.
Affiliation(s)
- Li-wei Zhong: Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Kun-shan Chen: The Second Affiliated Hospital of Jiujiang University, Jiujiang, Jiangxi, China
- Hua-biao Yang: Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Shi-dan Liu: Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Zhi-tao Zong: Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Xue-qin Zhang: Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
5
Chen W, Tan X, Zhang J, Du G, Fu Q, Jiang H. A robust approach for multi-type classification of brain tumor using deep feature fusion. Front Neurosci 2024; 18:1288274. [PMID: 38440396; PMCID: PMC10909817; DOI: 10.3389/fnins.2024.1288274]
Abstract
Brain tumors can be classified into many different types based on their shape, texture, and location. Accurate diagnosis of brain tumor type can help doctors develop appropriate treatment plans to save patients' lives. Improving the accuracy of such classification systems is therefore crucial for assisting doctors in treatment. We propose a deep feature fusion method based on convolutional neural networks to enhance the accuracy and robustness of brain tumor classification while mitigating the risk of over-fitting. Firstly, the extracted features of three pre-trained models, ResNet101, DenseNet121, and EfficientNetB0, are adjusted to ensure that their shapes are the same. Secondly, the three models are fine-tuned to extract features from brain tumor images. Thirdly, pairwise summation of the extracted features is carried out to achieve feature fusion. Finally, classification of brain tumors based on the fused features is performed. The public Figshare (Dataset 1) and Kaggle (Dataset 2) datasets are used to verify the reliability of the proposed method. Experimental results demonstrate that the fusion of ResNet101 and DenseNet121 features achieves the best performance, with classification accuracies of 99.18% and 97.24% on the Figshare and Kaggle datasets, respectively.
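The fusion step described above reduces, once the feature shapes have been aligned, to an element-wise (pairwise) summation. A minimal sketch with toy feature vectors standing in for the backbone outputs:

```python
# Sketch of pairwise (element-wise) feature fusion: once two backbones'
# features have been adjusted to the same shape, fusion is a simple sum.
# The feature values below are toy stand-ins, not real network outputs.

def fuse_features(feats_a, feats_b):
    if len(feats_a) != len(feats_b):
        raise ValueError("features must be adjusted to the same shape before fusion")
    return [a + b for a, b in zip(feats_a, feats_b)]

resnet_feats = [0.2, 0.8, 0.5]    # stand-in for ResNet101 features
densenet_feats = [0.1, 0.4, 0.9]  # stand-in for DenseNet121 features
fused = fuse_features(resnet_feats, densenet_feats)
print([round(v, 1) for v in fused])  # [0.3, 1.2, 1.4]
```

The fused vector then feeds the final classifier in place of either backbone's features alone.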
Affiliation(s)
- Wenna Chen: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Xinghua Tan: College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Jincan Zhang: College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Ganqin Du: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Qizhi Fu: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongwei Jiang: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
6
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790; PMCID: PMC10814384; DOI: 10.3390/cancers16020300]
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
7
Chen L, Chen R, Li T, Huang L, Tang C, Li Y, Zeng Z. MRI radiomics model for predicting TERT promoter mutation status in glioblastoma. Brain Behav 2023; 13:e3324. [PMID: 38054695; PMCID: PMC10726789; DOI: 10.1002/brb3.3324]
Abstract
BACKGROUND AND PURPOSE The presence of TERT promoter mutations has been associated with worse prognosis and resistance to therapy in patients with glioblastoma (GBM). This study aimed to determine whether combinations of different feature selections and classification algorithms based on multiparameter MRI can be used to predict TERT subtype in GBM patients. METHODS A total of 143 patients were included in our retrospective study, and 2553 features were obtained. The datasets were randomly divided into training and test sets in a ratio of 7:3. The synthetic minority oversampling technique was used to achieve data balance. Pearson correlation coefficients were used for dimension reduction. Three feature selections and five classification algorithms were used to model the selected features. Finally, 10-fold cross-validation was applied to the training dataset. RESULTS A model with eight features generated by recursive feature elimination (RFE) and linear discriminant analysis (LDA) showed the greatest diagnostic performance (area under the curve values for the training, validation, and testing sets: 0.983, 0.964, and 0.926, respectively), followed by Relief with random forest (RF) and analysis of variance with RF. Furthermore, Relief was the optimal feature selection when separately evaluating the five classification algorithms, and RF was the most preferable algorithm when separately assessing the three feature selectors. ADC entropy was the parameter that contributed most to the discrimination of TERT mutations. CONCLUSIONS The radiomics model generated by RFE and LDA, based mainly on ADC entropy, showed good performance in predicting TERT promoter mutations in GBM.
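The Pearson-based dimension-reduction step can be sketched as a correlation filter: when two feature columns are highly correlated, one of them is dropped. The 0.9 threshold and the toy columns below (named after ADC entropy only for flavor) are assumptions for illustration, not the paper's actual settings:

```python
# Sketch of Pearson-correlation dimension reduction: compute pairwise |r|
# between feature columns and keep only the first feature of each highly
# correlated pair. Threshold and data are illustrative assumptions.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    """features: dict name -> column of values; keeps the first of each correlated pair."""
    kept = []
    for name, col in features.items():
        if all(abs(pearson_r(col, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

cols = {
    "adc_entropy": [1.0, 2.0, 3.0, 4.0],
    "adc_entropy_copy": [2.0, 4.0, 6.0, 8.0],  # perfectly correlated duplicate
    "t2_skewness": [1.0, -1.0, 2.0, -2.0],
}
print(drop_correlated(cols))  # ['adc_entropy', 't2_skewness']
```

The surviving columns would then go to the feature selectors (RFE, Relief, ANOVA) described in the abstract.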
Affiliation(s)
- Ling Chen: Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China; Department of Radiology, Liuzhou Worker's Hospital, The Fourth Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Runrong Chen: Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Tao Li: Department of Radiology, Liuzhou Worker's Hospital, The Fourth Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Lizhao Huang: Department of Radiology, Liuzhou Worker's Hospital, The Fourth Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Chuyun Tang: Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Yao Li: Department of Neurosurgery, Liuzhou Worker's Hospital, The Fourth Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Zisan Zeng: Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
8
Zhang H, Zhang H, Zhang Y, Zhou B, Wu L, Lei Y, Huang B. Deep Learning Radiomics for the Assessment of Telomerase Reverse Transcriptase Promoter Mutation Status in Patients With Glioblastoma Using Multiparametric MRI. J Magn Reson Imaging 2023; 58:1441-1451. [PMID: 36896953; DOI: 10.1002/jmri.28671]
Abstract
BACKGROUND Studies have shown that magnetic resonance imaging (MRI)-based deep learning radiomics (DLR) has the potential to assess glioma grade; however, its role in predicting telomerase reverse transcriptase (TERT) promoter mutation status in patients with glioblastoma (GBM) remains unclear. PURPOSE To evaluate the value of deep learning (DL) in multiparametric MRI-based radiomics for identifying TERT promoter mutations in patients with GBM preoperatively. STUDY TYPE Retrospective. POPULATION A total of 274 patients with isocitrate dehydrogenase-wildtype GBM were included in the study. The training and external validation cohorts included 156 (54.3 ± 12.7 years; 96 males) and 118 (54.2 ± 13.4 years; 73 males) patients, respectively. FIELD STRENGTH/SEQUENCE Axial contrast-enhanced T1-weighted spin-echo inversion recovery (T1CE), T1-weighted spin-echo inversion recovery (T1WI), and T2-weighted spin-echo inversion recovery (T2WI) sequences on 1.5-T and 3.0-T scanners were used in this study. ASSESSMENT Overall tumor regions (the tumor core and edema) were segmented, and radiomics and DL features were extracted from preprocessed multiparameter preoperative brain MRI images (T1WI, T1CE, and T2WI). A model based on the DLR signature, a clinical signature, and a clinical DLR (CDLR) nomogram was developed and validated to identify TERT promoter mutation status. STATISTICAL TESTS The Mann-Whitney U test, Pearson test, least absolute shrinkage and selection operator, and logistic regression analysis were applied for feature selection and construction of the radiomics and DL signatures. Results were considered statistically significant at P-value <0.05. RESULTS The DLR signature showed the best discriminative power for predicting TERT promoter mutations, yielding AUCs of 0.990 and 0.890 in the training and external validation cohorts, respectively. Furthermore, the DLR signature outperformed the CDLR nomogram (P = 0.670) and significantly outperformed the clinical models in the validation cohort. DATA CONCLUSION The multiparameter MRI-based DLR signature exhibited promising performance for the assessment of TERT promoter mutations in patients with GBM, which could provide information for individualized treatment. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY Stage 2.
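The reported AUCs can be read as the probability that a randomly chosen mutant case receives a higher predicted score than a randomly chosen wild-type case. A minimal empirical-AUC sketch with made-up scores and labels:

```python
# Empirical AUC as the fraction of positive/negative pairs ranked correctly,
# counting ties as half. Scores and labels below are made up for illustration.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2]  # predicted probability of mutation (toy)
labels = [1, 1, 0, 1]          # 1 = TERT-mutant, 0 = wild-type (toy)
print(auc(scores, labels))  # 0.6666666666666666
```

Two of the three mutant/wild-type pairs are ranked correctly, hence 2/3; an AUC of 0.99 means nearly every such pair is ordered correctly.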
Affiliation(s)
- Hongbo Zhang: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Hanwen Zhang: Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Yuze Zhang: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Beibei Zhou: Department of Radiology, The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, China
- Lei Wu: Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yi Lei: Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Biao Huang: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
9
B. A, Kaur M, Singh D, Roy S, Amoon M. Efficient Skip Connections-Based Residual Network (ESRNet) for Brain Tumor Classification. Diagnostics (Basel) 2023; 13:3234. [PMID: 37892055; PMCID: PMC10606037; DOI: 10.3390/diagnostics13203234]
Abstract
Brain tumors pose a complex and urgent challenge in medical diagnostics, requiring precise and timely classification due to their diverse characteristics and potentially life-threatening consequences. While existing deep learning (DL)-based brain tumor classification (BTC) models have shown significant progress, they encounter limitations such as restricted depth, vanishing-gradient issues, and difficulties in capturing intricate features. To address these challenges, this paper proposes an efficient skip connections-based residual network (ESRNet), leveraging the residual network (ResNet) with skip connections. ESRNet ensures smooth gradient flow during training, mitigating the vanishing gradient problem. Additionally, the ESRNet architecture includes multiple stages with increasing numbers of residual blocks for improved feature learning and pattern recognition. ESRNet utilizes residual blocks from the ResNet architecture, featuring skip connections that enable identity mapping. Through direct addition of the input tensor to the convolutional layer output within each block, skip connections preserve the gradient flow. This mechanism prevents vanishing gradients, ensuring effective information propagation across network layers during training. Furthermore, ESRNet integrates efficient downsampling techniques and stabilizing batch normalization layers, which collectively contribute to its robust and reliable performance. Extensive experimental results reveal that ESRNet significantly outperforms other approaches in terms of accuracy, sensitivity, specificity, F-score, and Kappa statistics, with median values of 99.62%, 99.68%, 99.89%, 99.47%, and 99.42%, respectively. Moreover, the achieved minimum performance metrics, including accuracy (99.34%), sensitivity (99.47%), specificity (99.79%), F-score (99.04%), and Kappa statistics (99.21%), underscore the exceptional effectiveness of ESRNet for BTC. Therefore, the proposed ESRNet showcases exceptional performance and efficiency in BTC, holding the potential to revolutionize clinical diagnosis and treatment planning.
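The identity-mapping skip connection the abstract describes, y = F(x) + x, can be sketched in a few lines; the transform below is a toy stand-in for the block's convolutional path, not ESRNet's actual layers:

```python
# Sketch of a ResNet-style identity skip connection: the block's input is
# added directly to the output of its transform path, y = F(x) + x, so the
# gradient of the addition passes through unchanged. Toy transform and data.

def residual_block(x, transform):
    fx = transform(x)
    if len(fx) != len(x):
        raise ValueError("identity skip requires matching shapes")
    return [f + xi for f, xi in zip(fx, x)]  # element-wise F(x) + x

halve = lambda v: [0.5 * vi for vi in v]  # toy stand-in for the conv path
print(residual_block([2.0, 4.0], halve))  # [3.0, 6.0]
```

Note that if the transform path contributes nothing (all zeros), the block reduces to the identity, which is exactly why stacking many such blocks does not degrade the signal.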
Affiliation(s)
- Ashwini B.: Department of ISE, NMAM Institute of Technology, Nitte (Deemed to be University), Nitte 574110, India
- Manjit Kaur: School of Computer Science and Artificial Intelligence, SR University, Warangal 506371, India
- Dilbag Singh: Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; Research and Development Cell, Lovely Professional University, Phagwara 144411, India
- Satyabrata Roy: Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur 303007, India
- Mohammed Amoon: Department of Computer Science, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
10
Zheng Y, Huang D, Hao X, Wei J, Lu H, Liu Y. UniVisNet: A Unified Visualization and Classification Network for accurate grading of gliomas from MRI. Comput Biol Med 2023; 165:107332. [PMID: 37598632; DOI: 10.1016/j.compbiomed.2023.107332]
Abstract
Accurate grading of brain tumors plays a crucial role in the diagnosis and treatment of glioma. While convolutional neural networks (CNNs) have shown promising performance in this task, their clinical applicability is still constrained by the interpretability and robustness of the models. In the conventional framework, the classification model is trained first, and then visual explanations are generated. However, this approach often leads to models that prioritize classification performance or complexity, making it difficult to achieve a precise visual explanation. Motivated by these challenges, we propose the Unified Visualization and Classification Network (UniVisNet), a novel framework that aims to improve both the classification performance and the generation of high-resolution visual explanations. UniVisNet addresses attention misalignment by introducing a subregion-based attention mechanism, which replaces traditional down-sampling operations. Additionally, multiscale feature maps are fused to achieve higher resolution, enabling the generation of detailed visual explanations. To streamline the process, we introduce the Unified Visualization and Classification head (UniVisHead), which directly generates visual explanations without the need for additional separation steps. Through extensive experiments, our proposed UniVisNet consistently outperforms strong baseline classification models and prevalent visualization methods. Notably, UniVisNet achieves remarkable results on the glioma grading task, including an AUC of 94.7%, an accuracy of 89.3%, a sensitivity of 90.4%, and a specificity of 85.3%. Moreover, UniVisNet provides visually interpretable explanations that surpass existing approaches. In conclusion, UniVisNet innovatively generates visual explanations in brain tumor grading by simultaneously improving the classification performance and generating high-resolution visual explanations. This work contributes to the clinical application of deep learning, empowering clinicians with comprehensive insights into the spatial heterogeneity of glioma.
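The multiscale-fusion idea, upsampling a coarse map to a finer resolution and combining the two element-wise, can be sketched as follows; nearest-neighbor upsampling and addition are illustrative choices here, not UniVisNet's exact operators:

```python
# Sketch of multiscale feature-map fusion for higher-resolution explanation
# maps: upsample the coarse map to the fine map's grid (nearest neighbor),
# then fuse element-wise. Grids below are toy values.

def upsample2x(grid):
    out = []
    for row in grid:
        wide = [v for v in row for _ in (0, 1)]  # repeat each column
        out.extend([wide, list(wide)])           # repeat each row
    return out

def fuse(fine, coarse):
    up = upsample2x(coarse)
    return [[f + c for f, c in zip(fr, cr)] for fr, cr in zip(fine, up)]

fine = [[0.0, 1.0], [1.0, 0.0]]  # 2x2 fine-scale map (toy)
coarse = [[2.0]]                 # 1x1 coarse-scale map (toy)
print(fuse(fine, coarse))  # [[2.0, 3.0], [3.0, 2.0]]
```

The fused map keeps the fine map's resolution while inheriting the coarse map's broader context, which is the property the paper exploits for detailed visual explanations.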
Affiliation(s)
- Yao Zheng: Air Force Medical University, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China
- Dong Huang: Air Force Medical University, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China
- Xiaoshuo Hao: Air Force Medical University, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China
- Jie Wei: Air Force Medical University, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China
- Hongbing Lu: Air Force Medical University, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China
- Yang Liu: Air Force Medical University, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an 710032, Shaanxi, China
11
Zhang J, Tan X, Chen W, Du G, Fu Q, Zhang H, Jiang H. EFF_D_SVM: a robust multi-type brain tumor classification system. Front Neurosci 2023; 17:1269100. [PMID: 37841686; PMCID: PMC10570803; DOI: 10.3389/fnins.2023.1269100]
Abstract
Brain tumors are one of the most threatening diseases to human health. Accurate identification of the type of brain tumor is essential for patients and doctors. An automated brain tumor diagnosis system based on Magnetic Resonance Imaging (MRI) can help doctors identify the type of tumor and reduce their workload, so it is vital to improve the performance of such systems. Given the challenge of collecting sufficient data on brain tumors, utilizing pre-trained Convolutional Neural Network (CNN) models for brain tumor classification is a feasible approach. The study proposes a novel brain tumor classification system, called EFF_D_SVM, which is developed on the basis of the pre-trained EfficientNetB0 model. Firstly, a new feature extraction module, EFF_D, was proposed, in which the classification layer of EfficientNetB0 was replaced with two dropout layers and two dense layers. Secondly, the EFF_D model was fine-tuned using Softmax, and features of brain tumor images were then extracted using the fine-tuned EFF_D. Finally, the features were classified using a Support Vector Machine (SVM). To verify the effectiveness of the proposed brain tumor classification system, a series of comparative experiments were carried out. Moreover, to understand the extracted features of the brain tumor images, Grad-CAM technology was used to visualize the proposed model. Furthermore, cross-validation was conducted to verify the robustness of the proposed model. The evaluation metrics accuracy, F1-score, recall, and precision were used to evaluate the proposed system's performance. The experimental results indicate that the proposed model is superior to other state-of-the-art models.
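The final SVM stage can be illustrated with a minimal linear SVM trained by the Pegasos subgradient method on toy 2-D "features"; this is a stand-in for the off-the-shelf SVM the paper describes, with made-up data and hyperparameters:

```python
# Minimal linear SVM (Pegasos subgradient training, no bias term) as a sketch
# of classifying extracted deep features. Data, lambda, and epoch count are
# illustrative assumptions; labels are +/-1.

def train_linear_svm(xs, ys, lam=0.01, epochs=200):
    w = [0.0] * len(xs[0])
    t = 0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]  # shrink (regularization)
            if margin < 1:                          # hinge-loss subgradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy linearly separable "features" standing in for EFF_D outputs.
xs = [[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]]
ys = [1, 1, -1, -1]
w = train_linear_svm(xs, ys)
print([predict(w, x) for x in xs])  # [1, 1, -1, -1]
```

In practice a library SVM (e.g. scikit-learn's SVC) would replace this hand-rolled trainer; the point is only that the deep network serves as a feature extractor and a margin-based classifier makes the final decision.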
Affiliation(s)
- Jincan Zhang
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Xinghua Tan
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Wenna Chen
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Ganqin Du
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Qizhi Fu
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongri Zhang
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongwei Jiang
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
12.
Zheng YM, Che JY, Yuan MG, Wu ZJ, Pang J, Zhou RZ, Li XL, Dong C. A CT-Based Deep Learning Radiomics Nomogram to Predict Histological Grades of Head and Neck Squamous Cell Carcinoma. Acad Radiol 2023; 30:1591-1599. [PMID: 36460582] [DOI: 10.1016/j.acra.2022.11.007]
Abstract
RATIONALE AND OBJECTIVES Accurate pretreatment assessment of the histological differentiation grade of head and neck squamous cell carcinoma (HNSCC) is crucial for prognosis evaluation. This study aimed to construct and validate a contrast-enhanced computed tomography (CECT)-based deep learning radiomics nomogram (DLRN) to predict histological differentiation grades of HNSCC. MATERIALS AND METHODS A total of 204 patients with HNSCC who underwent CECT scans were enrolled. The participants, recruited from two hospitals, were split into a training set of patients from one hospital (n=124; 74 well/moderately differentiated and 50 poorly differentiated) and an external test set of patients from the other hospital (n=80; 49 well/moderately differentiated and 31 poorly differentiated). CECT-based manually-extracted radiomics (MER) features and deep learning (DL) features were extracted and selected. The selected MER and DL features were then combined to construct a DLRN via multivariate logistic regression. The predictive performance of the DLRN was assessed using ROC curves and decision curve analysis (DCA). RESULTS Three MER features and seven DL features were finally selected. The DLRN incorporating the selected MER and DL features showed good predictive value for the histological differentiation grades of HNSCC (well/moderately differentiated vs. poorly differentiated) in both the training (AUC, 0.878) and test (AUC, 0.822) sets. DCA demonstrated that the DLRN was clinically useful. CONCLUSION A CECT-based DLRN was constructed to predict histological differentiation grades of HNSCC. The DLRN showed good predictive efficacy and might be useful for the prognostic evaluation of patients with HNSCC.
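The nomogram construction step this abstract describes, combining selected radiomics and deep features in one multivariate logistic regression, follows a standard pattern. The sketch below illustrates it with synthetic stand-ins for the three MER and seven DL features; the feature values and effect sizes are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
grade = rng.integers(0, 2, size=n)       # 0 = well/moderate, 1 = poor

# Synthetic stand-ins: 3 "MER" and 7 "DL" features weakly shifted by grade.
mer = grade[:, None] + rng.normal(scale=2.0, size=(n, 3))
dl = grade[:, None] + rng.normal(scale=2.0, size=(n, 7))

# Multivariate logistic regression over the combined features is the
# core of the DLRN construction.
X = np.hstack([mer, dl])
model = LogisticRegression(max_iter=1000).fit(X, grade)
auc = roc_auc_score(grade, model.predict_proba(X)[:, 1])
```

The fitted coefficients are what a nomogram then renders graphically as per-feature point scales.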
Affiliation(s)
- Ying-Mei Zheng
- Health Management Center, The Affiliated Hospital of Qingdao University, Qingdao, China
- Jun-Yi Che
- Department of Radiology, Qingdao Municipal Hospital, Qingdao, China
- Ming-Gang Yuan
- Department of Nuclear Medicine, Affiliated Qingdao Central Hospital, Qingdao University, Qingdao, China
- Zeng-Jie Wu
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Jing Pang
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Rui-Zhi Zhou
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xiao-Li Li
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Cheng Dong
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
13.
Shao H, Wang S. Deep Classification with Linearity-Enhanced Logits to Softmax Function. Entropy (Basel) 2023; 25:727. [PMID: 37238482] [DOI: 10.3390/e25050727]
Abstract
Recently, there has been a rapid increase in deep classification tasks, such as image recognition and target detection. As one of the most crucial components in Convolutional Neural Network (CNN) architectures, softmax arguably encourages CNNs to achieve better performance in image recognition. Under this scheme, we present a conceptually intuitive learning objective function: Orthogonal-Softmax. The primary property of the loss function is its use of a linear approximation model designed by Gram-Schmidt orthogonalization. First, compared with the traditional softmax and Taylor-Softmax, Orthogonal-Softmax has a stronger relationship through orthogonal polynomial expansion. Second, a new loss function is proposed to acquire highly discriminative features for classification tasks. Finally, we present a linear softmax loss to further promote intra-class compactness and inter-class discrepancy simultaneously. Results of extensive experiments on four benchmark datasets demonstrate the validity of the presented method. In future work, we plan to explore non-ground-truth samples.
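As a point of reference for the comparison above, here is a minimal sketch of the standard softmax next to the second-order Taylor-Softmax, which replaces exp(z) with its truncated expansion 1 + z + z²/2 (bounded below by 0.5, so the outputs stay positive). The Gram-Schmidt orthogonalization step that defines Orthogonal-Softmax itself is not reproduced here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def taylor_softmax(z):
    # 2nd-order Taylor expansion of exp(z); its minimum value is 0.5,
    # so every component stays positive before normalization.
    t = 1.0 + z + 0.5 * z ** 2
    return t / t.sum()

z = np.array([1.0, 2.0, 3.0])
p, q = softmax(z), taylor_softmax(z)   # both are valid probability vectors
```

Both variants normalize to one and preserve the argmax on this input; they differ in how sharply they concentrate mass on the largest logit.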
Affiliation(s)
- Hao Shao
- School of Mathematics and Statistics, Yunnan University, Kunming 650504, China
- Shunfang Wang
- School of Information Science and Engineering, Yunnan University, Kunming 650504, China
- The Key Lab of Intelligent Systems and Computing of Yunnan Province, Yunnan University, Kunming 650504, China
14.
Medical Image Classifications for 6G IoT-Enabled Smart Health Systems. Diagnostics (Basel) 2023; 13:834. [PMID: 36899978] [PMCID: PMC10000954] [DOI: 10.3390/diagnostics13050834]
Abstract
As the data generated daily become massive in the 6G-enabled Internet of Medical Things (IoMT), the process of medical diagnosis becomes critical in the healthcare system. This paper presents a framework incorporated into the 6G-enabled IoMT to improve prediction accuracy and provide real-time medical diagnosis. The proposed framework integrates deep learning and optimization techniques to render accurate and precise results. Medical computed tomography images are preprocessed and fed into an efficient neural network designed for learning image representations, converting each image to a feature vector. The extracted features from each image are then learned using a MobileNetV3 architecture. Furthermore, we enhanced the performance of the arithmetic optimization algorithm (AOA) based on the hunger games search (HGS). In the developed method, named AOAHG, the operators of the HGS are applied to enhance the AOA's exploitation ability while allocating the feasible region. The developed AOAHG selects the most relevant features and ensures overall improvement in model classification. To assess the validity of our framework, we conducted evaluation experiments on four datasets, including ISIC-2016 and PH2 for skin cancer detection, white blood cell (WBC) detection, and optical coherence tomography (OCT) classification, using different evaluation metrics. The framework showed remarkable performance compared to existing methods in the literature. In addition, the developed AOAHG provided better results than other feature selection approaches in terms of accuracy, precision, recall, and F1-score. For example, AOAHG achieved 87.30%, 96.40%, 88.60%, and 99.69% on the ISIC, PH2, WBC, and OCT datasets, respectively.
15.
DTBV: A Deep Transfer-Based Bone Cancer Diagnosis System Using VGG16 Feature Extraction. Diagnostics (Basel) 2023; 13:757. [PMID: 36832245] [PMCID: PMC9955441] [DOI: 10.3390/diagnostics13040757]
Abstract
Among the many different types of cancer, bone cancer is the most lethal and least prevalent, with more cases reported each year. Early diagnosis of bone cancer is crucial since it helps limit the spread of malignant cells and reduce mortality, but manual detection is cumbersome and requires specialized knowledge. A deep transfer-based bone cancer diagnosis (DTBV) system using VGG16 feature extraction is proposed to address these issues. The proposed DTBV system uses a transfer learning (TL) approach in which a pre-trained convolutional neural network (CNN) model extracts features from the pre-processed input image and a support vector machine (SVM) model is trained on these features to distinguish between cancerous and healthy bone. The CNN is applied to the image datasets as it provides better image recognition with high accuracy as the feature-extraction layers of the network increase. In the proposed DTBV system, the VGG16 model extracts features from the input X-ray image, and a mutual information statistic that measures the dependency between the different features is then used to select the best features; this is the first time this method has been used for detecting bone cancer. The selected features are fed into the SVM classifier, which classifies the given testing dataset into malignant and benign categories. A comprehensive performance evaluation has demonstrated that the proposed DTBV system is highly efficient in detecting bone cancer, with an accuracy of 93.9%, which is more accurate than other existing systems.
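The mutual-information feature-selection step described above maps directly onto scikit-learn's `mutual_info_classif`. A hedged sketch, using a built-in tabular dataset as a stand-in for the VGG16 feature matrix:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Tabular stand-in for the VGG16 feature matrix (hypothetical substitution).
X, y = load_breast_cancer(return_X_y=True)

# Rank features by mutual information with the class label; keep the top 10.
mi = lambda X, y: mutual_info_classif(X, y, random_state=0)
X_sel = SelectKBest(mi, k=10).fit_transform(X, y)

# SVM on the selected features, as in the DTBV pipeline.
clf = make_pipeline(StandardScaler(), SVC())
score = cross_val_score(clf, X_sel, y, cv=5).mean()
```

The number of features kept (k=10) is an invented choice here; in practice it would be tuned on validation data.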
16.
Investigating the Impact of Two Major Programming Environments on the Accuracy of Deep Learning-Based Glioma Detection from MRI Images. Diagnostics (Basel) 2023; 13:651. [PMID: 36832138] [PMCID: PMC9955350] [DOI: 10.3390/diagnostics13040651]
Abstract
Brain tumors have been the subject of research for many years and are typically classified into two main groups: benign and malignant. The most common malignant brain tumor type is glioma. Different imaging technologies can be used in the diagnosis of glioma; among them, MRI is the most preferred due to its high-resolution image data. However, detecting gliomas from a huge set of MRI data can be challenging for practitioners. To address this concern, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for glioma detection. However, which CNN architecture works efficiently under various conditions, including the development environment and programming aspects as well as performance, has not been studied so far. The purpose of this work, therefore, is to investigate the impact of two major programming environments (namely, MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, were performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in both programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) might be highly useful in the implementation of CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining high accuracy on the dataset. The authors believe that these results provide useful information to the research community for the appropriate implementation of DL approaches for brain tumor detection.
17.
Mathematical Assessment of Machine Learning Models Used for Brain Tumor Diagnosis. Diagnostics (Basel) 2023; 13:618. [PMID: 36832106] [PMCID: PMC9955898] [DOI: 10.3390/diagnostics13040618]
Abstract
The brain is an intrinsic and complicated component of human anatomy: a collection of connective tissues and nerve cells that regulate the principal actions of the entire body. Brain tumor cancer is a serious mortality factor and a highly intractable disease. Even though brain tumors are not considered a fundamental cause of cancer deaths worldwide, about 40% of other cancer types metastasize to the brain and transform into brain tumors. Computer-aided diagnosis through magnetic resonance imaging (MRI) has remained the gold standard for diagnosing brain tumors, but this conventional method is greatly challenged by inefficiencies and drawbacks related to the late detection of brain tumors, high risk in biopsy procedures, and low specificity. To circumvent these hurdles, machine learning models have recently been developed to enhance computer-aided diagnosis tools for advanced, precise, and automatic early detection of brain tumors. This study takes a novel approach to evaluating machine learning models (support vector machine (SVM), random forest (RF), gradient-boosting model (GBM), convolutional neural network (CNN), K-nearest neighbor (KNN), AlexNet, GoogLeNet, CNN VGG19, and CapsNet) used for the early detection and classification of brain tumors by deploying the multicriteria decision-making method called fuzzy preference ranking organization method for enrichment evaluations (PROMETHEE), based on the following parameters: prediction accuracy, precision, specificity, recall, processing time, and sensitivity. To validate the results of our proposed approach, we performed a sensitivity analysis and cross-checking analysis with the PROMETHEE model. The CNN model, with an outranking net flow of 0.0251, is the most favorable model for the early detection of brain tumors, while the KNN model, with a net flow of -0.0154, is the least appealing option. The findings of this study support the applicability of the proposed approach for making optimal choices in selecting machine learning models, affording decision makers a broader range of considerations on which to base their selection of models for the early detection of brain tumors.
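A minimal PROMETHEE-II sketch may clarify how net outranking flows of the kind quoted above are produced. This toy version assumes the usual (step) preference function and equal criterion weights, whereas the study uses fuzzy PROMETHEE over six weighted criteria; the score matrix here is invented.

```python
import numpy as np

def net_flows(scores):
    """Net outranking flows for an (alternatives x criteria) score matrix,
    higher-is-better, usual preference function, equal weights."""
    n, m = scores.shape
    pref = np.zeros((n, n))
    for k in range(m):
        d = scores[:, k][:, None] - scores[:, k][None, :]
        pref += (d > 0).astype(float) / m    # usual criterion, equal weights
    phi_plus = pref.sum(axis=1) / (n - 1)    # how strongly i outranks others
    phi_minus = pref.sum(axis=0) / (n - 1)   # how strongly i is outranked
    return phi_plus - phi_minus

# Invented scores for three models on two criteria (e.g. accuracy, recall).
models = np.array([[0.95, 0.88],
                   [0.90, 0.93],
                   [0.97, 0.95]])
flows = net_flows(models)                    # net flows always sum to zero
```

The alternative with the largest net flow is ranked first, which is how the study singles out the CNN model.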
18.
Tumor Localization and Classification from MRI of Brain using Deep Convolution Neural Network and Salp Swarm Algorithm. Cognit Comput 2023. [DOI: 10.1007/s12559-022-10096-2]
19.
Gou F, Liu J, Zhu J, Wu J. A Multimodal Auxiliary Classification System for Osteosarcoma Histopathological Images Based on Deep Active Learning. Healthcare (Basel) 2022; 10:2189. [PMID: 36360530] [PMCID: PMC9690420] [DOI: 10.3390/healthcare10112189]
Abstract
Histopathological examination is an important criterion in the clinical diagnosis of osteosarcoma. With improvements in hardware technology and computing power, pathological image analysis systems based on artificial intelligence have been widely used. However, classifying numerous intricate pathology images by hand is a tiresome task for pathologists, and the lack of labeled data makes such systems costly and difficult to build. This study constructs a classification assistance system (OHIcsA) based on active learning (AL) and a generative adversarial network (GAN). The system initially uses a small labeled training set to train the classifier; the most informative samples from the unlabeled images are then selected for expert annotation, and the chosen images are added to the labeled dataset to retrain the network. Experiments on real datasets show that our proposed method achieves high classification performance, with an AUC of 0.995 and an accuracy of 0.989, using a small amount of labeled data, reducing the cost of building a medical system. Clinical diagnosis can be aided by the system's findings, which can also increase the effectiveness and verifiable accuracy of doctors.
Affiliation(s)
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu
- The Second People’s Hospital of Huaihua, Huaihua 418000, China
- Jun Zhu
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, VIC 3800, Australia
20.
Brain tumor segmentation and classification using hybrid deep CNN with LuNetClassifier. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07934-7]
21.
Tummala S, Kadry S, Bukhari SAC, Rauf HT. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling. Curr Oncol 2022; 29:7498-7511. [PMID: 36290867] [PMCID: PMC9600395] [DOI: 10.3390/curroncol29100590]
Abstract
The automated classification of brain tumors plays an important role in supporting radiologists' decision making. Recently, vision transformer (ViT)-based deep neural network architectures have gained attention in the computer vision research domain owing to the tremendous success of transformer models in natural language processing. Hence, in this study, the ability of an ensemble of standard ViT models to diagnose brain tumors from T1-weighted (T1w) magnetic resonance imaging (MRI) is investigated. ViT models (B/16, B/32, L/16, and L/32) pretrained on ImageNet and then finetuned were adopted for the classification task. A brain tumor dataset from figshare, consisting of 3064 T1w contrast-enhanced (CE) MRI slices with meningiomas, gliomas, and pituitary tumors, was used for cross-validation and testing of the ensemble's ability to perform a three-class classification task. The best individual model was L/32, with an overall test accuracy of 98.2% at 384 × 384 resolution. The ensemble of all four ViT models demonstrated an overall test accuracy of 98.7% at the same resolution, outperforming the individual models at both resolutions and their ensemble at 224 × 224 resolution. In conclusion, an ensemble of ViT models could be deployed for the computer-aided diagnosis of brain tumors based on T1w CE MRI, easing radiologists' workload.
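Ensembling the four ViT variants presumably reduces to combining their class probabilities; the exact combination rule used in the paper is an assumption here. A minimal sketch of probability averaging with made-up model outputs:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the per-model class probabilities, then take the argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Made-up outputs of three models for two images over three tumor classes
# (meningioma, glioma, pituitary).
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.4, 0.5]])
p3 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
preds = ensemble_predict([p1, p2, p3])
```

Averaging probabilities (soft voting) tends to be more robust than majority voting when the member models are well calibrated.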
Affiliation(s)
- Sudhakar Tummala
- Department of Electronics and Communication Engineering, School of Engineering and Sciences, SRM University—AP, Amaravati 522503, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 36, Lebanon
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Syed Ahmad Chan Bukhari
- Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John’s University, New York, NY 11439, USA
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
22.
Brain tumor detection using deep ensemble model with wavelet features. Health Technol 2022. [DOI: 10.1007/s12553-022-00699-y]
23.
An attention-guided convolutional neural network for automated classification of brain tumor from MRI. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07742-z]
24.
Zahid U, Ashraf I, Khan MA, Alhaisoni M, Yahya KM, Hussein HS, Alshazly H. BrainNet: Optimal Deep Learning Feature Fusion for Brain Tumor Classification. Comput Intell Neurosci 2022; 2022:1465173. [PMID: 35965745] [PMCID: PMC9371837] [DOI: 10.1155/2022/1465173]
Abstract
Early detection of brain tumors can save precious human lives. This work presents a fully automated design to classify brain tumors. The proposed scheme employs optimal deep learning features for the classification of FLAIR, T1, T2, and T1CE tumors. Initially, we normalized the dataset and passed it to the pretrained ResNet101 model to perform transfer learning, fine-tuning ResNet101 for brain tumor classification. The problem with this approach is the generation of redundant features, which degrade accuracy and cause computational overhead. To tackle this problem, we find optimal features by utilizing differential evolution and particle swarm optimization algorithms. The obtained optimal feature vectors are then serially fused to get a single fused feature vector, and PCA is applied to this fused vector to obtain the final optimized feature vector, which is fed as input to various classifiers to classify tumors. Performance is analyzed at various stages. The results show that the proposed technique achieved a speedup of 25.5x in prediction time on the medium neural network with an accuracy of 94.4%, a significant improvement over state-of-the-art techniques in terms of computational overhead while maintaining approximately the same accuracy.
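The serial fusion and PCA steps described above can be sketched directly; random matrices stand in for the feature sets selected by differential evolution and particle swarm optimization, and the dimensions are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Random stand-ins for the two optimized ResNet101 feature sets
# (hypothetical dimensions).
feat_de = rng.normal(size=(100, 64))    # kept by differential evolution
feat_pso = rng.normal(size=(100, 48))   # kept by particle swarm optimization

fused = np.concatenate([feat_de, feat_pso], axis=1)  # serial fusion -> 112-D
final = PCA(n_components=32).fit_transform(fused)    # PCA-optimized vector
```

Serial fusion is plain column-wise concatenation, so PCA afterwards both decorrelates and compresses the joint representation before classification.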
Affiliation(s)
- Usman Zahid
- Department of Computer Engineering, HITEC University, Taxila 47080, Pakistan
- Imran Ashraf
- Department of Computer Engineering, HITEC University, Taxila 47080, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Khawaja M. Yahya
- Department of Electrical Engineering, Umm Al-Qura University, Makkah, Saudi Arabia
- Hany S. Hussein
- Electrical Engineering Department, College of Engineering, King Khalid University, Abha 62529, Saudi Arabia
- Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan 81528, Egypt
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
25.
Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022; 8:205. [PMID: 35893083] [PMCID: PMC9331677] [DOI: 10.3390/jimaging8080205]
Abstract
Management of brain tumors is based on clinical and radiological information with presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance to choose the best treatment plan. Convolutional Neural Networks (CNNs) represent one of the effective Deep Learning (DL)-based techniques that have been used for brain tumor diagnosis. However, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture that was recently developed to address the drawbacks of CNNs. CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformers (ViT)-based solutions have been very recently proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. The survey highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques. Moreover, we present an in-depth discussion of crucial issues and open challenges. We also identify some key limitations and promising future research directions. We envisage that this survey shall serve as a good springboard for further study.
Affiliation(s)
- Andronicus A. Akinyelu
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum-University of Bologna, 40138 Bologna, Italy
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Functional and Molecular Neuroimaging Unit, 40139 Bologna, Italy
- James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
26.
Lin Y, Jiang J, Ma Z, Chen D, Guan Y, You H, Cheng X, Liu B, Luo G. KIEGLFN: A unified acne grading framework on face images. Comput Methods Programs Biomed 2022; 221:106911. [PMID: 35640393] [DOI: 10.1016/j.cmpb.2022.106911]
Abstract
BACKGROUND AND OBJECTIVE Grading the severity level is an extremely important procedure for correct diagnosis and personalized treatment of acne. However, acne grading criteria are not unified in the medical field. This work aims to develop an acne diagnosis system that can be generalized to various criteria. METHODS A unified acne grading framework that generalizes across different grading criteria is developed. It imitates the global estimation of a dermatologist's diagnosis in two steps. First, an adaptive image preprocessing method effectively filters meaningless information and enhances key information. Next, an innovative network structure fuses global deep features with local features to simulate dermatologists' comparison of local skin against their global observation. In addition, a transfer fine-tuning strategy is proposed to transfer prior knowledge from one criterion to another, which effectively improves framework performance when data are insufficient. RESULTS The preprocessing method effectively filters meaningless areas and improves the performance of downstream models. The framework reaches accuracies of 84.52% and 59.35% on the two datasets, respectively. CONCLUSIONS The framework exceeds the state-of-the-art acne grading method by 1.71%, reaches the diagnostic level of a professional dermatologist, and the transfer fine-tuning strategy improves accuracy by 6.5% on the small dataset.
Affiliation(s)
- Yi Lin
- Harbin Institute of Technology, Harbin 150001, Heilongjiang, China
- Jingchi Jiang
- Harbin Institute of Technology, Harbin 150001, Heilongjiang, China
- Zhaoyang Ma
- Harbin Institute of Technology, Harbin 150001, Heilongjiang, China
- Dongxin Chen
- Harbin Institute of Technology, Harbin 150001, Heilongjiang, China
- Yi Guan
- Harbin Institute of Technology, Harbin 150001, Heilongjiang, China
- Haiyan You
- Heilongjiang Provincial Hospital, Harbin 150001, Heilongjiang, China
- Xue Cheng
- Heilongjiang Provincial Hospital, Harbin 150001, Heilongjiang, China
- Bingmei Liu
- Fourth Hospital of Harbin Medical University, Harbin 150001, Heilongjiang, China
- Gongning Luo
- Harbin Institute of Technology, Harbin 150001, Heilongjiang, China