1. Bilal A, Liu X, Shafiq M, Ahmed Z, Long H. NIMEQ-SACNet: A novel self-attention precision medicine model for vision-threatening diabetic retinopathy using image data. Comput Biol Med 2024; 171:108099. [PMID: 38364659] [DOI: 10.1016/j.compbiomed.2024.108099] [Received: 10/26/2023] [Revised: 02/02/2024] [Accepted: 02/02/2024]
Abstract
In the realm of precision medicine, the potential of deep learning is progressively harnessed to facilitate intricate clinical decision-making, especially when navigating multifaceted datasets spanning omics, clinical, image, device, social, and environmental dimensions. This study accentuates the criticality of image data, given its instrumental role in detecting and classifying vision-threatening diabetic retinopathy (VTDR), a predominant global contributor to vision impairment. The timely identification of VTDR is a linchpin for efficacious interventions and the mitigation of vision loss. Addressing this, the study introduces "NIMEQ-SACNet," a novel hybrid model that couples the Enhanced Quantum-Inspired Binary Grey Wolf Optimizer (EQI-BGWO) with a self-attention capsule network (SACNet). The proposed approach is characterized by two pivotal advancements: first, the augmentation of binary grey wolf optimization with quantum computing methodologies, and second, the deployment of EQI-BGWO to calibrate the SACNet's parameters, culminating in a notable uplift in VTDR classification accuracy. Notably, the model handles binary, 5-stage, and 7-stage VTDR classification. Rigorous assessments on the fundus image dataset, underscored by metrics such as accuracy, sensitivity, specificity, precision, F1-score, and MCC, bear testament to NIMEQ-SACNet's pre-eminence over prevailing algorithms and classification frameworks.
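The abstract does not spell out the EQI-BGWO update rule. Purely as an illustration of the quantum-inspired binary optimization idea, a minimal sketch can encode each wolf as a vector of qubit angles and rotate them toward the best bit string found so far; every name, parameter, and design choice below is an assumption, not the paper's method.

```python
import numpy as np

def quantum_inspired_bgwo(fitness, n_bits, n_wolves=10, n_iters=50,
                          delta=0.05, seed=0):
    """Sketch of a quantum-inspired binary optimizer (not the paper's EQI-BGWO).

    Each wolf is a vector of qubit angles; "measuring" yields a bit string
    with P(bit=1) = sin^2(theta). Angles rotate toward the best wolf's bits.
    """
    rng = np.random.default_rng(seed)
    theta = np.full((n_wolves, n_bits), np.pi / 4)  # equal superposition
    best_bits, best_fit = None, np.inf
    for _ in range(n_iters):
        # Collapse each qubit to a bit according to its angle
        bits = (rng.random((n_wolves, n_bits)) < np.sin(theta) ** 2).astype(int)
        fits = np.array([fitness(b) for b in bits])
        i = int(np.argmin(fits))
        if fits[i] < best_fit:
            best_fit, best_bits = fits[i], bits[i].copy()
        # Rotate every qubit angle a small step toward the best bit string
        target = np.where(best_bits == 1, np.pi / 2, 0.0)
        theta += delta * np.sign(target - theta)
    return best_bits, best_fit

# Toy fitness: number of bits differing from an all-ones target
bits, fit = quantum_inspired_bgwo(lambda b: int(np.sum(b == 0)), n_bits=16)
```

In the paper's setting, the bit string would encode SACNet hyperparameter choices and the fitness would be a validation-set classification error.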
Affiliation(s)
- Anas Bilal
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Xiaowen Liu
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Muhammad Shafiq
- School of Information Engineering, Qujing Normal University, Sichuan, China
- Zohaib Ahmed
- Department of Criminology and Forensic Sciences, Lahore Garrison University, Lahore, Pakistan
- Haixia Long
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
2. Vanmathi P, Jose D. An ensemble-based serial cascaded attention network and improved variational auto encoder for breast cancer prognosis prediction using data. Comput Methods Biomech Biomed Engin 2024; 27:98-115. [PMID: 38006210] [DOI: 10.1080/10255842.2023.2280883] [Received: 08/29/2023] [Accepted: 11/02/2023]
Abstract
Breast cancer is one of the most common cancers in women and accounts for a large share of cancer deaths worldwide. Early recognition lessens its impact: it can convince patients to receive surgical therapy, which significantly improves the chance of recovery, whereas late recognition can lead to death. Machine learning techniques use historical patient data to find links between clinical variables and to forecast fresh occurrences, so an accurate predictive framework for breast cancer prognosis is urgently needed in the current era. To accomplish this objective, an adaptive ensemble model is proposed for breast cancer prognosis prediction using data. At the initial stage, the raw data are fetched from benchmark datasets, followed by data cleaning and preprocessing. Subsequently, the pre-processed data are fed into the Improved Variational Autoencoder (IVAE), where deep features are extracted. Finally, the resultant features are given as input to the Ensemble-based Serial Cascaded Attention Network (ESCANet), which is built with a Deep Temporal Convolution Network (DTCN), Bi-directional Long Short-Term Memory (BiLSTM), and a Recurrent Neural Network (RNN). The effectiveness of the model is validated against conventional methodologies; the results show that the proposed methodology outperforms them and increases the system's efficiency.
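How the DTCN, BiLSTM, and RNN branches of ESCANet are combined is not detailed in the abstract. A common ensembling choice, sketched here only as an assumption, is soft voting: averaging the class-probability outputs of the branches.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class-probability outputs from several models (soft voting)."""
    probs = np.stack(prob_list)              # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, float) / np.sum(weights)
    avg = np.tensordot(weights, probs, axes=1)   # (n_samples, n_classes)
    return avg.argmax(axis=1), avg

# Hypothetical outputs of three branches for two samples, two classes
p_dtcn   = np.array([[0.9, 0.1], [0.4, 0.6]])
p_bilstm = np.array([[0.8, 0.2], [0.3, 0.7]])
p_rnn    = np.array([[0.6, 0.4], [0.7, 0.3]])
labels, avg = soft_vote([p_dtcn, p_bilstm, p_rnn])   # -> labels [0, 1]
```

Weighted voting (e.g., by each branch's validation accuracy) is a one-line change via the `weights` argument.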
Affiliation(s)
- P Vanmathi
- Full-time Research Scholar, Department of ECE, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
- Deepa Jose
- Professor, Department of ECE, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
3. Ferreira ACBH, Ferreira DD, Barbosa BHG, Aline de Oliveira U, Aparecida Padua E, Oliveira Chiarini F, Baena de Moraes Lopes MH. Neural network-based method to stratify people at risk for developing diabetic foot: A support system for health professionals. PLoS One 2023; 18:e0288466. [PMID: 37440514] [DOI: 10.1371/journal.pone.0288466] [Received: 09/27/2022] [Accepted: 06/27/2023]
Abstract
BACKGROUND AND OBJECTIVE Diabetes Mellitus (DM) is a chronic disease with a high worldwide prevalence. Diabetic foot is one of the DM complications and compromises health and quality of life due to the risk of lower limb amputation. This work aimed to build a risk classification system for the evolution of diabetic foot using Artificial Neural Networks (ANN). METHODS This methodological study used two databases: one for system design (training and validation) containing 250 participants with DM, and another for testing, containing 141 participants. Each subject answered a questionnaire with 54 questions about foot care and sociodemographic information. Participants from both databases were classified by specialists as high or low risk for diabetic foot. Supervised ANN models (multilayer perceptron, MLP) were explored and a smartphone app was built. The app returns a personalized report indicating self-care for each user. The System Usability Scale (SUS) was used for the usability evaluation. RESULTS MLP models were built and, based on the principle of parsimony, the simplest model was chosen for implementation in the application. The model achieved accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 76%, 91%, 89%, and 79%, respectively, on the test data. The app presented good usability (93.33 points on a scale from 0 to 100). CONCLUSIONS The study showed that the proposed model has satisfactory performance and is simple, considering that it requires only 10 variables. This simplicity facilitates its use by health professionals and patients with diabetes.
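The accuracy, sensitivity, specificity, and positive/negative predictive values reported above all follow from the test-set confusion matrix; a small helper makes the definitions concrete (the counts in the example are illustrative, not the study's).

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on the high-risk class
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

m = binary_metrics(tp=38, fp=5, tn=82, fn=12)   # hypothetical counts
```

Reporting all five together, as the study does, matters because sensitivity and PPV can diverge sharply when the high-risk class is rare.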
Affiliation(s)
- Ana Cláudia Barbosa Honório Ferreira
- School of Nursing, Universidade Estadual de Campinas, Campinas, São Paulo, Brazil
- University Center of Lavras, Unilavras, Lavras, Minas Gerais, Brazil
4. Soria C, Arroyo Y, Torres AM, Redondo MÁ, Basar C, Mateo J. Method for Classifying Schizophrenia Patients Based on Machine Learning. J Clin Med 2023; 12:4375. [PMID: 37445410] [DOI: 10.3390/jcm12134375] [Received: 05/29/2023] [Revised: 06/21/2023] [Accepted: 06/27/2023]
Abstract
Schizophrenia is a chronic and severe mental disorder that affects individuals in various ways, particularly in their ability to perceive, process, and respond to stimuli. This condition has a significant impact on a considerable number of individuals. Consequently, the study, analysis, and characterization of this pathology are of paramount importance. Electroencephalography (EEG) is frequently utilized in the diagnostic assessment of various brain disorders due to its non-intrusiveness, excellent temporal resolution, and ease of placement. However, the manual analysis of EEG recordings can be a complex and time-consuming task for healthcare professionals, so automated analysis can help alleviate the burden on doctors and provide valuable insights to support clinical diagnosis; many studies are working along these lines. In this research paper, the authors propose a machine learning (ML) method based on the eXtreme Gradient Boosting (XGB) algorithm for analyzing EEG signals. The study compares the performance of the proposed XGB-based approach with four other supervised ML systems. According to the results, the proposed XGB-based method demonstrates superior performance, with an AUC of 0.94 and an accuracy of 0.94, surpassing the other compared methods. The implemented system exhibits high accuracy and robustness in classifying schizophrenia patients based on EEG recordings and holds potential as a valuable complementary tool to support clinicians in the clinical diagnosis of schizophrenia in hospitals.
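The AUC used as the headline metric here can be computed without any ML library: it is the normalized Mann-Whitney U statistic, i.e. the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counted as half).

```python
def auc(labels, scores):
    """AUC as P(score_pos > score_neg), ties counted half (Mann-Whitney U)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive-negative pairs are ranked correctly -> AUC 0.75
score = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

This quadratic-time version is fine for small EEG test sets; rank-based implementations scale to larger ones.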
Affiliation(s)
- Carmen Soria
- Institute of Technology, University of Castilla-La Mancha, 16071 Cuenca, Spain
- Clinical Neurophysiology Service, Virgen de la Luz Hospital, 16002 Cuenca, Spain
- Yoel Arroyo
- Faculty of Social Sciences and Information Technology, University of Castilla-La Mancha, 45600 Talavera de la Reina, Spain
- Ana María Torres
- Institute of Technology, University of Castilla-La Mancha, 16071 Cuenca, Spain
- Miguel Ángel Redondo
- School of Informatics, University of Castilla-La Mancha, 13071 Ciudad Real, Spain
- Christoph Basar
- Faculty of Human and Health Sciences, University of Bremen, 28359 Bremen, Germany
- Jorge Mateo
- Institute of Technology, University of Castilla-La Mancha, 16071 Cuenca, Spain
5. Li H, Zhong J, Lin L, Chen Y, Shi P. Semi-supervised nuclei segmentation based on multi-edge features fusion attention network. PLoS One 2023; 18:e0286161. [PMID: 37228137] [DOI: 10.1371/journal.pone.0286161] [Received: 12/22/2022] [Accepted: 05/09/2023]
Abstract
The morphology of the nuclei carries most of the clinical pathological information, and nuclei segmentation is a vital step in current automated histopathological image analysis. Supervised machine learning-based segmentation models have already achieved outstanding performance given sufficiently precise human annotations. Nevertheless, outlining such labels on numerous nuclei requires extensive expertise and is time-consuming, so automatic nuclei segmentation with minimal manual intervention is highly needed to promote the effectiveness of clinical pathological research. Semi-supervised learning greatly reduces the dependence on labeled samples while ensuring sufficient accuracy. In this paper, we propose a Multi-Edge Feature Fusion Attention Network (MEFFA-Net) with three feature inputs, image, pseudo-mask, and edge, which enhances its learning ability by considering multiple features. Only a few labeled nuclei boundaries are used to generate pseudo-annotations on the remaining, mostly unlabeled data. MEFFA-Net creates more precise boundary masks for nucleus segmentation based on pseudo-masks, which greatly reduces the dependence on manual labeling, while the MEFFA-Block focuses on the nuclei outline and selects features conducive to segmentation, making full use of the multiple features. Experimental results on the public multi-organ databases MoNuSeg, CPM-17, and CoNSeP show that the proposed model reaches mean IoU values of 0.706, 0.751, and 0.722, respectively. The model also achieves better results than some cutting-edge methods while reducing the labeling work to 1/8 of common supervised strategies. Our method provides a more efficient and accurate basis for nuclei segmentation and further quantification in pathological research.
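The mean IoU figures quoted above, and the Dice coefficient commonly reported alongside, are standard overlap measures between a predicted and a reference binary mask; for completeness, a minimal sketch:

```python
import numpy as np

def iou_dice(pred, target, eps=1e-7):
    """Intersection-over-union and Dice coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return inter / (union + eps), dice

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
iou, dice = iou_dice(pred, target)   # inter=2, union=4 -> IoU 0.5, Dice 2/3
```

The "mean IoU" reported for MoNuSeg, CPM-17, and CoNSeP would then be this quantity averaged over the test images.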
Affiliation(s)
- Huachang Li
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Jing Zhong
- Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Liyan Lin
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Yanping Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Peng Shi
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
6. CT-Based Automatic Spine Segmentation Using Patch-Based Deep Learning. Int J Intell Syst 2023. [DOI: 10.1155/2023/2345835]
Abstract
CT vertebral segmentation plays an essential role in various clinical applications, such as computer-assisted surgical interventions, assessment of spinal abnormalities, and vertebral compression fractures. Automatic CT vertebral segmentation is challenging due to the overlapping shadows of thoracoabdominal structures such as the lungs, bony structures such as the ribs, and other issues such as ambiguous object borders, complicated spine architecture, patient variability, and fluctuations in image contrast. Deep learning is an emerging technique for disease diagnosis in the medical field. This study proposes a patch-based deep learning approach that extracts discriminative features from unlabeled data using a stacked sparse autoencoder (SSAE). 2D slices from a CT volume are divided into overlapping patches that are fed into the model for training. A random under-sampling (RUS) module is applied to balance the training data by selecting a subset of the majority class. The SSAE uses pixel intensities alone to learn high-level features that distinguish image patches. Each image is subjected to a sliding-window operation to express image patches using the autoencoder's high-level features, which are then fed into a sigmoid layer to classify whether each patch is a vertebra or not. We validate our approach on three diverse publicly available datasets: VerSe, CSI-Seg, and the Lumbar CT dataset. After configuration optimization, our method outperformed other models, achieving 89.9% precision, 90.2% recall, 98.9% accuracy, 90.4% F-score, 82.6% intersection over union (IoU), and 90.2% Dice coefficient (DC). The results demonstrate that the model performs consistently under a variety of validation strategies and is flexible, fast, and generalizable, making it suited for clinical application.
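The overlapping-patch step described above can be sketched as follows; the patch size and stride here are illustrative defaults, not the paper's settings.

```python
import numpy as np

def extract_patches(image, patch=32, stride=16):
    """Slide a window over a 2D slice and collect overlapping square patches.

    With stride < patch, consecutive windows overlap, as in the paper's
    patch-based training setup.
    """
    h, w = image.shape
    patches = [image[r:r + patch, c:c + patch]
               for r in range(0, h - patch + 1, stride)
               for c in range(0, w - patch + 1, stride)]
    return np.stack(patches)

slice_2d = np.zeros((64, 64))
p = extract_patches(slice_2d)   # 3 x 3 window positions -> 9 patches
```

The RUS step would then subsample the (far more numerous) non-vertebra patches before training, so both classes contribute comparably to the loss.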
7. Ahmad M, Sanawar S, Alfandi O, Qadri SF, Saeed IA, Khan S, Hayat B, Ahmad A. Facial expression recognition using lightweight deep learning modeling. Math Biosci Eng 2023; 20:8208-8225. [PMID: 37161193] [DOI: 10.3934/mbe.2023357]
Abstract
Facial expression is a form of communication and is useful in many areas of computer vision, including intelligent visual surveillance, human-robot interaction, and human behavior analysis. A deep learning approach is presented to classify happy, sad, angry, fearful, contemptuous, surprised, and disgusted expressions. Accurate detection and classification of human facial expression is a critical task in image processing due to challenges including changes in illumination, occlusion, noise, and the over-fitting problem. A stacked sparse autoencoder for facial expression recognition (SSAE-FER) is used for unsupervised pre-training and supervised fine-tuning. SSAE-FER automatically extracts features from input images, and a softmax classifier is used to classify the expressions. Our method achieved an accuracy of 92.50% on the JAFFE dataset and 99.30% on the CK+ dataset, performing well compared to other methods in the same domain.
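A sparse autoencoder such as SSAE-FER is typically trained with a reconstruction loss plus a KL-divergence penalty that pushes the mean hidden activations toward a small target sparsity, with a softmax layer on top for classification. A sketch of these two pieces follows; the target sparsity `rho` is a typical hyperparameter choice, not taken from the paper.

```python
import numpy as np

def kl_sparsity(hidden_mean, rho=0.05, eps=1e-8):
    """KL(rho || rho_hat) summed over hidden units; added to the
    reconstruction loss to encourage sparse activations."""
    r = np.clip(hidden_mean, eps, 1 - eps)
    return float(np.sum(rho * np.log(rho / r)
                        + (1 - rho) * np.log((1 - rho) / (1 - r))))

def softmax(z):
    """Numerically stable softmax for the final expression classifier."""
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

The penalty is zero exactly when every hidden unit's mean activation equals `rho`, and grows as units become more active, which is what drives the learned features toward sparsity.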
Affiliation(s)
- Mubashir Ahmad
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Tobe Camp, Abbottabad-22060, Pakistan
- Department of Computer Science, the University of Lahore, Sargodha Campus 40100, Pakistan
- Saira Sanawar
- Department of Computer Science, the University of Lahore, Sargodha Campus 40100, Pakistan
- Omar Alfandi
- College of Technological Innovation at Zayed University in Abu Dhabi, UAE
- Syed Furqan Qadri
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Iftikhar Ahmed Saeed
- Department of Computer Science, the University of Lahore, Sargodha Campus 40100, Pakistan
- Salabat Khan
- College of Computer Science & Software Engineering, Shenzhen University, Shenzhen 518060, China
- Bashir Hayat
- Department of Computer Science, Institute of Management Sciences, Peshawar, Pakistan
- Arshad Ahmad
- Department of IT & CS, Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST), Haripur 22620, Pakistan
8. Automated Lung Cancer Segmentation in Tissue Micro Array Analysis Histopathological Images Using a Prototype of Computer-Assisted Diagnosis. J Pers Med 2023; 13:jpm13030388. [PMID: 36983570] [PMCID: PMC10051974] [DOI: 10.3390/jpm13030388] [Received: 02/02/2023] [Revised: 02/16/2023] [Accepted: 02/16/2023]
Abstract
Background: Lung cancer is a fatal disease that kills approximately 85% of those diagnosed with it. In recent years, advances in medical imaging have greatly improved the acquisition, storage, and visualization of various pathologies, making imaging a necessary component of medicine today. Objective: Develop a computer-aided diagnostic system for early lung cancer detection by segmenting tumor and non-tumor tissue on Tissue Micro Array Analysis (TMA) histopathological images. Method: The prototype computer-aided diagnostic system was developed to segment tumor areas, non-tumor areas, and background on TMA histopathological images. Results: The system achieved an average accuracy of 83.4% and an F-measure of 84.4% in segmenting tumor and non-tumor tissue. Conclusion: The computer-aided diagnostic system provides a second diagnostic opinion to specialists, allowing for more precise diagnoses and more appropriate treatments for lung cancer.
9. da Silva DS, Nascimento CS, Jagatheesaperumal SK, de Albuquerque VHC. Mammogram Image Enhancement Techniques for Online Breast Cancer Detection and Diagnosis. Sensors (Basel) 2022; 22:8818. [PMID: 36433415] [PMCID: PMC9697415] [DOI: 10.3390/s22228818] [Received: 10/24/2022] [Revised: 11/09/2022] [Accepted: 11/10/2022]
Abstract
Breast cancer has the highest incidence and mortality among female cancers worldwide. The adoption of modern technologies that accelerate, automate, and reduce the subjectivity of medical diagnosis is therefore of paramount importance for efficient treatment. This work proposes a robust platform to compare and evaluate strategies for enhancing breast ultrasound images, benchmarking them against state-of-the-art techniques on the task of classifying images as benign, malignant, or normal. Investigations were performed on a dataset containing a total of 780 images divided into benign, malignant, and normal classes, and a data augmentation technique was used to scale up the corpus of available images. Novel image enhancement techniques were applied, and Multilayer Perceptron, k-Nearest Neighbor, and Support Vector Machine algorithms were used for classification. From the outcomes of the conducted experiments, the bilateral filtering algorithm together with the SVM classifier achieved the best result for breast cancer classification, with an overall accuracy of 96.69% and an accuracy of 95.11% for the detection of malignant nodules. The application of image enhancement methods can therefore help detect breast cancer at a much earlier stage with better detection accuracy.
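The bilateral filter singled out above smooths noise while preserving edges by weighting each neighbor jointly by spatial closeness and intensity similarity. A minimal (slow, for-loop) sketch, with illustrative parameter values:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted average of
    its neighborhood, where the weight combines a spatial Gaussian with an
    intensity-similarity Gaussian, so strong edges are not blurred."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rang = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rang
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out

flat = bilateral_filter(np.full((8, 8), 0.5))   # constant image is unchanged
```

This edge-preserving behavior is why bilateral filtering pairs well with lesion classification: speckle noise is reduced without washing out the nodule boundaries the classifier relies on.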
Affiliation(s)
- Daniel S. da Silva
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
- Caio S. Nascimento
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
- Senthil K. Jagatheesaperumal
- Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi 626005, TN, India
10. Kumar A, Singh Sodhi S. Classification of data on stacked autoencoder using modified sigmoid activation function. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-212873]
Abstract
A neural network is one technique by which we classify data. In this paper, we propose an effective two-layer stacked autoencoder built with a modified sigmoid activation function and compare it to the existing autoencoder technique, which generally uses the logsigmoid activation function. In multiple cases the existing technique cannot achieve good results, and our proposed autoencoder may then achieve better ones, because the modified sigmoid activation function produces more variation across different input values. We tested both autoencoders on the iris, glass, wine, ovarian, and digit (image) datasets for comparison purposes. The existing autoencoder technique achieved 96% accuracy on iris, 91% on wine, 95.4% on ovarian, 96.3% on glass, and 98.7% on the digit (image) dataset. Our proposed autoencoder achieved 100% accuracy on iris, wine, ovarian, and glass, and 99.4% on the digit (image) dataset. For further verification of the effectiveness of our proposed autoencoder, we took three more datasets: abalone, thyroid, and chemical. Our proposed autoencoder achieved 100% accuracy on abalone and chemical, and 96% on thyroid.
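The paper's exact modification of the sigmoid is not given in the abstract. Purely as an illustration of the stated idea, that the modified activation "gives more variation for different input values", one simple family of modifications steepens the logistic curve so mid-range inputs map to a wider spread of outputs:

```python
import math

def logsigmoid(x):
    """Standard logistic activation used by the baseline autoencoder."""
    return 1.0 / (1.0 + math.exp(-x))

def modified_sigmoid(x, a=2.0):
    """Illustrative steeper variant (NOT the paper's formula): a > 1
    increases the slope at the origin, spreading mid-range outputs."""
    return 1.0 / (1.0 + math.exp(-a * x))
```

With `a = 1.0` the variant reduces to the standard logsigmoid, so the baseline is a special case of this illustrative family.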
Affiliation(s)
- Arvind Kumar
- Computer Science and Engineering, USICT, GGSIPU, Delhi
11. Latif G, Morsy H, Hassan A, Alghazo J. Novel Coronavirus and Common Pneumonia Detection from CT Scans Using Deep Learning-Based Extracted Features. Viruses 2022; 14:v14081667. [PMID: 36016288] [PMCID: PMC9414828] [DOI: 10.3390/v14081667] [Received: 06/25/2022] [Revised: 07/23/2022] [Accepted: 07/26/2022]
Abstract
COVID-19, which was announced as a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine is developed that can prevent COVID-19 infection, the testing of individuals will be a continuous process. Medical personnel monitor and treat all health conditions, so the time-consuming process of monitoring and testing all individuals for COVID-19 becomes an impossible task, especially as COVID-19 shares similar symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, systems that can detect and diagnose COVID-19 without human intervention remain an urgent priority, and will remain so because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction with well-known classifiers to accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets from the China Consortium for Chest CT Image Investigation (CC-CCII) were used. The highest average accuracy obtained was 99.9% using the modified ML process when 2000 features were extracted using GoogleNet and ResNet18 and classified with a support vector machine (SVM). The results were higher than those of similar methods reported in the extant literature using the same datasets or different datasets of similar size; thus, this study adds value to the current body of knowledge. Further research in this field is required to develop methods that can be applied in hospitals and can better equip mankind to be prepared for any future pandemics.
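The abstract states that 2000 features extracted by GoogleNet and ResNet18 feed an SVM, but not how those features are selected or combined. A hypothetical sketch of one plausible scheme, variance ranking followed by concatenation, is shown below; the selection rule, array sizes, and variable names are all assumptions, not the paper's pipeline.

```python
import numpy as np

def concat_features(f_a, f_b, k=1000):
    """Keep the k highest-variance features from each backbone's pooled
    activations (an illustrative selection rule) and concatenate them
    into one descriptor per scan for a downstream classifier."""
    def top_k(f):
        order = np.argsort(f.var(axis=0))[::-1][:k]
        return f[:, order]
    return np.hstack([top_k(f_a), top_k(f_b)])

rng = np.random.default_rng(0)
fg = rng.normal(size=(8, 1024))   # stand-in for GoogleNet pooled activations
fr = rng.normal(size=(8, 512))    # stand-in for ResNet18 pooled activations
x = concat_features(fg, fr, k=256)   # (8, 512) combined descriptor
```

In the paper's setup, `k = 1000` per backbone would yield the 2000-dimensional descriptors passed to the SVM.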
Affiliation(s)
- Ghazanfar Latif
- Computer Science Department, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
- Department of Computer Sciences and Mathematics, Université du Québec à Chicoutimi, 555 Boulevard de l’Université, Chicoutimi, QC G7H 2B1, Canada
- Hamdy Morsy
- Department of Applied Natural Sciences, College of Community, Qassim University, Buraydah 52571, Saudi Arabia
- Department of Electronics and Communications, College of Engineering, Helwan University, Cairo 11792, Egypt
- Asmaa Hassan
- Faculty of Medicine, Helwan University, Helwan 11795, Egypt
- Jaafar Alghazo
- Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA
12. Zhao W, Sun Y, Kuang K, Yang J, Li G, Ni B, Jiang Y, Jiang B, Liu J, Li M. ViSTA: A Novel Network Improving Lung Adenocarcinoma Invasiveness Prediction from Follow-Up CT Series. Cancers (Basel) 2022; 14:cancers14153675. [PMID: 35954342] [PMCID: PMC9367560] [DOI: 10.3390/cancers14153675] [Received: 06/07/2022] [Revised: 07/17/2022] [Accepted: 07/20/2022]
Abstract
Simple Summary: Assessing follow-up computed tomography (CT) series is of great importance in clinical practice for lung nodule diagnosis. Deep learning is a thriving data mining method in medical imaging and has obtained surprising results. However, previous studies mostly focused on the analysis of single static time points instead of entire follow-up series and required regular intervals between CT examinations. In the current study, we propose a new deep learning framework, named ViSTA, that can better evaluate tumor invasiveness using irregularly sampled serial follow-up CT images, helping to avoid aggressive procedures or delayed diagnosis in clinical practice. ViSTA provides a new solution for irregularly sampled data and delivers superior performance compared with other static or serial deep learning models, approaching human-level prediction of lung adenocarcinoma invasiveness while being transferrable to other tasks analyzing serial medical data.

This study investigated the value of deep learning in predicting the invasiveness of early lung adenocarcinoma based on irregularly sampled follow-up computed tomography (CT) scans. In total, 351 nodules were enrolled in the study. A new deep learning network based on temporal attention, named Visual Simple Temporal Attention (ViSTA), was proposed to process irregularly sampled follow-up CT scans, and substantial experiments were conducted to investigate the supplemental value of serial CTs in predicting invasiveness. A test set composed of 69 lung nodules was reviewed by three radiologists, and the performance of the model and the radiologists was compared and analyzed. We also performed a visual investigation to explore the inherent growth pattern of early adenocarcinomas. Among counterpart models, ViSTA showed the best performance (AUC: 86.4% vs. 60.6%, 75.9%, 66.9%, 73.9%, 76.5%, 78.3%) and also outperformed the model based on Volume Doubling Time (AUC: 60.6%). ViSTA scored higher than two junior radiologists (accuracy of 81.2% vs. 75.4% and 71.0%) and came close to the senior radiologist (85.5%). Our proposed model achieved promising accuracy in evaluating the invasiveness of early-stage lung adenocarcinoma; its performance is comparable with senior experts and better than junior experts and traditional deep learning models. With further validation, it can potentially be applied in clinical practice.
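The Volume Doubling Time baseline mentioned above assumes exponential nodule growth between two CT measurements, which gives a closed-form expression:

```python
import math

def volume_doubling_time(v1, v2, days):
    """VDT = t * ln 2 / ln(V2 / V1), assuming exponential nodule growth
    between two volume measurements taken `days` apart."""
    return days * math.log(2) / math.log(v2 / v1)

vdt = volume_doubling_time(v1=100.0, v2=200.0, days=90)   # doubles in 90 days
```

A key limitation, and one motivation for a learned serial model like ViSTA, is that VDT reduces an entire irregular follow-up series to a single pairwise growth rate.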
Affiliation(s)
- Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Yingli Sun
- Department of Radiology, Huadong Hospital, Fudan University, Shanghai 200040, China
- Kaiming Kuang
- Dianei Technology, Shanghai 200051, China
- Jiancheng Yang
- Dianei Technology, Shanghai 200051, China
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ge Li
- Department of Radiology, The Xiangya Hospital, Central South University, Changsha 410008, China
- Bingbing Ni
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yingjia Jiang
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Bo Jiang
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Radiology Quality Control Center, Changsha 410011, China
- Correspondence: Tel. +86-137-8708-5002 (J.L.), +86-138-1662-0371 (M.L.); Fax +86-0731-85292116 (J.L.), +86-21-57643271 (M.L.)
- Ming Li
- Department of Radiology, Huadong Hospital, Fudan University, Shanghai 200040, China
- Institute of Functional and Molecular Medical Imaging, Fudan University, Shanghai 200437, China
Collapse
|
13
|
Deep Learning Model for Grading Metastatic Epidural Spinal Cord Compression on Staging CT. Cancers (Basel) 2022; 14:cancers14133219. [PMID: 35804990 PMCID: PMC9264856 DOI: 10.3390/cancers14133219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 06/21/2022] [Accepted: 06/24/2022] [Indexed: 02/02/2023] Open
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a disastrous complication of advanced malignancy. Deep learning (DL) models for automatic MESCC classification on staging CT were developed to aid earlier diagnosis. Methods: This retrospective study included 444 CT staging studies from 185 patients with suspected MESCC who underwent MRI spine studies within 60 days of the CT studies. The DL model training/validation dataset consisted of 316/358 (88%) CT studies and the test set of 42/358 (12%). Training/validation and test datasets were labeled in consensus by two subspecialized radiologists (6 and 11 years of experience) using the MRI studies as the reference standard. Test sets were labeled by the developed DL models and four radiologists (2–7 years of experience) for comparison. Results: DL models showed almost-perfect interobserver agreement for classification of CT spine images into normal, low-grade, and high-grade MESCC, with kappas ranging from 0.873 to 0.911 (p < 0.001). The DL models (lowest κ = 0.873, 95% CI 0.858–0.887) also showed superior interobserver agreement compared to two of the four radiologists for three-class classification, including a specialist (κ = 0.820, 95% CI 0.803–0.837) and a general radiologist (κ = 0.726, 95% CI 0.706–0.747), both p < 0.001. Conclusion: DL models for MESCC classification on CT showed interobserver agreement comparable or superior to that of radiologists and could be used to aid earlier diagnosis.
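The agreement figures above are Cohen's kappa values, which correct raw agreement between two raters for agreement expected by chance. A minimal sketch of the standard statistic (an illustrative helper, not the study's code):

```python
def cohens_kappa(rater_a, rater_b, labels):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), chance-corrected agreement."""
    n = len(rater_a)
    # Observed agreement: fraction of cases the two raters label identically.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two raters grading six cases as normal (0), low (1), or high (2) MESCC:
k = cohens_kappa([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 0], labels=[0, 1, 2])
print(round(k, 2))  # -> 0.75
```

Values above roughly 0.81 are conventionally read as almost-perfect agreement, which is the band the DL models reached.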
|
14
|
Deep Learning to Measure the Intensity of Indocyanine Green in Endometriosis Surgeries with Intestinal Resection. J Pers Med 2022; 12:jpm12060982. [PMID: 35743768 PMCID: PMC9224804 DOI: 10.3390/jpm12060982] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 06/11/2022] [Accepted: 06/15/2022] [Indexed: 01/03/2023] Open
Abstract
Endometriosis is a gynecological pathology that affects between 6 and 15% of women of childbearing age. One of its manifestations is intestinal deep infiltrating endometriosis. This condition may force patients to resort to surgical treatment, often ending in resection. The level of blood perfusion at the anastomosis is crucial for its outcome; for this reason, indocyanine green (ICG), a fluorochrome that stains green the structures where it is present, is injected during surgery. This study proposes a novel method based on deep learning algorithms for quantifying the level of blood perfusion in the anastomosis. Firstly, with a deep learning algorithm based on the U-Net, models capable of automatically segmenting the intestine from the surgical videos were generated. Secondly, the blood perfusion level was quantified from the already segmented video frames. The frames were characterized using textures, specifically nine first- and second-order statistics, and then two experiments were carried out. In the first experiment, the differences in perfusion between the two anastomosis parts were determined, and in the second, it was verified that the ICG variation could be captured through the textures. The best segmentation model achieved an accuracy of 0.92 and a Dice coefficient of 0.96. It is concluded that segmentation of the bowel using the U-Net was successful, and that the textures are appropriate descriptors for characterizing blood perfusion in images where ICG is present. This might help predict during surgery whether postoperative complications will occur, enabling clinicians to act on this information.
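The Dice coefficient reported for the segmentation model measures the overlap between predicted and reference masks: 2|A∩B| / (|A| + |B|). A minimal sketch over flat binary masks (illustrative, not the study's code):

```python
def dice_coefficient(pred, truth):
    """Dice = 2 * |intersection| / (|pred| + |truth|) for flat 0/1 mask lists."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0  # both empty: perfect match

# Prediction and ground truth each mark two pixels, agreeing on one of them:
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
```

A Dice of 0.96, as reported, means the predicted bowel mask and the reference mask overlap almost completely.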
|
15
|
Ahmad M, Qadri SF, Ashraf MU, Subhi K, Khan S, Zareen SS, Qadri S. Efficient Liver Segmentation from Computed Tomography Images Using Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2665283. [PMID: 35634046 PMCID: PMC9132625 DOI: 10.1155/2022/2665283] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 04/06/2022] [Indexed: 12/11/2022]
Abstract
Segmentation of the liver in computed tomography (CT) images is an important step toward quantitative biomarkers for computer-aided decision support and precise medical diagnosis. To overcome the difficulties of liver segmentation caused by fuzzy boundaries, a stacked autoencoder (SAE) is applied to learn the most discriminative features of the liver among other tissues in abdominal images. In this paper, we propose a patch-based deep learning method for segmentation of the liver from CT images using an SAE. Unlike traditional machine learning methods, instead of pixel-by-pixel learning, our algorithm utilizes patches to learn the representations and identify the liver area. We preprocessed the whole dataset to obtain enhanced images and converted each image into many overlapping patches. These patches are given as input to the SAE for unsupervised feature learning. Finally, the learned features are fine-tuned with the image labels, and classification is performed to develop the probability map in a supervised way. Experimental results demonstrate that our proposed algorithm shows satisfactory results on test images, achieving a 96.47% Dice similarity coefficient (DSC), which is better than other methods in the same domain.
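The patch-based pipeline above converts each image into many (possibly overlapping) patches before feeding them to the SAE. A minimal sketch of sliding-window patch extraction over an image stored as a 2D list (illustrative only; the actual patch size and stride used by the authors are not stated here):

```python
def extract_patches(image, patch_size, stride):
    """Slide a patch_size x patch_size window over a 2D list with the given stride."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + patch_size] for row in image[y:y + patch_size]]
        for y in range(0, h - patch_size + 1, stride)
        for x in range(0, w - patch_size + 1, stride)
    ]

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 image
patches = extract_patches(image, patch_size=2, stride=2)   # stride = patch: tiling
print(len(patches))  # -> 4
```

Choosing a stride smaller than the patch size makes the windows overlap, which is what gives the SAE many training examples per image.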
Affiliation(s)
- Mubashir Ahmad
  - College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
  - Department of Computer Science and IT, The University of Lahore, Sargodha Campus, 40100, Lahore, Pakistan
- Syed Furqan Qadri
  - College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- M. Usman Ashraf
  - Department of Computer Science, GC Women University, Sialkot 51310, Pakistan
- Khalid Subhi
  - Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Salabat Khan
  - College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- Syeda Shamaila Zareen
  - Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Salman Qadri
  - Department of Computer Science, MNS University of Agriculture, Multan 60650, Pakistan
|
16
|
An Attention-Preserving Network-Based Method for Assisted Segmentation of Osteosarcoma MRI Images. MATHEMATICS 2022. [DOI: 10.3390/math10101665] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Osteosarcoma is a malignant bone tumor that is extremely dangerous to human health. Manually outlining the lesion area in an image using traditional methods is not only labor-intensive but also complicated. With the development of computer-aided diagnostic techniques, more and more researchers are focusing on automatic segmentation techniques for osteosarcoma analysis. However, existing methods ignore the size of osteosarcomas, making it difficult to identify and segment smaller tumors, which is very detrimental to the early diagnosis of osteosarcoma. Therefore, this paper proposes a Contextual Axial-Preserving Attention Network (CaPaN)-based MRI image-assisted segmentation method for osteosarcoma detection. Building on Res2Net, a parallel decoder is added to aggregate high-level features, effectively combining the local and global features of osteosarcoma. In addition, channel feature pyramid (CFP) and axial attention (A-RA) mechanisms are used. The lightweight CFP can extract feature mappings and contextual information of different sizes. A-RA uses axial attention to distinguish tumor tissue, which reduces computational costs and thus improves the generalization performance of the model. We conducted experiments using a real dataset provided by the Second Xiangya Affiliated Hospital, and the results showed that our proposed method achieves better segmentation results than alternative models. In particular, our method shows significant advantages in small-target segmentation: its precision is about 2% higher than the average of the other models, and for the segmentation of small objects, the DSC value of CaPaN is 0.021 higher than that of the commonly used U-Net method.
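Axial attention, the mechanism behind A-RA, factorizes 2D self-attention into a row pass followed by a column pass, reducing the cost from O((HW)²) to roughly O(HW(H+W)). A toy sketch over scalar features with tied query/key/value (a deliberate simplification for illustration, not the CaPaN implementation, which operates on multi-channel feature maps with learned projections):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend_1d(seq):
    """Self-attention along one axis; here q = k = v = the scalar features."""
    return [sum(w * v for w, v in zip(softmax([q * k for k in seq]), seq))
            for q in seq]

def axial_attention_2d(grid):
    """Row-wise attention followed by column-wise attention on a 2D grid."""
    rows = [attend_1d(row) for row in grid]
    cols = [attend_1d(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

out = axial_attention_2d([[0.0, 1.0], [2.0, 3.0]])
print(len(out), len(out[0]))  # shape is preserved: 2 2
```

Each position attends only along its own row and column, which is why the cost scales with H + W per pixel rather than with the full H × W.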
|
17
|
Stadlbauer A, Marhold F, Oberndorfer S, Heinz G, Buchfelder M, Kinfe TM, Meyer-Bäse A. Radiophysiomics: Brain Tumors Classification by Machine Learning and Physiological MRI Data. Cancers (Basel) 2022; 14:cancers14102363. [PMID: 35625967 PMCID: PMC9139355 DOI: 10.3390/cancers14102363] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 05/04/2022] [Accepted: 05/09/2022] [Indexed: 01/06/2023] Open
Abstract
Simple Summary The pretreatment diagnosis of contrast-enhancing brain tumors is still challenging in clinical neuro-oncology due to their very similar appearance on conventional MRI. A precise initial characterization, however, is essential to initiate appropriate treatment management, which can differ substantially between brain tumor entities. To overcome the low specificity of conventional MRI, several new neuroimaging methods have been developed and validated over the past decades. This increasing amount of diagnostic information makes a timely evaluation without computational support impossible in a clinical setting. Artificial intelligence methods such as machine learning offer new options to support clinicians. In this study, we combined nine common machine learning algorithms with a physiological MRI technique (we named this approach “radiophysiomics”) to investigate the effectiveness of multiclass classification of contrast-enhancing brain tumors in a clinical setting. We were able to demonstrate that radiophysiomics could be helpful in the routine diagnostics of contrast-enhancing brain tumors, but further automation using deep neural networks is required. Abstract The precise initial characterization of contrast-enhancing brain tumors has significant consequences for clinical outcomes. Various novel neuroimaging methods have been developed to increase the specificity of conventional magnetic resonance imaging (cMRI), but they have also increased the complexity of data analysis. Artificial intelligence offers new options to manage this challenge in clinical settings. Here, we investigated whether multiclass machine learning (ML) algorithms applied to a high-dimensional panel of radiomic features from advanced MRI (advMRI) and physiological MRI (phyMRI; thus, radiophysiomics) could reliably classify contrast-enhancing brain tumors.
The recently developed phyMRI technique enables the quantitative assessment of microvascular architecture, neovascularization, oxygen metabolism, and tissue hypoxia. A training cohort of 167 patients suffering from one of the five most common brain tumor entities (glioblastoma, anaplastic glioma, meningioma, primary CNS lymphoma, or brain metastasis), combined with nine common ML algorithms, was used to develop 135 classifiers in total. Multiclass classification performance was investigated using tenfold cross-validation and an independent test cohort. Adaptive boosting and random forest in combination with advMRI and phyMRI data were superior to human reading in accuracy (0.875 vs. 0.850), precision (0.862 vs. 0.798), F-score (0.774 vs. 0.740), AUROC (0.886 vs. 0.813), and classification error (5 vs. 6). The radiologists, however, showed higher sensitivity (0.767 vs. 0.750) and specificity (0.925 vs. 0.902). We demonstrated that ML-based radiophysiomics could be helpful in the clinical routine diagnosis of contrast-enhancing brain tumors; however, the high expenditure of time and work for data preprocessing calls for the inclusion of deep neural networks.
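The tenfold cross-validation used to evaluate the classifiers partitions the training cohort into ten folds, each serving once as the held-out set. A minimal index-splitting sketch (illustrative only; a real pipeline for this study would also stratify folds by tumor entity):

```python
def kfold_splits(n, k=10):
    """Return (train_idx, test_idx) pairs for k-fold cross-validation."""
    # Distribute n samples over k folds as evenly as possible.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, test))
        start += size
    return splits

splits = kfold_splits(167, k=10)  # cohort size taken from the abstract
print(len(splits), len(splits[0][1]))  # 10 folds; the first holds out 17 cases
```

Every sample appears in exactly one test fold, so each classifier is scored on data it never saw during that fold's training.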
Affiliation(s)
- Andreas Stadlbauer
  - Institute of Medical Radiology, University Clinic St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria;
  - Department of Neurosurgery, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany; (M.B.); (T.M.K.)
  - Correspondence:
- Franz Marhold
  - Department of Neurosurgery, University Clinic of St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria;
- Stefan Oberndorfer
  - Department of Neurology, University Clinic of St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria;
- Gertraud Heinz
  - Institute of Medical Radiology, University Clinic St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria;
- Michael Buchfelder
  - Department of Neurosurgery, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany; (M.B.); (T.M.K.)
- Thomas M. Kinfe
  - Department of Neurosurgery, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany; (M.B.); (T.M.K.)
  - Division of Functional Neurosurgery and Stereotaxy, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany
- Anke Meyer-Bäse
  - Department of Scientific Computing, Florida State University, 400 Dirac Science Library, Tallahassee, FL 32306-4120, USA;
|
18
|
Fully Automatic Segmentation, Identification and Preoperative Planning for Nasal Surgery of Sinuses Using Semi-Supervised Learning and Volumetric Reconstruction. MATHEMATICS 2022. [DOI: 10.3390/math10071189] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
The aim of this study is to develop an automatic segmentation algorithm based on paranasal sinus CT images that realizes automatic identification and segmentation of the sinus boundary and its inflamed proportions, as well as reconstruction of the volumes of the normal sinus and the inflamed site. Our goal is to overcome the current clinical dilemma of manually calculating the inflamed sinus volume, which is subjective and ineffective. A semi-supervised learning algorithm using pseudo-labels for self-training was proposed to train convolutional neural networks consisting of SENet, MobileNet, and ResNet. A total of 175 CT sets was analyzed, 50 of which were from patients who subsequently underwent sinus surgery. A 3D view and a volume-based modified Lund-Mackay score were determined and compared with traditional scores. Compared to state-of-the-art networks, our modifications achieved significant improvements in both sinus segmentation and classification, with an average pixel accuracy of 99.67%, an MIoU of 89.75%, and a Dice coefficient of 90.79%. The fully automatic nasal sinus volume reconstruction system successfully obtained the relevant detailed information by accurately acquiring the sinus contour edges in the CT images. The accuracy of our algorithm has been validated, and the results can be effectively applied in actual clinical medicine or forensic research.
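Pixel accuracy and MIoU, the segmentation metrics reported above, can both be derived from a per-class confusion matrix over pixels. A minimal sketch (an illustrative helper, not the study's evaluation code):

```python
def pixel_accuracy_and_miou(cm):
    """cm[i][j] = number of pixels of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    ious = []
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[j][i] for j in range(n)) - tp  # predicted i, truly another class
        fn = sum(cm[i]) - tp                       # truly i, predicted another class
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))       # IoU for class i
    return correct / total, sum(ious) / len(ious)  # pixel accuracy, mean IoU

# Two classes (background, sinus), 8 pixels, 6 of them labeled correctly:
acc, miou = pixel_accuracy_and_miou([[3, 1], [1, 3]])
print(acc, miou)  # -> 0.75 0.6
```

MIoU is the stricter of the two: heavily imbalanced classes can push pixel accuracy near 100% (as the 99.67% here suggests) while MIoU remains lower.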
|