1. Jin Y, Li J, Fan Z, Hua X, Wang T, Du S, Xi X, Li L. Recognition of regions of stroke injury using multi-modal frequency features of electroencephalogram. Front Neurosci 2024;18:1404816. PMID: 38915308; PMCID: PMC11194428; DOI: 10.3389/fnins.2024.1404816.
Abstract
Objective: A growing number of studies attempt to analyze stroke in advance, and identifying the region of brain injury is essential for stroke rehabilitation. Approach: We proposed multi-modal frequency features of the electroencephalogram (EEG) to classify the regions of stroke injury. EEG signals were obtained from stroke patients and healthy subjects, divided into a right-sided brain injury group, a left-sided brain injury group, a bilateral brain injury group, and healthy controls. First, the wavelet packet transform was used to perform a time-frequency analysis of the EEG signal and extract one set of features (denoted WPT features). Then, to explore the nonlinear phase-coupling information of the EEG signal, phase-locking values (PLV) and partial directed coherence (PDC) were extracted from the brain network, yielding a second set of features denoted functional connectivity (FC) features. Finally, the extracted features were fused and a ResNet50 convolutional neural network was used to classify the fused multi-modal (WPT + FC) features. Results: The classification accuracy of the proposed method reached 99.75%. Significance: The proposed multi-modal frequency features can serve as a potential indicator for distinguishing regions of brain injury in stroke patients and are potentially useful for optimizing decoding algorithms for brain-computer interfaces.
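As a concrete illustration of the PLV feature described above, here is a minimal sketch of computing a PLV functional-connectivity matrix from a multi-channel EEG epoch via the Hilbert transform. The channel count, epoch length, and random data are placeholders; this is a generic PLV computation, not the authors' full pipeline (which also involves WPT features, PDC, and ResNet50 fusion).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two equal-length channels:
    |mean(exp(i*(phi_x - phi_y)))|, with instantaneous phases
    taken from the analytic (Hilbert) signal."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Placeholder epoch: 32 channels x 1000 samples of random data.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 1000))

n_ch = eeg.shape[0]
plv = np.zeros((n_ch, n_ch))   # symmetric functional-connectivity matrix
for i in range(n_ch):
    for j in range(i + 1, n_ch):
        plv[i, j] = plv[j, i] = phase_locking_value(eeg[i], eeg[j])
```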
Affiliation(s)
- Yan Jin, Institute of Intelligent Control and Robotics, Hangzhou Dianzi University, Hangzhou, China
- Jing Li, Institute of Intelligent Control and Robotics, Hangzhou Dianzi University, Hangzhou, China
- Zhuyao Fan, Institute of Intelligent Control and Robotics, Hangzhou Dianzi University, Hangzhou, China
- Xian Hua, Jinhua People’s Hospital, Jinhua, China
- Ting Wang, Institute of Intelligent Control and Robotics, Hangzhou Dianzi University, Hangzhou, China
- Shunlan Du, Affiliated Dongyang Hospital of Wenzhou Medical University, Dongyang, China
- Xugang Xi, Institute of Intelligent Control and Robotics, Hangzhou Dianzi University, Hangzhou, China
- Lihua Li, Institute of Intelligent Control and Robotics, Hangzhou Dianzi University, Hangzhou, China

2. Kibriya H, Amin R. A residual network-based framework for COVID-19 detection from CXR images. Neural Comput Appl 2022;35:8505-8516. PMID: 36536673; PMCID: PMC9754308; DOI: 10.1007/s00521-022-08127-y.
Abstract
In late 2019, a new coronavirus disease (COVID-19) appeared in Wuhan, Hubei Province, China, and spread through many countries, affecting a large population. Polymerase chain reaction is currently used to diagnose COVID-19 in suspected patients, but its sensitivity is quite low. Researchers have therefore developed automated approaches for reliably and promptly identifying COVID-19 from X-ray images. Traditional machine learning-based image classification algorithms, however, require manual image segmentation and feature extraction, which is time-consuming. Owing to promising results and robust performance, convolutional neural network (CNN)-based techniques are widely used to classify COVID-19 from chest X-rays (CXR). This study explores CNN-based COVID-19 classification methods, and a series of detection and classification experiments validates the viability of the proposed framework. The dataset is first preprocessed and then fed into two Residual Network (ResNet) architectures, ResNet18 and ResNet50, for deep feature extraction, while support vector machines with multiple kernels (quadratic, linear, Gaussian, and cubic) classify these features. The experimental results suggest that the proposed framework efficiently detects COVID-19 from CXR images, obtaining a best accuracy of 97.3% with ResNet50.
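The pipeline described (a pretrained ResNet backbone as a fixed feature extractor feeding an SVM) is a common pattern; a minimal sketch with torchvision and scikit-learn follows. The preprocessing constants are the standard ImageNet values, and `X_train`/`y_train` are hypothetical placeholders; this does not reproduce the paper's exact training setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Truncate a pretrained ResNet50 at the global-average-pool layer so it
# emits a 2048-dim feature vector per image.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return feature_extractor(batch).flatten(1).numpy()

# "rbf" corresponds to the Gaussian kernel; kernel="poly" with degree 2
# or 3 gives the quadratic and cubic kernels the abstract mentions.
svm = SVC(kernel="rbf")
# X_train, y_train: hypothetical CXR images and labels.
# svm.fit(extract_features(X_train), y_train)
```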
Affiliation(s)
- Hareem Kibriya, Department of Computer Sciences, University of Engineering and Technology, Taxila, Pakistan
- Rashid Amin, Department of Computer Sciences, University of Engineering and Technology, Taxila, Pakistan; Department of Computer Science, University of Chakwal, Chakwal 48800, Pakistan

3. Alderfer S, Sun J, Tahtamouni L, Prasad A. Morphological signatures of actin organization in single cells accurately classify genetic perturbations using CNNs with transfer learning. Soft Matter 2022;18:8342-8354. PMID: 36222484; DOI: 10.1039/d2sm01000c.
Abstract
The actin cytoskeleton plays essential roles in countless cell processes, from cell division to migration to signaling. In cancer cells, cytoskeletal dynamics, cytoskeletal filament organization, and overall cell morphology are known to be substantially altered. We hypothesize that actin fiber organization and cell shape may carry specific signatures of genetic or signaling perturbations. We used convolutional neural networks (CNNs) on a small fluorescence microscopy image dataset of retinal pigment epithelial (RPE) cells and triple-negative breast cancer (TNBC) cells to identify morphological signatures in cancer cells. Using a transfer learning approach, CNNs could be trained to distinguish between normal and oncogenically transformed RPE cells with an accuracy of about 95% or better at the single-cell level. Furthermore, CNNs could distinguish transformed cell lines differing by an oncogenic mutation from each other and could detect knockdown of cofilin in TNBC cells, indicating that each single oncogenic mutation or cytoskeletal perturbation produces a unique signature in actin morphology. Applying the Local Interpretable Model-Agnostic Explanations (LIME) method to visually interpret the CNN results revealed features of the global actin structure relevant for some cells and classification tasks; many of these features were supported by previous biological observations. Actin fiber organization is thus a sensitive marker of cell identity, and identifying its perturbations could be very useful for assaying cell phenotypes, including disease states.
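LIME for images works by perturbing superpixels and fitting a local surrogate model around one prediction; a minimal sketch with the `lime` package follows. The stand-in classifier, image size, and sample count are placeholders, not the paper's configuration.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    """Stand-in for the trained CNN: maps an (N, H, W, 3) batch to
    two-class probabilities. Replace with the real model's predictor."""
    scores = images.mean(axis=(1, 2, 3))
    p = (scores - scores.min()) / (np.ptp(scores) + 1e-9)
    return np.stack([1 - p, p], axis=1)

image = np.random.rand(96, 96, 3)  # placeholder for a fluorescence cell image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=2, num_samples=200)
# Overlay the superpixels that most support the predicted class.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
overlay = mark_boundaries(img, mask)
```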
Affiliation(s)
- Sydney Alderfer, Department of Chemical and Biological Engineering and School of Biomedical Engineering, Colorado State University, Fort Collins, CO 80523, USA
- Jiangyu Sun, Department of Biochemistry and Molecular Biology, Colorado State University, Fort Collins, CO 80523, USA
- Lubna Tahtamouni, Department of Biochemistry and Molecular Biology, Colorado State University, Fort Collins, CO 80523, USA; Department of Biology and Biotechnology, The Hashemite University, Zarqa, Jordan
- Ashok Prasad, Department of Chemical and Biological Engineering and School of Biomedical Engineering, Colorado State University, Fort Collins, CO 80523, USA

4. An improved transformer network for skin cancer classification. Comput Biol Med 2022;149:105939. PMID: 36037629; DOI: 10.1016/j.compbiomed.2022.105939.
Abstract
BACKGROUND: The use of artificial intelligence to identify dermoscopic images has brought major breakthroughs in recent years to the early diagnosis and treatment of skin cancer, whose incidence is increasing worldwide year by year and poses a great threat to human health. Progress has been made in skin cancer image classification using deep convolutional neural network (CNN) backbones; this approach, however, only extracts the features of small objects in the image and cannot locate the important parts. OBJECTIVES: The authors therefore turn to vision transformers (ViT), which have demonstrated powerful performance in traditional classification tasks, with self-attention boosting important features and suppressing those that cause noise. Specifically, an improved transformer network named SkinTrans is proposed. INNOVATIONS: To verify its efficiency, a three-step procedure is followed. First, a ViT network is established to verify the effectiveness of SkinTrans in skin cancer classification. Then multi-scale, overlapping sliding windows are used to serialize the image, and multi-scale patch embedding is carried out to pay more attention to multi-scale features. Finally, contrastive learning is applied so that similar skin cancer samples are encoded similarly while the encodings of different samples differ as much as possible. MAIN RESULTS: Experiments were carried out on two datasets: (1) HAM10000, a large dataset of multi-source dermatoscopic images of common skin cancers, and (2) a clinical skin cancer dataset collected by dermoscopy. The proposed model achieved 94.3% accuracy on HAM10000 and 94.1% on the clinical dataset, verifying the efficiency of SkinTrans. CONCLUSIONS: Transformer networks have achieved good results not only in natural language but also in vision, laying a good foundation for skin cancer classification based on multi-modal data. The work should be of interest to dermatologists, clinical researchers, computer scientists, and researchers in related fields, and offer greater convenience for patients.
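Overlapping, multi-scale patch embedding of the kind the abstract describes is commonly implemented as a strided convolution whose stride is smaller than its kernel; the sketch below illustrates the idea. The patch sizes, strides, and embedding dimension are illustrative assumptions, not SkinTrans's published configuration.

```python
import torch
import torch.nn as nn

class OverlappingPatchEmbed(nn.Module):
    """Serialize an image into overlapping patches via a strided conv.

    stride < patch_size makes neighbouring patches overlap; running
    several of these with different patch sizes yields the multi-scale
    token streams the abstract refers to."""
    def __init__(self, patch_size=16, stride=8, in_ch=3, embed_dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim,
                              kernel_size=patch_size, stride=stride,
                              padding=patch_size // 2)

    def forward(self, x):                          # x: (B, 3, H, W)
        tokens = self.proj(x)                      # (B, D, H', W')
        return tokens.flatten(2).transpose(1, 2)   # (B, N, D) token sequence

x = torch.randn(1, 3, 224, 224)
for ps, st in [(8, 4), (16, 8), (32, 16)]:   # three assumed scales
    print(ps, OverlappingPatchEmbed(ps, st)(x).shape)
```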

5. Deep Residual Learning Image Recognition Model for Skin Cancer Disease Detection and Classification. Acta Informatica Pragensia 2022. DOI: 10.18267/j.aip.189.

6. Liu Y, Huang K, Yang Y, Wu Y, Gao W. Prediction of Tumor Mutation Load in Colorectal Cancer Histopathological Images Based on Deep Learning. Front Oncol 2022;12:906888. PMID: 35686098; PMCID: PMC9171017; DOI: 10.3389/fonc.2022.906888.
Abstract
Colorectal cancer (CRC) is one of the most prevalent malignancies. Immunotherapy can be applied to CRC patients of all ages, but its efficacy is uncertain, and tumor mutational burden (TMB) is important for predicting its effect. Currently, whole-exome sequencing (WES) is the standard method for measuring TMB, but it is costly and inefficient, so there is an urgent need for a way to assess TMB without WES to improve immunotherapy outcomes. In this study, we propose a deep learning method, DeepHE, based on the Residual Network (ResNet) model. From tissue images, DeepHE can efficiently identify and analyze characteristics of CRC tumor cells to predict TMB. We used ×40-magnification images, grouped them by patient, and thresholded TMB at the 10th and 20th quantiles, which significantly improves performance; the model is also superior to multiple alternative models. In summary, deep learning methods can explore the association between histopathological images and genetic mutations, contributing to the precise treatment of CRC patients.
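Binarizing a continuous TMB distribution at a chosen quantile, as the abstract describes, is simple to express; the sketch below uses synthetic per-patient TMB values to show the labeling step. The distribution parameters and cohort size are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical per-patient TMB values (mutations/Mb).
tmb = pd.Series(np.random.lognormal(2.0, 0.8, size=200), name="tmb")

for q in (0.10, 0.20):                      # the paper's two cut points
    cutoff = tmb.quantile(q)
    labels = (tmb > cutoff).astype(int)     # 1 = "high TMB" patient label
    print(f"q={q:.2f} cutoff={cutoff:.2f} positives={labels.mean():.2%}")
```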
Affiliation(s)
- Yongguang Liu, Department of Anorectal Surgery, Weifang People’s Hospital, Weifang, China
- Kaimei Huang, Genies (Beijing) Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Yachao Yang, School of Mathematics and Statistics, Hainan Normal University, Haikou, China
- Yan Wu, Genies (Beijing) Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Wei Gao (corresponding author), Department of Internal Medicine-Oncology, Fujian Cancer Hospital and Fujian Medical University Cancer Hospital, Fuzhou, China

7. Loraksa C, Mongkolsomlit S, Nimsuk N, Uscharapong M, Kiatisevi P. Effectiveness of Learning Systems from Common Image File Types to Detect Osteosarcoma Based on Convolutional Neural Networks (CNNs) Models. J Imaging 2021;8(1):2. PMID: 35049843; PMCID: PMC8779891; DOI: 10.3390/jimaging8010002.
Abstract
Osteosarcoma is a rare bone cancer that is more common in children than in adults and has a high chance of metastasizing to the patient’s lungs. The disease is difficult to diagnose, and lung nodules are hard to detect at an early stage. Convolutional Neural Networks (CNNs) can be applied effectively to early-stage detection from CT-scanned images. Transferring patients from small hospitals to the specialized cancer hospital, Lerdsin Hospital, poses difficulties in information sharing because of privacy and safety regulations; CD-ROM media was permitted for transferring patient data to Lerdsin Hospital. Digital Imaging and Communications in Medicine (DICOM) files could not be stored on the CD-ROM, so they had to be converted into other common image formats, such as BMP, JPG, and PNG. Because image quality can affect the accuracy of CNN models, this research studies and experiments with the effect of different image formats. Three popular medical CNN models, VGG-16, ResNet-50, and MobileNet-V2, are used for osteosarcoma detection. The positive- and negative-class images were collected from Lerdsin Hospital; 80% of all images are used as the training dataset, while the rest validate the trained models. Limited training data is simulated by reducing the number of images in the training dataset. Each model is trained and validated on the three image formats, resulting in 54 test cases, and F1-score and accuracy are calculated to compare the models’ performance. VGG-16 is the most robust model across all formats, and PNG is the most preferred image format, followed by BMP and JPG, respectively.
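DICOM-to-bitmap conversion of the kind described takes a few lines with pydicom and Pillow; a hedged sketch follows. The min-max rescaling to 8-bit is an assumed simplification (a clinical pipeline would typically apply the DICOM window/level), and the file paths are placeholders.

```python
import numpy as np
import pydicom
from PIL import Image

def dicom_to_common_formats(dicom_path, out_stem):
    """Convert one DICOM slice to the three formats compared in the paper.

    JPG is lossy, while PNG and BMP keep the scaled pixels exactly, one
    plausible reason the paper found PNG/BMP preferable to JPG."""
    ds = pydicom.dcmread(dicom_path)
    px = ds.pixel_array.astype(np.float32)
    px = (px - px.min()) / (np.ptp(px) + 1e-9) * 255.0   # min-max to 0..255
    img = Image.fromarray(px.astype(np.uint8))
    for fmt, ext in (("PNG", "png"), ("BMP", "bmp"), ("JPEG", "jpg")):
        img.save(f"{out_stem}.{ext}", format=fmt)

# Hypothetical usage:
# dicom_to_common_formats("slice001.dcm", "slice001")
```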
Affiliation(s)
- Chanunya Loraksa (correspondence; Tel.: +66-(0)63-241-5888), Medical Engineering, Faculty of Engineering, Thammasat University, Pathum Thani 12121, Thailand
- Nitikarn Nimsuk, Medical Engineering, Faculty of Engineering, Thammasat University, Pathum Thani 12121, Thailand
- Meenut Uscharapong, Department of Medical Services, Lerdsin Hospital, Ministry of Public Health, Bangkok 10500, Thailand
- Piya Kiatisevi, Department of Medical Services, Lerdsin Hospital, Ministry of Public Health, Bangkok 10500, Thailand

8. Comparative Analysis of Skin Cancer (Benign vs. Malignant) Detection Using Convolutional Neural Networks. J Healthc Eng 2021;2021:5895156. PMID: 34931137; PMCID: PMC8684510; DOI: 10.1155/2021/5895156.
Abstract
We live in a world where people suffer from many diseases, and cancer is among the most threatening. Of the variants of cancer, skin cancer is spreading rapidly; it arises from the abnormal growth of skin cells, and the increase in ultraviolet radiation at the Earth's surface is helping it spread to every corner of the world. Benign and malignant types are the most common skin cancers people suffer from. People undergo expensive and time-consuming treatments, yet the mortality rate remains high; early detection of skin cancer in its incipient phase helps reduce mortality. Deep learning is now widely used to detect disease, and the convolutional neural network (CNN) helps find skin cancer through image classification more accurately. This research describes many CNN models and compares their working processes to find the best results. Pretrained models such as VGG16 and ResNet50, a Support Vector Machine (SVM), and self-built sequential models are used to analyze the classification process. These models work differently because they vary in their numbers of layers; depending on their layers and working processes, some models work better than others. An image dataset of benign and malignant cases was taken from Kaggle, containing 6,594 images of benign and malignant skin cancer. Using the different approaches, we obtained accuracies of 93.18% for VGG16, 83.48% for SVM, 84.39% for ResNet50, 74.24% for Sequential_Model_1, 77.00% for Sequential_Model_2, and 84.09% for Sequential_Model_3. The comparison covers each model's number of layers, working process, and precision; the VGG16 model gave the highest accuracy, 93.18%.
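The "self-built (sequential)" models the abstract mentions are small stacked-layer CNNs; the Keras sketch below shows the general shape of such a binary benign-vs-malignant classifier. The layer sizes and input resolution are illustrative guesses, not the authors' Sequential_Model_1/2/3 configurations.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A minimal sequential CNN of the kind compared against pretrained
# backbones; sizes are illustrative only.
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```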