1. Lee JS, Wu WK. Breast Tumor Tissue Image Classification Using Single-Task Meta Learning with Auxiliary Network. Cancers (Basel) 2024; 16:1362. [PMID: 38611040] [PMCID: PMC11010930] [DOI: 10.3390/cancers16071362]
Abstract
Breast cancer has one of the highest mortality rates among cancers. If the type of breast tumor can be correctly diagnosed at an early stage, the survival rate of patients is greatly improved. Considering actual clinical needs, a classification model for breast pathology images must be able to classify correctly even when facing image data with differing characteristics. Existing convolutional neural network (CNN)-based models for classifying breast tumor pathology images lack the generalization capability needed to maintain high accuracy when confronted with pathology images of varied characteristics. Consequently, this study introduces a new classification model, STMLAN (Single-Task Meta Learning with Auxiliary Network), which integrates meta learning and an auxiliary network. Single-Task Meta Learning was proposed to endow the model with generalization ability, and the auxiliary network was used to enhance the feature characteristics of breast pathology images. The experimental results demonstrate that the proposed STMLAN model improves accuracy by at least 1.85% in challenging multi-classification tasks compared to existing methods. Furthermore, the Silhouette Score of the learned features increased by 31.85%, reflecting that the proposed model learns more discriminative features and that the generalization ability of the overall model is also improved.
Affiliation(s)
- Jiann-Shu Lee
- Department of Computer Science and Information Engineering, National University of Tainan, Tainan 700, Taiwan
2. Yang K, Song J, Liu M, Xue L, Liu S, Yin X, Liu K. TBACkp: HER2 expression status classification network focusing on intrinsic subenvironmental characteristics of breast cancer liver metastases. Comput Biol Med 2024; 170:108002. [PMID: 38277921] [DOI: 10.1016/j.compbiomed.2024.108002]
Abstract
The HER2 expression status of breast cancer liver metastases is a crucial indicator for the diagnosis, treatment, and prognosis assessment of patients. Typical diagnosis assesses HER2 expression status through invasive procedures such as biopsy, which has certain drawbacks: tissue samples are difficult to obtain and examination periods are long. To address these limitations, we propose an AI-aided diagnostic model that enables rapid diagnosis. It determines a patient's HER2 expression status from a preprocessed image, namely the lesion region extracted from a CT image, rather than from an actual tissue sample. The model adopts a parallel structure comprising a Branch Block and a Trunk Block. The Branch Block extracts the gradient characteristics between tumor sub-environments, and the Trunk Block fuses the characteristics extracted by the Branch Block. The Branch Block contains a CNN with self-attention, combining the advantages of CNNs and self-attention to extract more meticulous and comprehensive image features. The Trunk Block is designed so that it fuses the extracted image feature information without affecting the transmission of the original image features. Conv-Attention, which uses a kernel dot product, computes the attention in the Trunk Block and provides the weights for self-attention while incorporating convolution-induced bias. Based on the structure of the model and the methods used, we refer to this model as TBACkp. The dataset comprises enhanced abdominal CT images of 151 patients with liver metastases from breast cancer, together with the corresponding HER2 expression level for each patient. The experimental results are as follows: AUC: 0.915, ACC: 0.854, specificity: 0.809, precision: 0.863, recall: 0.881, F1-score: 0.872. These results demonstrate that the method can accurately assess HER2 expression status when compared with other advanced deep learning models.
Affiliation(s)
- Kun Yang
- College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Jie Song
- College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Meng Liu
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Linyan Xue
- College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Shuang Liu
- College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Xiaoping Yin
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China; Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Hebei University, Baoding, China; The Outstanding Young Scientific Research and Innovation Team of Hebei University, Baoding, China
- Kun Liu
- College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
3. Cai WL, Cheng M, Wang Y, Xu PH, Yang X, Sun ZW, Yan WJ. Prediction and related genes of cancer distant metastasis based on deep learning. Comput Biol Med 2024; 168:107664. [PMID: 38000245] [DOI: 10.1016/j.compbiomed.2023.107664]
Abstract
Cancer metastasis is one of the main causes of cancer progression and treatment difficulty. Genes play a key role in the metastatic process, as they can influence tumor cell invasiveness, migration ability, and fitness. At the same time, the organs to which cancers metastasize are heterogeneous; breast cancer and prostate cancer, for example, tend to metastasize to the bone. Previous studies have pointed out that the occurrence of metastasis is closely related both to the destination tissue and to specific genes. In this paper, we identified genes associated with cancer metastasis to different tissues based on LASSO and Pearson correlation coefficients. In total, we identified 45 genes associated with bone metastases, 89 genes associated with lung metastases, and 86 genes associated with liver metastases. Using the expression of these genes, we propose a CNN-based model, MDCNN, to predict the occurrence of metastasis. MDCNN introduces a modulation mechanism that allows the weights of convolution kernels to be adjusted across positions and feature maps, thereby adaptively changing the convolution operation at different positions. Experiments show that MDCNN achieves satisfactory prediction accuracy for bone, lung, and liver metastasis and outperforms four other methods of the same kind. We performed enrichment analysis and immune infiltration analysis on the bone metastasis-related genes, found multiple pathways and GO terms related to bone metastasis, and found that the abundance of macrophages and monocytes was highest in patients with bone metastasis.
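The correlation-based gene screening mentioned above can be illustrated with a generic univariate Pearson filter. This is a minimal sketch of that kind of screening, not the authors' pipeline; the gene names and the `r_min` threshold are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def filter_genes(expression, labels, r_min=0.3):
    """Keep genes whose expression correlates with the metastasis label.

    expression: {gene_name: [value per sample]}; labels: 0/1 per sample.
    """
    return [g for g, vals in expression.items()
            if abs(pearson_r(vals, labels)) >= r_min]
```

In the study's setting this would be one of two filters (the other being LASSO's nonzero coefficients), applied per target tissue.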
Affiliation(s)
- Wei-Luo Cai
- Department of Musculoskeletal Surgery, Fudan University Shanghai Cancer Center, China
- Mo Cheng
- Department of Musculoskeletal Surgery, Fudan University Shanghai Cancer Center, China
- Yi Wang
- Department of Gastrointestinal Surgical Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, China
- Pei-Hang Xu
- Department of Musculoskeletal Surgery, Fudan University Shanghai Cancer Center, China
- Xi Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, China; Department of Oncology, Shanghai Medical College, Fudan University, China
- Zheng-Wang Sun
- Department of Musculoskeletal Surgery, Fudan University Shanghai Cancer Center, China
- Wang-Jun Yan
- Department of Musculoskeletal Surgery, Fudan University Shanghai Cancer Center, China
4. Alatrany AS, Khan W, Hussain AJ, Mustafina J, Al-Jumeily D. Transfer Learning for Classification of Alzheimer's Disease Based on Genome Wide Data. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2700-2711. [PMID: 37018274] [DOI: 10.1109/tcbb.2022.3233869]
Abstract
Alzheimer's disease (AD) is a brain disorder regarded as degenerative because its symptoms worsen as time progresses. Single nucleotide polymorphisms (SNPs) have been identified as relevant biomarkers for this condition. This study aims to identify SNP biomarkers associated with AD in order to perform reliable AD classification. In contrast to existing related works, we utilize deep transfer learning with varied experimental analysis for reliable classification of AD. For this purpose, a convolutional neural network (CNN) is first trained on a genome-wide association study (GWAS) dataset requested from the Alzheimer's Disease Neuroimaging Initiative. We then employ deep transfer learning to further train our CNN (the base model) on a different AD GWAS dataset to extract the final set of features. The extracted features are then fed into a support vector machine for AD classification. Detailed experiments are performed using multiple datasets and varying experimental configurations. The statistical outcomes indicate an accuracy of 89%, a significant improvement when benchmarked against existing related works.
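The final stage described, feeding features from a transferred network into an SVM, can be sketched generically. The snippet below trains a minimal linear SVM by sub-gradient descent on the regularized hinge loss; it is an illustrative stand-in with toy features and hypothetical hyperparameters, not the authors' model.

```python
def train_linear_svm(features, labels, lr=0.1, lam=0.01, epochs=200):
    """Minimal linear SVM: sub-gradient descent on the L2-regularized
    hinge loss. labels must be in {-1, +1}."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # margin violated: hinge gradient is active
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # only the regularizer contributes
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    """Classify a feature vector by the sign of the decision function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

In the study's pipeline, `features` would be the activations extracted by the fine-tuned CNN rather than the raw SNP vectors.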
5. Ziyambe B, Yahya A, Mushiri T, Tariq MU, Abbas Q, Babar M, Albathan M, Asim M, Hussain A, Jabbar S. A Deep Learning Framework for the Prediction and Diagnosis of Ovarian Cancer in Pre- and Post-Menopausal Women. Diagnostics (Basel) 2023; 13:1703. [PMID: 37238188] [DOI: 10.3390/diagnostics13101703]
Abstract
Ovarian cancer ranks as the fifth leading cause of cancer-related mortality in women. Late-stage diagnosis (stages III and IV) is a major challenge due to the often vague and inconsistent initial symptoms. Current diagnostic methods, such as biomarkers, biopsy, and imaging tests, face limitations including subjectivity, inter-observer variability, and extended testing times. This study proposes a novel convolutional neural network (CNN) algorithm for predicting and diagnosing ovarian cancer that addresses these limitations. The CNN was trained on a histopathological image dataset that was divided into training and validation subsets and augmented before training. The model achieved a remarkable accuracy of 94%, correctly identifying 95.12% of cancerous cases and accurately classifying 93.02% of healthy cells. The significance of this study lies in overcoming challenges associated with human expert examination, such as higher misclassification rates, inter-observer variability, and extended analysis times. This study presents a more accurate, efficient, and reliable approach to predicting and diagnosing ovarian cancer. Future research should explore recent advances in this field to further enhance the effectiveness of the proposed method.
Affiliation(s)
- Blessed Ziyambe
- Department of Electrical Engineering, Harare Polytechnic College, Causeway Harare P.O. Box CY407, Zimbabwe
- Abid Yahya
- Department of Electrical, Computer and Telecommunications Engineering, Botswana International University of Science and Technology, Palapye 10071, Botswana
- Tawanda Mushiri
- Department of Industrial and Mechatronics Engineering, Faculty of Engineering & the Built Environment, University of Zimbabwe, Mt. Pleasant, 630 Churchill Avenue, Harare, Zimbabwe
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Muhammad Babar
- Robotics and Internet of Things Laboratory, Prince Sultan University, Riyadh 12435, Saudi Arabia
- Mubarak Albathan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Muhammad Asim
- EIAS Data Science Laboratory, Prince Sultan University, Riyadh 12435, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Sohail Jabbar
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
6. Lu X, Wang X, Zhang W, Wen A, Ren Y. An end-to-end model for ECG signals classification based on residual attention network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104369]
7. Wang J, Zheng Y, Ma J, Li X, Wang C, Gee J, Wang H, Huang W. Information bottleneck-based interpretable multitask network for breast cancer classification and segmentation. Med Image Anal 2023; 83:102687. [PMID: 36436356] [DOI: 10.1016/j.media.2022.102687]
Abstract
Breast cancer is one of the most common causes of death among women worldwide. Early signs of breast cancer can be an abnormality depicted on breast images (e.g., mammography or breast ultrasonography). However, reliable interpretation of breast images requires intensive labor and physicians with extensive experience. Deep learning is evolving breast imaging diagnosis by offering physicians a second opinion. However, most deep learning-based breast cancer analysis algorithms lack interpretability because of their black-box nature, meaning that domain experts cannot understand why the algorithms predict a given label. In addition, most deep learning algorithms are formulated as single-task models that ignore correlations between different tasks (e.g., tumor classification and segmentation). In this paper, we propose an interpretable multitask information bottleneck network (MIB-Net) to accomplish simultaneous breast tumor classification and segmentation. MIB-Net maximizes the mutual information between the latent representations and class labels while minimizing the information shared by the latent representations and inputs. In contrast to existing models, MIB-Net generates a contribution score map that offers an interpretable aid for physicians to understand the model's decision-making process. In addition, MIB-Net implements multitask learning and proposes a dual prior knowledge guidance strategy to enhance deep task correlation. Our evaluations are carried out on three breast image datasets of different modalities. The results show that the proposed framework not only helps physicians better understand the model's decisions but also improves breast tumor classification and segmentation accuracy over representative state-of-the-art models. Our code is available at https://github.com/jxw0810/MIB-Net.
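The mutual-information trade-off described above is the standard information-bottleneck objective, written here generically for an input X, latent representation Z, and label Y, with a trade-off weight β; the exact loss used by MIB-Net may differ:

```latex
\max_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(Z; Y) \;-\; \beta \, I(Z; X)
```

Maximizing the first term keeps Z predictive of the label, while the penalty on I(Z; X) discards input information irrelevant to the task.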
Affiliation(s)
- Junxia Wang
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China; Shanghai AI Laboratory, No. 701 Yunjin Road, Xuhui District, Shanghai 200433, China
- Jun Ma
- School of Cyber Science and Engineering, Southeast University, No. 2 Southeast University Road, Jiangning District, Nanjing 211189, China
- Xinmeng Li
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
- Chongjing Wang
- China Academy of Information and Communications Technology, No. 52 Huayuan North Road, Haidian District, Beijing 100191, China
- James Gee
- Penn Image Computing and Science Laboratory, University of Pennsylvania, PA 19104, USA
- Haipeng Wang
- Institute of Information Fusion, Naval Aviation University, Erma Road Yantai Shandong, Yantai 264001, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
8. Towards computational solutions for precision medicine based big data healthcare system using deep learning models: A review. Comput Biol Med 2022; 149:106020. [DOI: 10.1016/j.compbiomed.2022.106020]
9. Gopatoti A, Vijayalakshmi P. CXGNet: A tri-phase chest X-ray image classification for COVID-19 diagnosis using deep CNN with enhanced grey-wolf optimizer. Biomed Signal Process Control 2022; 77:103860. [PMID: 35692695] [PMCID: PMC9167923] [DOI: 10.1016/j.bspc.2022.103860]
Abstract
The coronavirus disease 2019 (COVID-19) epidemic had a significant impact on daily life in many nations and on global public health. COVID-19's quick spread became one of the most disruptive calamities in the world. In the fight against COVID-19, it is critical to closely monitor the initial stage of infection in patients. Furthermore, early discovery of COVID-19 through precise diagnosis, especially in patients with no evident symptoms, may reduce the death rate and stop the spread of the disease. Compared with CT images, chest X-ray (CXR) images are now widely employed for COVID-19 diagnosis since they contain more robust features of the lung. Furthermore, radiologists can easily diagnose CXR images because of their speed and low cost, which is promising for emergency situations and therapy. This work proposes a tri-stage CXR image-based COVID-19 classification model using deep learning convolutional neural networks (DLCNN) with an optimal feature selection technique named enhanced grey-wolf optimizer with genetic algorithm (EGWO-GA), denoted as CXGNet. The proposed CXGNet is implemented as multiple class models, namely 4-class, 3-class, and 2-class models, based on the diseases. Extensive simulation outcomes disclose the superiority of the proposed CXGNet model, with an enhanced classification accuracy of 94.00% for the 4-class model, 97.05% for the 3-class model, and 100% for the 2-class model compared to conventional methods.
Affiliation(s)
- Anandbabu Gopatoti
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Anna University, Chennai, Tamil Nadu, India
- P Vijayalakshmi
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
10. Fan M, Yuan C, Huang G, Xu M, Wang S, Gao X, Li L. A framework for deep multitask learning with multiparametric magnetic resonance imaging for the joint prediction of histological characteristics in breast cancer. IEEE J Biomed Health Inform 2022; 26:3884-3895. [PMID: 35635826] [DOI: 10.1109/jbhi.2022.3179014]
Abstract
The clinical management and decision-making process related to breast cancer are based on multiple histological indicators. This study aims to jointly predict the Ki-67 expression level, luminal A subtype and histological grade molecular biomarkers using a new deep multitask learning method with multiparametric magnetic resonance imaging. A multitask learning network structure was proposed by introducing a common-task layer and task-specific layers to learn, respectively, the high-level features common to all tasks and those related to a specific task. A network pretrained on the ImageNet dataset was used and fine-tuned with MRI data. Information from multiparametric MR images was fused using strategies at the feature and decision levels. The area under the receiver operating characteristic curve (AUC) was used to measure model performance. For single-task learning using a single image series, the deep learning model generated AUCs of 0.752, 0.722, and 0.596 for the Ki-67, luminal A and histological grade prediction tasks, respectively. The performance was improved by freezing the first 5 convolutional layers, using 20% shared layers and fusing multiparametric series at the feature level, which achieved AUCs of 0.819, 0.799 and 0.747 for the Ki-67, luminal A and histological grade prediction tasks, respectively. Our study showed advantages in jointly predicting correlated clinical biomarkers using a deep multitask learning framework with an appropriate number of fine-tuned convolutional layers, taking full advantage of common and complementary imaging features. Multiparametric image series-based multitask learning could be a promising approach for the multiple clinical indicator-based management of breast cancer.
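The common-task plus task-specific layer arrangement described above can be sketched as a forward pass: a stack of shared layers feeds several per-task output heads. This is a generic plain-Python illustration with hypothetical layer sizes and task names, not the authors' fine-tuned ImageNet network.

```python
def relu(v):
    """Element-wise rectified linear unit."""
    return [max(0.0, x) for x in v]

def linear(v, W, b):
    """Affine layer: W is a list of weight rows, b the bias vector."""
    return [sum(w * x for w, x in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def multitask_forward(x, shared, heads):
    """Forward pass through shared layers, then one head per task.

    shared: list of (W, b) layers common to all tasks;
    heads:  {task_name: (W, b)} task-specific output layers.
    Returns {task_name: output vector}.
    """
    h = x
    for W, b in shared:
        h = relu(linear(h, W, b))       # representation shared by all tasks
    return {task: linear(h, W, b)        # each task reads the shared features
            for task, (W, b) in heads.items()}
```

Training would backpropagate a weighted sum of the per-task losses through the shared layers, which is how the tasks exchange information.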
11. Wang B, Dai Z, Kong D, Yu L, Zheng J, Li P. Boosting semi-supervised network representation learning with pseudo-multitasking. Appl Intell 2022. [DOI: 10.1007/s10489-021-02844-y]
12. Liu Z, Wang R, Zhang W. Improving the generalization of unsupervised feature learning by using data from different sources on gene expression data for cancer diagnosis. Med Biol Eng Comput 2022; 60:1055-1073. [DOI: 10.1007/s11517-022-02522-2]
13. Rabbi F, Dabbagh SR, Angin P, Yetisen AK, Tasoglu S. Deep Learning-Enabled Technologies for Bioimage Analysis. Micromachines (Basel) 2022; 13:260. [PMID: 35208385] [PMCID: PMC8880650] [DOI: 10.3390/mi13020260]
Abstract
Deep learning (DL) is a subfield of machine learning (ML) that has recently demonstrated its potential to significantly improve quantification and classification workflows in biomedical and clinical applications. Among the end applications profoundly benefitting from DL, cellular morphology quantification is one of the pioneers. Here, we first briefly explain fundamental concepts in DL and then review emerging DL-enabled applications in cell morphology quantification in embryology, point-of-care ovulation testing, prediction of fetal heart pregnancy, cancer diagnostics via classification of cancer histology images, autosomal polycystic kidney disease, and chronic kidney diseases.
Affiliation(s)
- Fazle Rabbi
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey
- Sajjad Rahmani Dabbagh
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey
- Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
- Pelin Angin
- Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey
- Ali Kemal Yetisen
- Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK
- Savas Tasoglu
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey
- Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
- Institute of Biomedical Engineering, Boğaziçi University, Çengelköy, Istanbul 34684, Turkey
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany
14. Wang J, Su G, Yan X, Zhang W, Jia J, Yan B. Predicting cytotoxicity of binary pollutants towards a human cell panel in environmental water by experimentation and deep learning methods. Chemosphere 2022; 287:132324. [PMID: 34563777] [DOI: 10.1016/j.chemosphere.2021.132324]
Abstract
Biological assays are useful in water quality evaluation because they provide the overall toxicity of chemical mixtures in environmental waters. However, biological assays alone cannot elucidate the source of toxicity or identify lethal combinations of pollutants. As facile and cost-effective methods, computational model-based toxicity assessments are complementary technologies. Herein, we predicted the human health risk of binary pollutant mixtures (i.e., binary combinations of As(III), Cd(II), Cr(VI), Pb(II) and F(I)) in water using in vitro biological assays and deep learning methods. By employing a human cell panel containing human stomach, colon, liver, and kidney cell lines, we assessed human health risk by mimicking cellular responses after oral exposure to environmental water containing pollutants. Based on the experimental cytotoxicity data in pure water, multi-task deep learning was applied to predict the cellular response to binary pollutant mixtures in environmental water. Using additive descriptors and single-pollutant toxicity data in pure water, the established deep learning model could predict the toxicity of most binary mixtures in environmental water, with a coefficient of determination (R2) > 0.65 and root mean squared error (RMSE) < 0.22. Further combining the experimental data on synergistic and antagonistic effects of pollutant mixtures, deep learning improved the predictive ability of the model (R2 > 0.74 and RMSE < 0.17). Moreover, the predictive models allowed us to identify a number of toxicity source-related physicochemical properties. This study illustrates the combination of experimental findings and deep learning methods in water quality evaluation.
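The two figures of merit quoted above, R2 and RMSE, are standard regression metrics; the helpers below show how they are computed (a generic illustration, not the authors' evaluation code).

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Note that R2 can be negative when the model fits worse than predicting the mean, so thresholds such as R2 > 0.65 are meaningful lower bounds.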
Affiliation(s)
- Jiahui Wang
- School of Chemistry and Chemical Engineering, Shandong University, Jinan, 250100, China
- Gaoxing Su
- School of Pharmacy, Nantong University, Nantong, 226001, China
- Xiliang Yan
- Key Laboratory for Water Quality and Conservation of the Pearl River Delta, Ministry of Education, Institute of Environmental Research at Greater Bay, Guangzhou University, Guangzhou, 510006, China
- Wei Zhang
- Key Laboratory for Water Quality and Conservation of the Pearl River Delta, Ministry of Education, Institute of Environmental Research at Greater Bay, Guangzhou University, Guangzhou, 510006, China
- Jianbo Jia
- Key Laboratory for Water Quality and Conservation of the Pearl River Delta, Ministry of Education, Institute of Environmental Research at Greater Bay, Guangzhou University, Guangzhou, 510006, China
- Bing Yan
- Key Laboratory for Water Quality and Conservation of the Pearl River Delta, Ministry of Education, Institute of Environmental Research at Greater Bay, Guangzhou University, Guangzhou, 510006, China
15. Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100911]
16. Subhalakshmi RT, Appavu Alias Balamurugan S, Sasikala S. Automatic Segmentation and Classification of COVID-19 CT Image Using Deep Learning and Multi-Scale Recurrent Neural Network Based Classifier. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3850]
Abstract
In recent times, the COVID-19 epidemic has grown at an extreme rate, while only an inadequate supply of rapid testing kits is available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help observe disease progression, and potentially reduce mortality rates. In this approach, a deep learning-based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main processes: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN)-based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome showed superior performance with maximum sensitivity, specificity, and accuracy.
Affiliation(s)
- R. T. Subhalakshmi
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar 626115, India
- S. Sasikala
- Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai 625009, Tamil Nadu, India
17
Vaz JM, Balaji S. Convolutional neural networks (CNNs): concepts and applications in pharmacogenomics. Mol Divers 2021; 25:1569-1584. [PMID: 34031788] [PMCID: PMC8342355] [DOI: 10.1007/s11030-021-10225-3]
Abstract
Convolutional neural networks (CNNs) have been used to extract information from various datasets of different dimensions. This approach has led to accurate interpretations in several subfields of biological research, like pharmacogenomics, addressing issues previously faced by other computational methods. With the rising attention for personalized and precision medicine, scientists and clinicians have now turned to artificial intelligence systems to provide them with solutions for therapeutics development. CNNs have already provided valuable insights into biological data transformation. In this review, we provide a brief overview of the possibilities of implementing CNNs as an effective tool for analyzing one-dimensional biological data, such as nucleotide and protein sequences, as well as small-molecule data, e.g., simplified molecular-input line-entry specification (SMILES), InChI, binary fingerprints, etc.; we categorize the models based on their objective and also highlight various challenges. The review is organized into the specific research domains that participate in pharmacogenomics for a more comprehensive understanding. Furthermore, future directions of deep learning are outlined.
Affiliation(s)
- Joel Markus Vaz
- Department of Biotechnology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- S Balaji
- Department of Biotechnology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India.
18
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307] [PMCID: PMC8393354] [DOI: 10.3390/diagnostics11081373]
Abstract
The increased volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes demands much of the time and attention a doctor gives a patient, which has encouraged the development of deep learning models to support this work constructively and effectively. Deep learning (DL) has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the development, diversification, and improved quality of scientific data, of knowledge-construction methods, and of the DL models used in medical applications. Most research papers describe, highlight, or classify a single constituent element of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unitary treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures, and purpose-built combinations of DL architectures, highlighting their "key" features for completing tasks in current medical image interpretation applications. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
19
A new composite approach for COVID-19 detection in X-ray images using deep features. Appl Soft Comput 2021; 111:107669. [PMID: 34248447] [PMCID: PMC8255192] [DOI: 10.1016/j.asoc.2021.107669]
Abstract
The new type of coronavirus, COVID-19, appeared in China at the end of 2019 and in a very short time became a pandemic spreading all over the world. Detecting this disease, which causes serious health and socio-economic damage, is of vital importance. COVID-19 detection is performed with PCR and serological tests; detection is also possible from X-ray and computed tomography images. Disease detection holds an important position in scientific research involving artificial intelligence methods, where combined models consisting of different phases are frequently used for classification problems. In this paper, a new combined approach is proposed to detect COVID-19 cases using deep features obtained from X-ray images. The two main variants of the approach are single layer-based (SLB) and feature fusion-based (FFB). The SLB model consists of pre-processing, deep feature extraction, post-processing, and classification phases, while the FFB model adds a feature fusion phase between deep feature extraction and post-processing. Four different SLB and six different FFB models were developed according to the number and binary combinations of layers used in the feature extraction phase. Each model is employed for binary and multi-class classification experiments. According to the experimental results, the accuracy of the proposed FFB3 model for COVID-19 versus no-findings classification is 99.52%, better than the best accuracy reported in the literature (98.08%). For multi-class classification, the proposed FFB3 model reaches an accuracy of 87.64%, outperforming the best existing work (which reported 87.02%). Various metrics, including sensitivity, specificity, precision, and F1-score, are used for performance analysis. For all performance metrics, the FFB3 model recorded a higher success rate than existing work in the literature. To the best of our knowledge, these accuracy rates are the best in the literature for this dataset and data split type (five-fold cross-validation). The composite models (SLBs and FFBs) generated in this paper are successful ways to detect COVID-19. Experimental results show that feature extraction, pre-processing, post-processing, and hyperparameter tuning are the steps necessary to obtain higher success. In prospective works, different types of pre-trained models and other hyperparameter tuning methods can be implemented.
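The five-fold cross-validation protocol this abstract relies on is easy to make concrete. The following is an illustrative, self-contained Python sketch; the function name `kfold_indices`, the seed, and the round-robin fold assignment are assumptions for illustration, not code from the paper:

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Partition sample indices into k disjoint test folds and
    return a (train, test) index pair for each round."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]     # round-robin split into k folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

for train, test in kfold_indices(10, k=5):
    print(len(train), len(test))              # prints "8 2" five times
```

Every sample appears in exactly one test fold, so the five accuracy estimates are computed on disjoint held-out data before being averaged.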
20
Jaya Ant lion optimization-driven Deep recurrent neural network for cancer classification using gene expression data. Med Biol Eng Comput 2021; 59:1005-1021. [PMID: 33851321] [DOI: 10.1007/s11517-021-02350-w]
Abstract
Cancer is one of the deadliest diseases prevailing worldwide, and patients are rescued only when the cancer is detected at a very early stage; in the final stage, the chance of survival is limited. The symptoms of cancer are rigorous, and all of them should be studied properly before diagnosis. Thus, an automatic prediction system is necessary for classifying cancer as malignant or benign. Hence, this paper introduces a novel strategy, the Jaya Ant lion optimization-based Deep recurrent neural network (JayaALO-based DeepRNN), for cancer classification. The developed model involves four phases: data normalization, data transformation, feature dimension reduction, and classification. Data normalization eliminates data redundancy and mitigates storing the same information in several places in a relational database. Data transformation is then carried out with a log transformation, which makes the generated patterns more interpretable, helps fulfill model assumptions, and reduces skew. Non-negative matrix factorization is employed to reduce the feature dimension, and the reduced features are fed to the DeepRNN for cancer classification. The DeepRNN is trained using the proposed JayaALO, designed by combining the Ant lion optimization (ALO) and Jaya algorithms. The proposed JayaALO-based DeepRNN showed improved results, with a maximal accuracy of 95.97%, maximal sensitivity of 95.95%, and maximal specificity of 96.96%.
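The preprocessing chain described above (normalization, then a log transformation to reduce skew) can be sketched in a few lines. This is an illustrative Python fragment under assumed conventions (min-max normalization and a log(1+x) transform); the helper names are hypothetical, not the authors' code:

```python
import math

def min_max_normalize(column):
    """Scale one feature column to [0, 1]; a constant column maps to 0."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

def log_transform(column):
    """log(1 + x) transform: compresses the heavy right skew typical
    of non-negative gene-expression-like values."""
    return [math.log1p(x) for x in column]

expression = [0.0, 9.0, 99.0, 999.0]     # toy, strongly right-skewed values
print(log_transform(expression))          # gaps between values are compressed
print(min_max_normalize(expression))      # first value 0.0, last value 1.0
```

In the paper's pipeline the transformed matrix would then go through non-negative matrix factorization before reaching the classifier; that step is omitted here for brevity.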
21
COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning. Sensors 2021; 21:1480. [PMID: 33672585] [PMCID: PMC8078171] [DOI: 10.3390/s21041480]
Abstract
Currently, COVID-19, caused by the novel coronavirus, is considered the most dangerous and deadly disease for the human body. In December 2019, the coronavirus, thought to have originated in Wuhan, China, spread rapidly around the world and is responsible for a large number of deaths. Earlier detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may decrease the patient's death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted by the histogram-oriented gradient (HOG) and a convolutional neural network (CNN) from X-ray images were fused to develop the classification model, trained with a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion using deep learning assured satisfactory performance in identifying COVID-19 compared to the immediately relevant works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. Compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
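The sensitivity, specificity, and accuracy figures quoted in these abstracts come from a standard binary confusion matrix. As a minimal reference sketch (the function name and the convention that label 1 denotes COVID-19 positive are assumptions for illustration), they can be computed as:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels
    (here 1 is taken to mean 'COVID-19 positive')."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),    # true-positive rate (recall)
        "specificity": tn / (tn + fp),    # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
    }

print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 1]))
# → {'sensitivity': 0.5, 'specificity': 0.5, 'accuracy': 0.5}
```

Note that with imbalanced test sets, high accuracy can coexist with low sensitivity, which is why the papers report all three figures.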
22
Zhang Y, Yang Y, Zhou W, Wang H, Ouyang X. Multi-city traffic flow forecasting via multi-task learning. Appl Intell 2021. [DOI: 10.1007/s10489-020-02074-8]
23
Abuhmed T, El-Sappagh S, Alonso JM. Robust hybrid deep learning models for Alzheimer's progression detection. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2020.106688]
24
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021; 69:101985. [PMID: 33588117] [DOI: 10.1016/j.media.2021.101985]
Abstract
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond currently available medical datasets. Traditional approaches generally leverage information from natural images via transfer learning. More recent works utilize domain knowledge from medical doctors to create networks that resemble how medical doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the different kinds of medical domain knowledge that have been utilized and their corresponding integration methods. We also present current challenges and directions for future research.
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China.
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
25
El-Sappagh S, Alonso JM, Islam SMR, Sultan AM, Kwak KS. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease. Sci Rep 2021; 11:2660. [PMID: 33514817] [PMCID: PMC7846613] [DOI: 10.1038/s41598-021-82098-3]
Abstract
Alzheimer's disease (AD) is the most common type of dementia. Its diagnosis and progression detection have been intensively studied. Nevertheless, research studies often have little effect on clinical practice, mainly for the following reasons: (1) most studies depend mainly on a single modality, especially neuroimaging; (2) diagnosis and progression detection are usually studied separately as two independent problems; and (3) current studies concentrate mainly on optimizing the performance of complex machine learning models while disregarding their explainability. As a result, physicians struggle to interpret these models and find it hard to trust them. In this paper, we carefully develop an accurate and interpretable AD diagnosis and progression detection model. This model provides physicians with accurate decisions along with a set of explanations for every decision. Specifically, the model integrates 11 modalities of 1048 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) real-world dataset: 294 cognitively normal, 254 stable mild cognitive impairment (MCI), 232 progressive MCI, and 268 AD. It is a two-layer model with random forest (RF) as the classifier algorithm. In the first layer, the model carries out a multi-class classification for the early diagnosis of AD patients. In the second layer, the model applies binary classification to detect possible MCI-to-AD progression within three years from a baseline diagnosis. The performance of the model is optimized with key markers selected from a large set of biological and clinical measures. Regarding explainability, we provide, for each layer, global and instance-based explanations of the RF classifier using the SHapley Additive exPlanations (SHAP) feature attribution framework. In addition, we implement 22 explainers based on decision trees and fuzzy rule-based systems to provide complementary justifications for every RF decision in each layer. Furthermore, these explanations are represented in natural language form to help physicians understand the predictions. The designed model achieves a cross-validation accuracy of 93.95% and an F1-score of 93.94% in the first layer, and a cross-validation accuracy of 87.08% and an F1-score of 87.09% in the second layer. The resulting system is not only accurate, but also trustworthy, accountable, and medically applicable, thanks to the provided explanations, which are broadly consistent with each other and with the AD medical literature. The proposed system can help to enhance the clinical understanding of AD diagnosis and progression processes by providing detailed insights into the effect of different modalities on the disease risk.
Affiliation(s)
- Shaker El-Sappagh
- Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, 15782, Santiago de Compostela, Spain.
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, 13518, Egypt.
- Jose M Alonso
- Centro Singular de Investigación en Tecnoloxías Intelixentes, Universidade de Santiago de Compostela, 15703, Santiago, Spain
- S M Riazul Islam
- Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, 05006, Korea
- Ahmad M Sultan
- Gastrointestinal Surgical Center, Faculty of Medicine, Mansoura University, Mansura, 35516, Egypt
- Kyung Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon, 22212, South Korea.
26
Multimodal multitask deep learning model for Alzheimer's disease progression detection based on time series data. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.05.087]
27
Classification of Microarray Gene Expression Data Using an Infiltration Tactics Optimization (ITO) Algorithm. Genes (Basel) 2020; 11:819. [PMID: 32708429] [PMCID: PMC7397166] [DOI: 10.3390/genes11070819]
Abstract
A number of different feature selection and classification techniques have been proposed in the literature, including parameter-free and parameter-based algorithms. The former are quick but may result in local maxima, while the latter use dataset-specific parameter tuning for higher accuracy. However, higher accuracy may not necessarily mean higher reliability of the model; thus, generalized optimization is still a challenge open for further research. This paper presents a warzone-inspired "infiltration tactics" based optimization algorithm (ITO), not to be confused with the ITO algorithm based on the Itô process in stochastic calculus. The proposed ITO algorithm combines parameter-free and parameter-based classifiers to produce a high-accuracy-high-reliability (HAHR) binary classifier. The algorithm produces results in two phases: (i) a Lightweight Infantry Group (LIG) converges quickly to find non-local maxima and produces comparable results (i.e., 70 to 88% accuracy); (ii) a Followup Team (FT) uses advanced tuning to enhance the baseline performance (i.e., 75 to 99%). Every soldier of the ITO army is a base model with its own independently chosen subset selection method, pre-processing and validation methods, and classifier. The successful soldiers are combined through heterogeneous ensembles for optimal results. The proposed approach addresses the data scarcity problem, is flexible in the choice of heterogeneous base classifiers, and is able to produce HAHR models comparable to the established MAQC-II results.
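The step of combining "successful soldiers" through heterogeneous ensembles maps onto ordinary majority voting over the base classifiers' labels. A minimal sketch under that assumption (the helper below is illustrative, not the paper's implementation):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-sample labels from several base classifiers by majority vote.
    `predictions` holds one label list per classifier, one label per sample."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        fused.append(votes.most_common(1)[0][0])  # most frequent label wins
    return fused

# three hypothetical base classifiers ("soldiers") voting on four samples
print(majority_vote([[1, 0, 1, 0],
                     [1, 1, 1, 0],
                     [0, 0, 1, 1]]))   # → [1, 0, 1, 0]
```

Because each base model chooses its own feature subset and pre-processing, their errors tend to be partly decorrelated, which is what makes such a vote more reliable than any single member.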
28
Menaga D, Revathi S. An Empirical Study of Cancer Classification Techniques Based on the Neural Networks. Biomedical Engineering: Applications, Basis and Communications 2020. [DOI: 10.4015/s1016237220500131]
Abstract
Cancer is one of the most common dreadful diseases prevailing worldwide, and patients with cancer are rescued only when the cancer is detected at a very early stage; by the fourth stage, the chance of survival is limited. The symptoms of cancers are rigorous, and therefore all the symptoms should be studied properly before diagnosis. Thus, an automatic prediction system is necessary for classifying a tumor as malignant or benign. Over the past few years, work on cancer classification has increased rapidly, but there is no general technique to find novel cancer classes (class discovery) or to assign tumors to known classes. Accordingly, this survey analyzes distinct cancer classification techniques. This review article provides a detailed review of 50 research papers presenting the suggested cancer classification techniques, like deep learning-based techniques, neural network-based techniques, and hybrid techniques. Moreover, an elaborative analysis and discussion are made based on the year of publication, utilized datasets, accuracy range, evaluation metrics, implementation tool, and adopted classification methods. Finally, the research gaps and issues of various cancer classification schemes are presented to guide researchers toward future work.
Affiliation(s)
- D. Menaga
- B.S. Abdur Rahman Crescent Institute of Science and Technology, Seethakathi Estate G.S.T Main Road Vandalur, Chennai, Tamil Nadu 600048, India
- S. Revathi
- B.S. Abdur Rahman Crescent Institute of Science and Technology, Seethakathi Estate G.S.T Main Road Vandalur, Chennai, Tamil Nadu 600048, India
29
Coccia M. Deep learning technology for improving cancer care in society: New directions in cancer imaging driven by artificial intelligence. Technology in Society 2020; 60:101198. [DOI: 10.1016/j.techsoc.2019.101198]
30

31
Ontiveros-Robles E, Melin P. Toward a development of general type-2 fuzzy classifiers applied in diagnosis problems through embedded type-1 fuzzy classifiers. Soft Comput 2019. [DOI: 10.1007/s00500-019-04157-2]