1
Fan Z, Zhang X, Ruan S, Thorstad W, Gay H, Song P, Wang X, Li H. A medical image classification method based on self-regularized adversarial learning. Med Phys 2024; 51:8232-8246. [PMID: 39078069] [DOI: 10.1002/mp.17320]
Abstract
BACKGROUND: Deep learning (DL) techniques have been extensively applied in medical image classification. The unique characteristics of medical imaging data present challenges, including small labeled datasets, severely imbalanced class distributions, and significant variations in imaging quality. Recently, generative adversarial network (GAN)-based classification methods have gained attention for their ability to enhance classification accuracy by incorporating realistic GAN-generated images as data augmentation. However, the performance of these GAN-based methods often relies on high-quality generated images, and large amounts of training data are required for GAN models to achieve optimal performance. PURPOSE: In this study, we propose an adversarial learning-based classification framework to achieve better classification performance. Innovatively, GAN models are employed as supplementary regularization terms to support classification, aiming to address the challenges described above. METHODS: The proposed classification framework, GAN-DL, consists of a feature extraction network (F-Net), a classifier, and two adversarial networks, specifically a reconstruction network (R-Net) and a discriminator network (D-Net). The F-Net extracts features from input images, and the classifier uses these features for classification tasks. R-Net and D-Net follow the GAN architecture: R-Net employs the extracted features to reconstruct the original images, while D-Net discriminates between the reconstructed and original images. An iterative adversarial learning strategy is designed to guide model training by incorporating multiple network-specific loss functions. These loss functions, serving as supplementary regularization, are automatically derived during the reconstruction process and require no additional data annotation. RESULTS: To verify the model's effectiveness, we performed experiments on two datasets: a COVID-19 dataset with 13,958 chest x-ray images and an oropharyngeal squamous cell carcinoma (OPSCC) dataset with 3255 positron emission tomography images. Thirteen classic DL-based classification methods were implemented on the same datasets for comparison. Performance metrics included precision, sensitivity, specificity, and $F_1$-score. In addition, we conducted ablation studies to assess the effects of various factors on model performance, including the network depth of F-Net, training image size, training dataset size, and loss function design. Our method achieved superior performance to all comparative methods. On the COVID-19 dataset, it achieved $95.4\% \pm 0.6\%$, $95.3\% \pm 0.9\%$, $97.7\% \pm 0.4\%$, and $95.3\% \pm 0.9\%$ in terms of precision, sensitivity, specificity, and $F_1$-score, respectively. It achieved $96.2\% \pm 0.7\%$ across all these metrics on the OPSCC dataset. The study investigating the effects of the two adversarial networks highlights the crucial role of D-Net in improving model performance. Ablation studies further provide an in-depth understanding of our methodology. CONCLUSION: Our adversarial learning-based classification framework leverages GAN-based adversarial networks and an iterative adversarial learning strategy to harness supplementary regularization during training. This design significantly enhances classification accuracy and mitigates overfitting issues on medical image datasets.
Moreover, its modular design not only demonstrates flexibility but also indicates its potential applicability to various clinical contexts and medical imaging applications.
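For readers who want a concrete picture of the regularization idea described above, a minimal PyTorch sketch is given below. The module names (FNet, RNet, DNet), layer sizes, and loss weights are illustrative assumptions, not the authors' released implementation; they only show how reconstruction and discrimination losses can be attached to a classifier as extra regularization terms.

```python
# Minimal sketch of classification with GAN-style auxiliary regularization.
# Module names, layer sizes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FNet(nn.Module):                      # feature extractor
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.conv(x)                 # (B, 32, H/4, W/4)

class RNet(nn.Module):                      # reconstructs the input from features
    def __init__(self):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, z):
        return self.deconv(z)

class DNet(nn.Module):                      # real-vs-reconstructed discriminator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, x):
        return self.net(x)

f_net, r_net, d_net = FNet(), RNet(), DNet()
clf = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

x = torch.rand(8, 1, 64, 64)                # toy batch of grayscale images
y = torch.randint(0, 2, (8,))

feat = f_net(x)
logits = clf(feat)
recon = r_net(feat)

# Classification loss plus two automatically derived regularization terms.
# In full training, D-Net would be updated in alternation with its own loss.
loss_cls = F.cross_entropy(logits, y)
loss_rec = F.l1_loss(recon, x)                                   # reconstruction term
loss_adv = F.binary_cross_entropy_with_logits(                   # fool the discriminator
    d_net(recon), torch.ones(8, 1))
loss_total = loss_cls + 0.1 * loss_rec + 0.01 * loss_adv         # weights are guesses
loss_total.backward()
```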
Affiliation(s)
- Zong Fan: Department of Bioengineering, University of Illinois Urbana-Champaign, Illinois, USA
- Xiaohui Zhang: Department of Bioengineering, University of Illinois Urbana-Champaign, Illinois, USA
- Su Ruan: Laboratoire LITIS (EA 4108), Equipe Quantif, University of Rouen, Rouen, France
- Wade Thorstad: Department of Radiation Oncology, Washington University in St. Louis, Missouri, USA
- Hiram Gay: Department of Radiation Oncology, Washington University in St. Louis, Missouri, USA
- Pengfei Song: Department of Electrical & Computer Engineering, University of Illinois Urbana-Champaign, Illinois, USA
- Xiaowei Wang: Department of Pharmacology and Bioengineering, University of Illinois at Chicago, Illinois, USA
- Hua Li: Department of Bioengineering, University of Illinois Urbana-Champaign, Illinois, USA; Department of Radiation Oncology, Washington University in St. Louis, Missouri, USA; Cancer Center at Illinois, Urbana, Illinois, USA
2
Jiang T, Shen C, Ding P, Luo L. Data augmentation based on the WGAN-GP with data block to enhance the prediction of genes associated with RNA methylation pathways. Sci Rep 2024; 14:26321. [PMID: 39487188] [PMCID: PMC11530642] [DOI: 10.1038/s41598-024-77107-0]
Abstract
RNA methylation modification influences various processes in the human body and has gained increasing attention from scholars. Predicting genes associated with RNA methylation pathways can significantly aid biologists in studying RNA methylation processes. Several prediction methods have been investigated, but their performance is still limited by the scarcity of positive samples. To address the challenge of data imbalance in RNA methylation-associated gene prediction tasks, this study employed a generative adversarial network to learn the feature distribution of the original dataset. The quality of synthetic samples was controlled using the Classifier Two-Sample Test (CTST). These synthetic samples were then added to the data blocks to mitigate class distribution imbalance. Experimental results demonstrated that integrating the synthetic samples generated by our proposed model with the original data enhances the prediction performance of various classifiers, outperforming other oversampling methods. Moreover, gene ontology (GO) enrichment analyses further demonstrate the effectiveness of the predicted genes associated with RNA methylation pathways. The model generating gene samples with PyTorch is available at https://github.com/heyheyheyheyhey1/WGAN-GP_RNA_methylation.
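The core building block the study relies on is the WGAN-GP gradient penalty. Below is a brief, hedged PyTorch sketch of that term; the critic architecture and feature dimensionality are placeholders for the gene-feature vectors, not the authors' network or data.

```python
# Sketch of the WGAN-GP gradient penalty; the critic here is a toy placeholder.
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviation of the critic's gradient norm from 1 on interpolates."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

critic = nn.Sequential(nn.Linear(128, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
real_feats = torch.randn(32, 128)        # real minority-class feature vectors (toy)
fake_feats = torch.randn(32, 128)        # generator output (placeholder)
loss_gp = gradient_penalty(critic, real_feats, fake_feats)
loss_gp.backward()
```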
Affiliation(s)
- Tuo Jiang: School of Computer Science, University of South China, Hengyang, 421001, Hunan, China
- Cong Shen: Department of Mathematics, National University of Singapore, Singapore, 119076, Singapore
- Pingjian Ding: School of Computer Science, University of South China, Hengyang, 421001, Hunan, China
- Lingyun Luo: School of Computer Science, University of South China, Hengyang, 421001, Hunan, China; Hunan Medical Big Data International Science and Technology Innovation Cooperation Base, Hengyang, 421001, Hunan, China
3
Zhang C, Zhu J. AML leukocyte classification method for small samples based on ACGAN. BIOMED ENG-BIOMED TE 2024; 69:491-499. [PMID: 38547466] [DOI: 10.1515/bmt-2024-0028]
Abstract
Leukemia is a class of hematologic malignancies, of which acute myeloid leukemia (AML) is the most common. Screening and diagnosis of AML are performed by microscopic examination or chemical testing of images of the patient's peripheral blood smear. In smear microscopy, the ability to quickly identify, count, and differentiate different types of blood cells is critical for disease diagnosis. With the development of deep learning (DL), classification techniques based on neural networks have been applied to the recognition of blood cells. However, DL methods require large amounts of valid training data. This study assesses the applicability of the auxiliary classifier generative adversarial network (ACGAN) to the classification of small samples of white blood cells. The method is trained on the TCIA dataset, and its classification accuracy is compared with two classical classifiers and current state-of-the-art methods. The results are evaluated using accuracy, precision, recall, and F1 score. The accuracy of the ACGAN on the validation set is 97.1%, and the precision, recall, and F1 scores on the validation set are 97.5%, 97.3%, and 97.4%, respectively. In addition, ACGAN scored higher than other advanced methods, indicating that it is competitive in classification accuracy.
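As a rough illustration of the ACGAN idea used here, the discriminator-side objective can be sketched as follows; the network layers, image size, and number of cell classes are assumptions for illustration only.

```python
# Sketch of the ACGAN objective: the discriminator predicts both real/fake and the
# cell class, so every generated sample also carries a class label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACDiscriminator(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.src_head = nn.Linear(64, 1)            # real vs. fake
        self.cls_head = nn.Linear(64, n_classes)    # auxiliary class prediction
    def forward(self, x):
        h = self.backbone(x)
        return self.src_head(h), self.cls_head(h)

disc = ACDiscriminator()
real_imgs = torch.rand(8, 3, 64, 64)
real_labels = torch.randint(0, 5, (8,))
fake_imgs = torch.rand(8, 3, 64, 64)                # generator output (placeholder)
fake_labels = torch.randint(0, 5, (8,))             # labels the generator was conditioned on

src_r, cls_r = disc(real_imgs)
src_f, cls_f = disc(fake_imgs.detach())
d_loss = (F.binary_cross_entropy_with_logits(src_r, torch.ones_like(src_r)) +
          F.binary_cross_entropy_with_logits(src_f, torch.zeros_like(src_f)) +
          F.cross_entropy(cls_r, real_labels) +
          F.cross_entropy(cls_f, fake_labels))
d_loss.backward()
```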
Affiliation(s)
- Chenxuan Zhang: School of Artificial Intelligence, Chongqing University of Technology, Chongqing, P.R. China
- Junlin Zhu: College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu, P.R. China
4
Sun Y, Tian Y, Zhang Y, Yu M, Su X, Wang Q, Guo J, Lu Y, Ren L. A double-branch convolutional neural network model for species identification based on multi-modal data. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 2024; 318:124454. [PMID: 38788500] [DOI: 10.1016/j.saa.2024.124454]
Abstract
For species identification, methods based on deep learning are becoming prevalent due to their data-driven and task-oriented nature. The most commonly used convolutional neural network (CNN) model has been applied successfully to Raman spectra recognition. However, when faced with similar molecules or functional groups, the features of overlapping peaks and weak peaks may not be fully extracted by a CNN model, which can hinder accurate species identification. Given these practical challenges, fusing multi-modal data can support more comprehensive and accurate analysis of real samples than single-modal data. In this study, we propose a double-branch CNN model, named SI-DBNet, that integrates Raman spectra and image data. For the spectral branch, we developed a one-dimensional convolutional neural network combining dilated convolutions and an efficient channel attention mechanism. The effectiveness of the model has been demonstrated using the Grad-CAM method to visualize the key regions the model attends to. Compared to single-modal and multi-modal classification methods, our SI-DBNet model achieved superior performance with a classification accuracy of 98.8%. The proposed method provides a new reference for species identification based on multi-modal data fusion.
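A hedged sketch of such a double-branch design is shown below: a 1-D spectral branch with a dilated convolution and efficient channel attention is fused with a 2-D image branch before classification. All layer sizes, the attention kernel, and the number of classes are assumptions, not the published SI-DBNet configuration.

```python
# Toy two-branch network: 1-D spectral branch (dilated conv + ECA) fused with an image branch.
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: attend over channels via a small 1-D conv."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
    def forward(self, x):                           # x: (B, C, L)
        w = x.mean(dim=-1, keepdim=True)            # (B, C, 1) global pooling
        w = self.conv(w.transpose(1, 2)).transpose(1, 2)
        return x * torch.sigmoid(w)

spectrum_branch = nn.Sequential(
    nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
    nn.Conv1d(16, 32, 5, padding=4, dilation=2), nn.ReLU(),  # dilated conv widens receptive field
    ECA(), nn.AdaptiveAvgPool1d(1), nn.Flatten())            # -> (B, 32)

image_branch = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())                   # -> (B, 32)

classifier = nn.Linear(64, 10)                               # e.g. 10 species

spectra = torch.rand(4, 1, 900)          # toy Raman spectra
images = torch.rand(4, 3, 128, 128)      # paired microscopy images
fused = torch.cat([spectrum_branch(spectra), image_branch(images)], dim=1)
logits = classifier(fused)
print(logits.shape)                      # torch.Size([4, 10])
```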
Affiliation(s)
- Yuxin Sun: College of Computer Science and Technology, Qingdao University, Qingdao 266071, China; College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Ye Tian: College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Yiyi Zhang: College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Mengting Yu: College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Xiaoquan Su: College of Computer Science and Technology, Qingdao University, Qingdao 266071, China
- Qi Wang: College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Jinjia Guo: College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Yuan Lu: College of Physics and Opto-electronic Engineering, Ocean University of China, Qingdao 266100, China
- Lihui Ren: College of Computer Science and Technology, Qingdao University, Qingdao 266071, China; Single-Cell Center, Qingdao Institute of BioEnergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao 266101, China
5
Aksoy A. An Innovative Hybrid Model for Automatic Detection of White Blood Cells in Clinical Laboratories. Diagnostics (Basel) 2024; 14:2093. [PMID: 39335772] [PMCID: PMC11431813] [DOI: 10.3390/diagnostics14182093]
Abstract
Background: Microscopic examination of peripheral blood is a standard practice in clinical medicine. Although manual examination is considered the gold standard, it presents several disadvantages, such as interobserver variability, being quite time-consuming, and requiring well-trained professionals. New automatic digital algorithms have been developed to eliminate these disadvantages and reduce the workload of clinical laboratories. Objectives: Regular analysis of peripheral blood cells and careful interpretation of the results are critical for protecting individual health and for the early diagnosis of disease. Because abnormalities in white blood cells accompany many diseases, this study aims to detect white blood cells automatically. Methods: A hybrid model has been developed for this purpose. In the developed model, features are extracted with the MobileNetV2 and EfficientNetB0 architectures. Next, the neighborhood component analysis (NCA) method eliminates unnecessary features from the feature maps so that the model can run faster. The features extracted from the same image by the two architectures are then combined to increase the model's performance. Results: In the last step, the optimized feature map was fed to different classifiers. The proposed model obtained a competitive accuracy of 95.6%. Conclusions: The results show that the proposed model can be used for the detection of white blood cells.
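The pipeline described above (deep features from two backbones, NCA-based feature selection, then a conventional classifier) can be sketched roughly as follows; the feature dimensions, the NCA output size, and the SVM settings are assumptions, not the study's exact configuration.

```python
# Toy hybrid pipeline: two pretrained backbones -> concatenated features -> NCA -> SVM.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

mobilenet = models.mobilenet_v2(weights=None)     # weights="IMAGENET1K_V1" in practice
mobilenet.classifier = nn.Identity()              # 1280-d features
effnet = models.efficientnet_b0(weights=None)
effnet.classifier = nn.Identity()                 # 1280-d features
mobilenet.eval(); effnet.eval()

def extract(x):
    with torch.no_grad():
        return torch.cat([mobilenet(x), effnet(x)], dim=1).numpy()   # (B, 2560)

# Toy stand-ins for white-blood-cell crops and labels.
x_train, y_train = torch.rand(40, 3, 224, 224), torch.randint(0, 5, (40,)).numpy()
x_test = torch.rand(8, 3, 224, 224)

clf = make_pipeline(NeighborhoodComponentsAnalysis(n_components=32, random_state=0),
                    SVC(kernel="rbf"))
clf.fit(extract(x_train), y_train)
print(clf.predict(extract(x_test)))
```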
Affiliation(s)
- Aziz Aksoy: Department of Bioengineering, Malatya Turgut Ozal University, 44200 Malatya, Turkey
6
Vaickus LJ, Kerr DA, Velez Torres JM, Levy J. Artificial Intelligence Applications in Cytopathology: Current State of the Art. Surg Pathol Clin 2024; 17:521-531. [PMID: 39129146] [DOI: 10.1016/j.path.2024.04.011]
Abstract
The practice of cytopathology has been significantly refined in recent years, largely through the creation of consensus rule sets for the diagnosis of particular specimens (Bethesda, Milan, Paris, and so forth). In general, these diagnostic systems have focused on reducing intraobserver variance, removing nebulous/redundant categories, reducing the use of "atypical" diagnoses, and promoting the use of quantitative scoring systems while providing a uniform language to communicate these results. Computational pathology is a natural offshoot of this process in that it promises 100% reproducible diagnoses rendered by quantitative processes that are free from many of the biases of human practitioners.
Affiliation(s)
- Louis J Vaickus: Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA
- Darcy A Kerr: Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA
- Jaylou M Velez Torres: Department of Pathology and Laboratory Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Joshua Levy: Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Cedars-Sinai Medical Center, 8700 Beverly Boulevard, Los Angeles, CA 90048, USA
7
Rao Y, Zhang Q, Wang X, Xue X, Ma W, Xu L, Xing S. Automated diagnosis of adenoid hypertrophy with lateral cephalogram in children based on multi-scale local attention. Sci Rep 2024; 14:18619. [PMID: 39127777] [PMCID: PMC11316792] [DOI: 10.1038/s41598-024-69827-0]
Abstract
Adenoid hypertrophy can lead to adenoidal mouth breathing, which can result in "adenoid face" and, in severe cases, can even lead to respiratory tract obstruction. The Fujioka ratio method, which calculates the ratio of adenoid (A) to nasopharyngeal (N) space in an adenoidal-cephalogram (A/N), is a well-recognized and effective technique for detecting adenoid hypertrophy. However, this process is time-consuming and relies on personal experience, so a fully automated and standardized method needs to be designed. Most of the current deep learning-based methods for automatic diagnosis of adenoids are CNN-based methods, which are more sensitive to features similar to adenoids in lateral views and can affect the final localization results. In this study, we designed a local attention-based method for automatic diagnosis of adenoids, which takes AdeBlock as the basic module, fuses the spatial and channel information of adenoids through two-branch local attention computation, and combines the downsampling method without losing spatial information. Our method achieved mean squared error (MSE) 0.0023, mean radial error (MRE) 1.91, and SD (standard deviation) 7.64 on the three hospital datasets, outperforming other comparative methods.
Affiliation(s)
- Yanying Rao: Department of Radiology, Fujian Children's Hospital (Fujian Branch of Shanghai Children's Medical Center), College of Clinical Medicine for Obstetrics & Gynecology and Pediatrics, Fujian Medical University, Fuzhou, 350014, Fujian, China; Department of Radiology, Fujian Maternity and Child Health Hospital, College of Clinical Medicine for Obstetrics & Gynecology and Pediatrics, Fujian Medical University, Fuzhou, 350005, Fujian, China
- Qiuyun Zhang: Department of Otorhinolaryngology, Fujian Maternity and Child Health Hospital, College of Clinical Medicine for Obstetrics & Gynecology and Pediatrics, Fujian Medical University, Fujian, 350005, China; Department of Otorhinolaryngology, Fujian Children's Hospital (Fujian Branch of Shanghai Children's Medical Center), College of Clinical Medicine for Obstetrics & Gynecology and Pediatrics, Fujian Medical University, Fujian, 350014, China
- Xiaowei Wang: Department of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350116, China
- Xiaoling Xue: Department of Radiology, Fujian Maternity and Child Health Hospital, College of Clinical Medicine for Obstetrics & Gynecology and Pediatrics, Fujian Medical University, Fuzhou, 350005, Fujian, China
- Wenjing Ma: Department of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350116, China
- Lin Xu: Department of Radiology, Shanghai Children's Medical Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200127, China
- Shuli Xing: Department of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350116, China; Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian, 350116, China
8
Özcan ŞN, Uyar T, Karayeğen G. Comprehensive data analysis of white blood cells with classification and segmentation by using deep learning approaches. Cytometry A 2024; 105:501-520. [PMID: 38563259] [DOI: 10.1002/cyto.a.24839]
Abstract
Deep learning approaches have frequently been used in the classification and segmentation of human peripheral blood cells. A common feature of previous studies is that, although they used more than one dataset, they used each dataset separately; no study was found that combines more than two datasets. In this work, five types of white blood cells were classified using a mixture of four different datasets. For segmentation, four types of white blood cells were delineated, and three different neural networks, a convolutional neural network (CNN), UNet, and SegNet, were applied. The classification results of the presented study were compared with those of related studies. The balanced accuracy was 98.03%, and the test accuracy on the train-independent dataset was 97.27%. For segmentation of both nucleus and cytoplasm, the proposed CNN obtained accuracy rates of 98.9% on the train-dependent dataset and 92.82% on the train-independent dataset. The proposed method showed that it can detect white blood cells from a train-independent dataset with high accuracy. With its successful classification and segmentation results, it is promising as a diagnostic tool for clinical use.
Affiliation(s)
- Şeyma Nur Özcan: Biomedical Engineering Department, Başkent University, Ankara, Turkey
- Tansel Uyar: Biomedical Engineering Department, Başkent University, Ankara, Turkey
- Gökay Karayeğen: Biomedical Equipment Technology, Vocational School of Technical Sciences, Başkent University, Ankara, Turkey
9
Zhang C, Wang S, Han Y, Zheng A, Liu G, Meng K, Yang P, Chen Z. Effects of Crude Extract of Glycyrrhiza Radix and Atractylodes macrocephala on Immune and Antioxidant Capacity of SPF White Leghorn Chickens in an Oxidative Stress Model. Antioxidants (Basel) 2024; 13:578. [PMID: 38790683] [PMCID: PMC11118435] [DOI: 10.3390/antiox13050578]
Abstract
The natural edible characteristics of Chinese herbs have led more and more people to study them as an alternative product to antibiotics. In this study, crude extracts of Glycyrrhiza radix and Atractylodes macrocephala (abbreviated as GRAM) with glycyrrhizic acid content not less than 0.2 mg/g were selected to evaluate the effects of GRAM on the immune and antioxidant capacity of model animals. Thirty 21-day-old male Leghorn chickens were weighed and randomly assigned to one of three groups of ten animals each. The treatments comprised a control group (CON), in which saline was injected at day 31, day 33, and day 35, an LPS-treated group (LPS), in which LPS (0.5 mg/kg of BW) was injected at day 31, day 33, and day 35, and finally a GRAM and LPS-treated group, (G-L) in which a GRAM-treated diet (at GRAM 2 g/kg) was fed from day 21 to day 35 with LPS injection (0.5 mg/kg of BW) at day 31, day 33, and day 35. The results of diarrhea grade and serum antioxidant measurement showed that the LPS group had obvious diarrhea symptoms, serum ROS and MDA were significantly increased, and T-AOC was significantly decreased. The oxidative stress model of LPS was successfully established. The results of immune and antioxidant indexes showed that feeding GRAM significantly decreased levels of the pro-inflammatory factors TNF-α, IL-1β, and IL-6 (p < 0.05) and significantly increased levels of the anti-inflammatory factors IL-4 and IL-10 and levels of the antioxidant enzymes GSH-Px and CAT (p < 0.05). GRAM resisted the influence of LPS on ileum morphology, liver, and immune organs and maintained normal index values for ileum morphology, liver, and immune organs. In summary, this study confirmed the antidiarrheal effect of GRAM, which improved the immune and antioxidant capacity of model animals by regulating inflammatory cytokine levels and antioxidant enzyme activity in poultry.
Affiliation(s)
- Peilong Yang: Key Laboratory for Feed Biotechnology of the Ministry of Agriculture and Rural Affairs, Institute of Feed Research, Chinese Academy of Agriculture Sciences, Beijing 100081, China (C.Z.; S.W.; Y.H.; A.Z.; G.L.; K.M.)
- Zhimin Chen: Key Laboratory for Feed Biotechnology of the Ministry of Agriculture and Rural Affairs, Institute of Feed Research, Chinese Academy of Agriculture Sciences, Beijing 100081, China (C.Z.; S.W.; Y.H.; A.Z.; G.L.; K.M.)
10
Peng K, Peng Y, Liao H, Yang Z, Feng W. Automated bone marrow cell classification through dual attention gates dense neural networks. J Cancer Res Clin Oncol 2023; 149:16971-16981. [PMID: 37740765] [DOI: 10.1007/s00432-023-05384-9]
Abstract
PURPOSE The morphology of bone marrow cells is essential in identifying malignant hematological disorders. The automatic classification model of bone marrow cell morphology based on convolutional neural networks shows considerable promise in terms of diagnostic efficiency and accuracy. However, due to the lack of acceptable accuracy in bone marrow cell classification algorithms, automatic classification of bone marrow cells is now infrequently used in clinical facilities. To address the issue of precision, in this paper, we propose a Dual Attention Gates DenseNet (DAGDNet) to construct a novel efficient, and high-precision bone marrow cell classification model for enhancing the classification model's performance even further. METHODS DAGDNet is constructed by embedding a novel dual attention gates (DAGs) mechanism in the architecture of DenseNet. DAGs are used to filter and highlight the position-related features in DenseNet to improve the precision and recall of neural network-based cell classifiers. We have constructed a dataset of bone marrow cell morphology from the First Affiliated Hospital of Chongqing Medical University, which mainly consists of leukemia samples, to train and test our proposed DAGDNet together with the bone marrow cell classification dataset. RESULTS When evaluated on a multi-center dataset, experimental results show that our proposed DAGDNet outperforms image classification models such as DenseNet and ResNeXt in bone marrow cell classification performance. The mean precision of DAGDNet on the Munich Leukemia Laboratory dataset is 88.1%, achieving state-of-the-art performance while still maintaining high efficiency. CONCLUSION Our data demonstrate that the DAGDNet can improve the efficacy of automatic bone marrow cell classification and can be exploited as an assisting diagnosis tool in clinical applications. Moreover, the DAGDNet is also an efficient model that can swiftly inspect a large number of bone marrow cells and offers the benefit of reducing the probability of an incorrect diagnosis.
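The abstract does not give the exact form of the dual attention gates, so the snippet below shows only a generic additive attention-gate block in a similar spirit, gating feature maps with a signal from a deeper layer; the layer sizes and gating arrangement are assumptions, not the published DAGDNet design.

```python
# Generic additive attention gate: emphasizes position-related features in a feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Weights a feature map by a spatial attention mask computed from the map
    itself and a gating signal taken from a deeper layer."""
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, 1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)
    def forward(self, x, g):
        g = F.interpolate(g, size=x.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(x) + self.phi(g))))
        return x * attn

x = torch.rand(2, 64, 32, 32)                 # intermediate dense-block features
g = torch.rand(2, 128, 16, 16)                # deeper gating features
out = AttentionGate(64, 128, 32)(x, g)
print(out.shape)                              # torch.Size([2, 64, 32, 32])
```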
Affiliation(s)
- Kaiyi Peng: Department of Clinical Hematology, Key Laboratory of Laboratory Medical Diagnostics Designated by the Ministry of Education, School of Laboratory Medicine, Chongqing Medical University, No. 1, Yixueyuan Road, Chongqing, 400016, China
- Yuhang Peng: Department of Clinical Hematology, Key Laboratory of Laboratory Medical Diagnostics Designated by the Ministry of Education, School of Laboratory Medicine, Chongqing Medical University, No. 1, Yixueyuan Road, Chongqing, 400016, China
- Hedong Liao: Department of Hematology, The First Affiliated Hospital of Chongqing Medical University, No. 1, Youyi Road, Chongqing, 400016, China
- Zesong Yang: Department of Hematology, The First Affiliated Hospital of Chongqing Medical University, No. 1, Youyi Road, Chongqing, 400016, China
- Wenli Feng: Department of Clinical Hematology, Key Laboratory of Laboratory Medical Diagnostics Designated by the Ministry of Education, School of Laboratory Medicine, Chongqing Medical University, No. 1, Yixueyuan Road, Chongqing, 400016, China
11
Alshahrani H, Sharma G, Anand V, Gupta S, Sulaiman A, Elmagzoub MA, Reshan MSA, Shaikh A, Azar AT. An Intelligent Attention-Based Transfer Learning Model for Accurate Differentiation of Bone Marrow Stains to Diagnose Hematological Disorder. Life (Basel) 2023; 13:2091. [PMID: 37895472] [PMCID: PMC10607952] [DOI: 10.3390/life13102091]
Abstract
Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains the body's overall health and immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. There is a requirement for developing a robust deep-learning algorithm to diagnose bone marrow cells to keep a close check on them. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models-DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2-are implemented into the bone marrow dataset to classify them into seven classes. The best-performing DenseNet121 model was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The proposed fine-tuned DenseNet121 model was optimized using several optimizers, such as AdaGrad, AdaDelta, Adamax, RMSprop, and SGD, along with different batch sizes of 16, 32, 64, and 128. The fine-tuned DenseNet121 model was integrated with an attention mechanism to improve its performance by allowing the model to focus on the most relevant features or regions of the image, which can be particularly beneficial in medical imaging, where certain regions might have critical diagnostic information. The proposed fine-tuned and integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. The key hyperparameters, such as batch size, number of epochs, and different optimizers, were all considered for optimizing these pre-trained models to select the best model. This study will help in medical research to effectively classify the BM cells to prevent diseases like leukemia.
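A minimal sketch of the fine-tuning recipe described above (DenseNet121 with an added batch-normalization layer, a dropout layer, and two dense layers for seven classes) is given below; the hidden width, dropout rate, and optimizer settings are placeholders rather than the paper's tuned values.

```python
# Fine-tuning head sketch: DenseNet121 + batch norm + dropout + two dense layers.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)          # weights="IMAGENET1K_V1" in practice
in_feats = model.classifier.in_features           # 1024 for DenseNet121
model.classifier = nn.Sequential(
    nn.BatchNorm1d(in_feats),
    nn.Dropout(0.3),
    nn.Linear(in_feats, 256), nn.ReLU(),
    nn.Linear(256, 7))                            # seven bone-marrow cell classes

optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)   # one of the compared optimizers
logits = model(torch.rand(4, 3, 224, 224))
print(logits.shape)                               # torch.Size([4, 7])
```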
Affiliation(s)
- Hani Alshahrani: Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- Gunjan Sharma: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Vatsala Anand: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Sheifali Gupta: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Adel Sulaiman: Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- M. A. Elmagzoub: Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Mana Saleh Al Reshan: Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- Asadullah Shaikh: Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- Ahmad Taher Azar: College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia; Automated Systems and Soft Computing Lab (ASSCL), Prince Sultan University, Riyadh 11586, Saudi Arabia
12
Houssein EH, Mohamed O, Abdel Samee N, Mahmoud NF, Talaat R, Al-Hejri AM, Al-Tam RM. Using deep DenseNet with cyclical learning rate to classify leukocytes for leukemia identification. Front Oncol 2023; 13:1230434. [PMID: 37771437] [PMCID: PMC10523295] [DOI: 10.3389/fonc.2023.1230434]
Abstract
Background: The examination, counting, and classification of white blood cells (WBCs), also known as leukocytes, are essential processes in the diagnosis of many disorders, including leukemia, a kind of blood cancer characterized by the uncontrolled proliferation of malignant leukocytes in the bone marrow. Blood smears can be studied chemically or microscopically to better understand hematological diseases and blood disorders. Detecting, identifying, and categorizing the many blood cell types is essential for disease diagnosis and therapy planning, and remains both a theoretical and a practical challenge. Methods based on deep learning (DL) have greatly helped blood cell classification. Materials and Methods: Images of blood cells in microscopic smears were collected from GitHub, a public source under the MIT license. An end-to-end computer-aided diagnosis (CAD) system for leukocytes was created and implemented as part of this study. The introduced system comprises image preprocessing and enhancement, image segmentation, feature extraction and selection, and WBC classification. By combining DenseNet-161 with a cyclical learning rate (CLR), we contribute an approach that speeds up hyperparameter optimization. We also apply the one-cycle technique to rapidly optimize all hyperparameters of the DL models and boost training performance. Results: The dataset was split into two sets: approximately 80% of the data (9,966 images) for training and 20% (2,487 images) for validation. The validation set has 623, 620, 620, and 624 eosinophil, lymphocyte, monocyte, and neutrophil images, whereas the training set has 2,497, 2,483, 2,487, and 2,499, respectively. The suggested method achieves 100% accuracy on the training set and 99.8% accuracy on the testing set. Conclusion: By combining the pretrained convolutional neural network (CNN) DenseNet with the one-cycle policy, this study describes a training technique for WBC classification for leukemia detection. The proposed method is more accurate than the state of the art.
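The one-cycle policy mentioned above is available directly in PyTorch; the sketch below shows how it could be paired with DenseNet-161 for a four-class WBC task. The learning rate, epoch count, and steps per epoch are placeholders, not the study's tuned values.

```python
# DenseNet-161 with the one-cycle learning-rate policy (toy training loop).
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet161(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 4)   # 4 WBC classes

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
steps_per_epoch, epochs = 100, 10
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, epochs=epochs, steps_per_epoch=steps_per_epoch)

criterion = nn.CrossEntropyLoss()
for step in range(3):                           # a few toy steps
    x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 4, (4,))
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    scheduler.step()                            # LR rises then anneals over the cycle
    print(f"step {step}: lr={scheduler.get_last_lr()[0]:.6f}")
```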
Affiliation(s)
- Essam H. Houssein: Faculty of Computers and Information, Minia University, Minia, Egypt
- Osama Mohamed: Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef, Egypt
- Nagwan Abdel Samee: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Noha F. Mahmoud: Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Rawan Talaat: Biotechnology & Genetics Department, Agriculture Engineering, Ain Shams University, Cairo, Egypt
- Aymen M. Al-Hejri: School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded, Maharashtra, India
- Riyadh M. Al-Tam: School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded, Maharashtra, India
13
Rivas-Posada E, Chacon-Murguia MI. Automatic base-model selection for white blood cell image classification using meta-learning. Comput Biol Med 2023; 163:107200. [PMID: 37393786] [DOI: 10.1016/j.compbiomed.2023.107200]
Abstract
Healthcare has benefited from the implementation of deep-learning models to solve medical image classification tasks. For example, White Blood Cell (WBC) image analysis is used to diagnose different pathologies like leukemia. However, medical datasets are mostly imbalanced, inconsistent, and costly to collect. Hence, it is difficult to select an adequate model to overcome these drawbacks. Therefore, we propose a novel methodology to automatically select models to solve WBC classification tasks. These tasks contain images collected using different staining methods, microscopes, and cameras. The proposed methodology includes meta- and base-level learning. At the meta-level, we implemented meta-models based on prior models to acquire meta-knowledge by solving meta-tasks using the shades of gray color constancy method. To determine the best models to solve new WBC tasks, we developed an algorithm that uses the meta-knowledge and the Centered Kernel Alignment metric. Next, a learning rate finder method is employed to adapt the selected models. The adapted models (base-models) are used in an ensemble learning approach, achieving accuracy and balanced accuracy scores of 98.29 and 97.69 on the Raabin dataset; 100 on the BCCD dataset; and 99.57 and 99.51 on the UACH dataset, respectively. The results on all datasets outperform most of the state-of-the-art models, which demonstrates our methodology's advantage of automatically selecting the best model to solve WBC tasks. The findings also indicate that our methodology can be extended to other medical image classification tasks where it is difficult to select an adequate deep-learning model to solve new tasks with imbalanced, limited, and out-of-distribution data.
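The Centered Kernel Alignment metric used to rank candidate base-models is a standard representation-similarity measure; a linear-CKA sketch is shown below (the meta-knowledge store and the full selection algorithm are not reproduced). The feature matrices here are synthetic stand-ins.

```python
# Linear Centered Kernel Alignment (CKA) between two feature matrices.
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two feature matrices of shape (n_samples, dim)."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
feats_task = rng.normal(size=(200, 512))               # features of a new WBC task
feats_model_a = feats_task + 0.1 * rng.normal(size=(200, 512))   # correlated -> larger CKA
feats_model_b = rng.normal(size=(200, 512))             # unrelated representation
print(linear_cka(feats_task, feats_model_a))
print(linear_cka(feats_task, feats_model_b))             # near zero
```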
Affiliation(s)
- Eduardo Rivas-Posada: Tecnologico Nacional de Mexico / I T Chihuahua, Visual Perception Lab, Ave. Tecnologico #2909, Chihuahua, 31310, Mexico
- Mario I Chacon-Murguia: Tecnologico Nacional de Mexico / I T Chihuahua, Visual Perception Lab, Ave. Tecnologico #2909, Chihuahua, 31310, Mexico
14
Zou X, Zhai J, Qian S, Li A, Tian F, Cao X, Wang R. Improved breast ultrasound tumor classification using dual-input CNN with GAP-guided attention loss. Mathematical Biosciences and Engineering (MBE) 2023; 20:15244-15264. [PMID: 37679179] [DOI: 10.3934/mbe.2023682]
Abstract
Ultrasonography is a widely used medical imaging technique for detecting breast cancer. While manual diagnostic methods are subject to variability and time-consuming, computer-aided diagnostic (CAD) methods have proven to be more efficient. However, current CAD approaches neglect the impact of noise and artifacts on the accuracy of image analysis. To enhance the precision of breast ultrasound image analysis for identifying tissues, organs and lesions, we propose a novel approach for improved tumor classification through a dual-input model and global average pooling (GAP)-guided attention loss function. Our approach leverages a convolutional neural network with transformer architecture and modifies the single-input model for dual-input. This technique employs a fusion module and GAP operation-guided attention loss function simultaneously to supervise the extraction of effective features from the target region and mitigate the effect of information loss or redundancy on misclassification. Our proposed method has three key features: (i) ResNet and MobileViT are combined to enhance local and global information extraction. In addition, a dual-input channel is designed to include both attention images and original breast ultrasound images, mitigating the impact of noise and artifacts in ultrasound images. (ii) A fusion module and GAP operation-guided attention loss function are proposed to improve the fusion of dual-channel feature information, as well as supervise and constrain the weight of the attention mechanism on the fused focus region. (iii) Using the collected uterine fibroid ultrasound dataset to train ResNet18 and load the pre-trained weights, our experiments on the BUSI and BUSC public datasets demonstrate that the proposed method outperforms some state-of-the-art methods. The code will be publicly released at https://github.com/425877/Improved-Breast-Ultrasound-Tumor-Classification.
Affiliation(s)
- Xiao Zou: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Jintao Zhai: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Shengyou Qian: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Ang Li: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Feng Tian: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Xiaofei Cao: College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
- Runmin Wang: College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
15
Shu L, Zhong K, Chen N, Gu W, Shang W, Liang J, Ren J, Hong H. Predicting the severity of white matter lesions among patients with cerebrovascular risk factors based on retinal images and clinical laboratory data: a deep learning study. Front Neurol 2023; 14:1168836. [PMID: 37492851] [PMCID: PMC10363667] [DOI: 10.3389/fneur.2023.1168836]
Abstract
Background and purpose: As one common feature of cerebral small vessel disease (cSVD), white matter lesions (WMLs) can lead to reduced brain function. A convenient, cheap, and non-intrusive method to detect WMLs could substantially benefit patient management in community screening, especially in settings where magnetic resonance imaging (MRI) is unavailable or contraindicated. Therefore, this study aimed to develop a model that incorporates clinical laboratory data and retinal images using deep learning to predict the severity of WMLs. Methods: Two hundred fifty-nine patients with any kind of neurological disease were enrolled in our study. Demographic data, retinal images, MRI, and laboratory data were collected for the patients. The patients were assigned to the absent/mild and moderate-severe WML groups according to the Fazekas scoring system. Retinal images were acquired by fundus photography. A ResNet deep learning framework was used to analyze the retinal images, and a clinical-laboratory signature was generated from the laboratory data. Two prediction models were developed to predict the severity of WMLs: a combined model including demographic data, the clinical-laboratory signature, and the retinal images, and a clinical model including only demographic data and the clinical-laboratory signature. Results: Approximately one-quarter of the patients (25.6%) had moderate-severe WMLs. The left and right retinal images predicted moderate-severe WMLs with areas under the curve (AUCs) of 0.73 and 0.94, respectively. The clinical-laboratory signature predicted moderate-severe WMLs with an AUC of 0.73. The combined model showed good performance in predicting moderate-severe WMLs with an AUC of 0.95, while the clinical model achieved an AUC of 0.78. Conclusion: Combining retinal images from conventional fundus photography with clinical laboratory data is a reliable and convenient approach to predicting the severity of WMLs and is helpful for the management and follow-up of WML patients.
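Schematically, the combined model amounts to concatenating a ResNet image embedding with the clinical-laboratory signature before a prediction head; the sketch below illustrates this fusion with assumed feature sizes and a binary severity output, not the authors' exact architecture.

```python
# Toy fusion model: ResNet fundus-image embedding + tabular clinical signature.
import torch
import torch.nn as nn
from torchvision import models

class CombinedWMLModel(nn.Module):
    def __init__(self, n_clinical=12):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                      # 512-d image embedding
        self.image_encoder = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 1))                            # absent/mild vs. moderate-severe
    def forward(self, retina, clinical):
        z = torch.cat([self.image_encoder(retina), clinical], dim=1)
        return self.head(z)

model = CombinedWMLModel()
retina = torch.rand(2, 3, 224, 224)                      # fundus photographs
clinical = torch.rand(2, 12)                             # demographic + laboratory signature
prob = torch.sigmoid(model(retina, clinical))
print(prob.shape)                                        # torch.Size([2, 1])
```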
Affiliation(s)
- Liming Shu: Department of Neurology, Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, China; Department of Neurology, Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Kaiyi Zhong: Department of Neurology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Nanya Chen: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
- Wenxin Gu: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
- Wenjing Shang: Department of Neurology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jiahui Liang: Department of Medical Imaging, Sun Yat-sen University Cancer Center, Guangzhou, China; Guangdong Key Laboratory of Non-human Primate Research, Guangdong-Hongkong-Macau Institute of CNS Regeneration, Jinan University, Guangzhou, China
- Jiangtao Ren: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
- Hua Hong: Department of Neurology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
16
Qiao Z, Li L, Zhao X, Liu L, Zhang Q, Hechmi S, Atri M, Li X. An enhanced Runge Kutta boosted machine learning framework for medical diagnosis. Comput Biol Med 2023; 160:106949. [PMID: 37159961] [DOI: 10.1016/j.compbiomed.2023.106949]
Abstract
With the development and maturity of machine learning methods, medical diagnosis aided with machine learning methods has become a popular method to assist doctors in diagnosing and treating patients. However, machine learning methods are greatly affected by their hyperparameters, for instance, the kernel parameter in kernel extreme learning machine (KELM) and the learning rate in residual neural networks (ResNet). If the hyperparameters are appropriately set, the performance of the classifier can be significantly improved. To boost the performance of the machine learning methods, this paper proposes to improve the Runge Kutta optimizer (RUN) to adaptively adjust the hyperparameters of the machine learning methods for medical diagnosis purposes. Although RUN has a solid mathematical theoretical foundation, there are still some performance defects when dealing with complex optimization problems. To remedy these defects, this paper proposes a new enhanced RUN method with a grey wolf mechanism and an orthogonal learning mechanism called GORUN. The superior performance of the GORUN was validated against other well-established optimizers on IEEE CEC 2017 benchmark functions. Then, the proposed GORUN is employed to optimize the machine learning models, including the KELM and ResNet, to construct robust models for medical diagnosis. The performance of the proposed machine learning framework was validated on several medical data sets, and the experimental results have demonstrated its superiority.
Affiliation(s)
- Zenglin Qiao: School of Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Lynn Li: China Telecom Stocks Co., Ltd., Hangzhou Branch, Hangzhou, 310000, China
- Xinchao Zhao: School of Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Lei Liu: College of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Qian Zhang: School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, Zhejiang, 325035, China
- Shili Hechmi: Dept. Computer Sciences, Tabuk University, Tabuk, Saudi Arabia
- Mohamed Atri: College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Xiaohua Li: Library, Wenzhou University, Wenzhou, Zhejiang, 325035, China
17
Deng K, Liu H, Yang L, Addepalli S, Zhao Y. Classification of barely visible impact damage in composite laminates using deep learning and pulsed thermographic inspection. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08293-7]
Abstract
With the increasingly comprehensive utilisation of Carbon Fibre-Reinforced Polymers (CFRP) in modern industry, defect detection and characterisation of these materials have become very important and draw significant research attention. During the past 10 years, Artificial Intelligence (AI) technologies have been attractive in this area due to their outstanding ability in complex data analysis tasks. Most current AI-based studies on damage characterisation in this field focus on damage segmentation and depth measurement, and also face the bottleneck of lacking adequate experimental data for model training. This paper proposes a new framework that uses a Deep Learning approach to relate Barely Visible Impact Damage features occurring in typical CFRP laminates to their corresponding controlled drop-test impact energy. In a parametric study, one hundred CFRP laminates with known material specification and identical geometric dimensions were subjected to drop-impact tests at five different impact energy levels. Pulsed Thermography was then adopted to reveal the subsurface impact damage in these specimens and record the damage patterns in temporal sequences of thermal images. A convolutional neural network was then employed to train models that classify the captured thermal images into groups according to their corresponding impact energy levels. Models trained on different time windows and sequence lengths were evaluated, and a best classification accuracy of 99.75% was achieved. Finally, to increase the transparency of the proposed solution, a salience map is introduced to understand what the produced models have learned.
18
Automated Bone Marrow Cell Classification for Haematological Disease Diagnosis Using Siamese Neural Network. Diagnostics (Basel) 2022; 13:diagnostics13010112. [PMID: 36611404] [PMCID: PMC9818919] [DOI: 10.3390/diagnostics13010112]
Abstract
The structure and nature of different bone marrow cells, which form the basis for diagnosing haematological ailments, require fine-grained classification; performed manually, this is a very prolonged process that is prone to human error, even among field experts. Therefore, the aim of this research is to automate the study and accurate classification of bone marrow cell structure, which will help diagnose haematological ailments faster and more reliably. Various machine learning algorithms and models, namely CNN + SVM, CNN + XGBoost, and a Siamese network, were trained and tested on a dataset of 170,000 expert-annotated cell images from the bone marrow smears of 945 patients with haematological disorders. The metrics used for evaluation are model accuracy and the precision and recall of all the different cell classes. Based on these metrics, CNN + SVM and CNN + XGBoost achieved only 32% and 28% accuracy, respectively, and were therefore discarded. The Siamese neural network achieved 91% training accuracy and 84% validation accuracy. Moreover, its weighted average recall values were 92% for training and 91% for validation. Hence, the final results are based on the Siamese neural network model, as it outperformed all the other algorithms used in this research.
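A compact sketch of a Siamese setup in the spirit of this approach is shown below: one weight-shared encoder embeds both images of a pair and a contrastive loss pulls same-class pairs together. The encoder, embedding size, and margin are assumptions, not the study's configuration.

```python
# Minimal Siamese network with a contrastive loss on image pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z1, z2, same, margin=1.0):
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

enc = Encoder()                                  # one shared encoder = the Siamese twins
a, b = torch.rand(8, 3, 96, 96), torch.rand(8, 3, 96, 96)
same = torch.randint(0, 2, (8,)).float()         # 1 if the pair shows the same cell type
loss = contrastive_loss(enc(a), enc(b), same)
loss.backward()
```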
19
Chen X, Peng Y, Guo Y, Sun J, Li D, Cui J. MLRD-Net: 3D multiscale local cross-channel residual denoising network for MRI-based brain tumor segmentation. Med Biol Eng Comput 2022; 60:3377-3395. [DOI: 10.1007/s11517-022-02673-2]
20
Tamang T, Baral S, Paing MP. Classification of White Blood Cells: A Comprehensive Study Using Transfer Learning Based on Convolutional Neural Networks. Diagnostics (Basel) 2022; 12:diagnostics12122903. [PMID: 36552910] [PMCID: PMC9777002] [DOI: 10.3390/diagnostics12122903]
Abstract
White blood cells (WBCs) in the human immune system defend against infection and protect the body from external hazardous objects. They are comprised of neutrophils, eosinophils, basophils, monocytes, and lymphocytes, whereby each accounts for a distinct percentage and performs specific functions. Traditionally, the clinical laboratory procedure for quantifying the specific types of white blood cells is an integral part of a complete blood count (CBC) test, which aids in monitoring the health of people. With the advancements in deep learning, blood film images can be classified in less time and with high accuracy using various algorithms. This paper exploits a number of state-of-the-art deep learning models and their variations based on CNN architecture. A comparative study on model performance based on accuracy, F1-score, recall, precision, number of parameters, and time was conducted, and DenseNet161 was found to demonstrate a superior performance among its counterparts. In addition, advanced optimization techniques such as normalization, mixed-up augmentation, and label smoothing were also employed on DenseNet to further refine its performance.
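The two refinements named at the end of the abstract, label smoothing and mixup augmentation, are standard and can be sketched as follows for a DenseNet161 fine-tuned on five WBC classes; the smoothing factor and mixup alpha are placeholders, not the study's values.

```python
# Label smoothing + mixup sketch for a DenseNet161 classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet161(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 5)   # five WBC classes
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)            # label smoothing

def mixup(x, y, alpha=0.4):
    """Blend random pairs of images and return both labels with the mixing weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 5, (4,))
x_mix, y_a, y_b, lam = mixup(x, y)
logits = model(x_mix)
loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)
loss.backward()
```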
Affiliation(s)
- Thinam Tamang: Madan Bhandari Memorial College, New Baneshwor, Kathmandu 44600, Nepal
- Sushish Baral: Department of Robotics and AI, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- May Phu Paing: Department of Biomedical Engineering, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
21
Chola C, Muaad AY, Bin Heyat MB, Benifa JVB, Naji WR, Hemachandran K, Mahmoud NF, Samee NA, Al-Antari MA, Kadah YM, Kim TS. BCNet: A Deep Learning Computer-Aided Diagnosis Framework for Human Peripheral Blood Cell Identification. Diagnostics (Basel) 2022; 12:diagnostics12112815. [PMID: 36428875] [PMCID: PMC9689932] [DOI: 10.3390/diagnostics12112815]
Abstract
Blood cells carry important information that can be used to represent a person's current state of health. Identifying the different types of blood cells in a timely and precise manner is essential to reducing the infection risks that people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework that uses transfer learning with a convolutional neural network to rapidly and automatically identify blood cells in an eight-class identification scenario: basophil, eosinophil, erythroblast, immature granulocytes, lymphocyte, monocyte, neutrophil, and platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). The performance of BCNet was compared directly, on the same dataset, with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Across the different optimizers, the BCNet framework demonstrated better classification performance with the ADAM and RMSP optimizers. The best performance was achieved with the RMSP optimizer: 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved prediction accuracy by 1.94%, 3.33%, and 1.65% using the ADAM, RMSP, and SGD optimizers, respectively. The proposed BCNet model also outperformed DenseNet, ResNet, Inception, and MobileNet in terms of the testing time of a single blood cell image, by 10.98, 4.26, 2.03, and 0.21 msec, respectively. Compared with recent deep learning models, BCNet produced encouraging results; such an improvement in recognition rate is essential for advancing blood cell detection in healthcare facilities.
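A rough sketch of how such an optimizer comparison can be set up on a single transfer-learning backbone is given below; the backbone, learning rates, and data are illustrative placeholders, not the authors' settings.

```python
# Hypothetical optimizer-comparison sketch in the spirit of the BCNet experiments.
import torch
import torch.nn as nn
from torchvision import models

def make_model(num_classes: int = 8) -> nn.Module:
    m = models.mobilenet_v2(weights=None)
    m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, num_classes)
    return m

optimizer_factories = {
    "adam": lambda p: torch.optim.Adam(p, lr=1e-4),
    "rmsprop": lambda p: torch.optim.RMSprop(p, lr=1e-4),
    "sgd": lambda p: torch.optim.SGD(p, lr=1e-3, momentum=0.9),
}

criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 3, 224, 224)      # toy blood-cell batch
y = torch.randint(0, 8, (4,))

for name, factory in optimizer_factories.items():
    model = make_model()
    optimizer = factory(model.parameters())
    loss = criterion(model(x), y)    # one step per optimizer, for illustration only
    loss.backward()
    optimizer.step()
    print(f"{name}: loss = {loss.item():.4f}")
```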
Affiliation(s)
- Channabasava Chola: Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
- Abdullah Y. Muaad: Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- Md Belal Bin Heyat: IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China; Centre for VLSI and Embedded System Technologies, International Institute of Information Technology, Hyderabad 500032, India; Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW 2770, Australia
- J. V. Bibal Benifa: Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kerala 686635, India
- Wadeea R. Naji: Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- K. Hemachandran: Department of Artificial Intelligence, Woxsen University, Hyderabad 502345, India
- Noha F. Mahmoud: Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Nagwan Abdel Samee: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia. Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
- Mugahed A. Al-Antari: Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea. Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
- Yasser M. Kadah: Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia; Biomedical Engineering Department, Cairo University, Giza 12613, Egypt. Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
- Tae-Seong Kim: Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea. Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
|
22
|
A Method for Expanding the Training Set of White Blood Cell Images. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1267080. [DOI: 10.1155/2022/1267080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Revised: 04/08/2022] [Accepted: 04/15/2022] [Indexed: 11/10/2022]
Abstract
In medicine, the count of different types of white blood cells can be used as the basis for diagnosing certain diseases or evaluating treatment effects, so the recognition and counting of white blood cells have important clinical significance. However, the performance of machine-learning-based recognition depends on the size of the training set. At present, researchers mainly rely on image rotation and cropping to expand the dataset; these methods either introduce extra features into the white blood cell image or require manual intervention, and they are inefficient. In this paper, a method for expanding the training set of white blood cell images is proposed. After rotating an image by an arbitrary angle, the Canny detector is used to extract the edges of the black region created by the rotation, and this region is then filled in, thereby expanding the training set. Experimental results show that when ResNet, MobileNet, and ShuffleNet are trained on the dataset expanded with the proposed method, their recognition accuracy improves markedly compared with training on the original dataset or on a dataset expanded by simple rotation, without any manual intervention.
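A rough sketch of this rotate-then-fill augmentation is shown below (OpenCV assumed). The paper fills the rotated-in black area after tracing it with Canny; the inpainting call here is one plausible fill strategy, not the authors' exact procedure.

```python
# Hypothetical rotation-based training-set expansion sketch.
import cv2
import numpy as np

def rotate_and_fill(image: np.ndarray, angle: float) -> np.ndarray:
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))

    # Rotate an all-white mask the same way; its zero pixels mark the black
    # corners that the rotation introduced.
    valid = cv2.warpAffine(np.full((h, w), 255, np.uint8), M, (w, h))
    black_area = cv2.bitwise_not(valid)

    # The paper traces this region with Canny before filling; here the mask is
    # passed directly to inpainting, which fills the corners from nearby pixels.
    return cv2.inpaint(rotated, black_area, 3, cv2.INPAINT_TELEA)

if __name__ == "__main__":
    img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
    print(rotate_and_fill(img, angle=37.0).shape)
```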
|
23
|
Efficient tooth gingival margin line reconstruction via adversarial learning. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
24
|
Development of a plug-and-play anti-noise module for fault diagnosis of rotating machines in nuclear power plants. PROGRESS IN NUCLEAR ENERGY 2022. [DOI: 10.1016/j.pnucene.2022.104344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
25
|
Foo KY, Newman K, Fang Q, Gong P, Ismail HM, Lakhiani DD, Zilkens R, Dessauvagie BF, Latham B, Saunders CM, Chin L, Kennedy BF. Multi-class classification of breast tissue using optical coherence tomography and attenuation imaging combined via deep learning. BIOMEDICAL OPTICS EXPRESS 2022; 13:3380-3400. [PMID: 35781967 PMCID: PMC9208580 DOI: 10.1364/boe.455110] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 04/23/2022] [Accepted: 04/25/2022] [Indexed: 05/27/2023]
Abstract
We demonstrate a convolutional neural network (CNN) for multi-class breast tissue classification as adipose tissue, benign dense tissue, or malignant tissue, using multi-channel optical coherence tomography (OCT) and attenuation images, together with a novel Matthews correlation coefficient (MCC)-based loss function that correlates more strongly with performance metrics than the commonly used cross-entropy loss. We hypothesized that using multi-channel images would increase tumor detection performance compared with using OCT alone. A total of 5,804 images from 29 patients were used to fine-tune a pre-trained ResNet-18 network. Adding attenuation images to OCT images yields statistically significant improvements in several performance metrics, including benign dense tissue sensitivity (68.0% versus 59.6%), malignant tissue positive predictive value (PPV) (79.4% versus 75.5%), and total accuracy (85.4% versus 83.3%), indicating that the additional contrast from attenuation imaging is most beneficial for distinguishing between benign dense tissue and malignant tissue.
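An MCC-based loss can be made differentiable by evaluating the multi-class MCC (Gorodkin's R_K statistic) on soft softmax counts; the sketch below shows one common way to do this and is not the authors' exact formulation.

```python
# Hypothetical differentiable multi-class MCC loss sketch.
import torch
import torch.nn.functional as F

def soft_mcc_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-8):
    """logits: (N, C); targets: (N,) integer class labels. Returns 1 - soft MCC."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes=logits.shape[1]).float()

    s = probs.shape[0]                    # number of samples
    c = (probs * onehot).sum()            # soft count of correct predictions
    p_k = probs.sum(dim=0)                # soft count predicted per class
    t_k = onehot.sum(dim=0)               # true count per class

    # Multi-class MCC computed on soft counts; eps guards the denominator.
    numerator = c * s - (p_k * t_k).sum()
    denominator = torch.sqrt((s**2 - (p_k**2).sum()) * (s**2 - (t_k**2).sum()) + eps)
    return 1.0 - numerator / denominator  # minimizing 1 - MCC maximizes MCC

if __name__ == "__main__":
    logits = torch.randn(16, 3, requires_grad=True)   # 3 tissue classes
    targets = torch.randint(0, 3, (16,))
    loss = soft_mcc_loss(logits, targets)
    loss.backward()
    print(loss.item())
```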
Affiliation(s)
- Ken Y. Foo: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Kyle Newman: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Qi Fang: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Peijun Gong: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Hina M. Ismail: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Devina D. Lakhiani: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Renate Zilkens: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Division of Surgery, Medical School, The University of Western Australia, Perth, WA 6009, Australia
- Benjamin F. Dessauvagie: Division of Pathology and Laboratory Medicine, Medical School, The University of Western Australia, Perth, WA 6009, Australia; PathWest, Fiona Stanley Hospital, Murdoch, WA 6150, Australia
- Bruce Latham: PathWest, Fiona Stanley Hospital, Murdoch, WA 6150, Australia; School of Medicine, The University of Notre Dame, Fremantle, WA 6160, Australia
- Christobel M. Saunders: Division of Surgery, Medical School, The University of Western Australia, Perth, WA 6009, Australia; Breast Centre, Fiona Stanley Hospital, Murdoch, WA 6150, Australia; Breast Clinic, Royal Perth Hospital, Perth, WA 6000, Australia; Department of Surgery, Melbourne Medical School, The University of Melbourne, Parkville, VIC 3010, Australia
- Lixin Chin: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Brendan F. Kennedy: BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia; Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia; Australian Research Council Centre for Personalised Therapeutics Technologies, Perth, WA 6000, Australia
|
26
|
Leng B, Leng M, Ge M, Dong W. Knowledge distillation-based deep learning classification network for peripheral blood leukocytes. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103590] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
27
|
Anita Davamani K, Rene Robin C, Doreen Robin D, Jani Anbarasi L. Adaptive blood cell segmentation and hybrid Learning-based blood cell classification: A Meta-heuristic-based model. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103570] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
28
|
Feeding Material Identification for a Crusher Based on Deep Learning for Status Monitoring and Fault Diagnosis. MINERALS 2022. [DOI: 10.3390/min12030380] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
In large coal preparation plants with a capacity of 30 million tons/year, the belt speed can reach 7 m/s and the thickness of the material layer can reach 500 mm. In such high-throughput and complex environments, harmful feeding materials such as iron and gangue are not easily detected, so fault diagnosis in the crushers lags behind. It is therefore necessary to extract the equipment operation signals from the noisy production environment and identify the feeding materials. Currently, there is no systematic research on signal processing and image classification of crusher feeding materials, while the convolutional neural network (CNN) is outstanding in computer vision. In this paper, the sound and vibration signals of the feeding materials are denoised by spectral subtraction and transformed into feature images by the continuous wavelet transform. An image classification model based on a CNN is then built for these feature images to study its classification mechanism and performance. The results show that the model's classification accuracy is 84.0%, 93.5%, and 80.1% for coal–iron–wood, coal–iron, and coal–wood classification, respectively. This classification performance for coal, iron, and wood satisfies the practical demand to remove harmful feeding materials, providing core technical support for establishing an operating status monitoring and fault diagnosis system for crushing equipment.
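A rough sketch of the signal-to-image step, turning a 1-D vibration signal into a time-frequency feature image with a continuous wavelet transform, is shown below (PyWavelets assumed; the sampling rate, wavelet, and scales are illustrative, not taken from the paper).

```python
# Hypothetical CWT feature-image sketch for a 1-D vibration signal.
import numpy as np
import pywt

fs = 8000                                   # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)  # toy signal

scales = np.arange(1, 129)                  # 128 scales -> 128-row image
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# |coefficients| form a 2-D array that can be resized and fed to a CNN as an image.
feature_image = np.abs(coeffs)
print(feature_image.shape)                  # (128, 8000)
```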
|
29
|
Synthesis of Microscopic Cell Images Obtained from Bone Marrow Aspirate Smears through Generative Adversarial Networks. BIOLOGY 2022; 11:biology11020276. [PMID: 35205142 PMCID: PMC8869175 DOI: 10.3390/biology11020276] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 01/26/2022] [Accepted: 02/01/2022] [Indexed: 02/07/2023]
Abstract
Simple Summary: This paper proposes a hybrid generative adversarial network model, WGAN-GP-AC, to generate synthetic microscopic cell images. We generate synthetic data for the cell types with fewer images in order to obtain a balanced dataset, which would enhance the classification accuracy of each cell type and support the easy and quick diagnosis that is critical for leukemia patients. In this work, we combine images from three datasets to form a single concrete dataset with variations of multiple microscopic cell images. We provide experimental results that demonstrate the correlation between the original and the synthetically generated data, and we report classification results showing that the generated synthetic data can be used in real-life experiments and for the advancement of the medical domain.
Abstract: Every year approximately 1.24 million people are diagnosed with blood cancer. While the rate increases each year, data for each kind of blood cancer remain scarce. It is essential to produce enough data for each blood cell type obtained from bone marrow aspirate smears to diagnose rare types of cancer; generating such data would enable easy and quick diagnosis, which is critical in cancer care. Generative adversarial networks (GANs) are the latest emerging framework for generating synthetic images and time-series data. This paper takes microscopic cell images, preprocesses them, and uses a hybrid GAN architecture to generate synthetic images of the cell types containing fewer data. We prepared a single dataset with expert intervention by combining images from three different sources; the final dataset consists of 12 cell types and 33,177 microscopic cell images. We use the discriminator architecture of the auxiliary classifier GAN (AC-GAN) and combine it with the Wasserstein GAN with gradient penalty (WGAN-GP), naming the model WGAN-GP-AC. The discriminator in our proposed model both distinguishes real from generated images and assigns every image a cell type. We provide experimental results demonstrating that the proposed model outperforms existing individual and hybrid GAN models in generating microscopic cell images. Using the generated synthetic data with classification models, the classification rate increases significantly: classification models achieved 0.95 precision and 0.96 recall on the synthetic data, higher than on the original, augmented, or combined datasets.
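For reference, the gradient-penalty term that WGAN-GP-style models build on can be sketched as below; the shapes, the penalty coefficient, and the toy critic are illustrative and do not reproduce the authors' architecture.

```python
# Hypothetical WGAN-GP gradient-penalty sketch.
import torch

def gradient_penalty(critic, real, fake, device="cpu", lambda_gp=10.0):
    batch = real.size(0)
    # Random interpolation between real and generated images.
    eps = torch.rand(batch, 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)

    critic_out = critic(interp)
    grads = torch.autograd.grad(
        outputs=critic_out, inputs=interp,
        grad_outputs=torch.ones_like(critic_out),
        create_graph=True, retain_graph=True,
    )[0]
    grads = grads.view(batch, -1)
    # Penalize deviation of the gradient norm from 1 (the WGAN-GP constraint).
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

if __name__ == "__main__":
    critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
    real = torch.randn(8, 3, 64, 64)
    fake = torch.randn(8, 3, 64, 64)
    print(gradient_penalty(critic, real, fake).item())
```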
|
30
|
Bao P, Chen Z, Wang J, Dai D. Multiple agents’ spatiotemporal data generation based on recurrent regression dual discriminator GAN. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.10.048] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
31
|
|
32
|
Yuan J, Ran X, Liu K, Yao C, Yao Y, Wu H, Liu Q. Machine learning applications on neuroimaging for diagnosis and prognosis of epilepsy: A review. J Neurosci Methods 2021; 368:109441. [PMID: 34942271 DOI: 10.1016/j.jneumeth.2021.109441] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 10/23/2021] [Accepted: 12/11/2021] [Indexed: 02/07/2023]
Abstract
Machine learning is playing an increasingly important role in medical image analysis, spawning new advances in the clinical application of neuroimaging. Previous reviews on machine learning and epilepsy have mainly focused on electrophysiological signals such as electroencephalography (EEG) and stereo-electroencephalography (SEEG), while neglecting the potential of neuroimaging in epilepsy research. Neuroimaging has important advantages in delineating the epileptic region, which is essential in presurgical evaluation and post-surgical assessment, whereas it is difficult for EEG to locate the epileptic lesion accurately in the brain. In this review, we emphasize the interaction between neuroimaging and machine learning in the context of epilepsy diagnosis and prognosis. We start with an overview of epilepsy and the typical neuroimaging modalities used in epilepsy clinics: MRI, DWI, fMRI, and PET. We then elaborate two approaches to applying machine learning methods to neuroimaging data: (i) the conventional machine learning approach combining manual feature engineering and classifiers, and (ii) the deep learning approach, such as convolutional neural networks and autoencoders. Subsequently, the application of machine learning to epilepsy neuroimaging, including segmentation, localization, and lateralization tasks, as well as tasks directly related to diagnosis and prognosis, is examined in detail. Finally, we discuss the current achievements, challenges, and potential future directions in this field, hoping to pave the way for computer-aided diagnosis and prognosis of epilepsy.
Affiliation(s)
- Jie Yuan: Shenzhen Key Laboratory of Smart Healthcare Engineering, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, PR China
- Xuming Ran: Shenzhen Key Laboratory of Smart Healthcare Engineering, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, PR China
- Keyin Liu: Shenzhen Key Laboratory of Smart Healthcare Engineering, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, PR China
- Chen Yao: Shenzhen Second People's Hospital, Shenzhen 518035, PR China
- Yi Yao: Shenzhen Children's Hospital, Shenzhen 518017, PR China
- Haiyan Wu: Centre for Cognitive and Brain Sciences and Department of Psychology, University of Macau, Taipa, Macau
- Quanying Liu: Shenzhen Key Laboratory of Smart Healthcare Engineering, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, PR China
|
33
|
Saleem S, Amin J, Sharif M, Anjum MA, Iqbal M, Wang SH. A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00473-z] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
White blood cells (WBCs) are the part of the immune system that fights against germs. Leukemia, the most common blood cancer, may lead to death; it occurs when large numbers of immature WBCs produced in the bone marrow destroy healthy cells. To reduce the severity of this disease, it is necessary to recognize the shapes of immature cells at an early stage, which ultimately reduces the mortality rate of patients. Recently, various deep-learning (DL)-based segmentation and classification methods have been presented, but they still have limitations. This research proposes a modified DL approach for the accurate segmentation of leukocytes and their classification. The proposed technique includes two core steps: preprocessing-based classification and segmentation. In preprocessing, synthetic images are generated using a generative adversarial network (GAN) and normalized by color transformation. Optimal deep features are extracted from each blood smear image using the pretrained deep models DarkNet-53 and ShuffleNet. The more informative features are selected by principal component analysis (PCA) and fused serially for classification. Morphological operations based on color thresholding, combined with a deep semantic method, are used for leukemia segmentation of the classified cells. The classification accuracy achieved on the ALL-IDB and LISC datasets is 100% and 99.70%, respectively, for the classification of leukocytes (blast, no blast, basophils, neutrophils, eosinophils, lymphocytes, and monocytes), while semantic segmentation achieves 99.10% average and 98.60% global accuracy. The proposed method achieves outstanding outcomes compared with the latest existing research works.
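A rough sketch of two-backbone deep-feature extraction with PCA selection and serial fusion, in the spirit of this pipeline, is given below. DarkNet-53 is not shipped with torchvision, so ResNet-50 stands in for it here; the feature sizes and PCA component counts are illustrative.

```python
# Hypothetical deep-feature fusion sketch (not the authors' implementation).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.decomposition import PCA

shufflenet = models.shufflenet_v2_x1_0(weights=None)
shufflenet.fc = nn.Identity()        # exposes 1024-d features
backbone2 = models.resnet50(weights=None)
backbone2.fc = nn.Identity()         # exposes 2048-d features
shufflenet.eval()
backbone2.eval()

@torch.no_grad()
def extract_features(x):
    return shufflenet(x).numpy(), backbone2(x).numpy()

if __name__ == "__main__":
    x = torch.randn(32, 3, 224, 224)             # a batch of blood smear crops
    f1, f2 = extract_features(x)
    # Keep the most informative components of each feature set, then fuse serially.
    p1 = PCA(n_components=16).fit_transform(f1)
    p2 = PCA(n_components=16).fit_transform(f2)
    fused = np.concatenate([p1, p2], axis=1)     # input to the final classifier
    print(fused.shape)                           # (32, 32)
```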
|
34
|
Zhao C, Shuai R, Ma L, Liu W, Wu M. Segmentation of dermoscopy images based on deformable 3D convolution and ResU-NeXt +. Med Biol Eng Comput 2021; 59:1815-1832. [PMID: 34304370 DOI: 10.1007/s11517-021-02397-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 06/16/2021] [Indexed: 11/25/2022]
Abstract
Melanoma is one of the most dangerous skin cancers. Current melanoma segmentation is mainly based on fully convolutional networks (FCNs) and U-Net. However, these networks are prone to parameter redundancy, and their gradients tend to vanish during backpropagation as the network gets deeper, which reduces the Jaccard index of the skin lesion segmentation model. To address these problems and improve the survival rate of melanoma patients, an improved skin lesion segmentation model based on deformable 3D convolution and ResU-NeXt++ (D3DC-ResU-NeXt++) is proposed in this paper. The new modules in D3DC-ResU-NeXt++ can replace ordinary modules in existing 2D convolutional neural networks (CNNs) and can be trained efficiently through standard backpropagation with high segmentation accuracy. In particular, we introduce a new data preprocessing method with dilation, cropping, resizing, and hair removal (DCRH), which improves the Jaccard index of skin lesion segmentation. Because rectified Adam (RAdam) does not easily fall into a local optimum and converges quickly, we also adopt RAdam as the training optimizer. Experiments show that our model performs excellently on the ISIC2018 Task 1 dataset, achieving a Jaccard index of 86.84%. The proposed method improves the Jaccard index of skin lesion segmentation and can assist dermatologists in identifying lesion types and the boundary between lesions and normal skin, thereby improving the survival rate of skin cancer patients. Overview of the proposed model: D3DC-ResU-NeXt++, with its strong spatial geometry processing capability, segments the skin lesion images; DCRH and transfer learning are used to preprocess the dataset and D3DC-ResU-NeXt++, respectively, highlighting the difference between the lesion area and normal skin and enhancing the segmentation efficiency and robustness of the network; RAdam speeds up convergence and improves segmentation efficiency.
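To illustrate the deformable-convolution idea, the sketch below builds a 2-D deformable block with torchvision; torchvision only ships a 2-D implementation, so this is an analogy to, not a reproduction of, the paper's deformable 3-D module.

```python
# Hypothetical 2-D deformable convolution block.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # A plain conv predicts per-location sampling offsets (2 per kernel tap).
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_conv(x)
        # Sampling positions shift by the learned offsets, letting the kernel
        # adapt to lesion geometry instead of a fixed grid.
        return self.deform_conv(x, offsets)

if __name__ == "__main__":
    block = DeformBlock(16, 32)
    x = torch.randn(1, 16, 64, 64)
    print(block(x).shape)   # torch.Size([1, 32, 64, 64])
```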
Affiliation(s)
- Chen Zhao: College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816, China
- Renjun Shuai: College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816, China
- Li Ma: Nanjing Health Information Center, Nanjing, 210003, China
- Wenjia Liu: Changzhou No. 2 People's Hospital affiliated with Nanjing Medical University, Changzhou, 213003, China
- Menglin Wu: College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816, China
|
35
|
Multiclassification of Endoscopic Colonoscopy Images Based on Deep Transfer Learning. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:2485934. [PMID: 34306173 PMCID: PMC8272675 DOI: 10.1155/2021/2485934] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 05/27/2021] [Accepted: 06/09/2021] [Indexed: 11/17/2022]
Abstract
With the continuous improvement of living standards, dietary habits are constantly changing, which brings various bowel problems; among them, the morbidity and mortality rates of colorectal cancer have maintained a significant upward trend. In recent years, the application of deep learning in the medical field has become increasingly widespread and deep. In colonoscopy, artificial intelligence based on deep learning is mainly used to assist in the detection of colorectal polyps and the classification of colorectal lesions, but classification can lead to confusion between polyps and other diseases. In order to accurately diagnose various intestinal diseases and improve the classification accuracy of polyps, this work proposes a deep learning-based multiclassification method for medical colonoscopy images that distinguishes four conditions: polyps, inflammation, tumor, and normal. In view of the relatively small dataset, a network first trained by transfer learning on ImageNet was used as the pretrained model, so that prior knowledge learned from the source domain could be applied to the intestinal classification task. The model was then fine-tuned on our datasets to make it more suitable for intestinal classification and finally applied to the multiclassification of medical colonoscopy images. Experimental results show that the proposed method can significantly improve the recognition rate of polyps while maintaining the classification accuracy of the other categories, thereby assisting doctors in diagnosis and surgical resection.
|
36
|
Abstract
The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stage of individual cells. However, most of these methods require cells to be segmented and their features to be extracted, and important information may be lost during feature extraction, resulting in lower classification accuracy. We therefore used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation, and a residual network (ResNet), one of the most widely used deep learning classification networks, for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%, an increase of 4.48% over the 79.40% accuracy of previous experiments. A second dataset was used to verify the model, and accuracy increased by 12.52% compared with previous results. These results show that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images, and that our method could help overcome the low classification accuracy in biomedical images caused by insufficient and imbalanced original images.
|
37
|
Identifying Leaf Phenology of Deciduous Broadleaf Forests from PhenoCam Images Using a Convolutional Neural Network Regression Method. REMOTE SENSING 2021. [DOI: 10.3390/rs13122331] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Vegetation phenology plays a key role in influencing ecosystem processes and biosphere-atmosphere feedbacks. Digital cameras such as PhenoCam that monitor vegetation canopies in near real-time provide continuous images that record phenological and environmental changes. There is a need to develop methods for automated and effective detection of vegetation dynamics from PhenoCam images. Here we developed a method to predict the leaf phenology of deciduous broadleaf forests from individual PhenoCam images using deep learning approaches. We tested four convolutional neural network regression (CNNR) networks on their ability to predict vegetation growing dates from PhenoCam images at 56 sites in North America. In the one-site experiment, the predicted phenological dates after the leaf-out events agree well with the observed data, with a coefficient of determination (R2) of nearly 0.999, a root mean square error (RMSE) of up to 3.7 days, and a mean absolute error (MAE) of up to 2.1 days. The method achieved lower accuracy in the all-site experiment than in the one-site experiment, with an R2 of 0.843, an RMSE of 25.2 days, and an MAE of 9.3 days. Model accuracy increased when the deep networks used region-of-interest images rather than entire images as inputs. Compared with existing methods that rely on time series of PhenoCam images to study leaf phenology, the deep learning method is a feasible solution for identifying the leaf phenology of deciduous broadleaf forests from individual PhenoCam images.
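A rough sketch of a convolutional regression setup of this kind, predicting a continuous phenology value from a single image, is shown below; the backbone, loss, and targets are illustrative choices and do not correspond to the paper's CNNR variants.

```python
# Hypothetical CNN regression sketch for image-to-date prediction.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)    # one continuous output per image

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)                       # PhenoCam-style RGB crops
targets = torch.tensor([[12.0], [35.0], [60.0], [90.0]])   # toy "growing date" targets

loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(loss.item())
```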
|
38
|
Gao W, Li M, Wu R, Du W, Zhang S, Yin S, Chen Z, Huang H. The design and application of an automated microscope developed based on deep learning for fungal detection in dermatology. Mycoses 2020; 64:245-251. [PMID: 33174310 DOI: 10.1111/myc.13209] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 11/02/2020] [Accepted: 11/05/2020] [Indexed: 11/29/2022]
Abstract
BACKGROUND Light microscopy to study the infection of fungi in skin specimens is time-consuming and requires automation. OBJECTIVE We aimed to design and explore the application of an automated microscope for fungal detection in skin specimens. METHODS An automated microscope was designed, and a deep learning model was selected. Skin, nail and hair samples were collected. The sensitivity and the specificity of the automated microscope for fungal detection were calculated by taking the results of human inspectors as the gold standard. RESULTS An automated microscope was built, and an image processing model based on the ResNet-50 was trained. A total of 292 samples were collected including 236 skin samples, 50 nail samples and six hair samples. The sensitivities of the automated microscope for fungal detection in skin, nails and hair were 99.5%, 95.2% and 60%, respectively, and the specificities were 91.4%, 100% and 100%, respectively. CONCLUSION The automated microscope we developed is as skilful as human inspectors for fungal detection in skin and nail samples; however, its performance in hair samples needs to be improved.
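For reference, sensitivity and specificity against a human-inspector gold standard can be computed as in the sketch below (scikit-learn assumed; the labels are toy values, not the study's data).

```python
# Hypothetical sensitivity/specificity calculation sketch.
from sklearn.metrics import confusion_matrix

gold = [1, 1, 1, 0, 0, 0, 1, 0]        # 1 = fungus present according to inspectors
predicted = [1, 1, 0, 0, 0, 1, 1, 0]   # automated microscope output, same specimens

tn, fp, fn, tp = confusion_matrix(gold, predicted).ravel()
sensitivity = tp / (tp + fn)           # true-positive rate
specificity = tn / (tn + fp)           # true-negative rate
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```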
Affiliation(s)
- Wenchao Gao: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Meirong Li: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Rong Wu: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Weian Du: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Shanlin Zhang: Guangzhou Wangsheng Intelligent Technology Co., Ltd., Guangzhou, China
- Songchao Yin: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Zhirui Chen: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Huaiqiu Huang: The Department of Dermatology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
|