1
Vaickus LJ, Kerr DA, Velez Torres JM, Levy J. Artificial Intelligence Applications in Cytopathology: Current State of the Art. Surg Pathol Clin 2024; 17:521-531. [PMID: 39129146] [DOI: 10.1016/j.path.2024.04.011]
Abstract
The practice of cytopathology has been significantly refined in recent years, largely through the creation of consensus rule sets for the diagnosis of particular specimens (Bethesda, Milan, Paris, and so forth). In general, these diagnostic systems have focused on reducing intraobserver variance, removing nebulous/redundant categories, reducing the use of "atypical" diagnoses, and promoting the use of quantitative scoring systems while providing a uniform language to communicate these results. Computational pathology is a natural offshoot of this process in that it promises 100% reproducible diagnoses rendered by quantitative processes that are free from many of the biases of human practitioners.
Affiliation(s)
- Louis J Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA.
- Darcy A Kerr
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA. https://twitter.com/darcykerrMD
- Jaylou M Velez Torres
- Department of Pathology and Laboratory Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Joshua Levy
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Cedars-Sinai Medical Center, 8700 Beverly Boulevard, Los Angeles, CA 90048, USA
2
Luo Y, Xu Y, Wang C, Li Q, Fu C, Jiang H. ResNeXt-CC: a novel network based on cross-layer deep-feature fusion for white blood cell classification. Sci Rep 2024; 14:18439. [PMID: 39117714] [PMCID: PMC11310349] [DOI: 10.1038/s41598-024-69076-1]
Abstract
Accurate diagnosis of white blood cells from cytopathological images is a crucial step in evaluating leukaemia. In recent years, image classification methods based on fully convolutional networks have drawn extensive attention and achieved competitive performance in medical image classification. In this paper, we propose a white blood cell classification network for cytopathological images called ResNeXt-CC. First, we transform cytopathological images from the RGB color space to the HSV color space so as to precisely extract the texture features, color changes, and other details of white blood cells. Second, since cell classification primarily relies on distinguishing local characteristics, we design a cross-layer deep-feature fusion module to enhance the extraction of discriminative information. Third, an efficient attention mechanism based on the ECANet module is used to strengthen the extraction of cell details. Finally, we combine a modified softmax loss function with a center loss function to train the network, thereby effectively addressing class imbalance and improving network performance. Experimental results on the C-NMC 2019 dataset show that the proposed method outperforms existing classification methods, including ResNet-50, Inception-V3, DenseNet121, VGG16, CrossViT, Token-to-Token ViT, Deep ViT, and Simple ViT, by 5.5-20.43% in accuracy, 3.6-23.56% in F1-score, 3.5-25.71% in AUROC, and 8.1-36.98% in specificity, respectively.
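The RGB-to-HSV preprocessing step described above can be illustrated with a minimal sketch, using Python's standard colorsys module; this illustrates only the color-space transform, not the authors' implementation:

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert (r, g, b) pixels in [0, 255] to (h, s, v) tuples in [0, 1]."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for (r, g, b) in pixels]

# A pure red pixel maps to hue 0 with full saturation and value.
print(rgb_image_to_hsv([(255, 0, 0)]))  # [(0.0, 1.0, 1.0)]
```

In HSV, chromatic content (hue, saturation) is decoupled from brightness (value), which is why stain color changes become easier to isolate than in RGB.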
Affiliation(s)
- Yang Luo
- School of Artificial Intelligence, Anshan Normal University, Anshan, 114007, Liaoning, China
- Ying Xu
- Anshan Central Hospital, Anshan, 114000, Liaoning, China
- Changbin Wang
- School of Artificial Intelligence, Anshan Normal University, Anshan, 114007, Liaoning, China
- Qiuju Li
- School of Artificial Intelligence, Anshan Normal University, Anshan, 114007, Liaoning, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, Liaoning, China
- Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang, 110819, Liaoning, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, Liaoning, China
- Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
3
Ye Z, Zhao Y, Chen M, Lu Q, Wang J, Cui X, Wang H, Xue P, Jiang Y. Distribution and diagnostic value of single and multiple high-risk HPV infections in detection of cervical intraepithelial neoplasia: A retrospective multicenter study in China. J Med Virol 2024; 96:e29835. [PMID: 39087721] [DOI: 10.1002/jmv.29835]
Abstract
The risk associated with single and multiple human papillomavirus (HPV) infections in cervical intraepithelial neoplasia (CIN) remains uncertain. This study explores the distribution and diagnostic significance of the number of high-risk HPV (hr-HPV) infections in detecting CIN, addressing a crucial gap in our understanding. This comprehensive multicenter, retrospective study analyzed the distribution of single and multiple hr-HPV infections, the risk of CIN2+, the relationship with CIN, and the impact on the diagnostic performance of colposcopy, using demographic information, clinical histories, and tissue samples. Single infections predominantly involved HPV16, 52, 58, 18, and 51, with HPV16 and 33 identified as the primary causes of CIN2+. Dual infections mainly involved combinations such as HPV16/18, HPV16/52, and HPV16/58, with HPV16/33 identified as the primary cause of CIN2+. The number of hr-HPV infections shows a dose-response relationship with the risk of CIN (p for trend <0.001). Compared to single hr-HPV infection, multiple hr-HPV infections were associated with increased risks of CIN1 (1.44, 95% confidence interval [CI]: 1.20-1.72), CIN2 (1.70, 95% CI: 1.38-2.09), and CIN3 (1.08, 95% CI: 0.86-1.37). In detecting high-grade squamous intraepithelial lesion or worse (HSIL+), the colposcopy-based specificity for single hr-HPV (93.4, 95% CI: 92.4-94.4) and multiple hr-HPV (92.9, 95% CI: 90.8-94.6) infections was significantly lower than for hr-HPV-negative cases (97.9, 95% CI: 97.0-98.5), whereas the sensitivity for single hr-HPV (73.5, 95% CI: 70.8-76.0) and multiple hr-HPV (71.8, 95% CI: 67.0-76.2) infections was higher than for hr-HPV-negative cases (62.0, 95% CI: 51.0-71.9). We found that multiple hr-HPV infections increase the risk of developing CIN lesions compared to a single infection. Colposcopy for HSIL+ detection showed high sensitivity and low specificity for hr-HPV infection. Apart from HPV16, this study also found that HPV33 is a major pathogenic genotype.
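The sensitivity and specificity figures above follow the standard 2x2 confusion-table definitions, which can be sketched as follows. The counts below are hypothetical, chosen only so the point estimates echo the single-hr-HPV figures quoted; they are not the study's data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP), in percent."""
    return 100.0 * tp / (tp + fn), 100.0 * tn / (tn + fp)

# Hypothetical counts (NOT the study's data): 200 HSIL+ cases, 1000 controls.
sens, spec = sens_spec(tp=147, fn=53, tn=934, fp=66)
print(round(sens, 1), round(spec, 1))  # 73.5 93.4
```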
Affiliation(s)
- Zichen Ye
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuankai Zhao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mingyang Chen
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qu Lu
- School of Health Policy and Management, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jiahui Wang
- School of Health Policy and Management, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiaoli Cui
- Department of Gynecologic Oncology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, Liaoning Province, China
- Huike Wang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- School of Health Policy and Management, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
4
Shi J, Shu T, Wu K, Jiang Z, Zheng L, Wang W, Wu H, Zheng Y. Masked hypergraph learning for weakly supervised histopathology whole slide image classification. Comput Methods Programs Biomed 2024; 253:108237. [PMID: 38820715] [DOI: 10.1016/j.cmpb.2024.108237]
Abstract
BACKGROUND AND OBJECTIVES Graph neural networks (GNNs) have been extensively used in histopathology whole slide image (WSI) analysis due to their efficiency and flexibility in modelling relationships among entities. However, most existing GNN-based WSI analysis methods consider only the pairwise correlation of patches from a single perspective (e.g. spatial affinity or embedding similarity), ignoring the intrinsic non-pairwise relationships present in gigapixel WSIs, which are likely to contribute to feature learning and downstream tasks. The objective of this study is therefore to explore the non-pairwise relationships in histopathology WSIs and exploit them to guide the learning of slide-level representations for better classification performance. METHODS In this paper, we propose a novel Masked HyperGraph Learning (MaskHGL) framework for weakly supervised histopathology WSI classification. Unlike most GNN-based WSI classification methods, MaskHGL exploits the non-pairwise correlations between patches with a hypergraph, with global message passing conducted by hypergraph convolution. Concretely, multi-perspective hypergraphs are first built for each WSI, and hypergraph attention is then introduced into the joint hypergraph to propagate the non-pairwise relationships and thus yield more discriminative node representations. More importantly, a masked hypergraph reconstruction module is devised to guide the hypergraph learning, yielding stronger robustness and generalization than hypergraph modelling alone. Additionally, a self-attention-based node aggregator is applied to explore the global correlation of patches in a WSI and produce the slide-level representation for classification. RESULTS The proposed method was evaluated on two public TCGA benchmark datasets and one in-house dataset. On the public TCGA-LUNG (1494 WSIs) and TCGA-EGFR (696 WSIs) test sets, the areas under the receiver operating characteristic (ROC) curve (AUCs) were 0.9752±0.0024 and 0.7421±0.0380, respectively. On the USTC-EGFR (754 WSIs) dataset, MaskHGL achieved significantly better performance, with an AUC of 0.8745±0.0100, surpassing the second-best state-of-the-art method, SlideGraph+, by 2.64%. CONCLUSIONS MaskHGL yields a clear improvement in multiple downstream WSI classification tasks by considering the intrinsic non-pairwise relationships within a WSI. In particular, the masked hypergraph reconstruction module alleviates data scarcity and greatly enhances the robustness and classification ability of MaskHGL. Notably, it shows great potential for cancer subtyping and fine-grained lung cancer gene mutation prediction from hematoxylin and eosin (H&E) stained WSIs.
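The node-to-hyperedge-to-node aggregation underlying hypergraph convolution can be sketched in miniature. This is a simplified, unnormalized illustration of the general mechanism, not the MaskHGL implementation:

```python
def hypergraph_conv(X, H):
    """One unnormalized hypergraph message-passing step.

    X: node features as nested lists, shape (n_nodes, d).
    H: incidence matrix, H[v][e] = 1 if node v belongs to hyperedge e.
    Each hyperedge averages its member nodes' features, then each node
    averages the features of the hyperedges it belongs to.
    """
    n, d = len(X), len(X[0])
    m = len(H[0])
    # Hyperedge features: mean of member node features.
    E = []
    for e in range(m):
        members = [v for v in range(n) if H[v][e]]
        E.append([sum(X[v][k] for v in members) / len(members) for k in range(d)])
    # Node update: mean of incident hyperedge features.
    out = []
    for v in range(n):
        edges = [e for e in range(m) if H[v][e]]
        out.append([sum(E[e][k] for e in edges) / len(edges) for k in range(d)])
    return out

# Three nodes joined by one hyperedge: every node receives the group mean,
# a non-pairwise interaction a plain edge cannot express.
X = [[1.0], [2.0], [3.0]]
H = [[1], [1], [1]]
print(hypergraph_conv(X, H))  # [[2.0], [2.0], [2.0]]
```

The single hyperedge couples all three nodes at once, which is the non-pairwise behaviour the abstract contrasts with ordinary pairwise graph edges.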
Affiliation(s)
- Jun Shi
- School of Software, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Tong Shu
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Kun Wu
- Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China
- Zhiguo Jiang
- Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China; Tianmushan Laboratory, Hangzhou, 311115, Zhejiang Province, China
- Liping Zheng
- School of Software, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Wei Wang
- Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Haibo Wu
- Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Yushan Zheng
- School of Engineering Medicine, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
5
Li G, Li X, Wang Y, Gong S, Yang Y, Xu C. Detection of Cervical Lesion Cell/Clumps Based on Adaptive Feature Extraction. Bioengineering (Basel) 2024; 11:686. [PMID: 39061768] [PMCID: PMC11274185] [DOI: 10.3390/bioengineering11070686]
Abstract
Automated detection of cervical lesion cells/clumps in cervical cytological images is essential for computer-aided diagnosis. In this task, the shape and size of lesion cells/clumps vary considerably, which reduces detection performance. To address this issue, we propose an adaptive feature extraction network for cervical lesion cell/clump detection, called AFE-Net. Specifically, we propose an adaptive module to acquire the features of cervical lesion cells/clumps and introduce a global bias mechanism to acquire global average information, aiming to combine the adaptive features with the global information to improve the representation of target features in the model and thus enhance detection performance. Furthermore, we analyze the effect of popular bounding-box losses on the model and propose a new bounding-box loss, tendency-IoU (TIoU). Finally, the network achieves a mean Average Precision (mAP) of 64.8% on the CDetector dataset with 30.7 million parameters. Compared with YOLOv7 (62.6% mAP, 34.8M parameters), the model improves mAP by 2.2 percentage points and reduces the parameter count by 11.8%.
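TIoU itself is the paper's contribution, but every IoU-family bounding-box loss starts from plain intersection-over-union between axis-aligned boxes, which can be sketched as follows (this computes standard IoU only, not the proposed TIoU):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two unit boxes overlapping in a 0.5 x 1 strip: IoU = 0.5 / 1.5.
print(round(iou((0, 0, 1, 1), (0.5, 0, 1.5, 1)), 4))  # 0.3333
```

An IoU-based regression loss is then typically `1 - iou(pred, target)` plus whatever geometric penalty the variant adds.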
Affiliation(s)
- Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xingguang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Yuting Wang
- Department of Gastroenterology, Children’s Hospital of Chongqing Medical University, Chongqing 400014, China
- National Clinical Research Center for Child Health and Disorders, Chongqing 400014, China
- Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing 400014, China
- Chongqing Key Laboratory of Child Neurodevelopment and Cognitive Disorders, Chongqing 400014, China
- Shu Gong
- Department of Gastroenterology, Children’s Hospital of Chongqing Medical University, Chongqing 400014, China
- National Clinical Research Center for Child Health and Disorders, Chongqing 400014, China
- Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing 400014, China
- Chongqing Key Laboratory of Child Neurodevelopment and Cognitive Disorders, Chongqing 400014, China
- Yanting Yang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Chuanyun Xu
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
6
Bera A, Bhattacharjee D, Krejcar O. PND-Net: plant nutrition deficiency and disease classification using graph convolutional network. Sci Rep 2024; 14:15537. [PMID: 38969738] [PMCID: PMC11226607] [DOI: 10.1038/s41598-024-66543-7]
Abstract
Crop yield could be enhanced if various plant nutrition deficiencies and diseases are identified and detected at early stages; hence, continuous health monitoring of plants is crucial for handling plant stress. Deep learning methods have proven their superior performance in the automated detection of plant diseases and nutrition deficiencies from visual symptoms in leaves. This article proposes a new deep learning method for plant nutrition deficiency and disease classification using a graph convolutional network (GCN) added on top of a base convolutional neural network (CNN). A global feature descriptor may fail to capture the vital regions of a diseased leaf, causing inaccurate classification; to address this, regional feature learning is crucial for holistic feature aggregation. In this work, region-based feature summarization at multiple scales is explored using spatial pyramid pooling for discriminative feature representation. Furthermore, a GCN is developed to enable learning of finer details for classifying plant diseases and nutrient insufficiency. The proposed method, called Plant Nutrition Deficiency and Disease Network (PND-Net), has been evaluated on two public datasets for nutrition deficiency and two for disease classification, using four backbone CNNs. The best classification performances of PND-Net with the Xception backbone are: (a) 90.00% on the Banana and 90.54% on the Coffee nutrition-deficiency datasets; and (b) 96.18% on the Potato disease and 84.30% on the PlantDoc datasets. Furthermore, additional experiments have been carried out for generalization, and the proposed method has achieved state-of-the-art performance on two public datasets, namely breast cancer histopathology image classification (BreakHis 40×: 95.50% and BreakHis 100×: 96.79% accuracy) and single cells in Pap smear images for cervical cancer classification (SIPaKMeD: 99.18% accuracy). The proposed method has also been evaluated using five-fold cross-validation and achieved improved performance on these datasets. Clearly, the proposed PND-Net effectively boosts the performance of automated health analysis of various plants in real and intricate field environments, implying PND-Net's aptness for agricultural growth as well as human cancer classification.
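The region-based multi-scale summarization via spatial pyramid pooling can be sketched for a single-channel image. This is a toy illustration of the pooling scheme (max-pooling over an l x l grid per level, concatenated into one fixed-length vector), not the PND-Net code:

```python
def spatial_pyramid_pool(img, levels=(1, 2)):
    """Max-pool a square single-channel image into an l x l grid per level
    and concatenate the results into one fixed-length descriptor.
    Assumes the image side is divisible by every level."""
    n = len(img)
    feats = []
    for l in levels:
        step = n // l
        for by in range(l):
            for bx in range(l):
                feats.append(max(img[y][x]
                                 for y in range(by * step, (by + 1) * step)
                                 for x in range(bx * step, (bx + 1) * step)))
    return feats

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
# Level 1 gives the global max; level 2 gives one max per quadrant.
print(spatial_pyramid_pool(img))  # [16, 6, 8, 14, 16]
```

Because the bin grid scales with the input, the descriptor length (1 + 4 = 5 here) is fixed regardless of image size, which is what makes the pooled features usable by a fixed-size classifier head.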
Affiliation(s)
- Asish Bera
- Department of Computer Science and Information Systems, BITS Pilani, Pilani Campus, Pilani, Rajasthan, 333031, India.
- Debotosh Bhattacharjee
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, 700032, India
- Faculty of Informatics and Management, University of Hradec Kralove, Hradec Kralove, Czech Republic
- Ondrej Krejcar
- Faculty of Informatics and Management, University of Hradec Kralove, Hradec Kralove, Czech Republic
- Skoda Auto University, Na Karmeli 1457, 293 01, Mlada Boleslav, Czech Republic
- Malaysia Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur, Malaysia
7
Zhang F, Geng J, Zhang DG, Gui J, Su R. Prediction of cancer recurrence based on compact graphs of whole slide images. Comput Biol Med 2023; 167:107663. [PMID: 37931526] [DOI: 10.1016/j.compbiomed.2023.107663]
Abstract
Cancer recurrence is one of the primary causes of patient mortality following treatment, indicating increased aggressiveness of cancer cells and difficulty in achieving a cure. A critical step toward improving patient survival is accurately predicting recurrence status and giving appropriate treatment. Whole Slide Images (WSIs) are a common type of image data in digital pathology, containing high-resolution tissue information; moreover, WSIs of primary tumors contain microenvironmental information directly associated with the growth of tumor cells. To effectively utilize this microenvironmental information, we first represent microenvironmental features of histopathological images as compact graphs. We then develop an enhanced lightweight graph neural network called the Adaptive Graph Clustering Network (AGCNet) for predicting cancer recurrence. Experiments were conducted on three cancer datasets from The Cancer Genome Atlas (TCGA), on which AGCNet achieved accuracies of 81.81% on BLCA, 69.66% on PAAD, and 81.96% on STAD. These results indicate that AGCNet is an effective model for predicting cancer recurrence and is expected to be applied in clinical settings.
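Building a compact graph over a WSI typically starts by connecting patch centroids to their nearest neighbours. A minimal sketch of such k-nearest-neighbour graph construction follows; this is a generic illustration, not AGCNet's actual graph-building procedure, which is not detailed in the abstract:

```python
def knn_graph(coords, k=2):
    """Connect each patch centroid to its k nearest neighbours (2-D Euclidean).
    Returns an undirected edge list of (i, j) pairs with i < j."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    edges = set()
    for i, c in enumerate(coords):
        others = sorted((j for j in range(len(coords)) if j != i),
                        key=lambda j: d2(c, coords[j]))
        for j in others[:k]:
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# Four patch centroids on a line: 1-NN edges chain them together.
print(knn_graph([(0, 0), (1, 0), (2, 0), (3, 0)], k=1))  # [(0, 1), (1, 2), (2, 3)]
```

Node features (e.g. a CNN embedding per patch) would then be attached to each vertex before the graph network runs.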
Affiliation(s)
- Fengyun Zhang
- School of Computer Software, College of Intelligence and Computing, Tianjin University, China
- Jie Geng
- Tianjin Chest Hospital, Tianjin University, Tianjin, China
- De-Gan Zhang
- Tianjin Key Lab of Intelligent Computing and Novel Software Technology, Tianjin University of Technology, Tianjin, China
- Jinglong Gui
- School of Computer Software, College of Intelligence and Computing, Tianjin University, China
- Ran Su
- School of Computer Software, College of Intelligence and Computing, Tianjin University, China
8
Wang H, Huang G, Zhao Z, Cheng L, Juncker-Jensen A, Nagy ML, Lu X, Zhang X, Chen DZ. CCF-GNN: A Unified Model Aggregating Appearance, Microenvironment, and Topology for Pathology Image Classification. IEEE Trans Med Imaging 2023; 42:3179-3193. [PMID: 37027573] [DOI: 10.1109/tmi.2023.3249343]
Abstract
Pathology images contain rich information on cell appearance, microenvironment, and topology for cancer analysis and diagnosis. Among these features, topology is becoming increasingly important in analysis for cancer immunotherapy. By analyzing geometric and hierarchically structured cell distribution topology, oncologists can identify densely packed, cancer-relevant cell communities (CCs) for making decisions. Compared to commonly used pixel-level Convolutional Neural Network (CNN) features and cell-instance-level Graph Neural Network (GNN) features, CC topology features are at a higher level of granularity and geometry. However, topological features have not been well exploited by recent deep learning (DL) methods for pathology image classification due to the lack of effective topological descriptors for cell distribution and gathering patterns. In this paper, inspired by clinical practice, we analyze and classify pathology images by comprehensively learning cell appearance, microenvironment, and topology in a fine-to-coarse manner. To describe and exploit topology, we design the Cell Community Forest (CCF), a novel graph that represents the hierarchical formation process of large-sparse CCs from small-dense CCs. Using CCF as a new geometric topological descriptor of tumor cells in pathology images, we propose CCF-GNN, a GNN model that successively aggregates heterogeneous features (e.g., appearance, microenvironment) from the cell-instance level and cell-community level up to the image level for pathology image classification. Extensive cross-validation experiments show that our method significantly outperforms alternative methods on H&E-stained and immunofluorescence images for disease-grading tasks with multiple cancer types. The proposed CCF-GNN establishes a new topological data analysis (TDA)-based method that facilitates integrating multi-level heterogeneous features of point clouds (e.g., of cells) into a unified DL framework.
9
Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. [PMID: 37980945] [DOI: 10.1016/j.bbcan.2023.189026]
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a serious threat to global health, with early identification being crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, AI and ML technologies in gynecologic tumors have been shown to greatly increase the accuracy and efficacy of cancer diagnosis, reduce diagnostic delays, and possibly eliminate the need for needless invasive operations. In conclusion, the review focuses on the role of integrated AI- and ML-based tools and techniques in the early detection and preclusion of various cancer types, and suggests that collaborative coordination among research clinicians, data scientists, and regulatory authorities is needed to realize the full potential of AI and ML in gynecologic cancer care.
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
- Atish Mohanty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sravani Ramisetty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Prakash Kulkarni
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
- Ravi Salgia
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
10
Aftab R, Qiang Y, Zhao J, Urrehman Z, Zhao Z. Graph Neural Network for representation learning of lung cancer. BMC Cancer 2023; 23:1037. [PMID: 37884929] [PMCID: PMC10601264] [DOI: 10.1186/s12885-023-11516-8]
Abstract
The emergence of image-based systems to improve diagnostic pathology precision, in which the goal is to label sets or bags of instances, hinges greatly on Multiple Instance Learning (MIL) for Whole Slide Images (WSIs). Contemporary works have shown excellent neural network performance in MIL settings. Here, we examine a graph-based model to facilitate end-to-end learning and sample suitable patches using a tile-based approach. We propose MIL-GNN, which employs a graph-based variational auto-encoder with a Gaussian mixture model to discover relations between sampled patches and aggregate patch details into a single vector representation. We demonstrate the efficacy of our technique on the classical MIL dataset MUSK and on distinguishing two lung cancer sub-types, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). We achieved 97.42% accuracy on the MUSK dataset and a 94.3% AUC on the classification of lung cancer sub-types using the learned features.
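The bag-level aggregation at the heart of MIL can be sketched with softmax-weighted attention pooling. This is a generic illustration of MIL aggregation, not the MIL-GNN aggregator, which uses a graph-based variational auto-encoder:

```python
import math

def attention_mil_pool(patches, scores):
    """Aggregate patch embeddings into one bag embedding using
    softmax-normalized attention scores (one scalar per patch)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    d = len(patches[0])
    return [sum(w * p[k] for w, p in zip(weights, patches)) for k in range(d)]

# With equal scores, attention pooling reduces to plain mean pooling.
bag = [[1.0, 0.0], [3.0, 2.0]]
print(attention_mil_pool(bag, [0.0, 0.0]))  # [2.0, 1.0]
```

A bag-level classifier then consumes the single pooled vector, so only the bag label (e.g. the slide diagnosis) is needed for training, which is the weak supervision MIL exploits.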
Affiliation(s)
- Rukhma Aftab
- College of Information and Computer, Taiyuan University of Technology, No. 79 Yingze West Street, Taiyuan, 030024 China
- Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, No. 79 Yingze West Street, Taiyuan, 030024, China
- Juanjuan Zhao
- College of Information and Computer, Taiyuan University of Technology, No. 79 Yingze West Street, Taiyuan, 030024, China
- Zia Urrehman
- College of Information and Computer, Taiyuan University of Technology, No. 79 Yingze West Street, Taiyuan, 030024, China
- Zijuan Zhao
- College of Information and Computer, Taiyuan University of Technology, No. 79 Yingze West Street, Taiyuan, 030024, China
11
Khan A, Han S, Ilyas N, Lee YM, Lee B. CervixFormer: A multi-scale Swin transformer-based cervical Pap-smear WSI classification framework. Comput Methods Programs Biomed 2023; 240:107718. [PMID: 37451230] [DOI: 10.1016/j.cmpb.2023.107718]
Abstract
BACKGROUND AND OBJECTIVES Cervical cancer affects around 0.5 million women per year, resulting in over 0.3 million fatalities. Therefore, repetitive screening for cervical cancer is of utmost importance. Computer-assisted diagnosis is key for scaling up cervical cancer screening. Current recognition algorithms, however, perform poorly on the whole-slide image (WSI) analysis, fail to generalize for different staining methods and on uneven distribution for subtype imaging, and provide sub-optimal clinical-level interpretations. Herein, we developed CervixFormer-an end-to-end, multi-scale swin transformer-based adversarial ensemble learning framework to assess pre-cancerous and cancer-specific cervical malignant lesions on WSIs. METHODS The proposed framework consists of (1) a self-attention generative adversarial network (SAGAN) for generating synthetic images during patch-level training to address the class imbalanced problems; (2) a multi-scale transformer-based ensemble learning method for cell identification at various stages, including atypical squamous cells (ASC) and atypical squamous cells of undetermined significance (ASCUS), which have not been demonstrated in previous studies; and (3) a fusion model for concatenating ensemble-based results and producing final outcomes. RESULTS In the evaluation, the proposed method is first evaluated on a private dataset of 717 annotated samples from six classes, obtaining a high recall and precision of 0.940 and 0.934, respectively, in roughly 1.2 minutes. To further examine the generalizability of CervixFormer, we evaluated it on four independent, publicly available datasets, namely, the CRIC cervix, Mendeley LBC, SIPaKMeD Pap Smear, and Cervix93 Extended Depth of Field image datasets. CervixFormer obtained a fairly better performance on two-, three-, four-, and six-class classification of smear- and cell-level datasets. 
For clinical interpretation, we used GradCAM to visualize a coarse localization map highlighting important regions in the WSI. Notably, CervixFormer extracts features mostly from the cell nucleus and partially from the cytoplasm. CONCLUSIONS In comparison with existing state-of-the-art benchmark methods, CervixFormer performs better in terms of recall, accuracy, and computing time.
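The GradCAM visualization mentioned in this abstract can be sketched in a few lines. This is a generic Grad-CAM computation (not the authors' code), assuming the activations of one convolutional layer and the gradients of the class score with respect to them are already available as arrays:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Coarse localization map from one conv layer.

    feature_maps: (C, H, W) activations of the chosen layer
    gradients:    (C, H, W) gradients of the class score w.r.t. those activations
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k)
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    # Weighted sum over channels, then ReLU keeps class-positive evidence only
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the slide patch
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))            # fake activations of 8 channels
grads = rng.standard_normal((8, 7, 7))   # fake class-score gradients
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)
```

Upsampling `heatmap` to the patch resolution and blending it over the image gives the kind of nucleus-centered overlays the abstract describes.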
Affiliation(s)
- Anwar Khan: Center for Cancer Biology, Vlaams Instituut voor Biotechnologie (VIB), Belgium; Department of Oncology, Katholieke Universiteit (KU) Leuven, Belgium; Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
- Seunghyeon Han: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
- Naveed Ilyas: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea; Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, UAE
- Yong-Moon Lee: Department of Pathology, College of Medicine, Dankook University, South Korea
- Boreom Lee: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
12. Kaur M, Singh D, Kumar V, Lee HN. MLNet: Metaheuristics-Based Lightweight Deep Learning Network for Cervical Cancer Diagnosis. IEEE J Biomed Health Inform 2023; 27:5004-5014. [PMID: 36399582] [DOI: 10.1109/jbhi.2022.3223127]
Abstract
One of the leading causes of cancer-related deaths among women is cervical cancer. Early diagnosis and treatment can minimize the complications of this cancer. Recently, researchers have designed and implemented many deep learning-based automated cervical cancer diagnosis models. However, the majority of these models suffer from over-fitting, parameter-tuning, and gradient-vanishing problems. To overcome these problems, this paper proposes a metaheuristics-based lightweight deep learning network (MLNet). Initially, the hyper-parameter tuning problem of a convolutional neural network (CNN) is defined as a multi-objective problem. Particle swarm optimization (PSO) is used to optimally define the CNN architecture. Thereafter, dynamically hybrid niching differential evolution (DHDE) is utilized to optimize the hyper-parameters of the CNN layers. Each PSO particle and DHDE solution together represent a possible CNN configuration, and the F-score is used as the fitness function. The proposed MLNet is trained and validated on three benchmark cervical cancer datasets. On the Herlev dataset, MLNet outperforms the existing models in accuracy, f-measure, sensitivity, specificity, and precision by 1.6254%, 1.5178%, 1.5780%, 1.7145%, and 1.4890%, respectively. On the SIPaKMeD dataset, the corresponding margins are 2.1250%, 2.2455%, 1.9074%, 1.9258%, and 1.8975%, and on the Mendeley LBC dataset they are 1.4680%, 1.5845%, 1.3582%, 1.3926%, and 1.4125%.
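The PSO-driven hyper-parameter search described above can be illustrated with a minimal sketch. The swarm below minimizes a stand-in validation-error surface over two hypothetical hyper-parameters (log learning rate, filter count); in the paper the fitness of each candidate instead comes from training a CNN and measuring its F-score:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Inertia + cognitive pull (personal best) + social pull (global best)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in objective: pretend validation error is minimized at
# learning rate 1e-3 (log10 = -3) and 64 filters; these values are
# illustrative assumptions, not taken from the paper.
def val_error(hp):
    log_lr, n_filters = hp
    return (log_lr + 3.0) ** 2 + ((n_filters - 64.0) / 64.0) ** 2

bounds = np.array([[-5.0, -1.0], [8.0, 256.0]])  # log10(lr), filter count
best, err = pso(val_error, bounds)
print(best, err)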
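The PSO-driven hyper-parameter search described above can be illustrated with a minimal sketch. The swarm below minimizes a stand-in validation-error surface over two hypothetical hyper-parameters (log learning rate, filter count); in the paper the fitness of each candidate instead comes from training a CNN and measuring its F-score:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Inertia + cognitive pull (personal best) + social pull (global best)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in objective: pretend validation error is minimized at
# learning rate 1e-3 (log10 = -3) and 64 filters; these values are
# illustrative assumptions, not taken from the paper.
def val_error(hp):
    log_lr, n_filters = hp
    return (log_lr + 3.0) ** 2 + ((n_filters - 64.0) / 64.0) ** 2

bounds = np.array([[-5.0, -1.0], [8.0, 256.0]])  # log10(lr), filter count
best, err = pso(val_error, bounds)
print(best, err)
```

Swapping `val_error` for a function that trains a small CNN and returns 1 − F-score turns this toy into the paper's general scheme.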
13. Lee YM, Lee B, Cho NH, Park JH. Beyond the Microscope: A Technological Overture for Cervical Cancer Detection. Diagnostics (Basel) 2023; 13:3079. [PMID: 37835821] [PMCID: PMC10572593] [DOI: 10.3390/diagnostics13193079]
Abstract
Cervical cancer is a common and preventable disease that poses a significant threat to women's health and well-being. It is the fourth most prevalent cancer among women worldwide, with approximately 604,000 new cases and 342,000 deaths in 2020, according to the World Health Organization. Early detection and diagnosis of cervical cancer are crucial for reducing mortality and morbidity rates. The Papanicolaou smear test is a widely used screening method that involves the examination of cervical cells under a microscope to identify any abnormalities. However, this method is time-consuming, labor-intensive, subjective, and prone to human errors. Artificial intelligence techniques have emerged as a promising alternative to improve the accuracy and efficiency of Papanicolaou smear diagnosis. Artificial intelligence techniques can automatically analyze Papanicolaou smear images and classify them into normal or abnormal categories, as well as detect the severity and type of lesions. This paper provides a comprehensive review of the recent advances in artificial intelligence diagnostics of the Papanicolaou smear, focusing on the methods, datasets, performance metrics, and challenges. The paper also discusses the potential applications and future directions of artificial intelligence diagnostics of the Papanicolaou smear.
Affiliation(s)
- Yong-Moon Lee: Department of Pathology, College of Medicine, Dankook University, Cheonan 31116, Republic of Korea
- Boreom Lee: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Republic of Korea
- Nam-Hoon Cho: Department of Pathology, Severance Hospital, College of Medicine, Yonsei University, Seoul 03722, Republic of Korea
- Jae Hyun Park: Department of Surgery, Wonju Severance Christian Hospital, Wonju College of Medicine, Yonsei University, Wonju 26492, Republic of Korea
14. Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897] [PMCID: PMC10622844] [DOI: 10.1016/j.jpi.2023.100335]
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating the storing, viewing, processing, and sharing of digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology, such as automated image analysis, to extract diagnostic information from WSIs and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features support several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, both handcrafted and deep learning-based, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya, Nauman Ullah Gilal, Mahmood Alzubaidi, Fahad Majeed, Marco Agus, Jens Schneider, Mowafa Househ: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
15. Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325] [DOI: 10.1016/j.compbiomed.2023.107201]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and the advancement of digital imaging technologies has spurred the emergence of computational histopathology, whose objective is to assist in clinical tasks through image processing and analysis techniques. Early work analyzed histopathology images by extracting handcrafted mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI), traditional machine learning methods were applied in this field; although model performance improved, problems such as poor generalization and tedious manual feature extraction remained. The subsequent introduction of deep learning techniques effectively addressed these problems. However, models based on traditional convolutional architectures cannot adequately capture the contextual information and deep biological features in histopathology images. Owing to their structure, graphs are highly suitable for feature extraction from tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms, summarize the common clinical applications of graph-based methods, discuss the core concepts in this field, and highlight the current challenges and future research directions.
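Graph-based histopathology pipelines of the kind this review surveys typically start by turning a tissue image into a graph. The sketch below is an illustrative construction (not the review's specific proposal): it builds a symmetric k-nearest-neighbor adjacency matrix over fabricated nucleus centroids, which a graph neural network could then consume:

```python
import numpy as np

def knn_graph(coords: np.ndarray, k: int = 3) -> np.ndarray:
    """Symmetric k-NN adjacency over cell/nucleus centroids of shape (N, 2)."""
    n = coords.shape[0]
    # Pairwise Euclidean distances between all centroids
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # forbid self-edges
    adj = np.zeros((n, n))
    nearest = np.argsort(d, axis=1)[:, :k]       # k closest cells per cell
    rows = np.repeat(np.arange(n), k)
    adj[rows, nearest.ravel()] = 1.0
    return np.maximum(adj, adj.T)                # symmetrize (undirected graph)

rng = np.random.default_rng(1)
centroids = rng.random((30, 2)) * 100            # fake nucleus positions
A = knn_graph(centroids, k=3)
print(A.shape, int(A.sum()))
```

Node features (nuclear morphology, texture, or patch embeddings) attached to each centroid complete the graph representation.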
Affiliation(s)
- Xiangyan Meng, Tonghui Zou: Xi'an Technological University, Xi'an, Shaanxi, 710021, China
16. Lv Z, Cao X, Jin X, Xu S, Deng H. High-accuracy morphological identification of bone marrow cells using deep learning-based Morphogo system. Sci Rep 2023; 13:13364. [PMID: 37591969] [PMCID: PMC10435561] [DOI: 10.1038/s41598-023-40424-x]
Abstract
Accurate identification and classification of bone marrow (BM) nucleated cell morphology are crucial for the diagnosis of hematological diseases. However, the subjective and time-consuming nature of manual identification by pathologists hinders prompt diagnosis and patient treatment. To address this issue, we developed Morphogo, a convolutional neural network-based system for morphological examination. Morphogo was trained using a vast dataset of over 2.8 million BM nucleated cell images. Its performance was evaluated using 508 BM cases that were categorized into five groups based on the degree of morphological abnormality, comprising a total of 385,207 BM nucleated cells. The results demonstrated Morphogo's ability to identify over 25 different types of BM nucleated cells, achieving a sensitivity of 80.95%, specificity of 99.48%, positive predictive value of 76.49%, negative predictive value of 99.44%, and an overall accuracy of 99.01%. In most groups, Morphogo cell analysis and pathologists' proofreading showed high intragroup correlation coefficients for granulocytes, erythrocytes, lymphocytes, monocytes, and plasma cells. These findings further validate the practical applicability of the Morphogo system in clinical practice and emphasize its value in assisting pathologists in diagnosing blood disorders.
Affiliation(s)
- Zhanwu Lv, Shuangqing Xu, Huangling Deng: Bone Marrow Chamber, Guangzhou Kingmed Diagnostic Laboratory Group Co., Ltd., Guangzhou, 510330, China
- Xinyi Cao, Xinyi Jin: Division of Medical Technology Development, Hangzhou Zhiwei Information Technology Co., Ltd., Hangzhou, 310000, China
17. Fan Z, Wu X, Li C, Chen H, Liu W, Zheng Y, Chen J, Li X, Sun H, Jiang T, Grzegorzek M, Li C. CAM-VT: A weakly supervised cervical cancer nest image identification approach using conjugated attention mechanism and visual transformer. Comput Biol Med 2023; 162:107070. [PMID: 37295389] [DOI: 10.1016/j.compbiomed.2023.107070]
Abstract
Cervical cancer is the fourth most common cancer among women, and cytopathological images are often used to screen for it. However, manual examination is laborious and the misdiagnosis rate is high. In addition, cervical cancer nest cells are dense and complex, with high overlap and opacity, which increases the difficulty of identification. Computer-aided automatic diagnosis systems address this problem. In this paper, a weakly supervised cervical cancer nest image identification approach using a Conjugated Attention Mechanism and Visual Transformer (CAM-VT) is proposed, which can analyze pap slides quickly and accurately. CAM-VT uses conjugated attention mechanism and visual transformer modules for local and global feature extraction, respectively, followed by an ensemble learning module that further improves identification capability. To determine a reasonable interpretation, comparative experiments are conducted on our datasets. The average accuracy on the validation set over three repeated experiments using the CAM-VT framework is 88.92%, higher than the best result among 22 well-known deep learning models. Moreover, we conduct ablation experiments and extended experiments on Hematoxylin and Eosin stained gastric histopathological image datasets to verify the capability and generalizability of the framework. Finally, the top-5 and top-10 positive probability values for cervical nests are 97.36% and 96.84%, which is of clinical and practical significance. The experimental results show that the proposed CAM-VT framework performs well on potential cervical cancer nest image identification tasks in practical clinical work.
Affiliation(s)
- Zizhen Fan, Haoyuan Chen, Wanli Liu, Yuchao Zheng, Jing Chen, Chen Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiangchen Wu, Changzhong Li: Suzhou Ruiqian Technology Company Ltd., Suzhou, China
- Xiaoyan Li: Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
- Hongzan Sun: Shengjing Hospital, China Medical University, Shenyang, China
- Tao Jiang: School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Marcin Grzegorzek: Institute of Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
18. Zhao L, Hou R, Teng H, Fu X, Han Y, Zhao J. CoADS: Cross attention based dual-space graph network for survival prediction of lung cancer using whole slide images. Comput Methods Programs Biomed 2023; 236:107559. [PMID: 37119773] [DOI: 10.1016/j.cmpb.2023.107559]
Abstract
BACKGROUND AND OBJECTIVE Accurate overall survival (OS) prediction for lung cancer patients is of great significance, which can help classify patients into different risk groups to benefit from personalized treatment. Histopathology slides are considered the gold standard for cancer diagnosis and prognosis, and many algorithms have been proposed to predict the OS risk. Most methods rely on selecting key patches or morphological phenotypes from whole slide images (WSIs). However, OS prediction using the existing methods exhibits limited accuracy and remains challenging. METHODS In this paper, we propose a novel cross-attention-based dual-space graph convolutional neural network model (CoADS). To facilitate the improvement of survival prediction, we fully take into account the heterogeneity of tumor sections from different perspectives. CoADS utilizes the information from both physical and latent spaces. With the guidance of cross-attention, both the spatial proximity in physical space and the feature similarity in latent space between different patches from WSIs are integrated effectively. RESULTS We evaluated our approach on two large lung cancer datasets of 1044 patients. The extensive experimental results demonstrated that the proposed model outperforms state-of-the-art methods with the highest concordance index. CONCLUSIONS The qualitative and quantitative results show that the proposed method is more powerful for identifying the pathology features associated with prognosis. Furthermore, the proposed framework can be extended to other pathological images for predicting OS or other prognosis indicators, and thus delivering individualized treatment.
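The cross-attention that fuses the two spaces can be sketched abstractly. In this minimal NumPy illustration (my own simplification, not the released CoADS model), patch embeddings from one space act as queries and attend over the embeddings of the other space, so each patch aggregates the most similar features from the complementary view:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats: np.ndarray, kv_feats: np.ndarray) -> np.ndarray:
    """One cross-attention pass: queries from space A attend to space B.

    q_feats:  (N, D) patch embeddings from space A (e.g. physical-space graph)
    kv_feats: (M, D) patch embeddings from space B (e.g. latent-space graph)
    Returns (N, D): space-B features aggregated per space-A patch.
    """
    d = q_feats.shape[1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)       # (N, M) affinities
    attn = softmax(scores, axis=1)                   # each row sums to 1
    return attn @ kv_feats

rng = np.random.default_rng(2)
phys = rng.standard_normal((5, 16))                  # physical-space features
lat = rng.standard_normal((7, 16))                   # latent-space features
fused = cross_attention(phys, lat)
print(fused.shape)
```

Concatenating `fused` with the original physical-space features would be one simple way to combine the two views before a survival head.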
Affiliation(s)
- Lu Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Runping Hou: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, China
- Haohua Teng: Department of Pathology, Shanghai Chest Hospital, Shanghai, China
- Xiaolong Fu: Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, China
- Yuchen Han: Department of Pathology, Shanghai Chest Hospital, Shanghai, China
- Jun Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
19. Cervical cell classification with deep-learning algorithms. Med Biol Eng Comput 2023; 61:821-833. [DOI: 10.1007/s11517-022-02745-3]
Abstract
Cervical cancer is a serious threat to the lives and health of women. Accurate analysis of cervical cell smear images is an important diagnostic basis for cancer identification. However, pathological data are complex and difficult to analyze accurately because pathology images contain a wide variety of cells. To improve the recognition accuracy of cervical cell smear images, we propose a novel deep-learning model based on an improved Faster R-CNN, a shallow feature enhancement network, and a generative adversarial network. First, we used a global average pooling layer to enhance the robustness of the data feature transformation. Second, we designed a shallow feature enhancement network to improve the localization and recognition of weak cells. Finally, we established a data augmentation network to improve the detection capability of the model. The experimental results demonstrate that our proposed methods are superior to the CenterNet, YOLOv5, and Faster R-CNN algorithms in several respects, such as shorter time consumption, higher recognition precision, and stronger adaptive ability. The maximum accuracy is 99.81%, and the overall mean average precision is 89.4% on the SIPaKMeD and Herlev datasets. Our method provides a useful reference for cervical cell smear image analysis. The missed- and false-diagnosis rates remain relatively high for smears of different pathologies and stages, so the algorithms need further improvement to achieve a better balance. In future work, we will use a hyperspectral microscope to obtain more spectral data of cervical cells and input them into deep-learning models for processing and classification. Fig. 1: Deep-learning cervical cell classification framework.
20. Deep Learning with Graph Convolutional Networks: An Overview and Latest Applications in Computational Intelligence. Int J Intell Syst 2023. [DOI: 10.1155/2023/8342104]
Abstract
Convolutional neural networks (CNNs) have received widespread attention due to their powerful modeling capabilities and have been successfully applied in natural language processing, image recognition, and other fields. However, traditional CNNs can only deal with Euclidean data, whereas many real-life scenarios, such as transportation networks, social networks, and citation networks, are naturally represented as graphs. The creation of graph convolution operators and graph pooling is at the heart of migrating CNNs to graph data analysis and processing. With the advancement of the Internet and technology, the graph convolutional network (GCN), as an innovative technology in artificial intelligence (AI), has received more and more attention. GCNs have been widely used in fields such as image processing, intelligent recommender systems, and knowledge graphs, owing to their excellent characteristics in processing non-Euclidean data. At the same time, communication networks have embraced AI in recent years; AI serves as the brain of the future network and realizes its comprehensive intelligence. Many complex communication network problems can be abstracted as graph-based optimization problems and solved by GCNs, thus overcoming the limitations of traditional methods. This survey briefly describes the definition of graph-based machine learning, introduces different types of graph networks, summarizes the application of GCNs in various research fields, analyzes the research status, and gives future research directions.
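The standard GCN propagation rule such surveys build on, H' = σ(D̂⁻¹ᐟ² Â D̂⁻¹ᐟ² H W) with Â = A + I, can be written directly in NumPy. This is a textbook sketch, not tied to any particular library:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))    # D^-1/2 of degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)           # aggregate, project, ReLU

# Tiny 4-node path graph with 3-dimensional node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(3)
H = rng.standard_normal((4, 3))                      # input node features
W = rng.standard_normal((3, 2))                      # learnable weights
H1 = gcn_layer(A, H, W)
print(H1.shape)
```

Stacking two or three such layers, each mixing a node's features with its neighbors', is the basic architecture applied to the graph problems the survey covers.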
21. Developing a Tuned Three-Layer Perceptron Fed with Trained Deep Convolutional Neural Networks for Cervical Cancer Diagnosis. Diagnostics (Basel) 2023; 13:686. [PMID: 36832174] [PMCID: PMC9955324] [DOI: 10.3390/diagnostics13040686]
Abstract
Cervical cancer is one of the most common types of cancer among women and has a higher death rate than many other cancer types. The most common way to diagnose cervical cancer is to analyze images of cervical cells obtained with the Pap smear imaging test. Early and accurate diagnosis can save the lives of many patients and increase the chance of successful treatment. Various methods have been proposed to diagnose cervical cancer based on the analysis of Pap smear images; most can be divided into methods based on deep learning techniques or on classical machine learning algorithms. In this study, a combination method is presented whose overall structure follows a machine learning strategy, with the feature extraction stage completely separate from the classification stage; deep networks are used for feature extraction. Specifically, a multi-layer perceptron (MLP) neural network fed with deep features is presented, and the number of hidden-layer neurons is tuned based on four innovative ideas. The ResNet-34, ResNet-50, and VGG-19 deep networks are used to feed the MLP: the layers related to the classification phase are removed from these CNNs, and their outputs feed the MLP after passing through a flatten layer. To improve performance, the CNNs are trained on related images using the Adam optimizer. The proposed method has been evaluated on the Herlev benchmark database and provides 99.23 percent accuracy for the two-class case and 97.65 percent accuracy for the seven-class case, higher than the baseline networks and many existing methods.
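The "deep features into an MLP" pipeline can be mimicked end-to-end in NumPy. Since loading a pretrained ResNet/VGG backbone is out of scope here, the sketch below fabricates 128-dimensional stand-in "deep features" for two classes and trains a one-hidden-layer MLP on them; the data, sizes, and training schedule are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for flattened CNN features: two separable clusters of
# 128-dim "deep features" in place of ResNet/VGG backbone outputs.
n, d = 200, 128
X = rng.standard_normal((n, d))
y = (rng.random(n) < 0.5).astype(float)
X[y == 1] += 1.5                       # shift class-1 features apart

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer MLP trained with plain gradient descent on BCE loss
W1 = rng.standard_normal((d, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros(1)
lr = 0.1
for _ in range(300):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()   # class-1 probability
    g = (p - y)[:, None] / n           # dLoss/dlogit for BCE
    gW2 = h.T @ g
    gh = g @ W2.T * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * gW2
    b2 -= lr * g.sum(axis=0)
    W1 -= lr * X.T @ gh
    b1 -= lr * gh.sum(axis=0)

acc = ((p > 0.5) == (y == 1)).mean()
print(round(acc, 3))
```

Replacing `X` with real flattened backbone outputs and the loop with a tuned optimizer recovers the general structure the abstract describes.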
22. Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333] [DOI: 10.1016/j.media.2022.102691]
Abstract
Computational cytology is a critical, rapid-developing, yet challenging topic in medical image computing concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, leading to boosting publications of cytological studies. In this article, we survey more than 120 publications of DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, versatile cytology image analysis applications including cell classification, slide-level cancer screening, nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
23. Dash S, Sethy PK, Behera SK. Cervical Transformation Zone Segmentation and Classification based on Improved Inception-ResNet-V2 Using Colposcopy Images. Cancer Inform 2023; 22:11769351231161477. [PMID: 37008072] [PMCID: PMC10064461] [DOI: 10.1177/11769351231161477]
Abstract
The second most frequent malignancy in women worldwide is cervical cancer. In the transformation (transitional) zone, a region of the cervix, columnar cells are continuously converted into squamous cells, and this zone is the most typical location on the cervix for the development of aberrant cells. This article suggests a two-phase method that segments and then classifies the transformation zone to identify the type of cervical cancer. In the initial stage, the transformation zone is segmented from the colposcopy images. The segmented images are then subjected to augmentation and identified with an improved Inception-ResNet-v2. Here, a multi-scale feature fusion framework that utilizes 3 × 3 convolution kernels from Reduction-A and Reduction-B of Inception-ResNet-v2 is introduced. The features extracted from Reduction-A and Reduction-B are concatenated and fed to an SVM for classification. This way, the model combines the benefits of residual networks and Inception convolutions, increasing network width and resolving the deep network's training issue. The network can extract contextual information at several scales thanks to the multi-scale feature fusion, which increases accuracy. The experimental results reveal 81.24% accuracy, 81.24% sensitivity, 90.62% specificity, 87.52% precision, 9.38% FPR, 81.68% F1 score, 75.27% MCC, and a 57.79% Kappa coefficient.
Affiliation(s)
- Srikanta Dash: Department of Electronics, Sambalpur University, Sambalpur, Odisha, India
- Prabira Kumar Sethy: Department of Electronics, Sambalpur University, Jyoti Vihar, Sambalpur, Odisha 768019, India
Collapse
|
24
|
Beeche C, Gezer NS, Iyer K, Almetwali O, Yu J, Zhang Y, Dhupar R, Leader JK, Pu J. Assessing retinal vein occlusion based on color fundus photographs using neural understanding network (NUN). Med Phys 2023; 50:449-464. [PMID: 36184848 PMCID: PMC9868057 DOI: 10.1002/mp.16012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 09/15/2022] [Accepted: 09/16/2022] [Indexed: 01/26/2023] Open
Abstract
OBJECTIVE To develop and validate a novel deep learning architecture to classify retinal vein occlusion (RVO) on color fundus photographs (CFPs) and reveal the image features contributing to the classification. METHODS The neural understanding network (NUN) is formed by two components: (1) convolutional neural network (CNN)-based feature extraction and (2) graph neural network (GNN)-based feature understanding. The CNN-based image features were transformed into a graph representation to encode and visualize long-range feature interactions and to identify the image regions that contributed most to the classification decision. A total of 7062 CFPs were classified into three categories: (1) no vein occlusion ("normal"), (2) central RVO, and (3) branch RVO. The area under the receiver operating characteristic (ROC) curve (AUC) was used as the metric to assess the performance of the trained classification models. RESULTS The AUC, accuracy, sensitivity, and specificity for NUN to classify CFPs as normal, central occlusion, or branch occlusion were 0.975 (± 0.003), 0.911 (± 0.007), 0.983 (± 0.010), and 0.803 (± 0.005), respectively, outperforming the available classical CNN models. CONCLUSION The NUN architecture provides better classification performance and a more straightforward visualization of the results than CNNs.
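One common way to turn CNN features into the graph a GNN consumes — the paper's exact construction may differ — is a k-nearest-neighbour adjacency over the feature vectors, so that long-range interactions between similar image regions become edges. A minimal sketch with toy one-dimensional features:

```python
import numpy as np

def knn_adjacency(features, k=2):
    """Build a symmetric k-nearest-neighbour adjacency matrix from node
    features: each node is linked to its k closest nodes in feature space."""
    X = np.asarray(features, dtype=float)
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                           # no self-edges
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            A[i, j] = A[j, i] = 1                         # symmetrize
    return A

# toy "CNN features" for four image regions; the last one is an outlier
pts = [[0.0], [0.1], [0.2], [5.0]]
A = knn_adjacency(pts, k=1)
```

The resulting adjacency matrix (together with the feature vectors as node attributes) is what a GNN layer would then propagate over.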
Collapse
Affiliation(s)
- Cameron Beeche
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Naciye S Gezer
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Kartik Iyer
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Omar Almetwali
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Juezhao Yu
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Yanchun Zhang
- Shaanxi Eye Hospital, Xi’an, Shaanxi, 710004, China
| | - Rajeev Dhupar
- Department of Cardiothoracic Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Surgical Services Division, VA Pittsburgh Healthcare System, Pittsburgh, PA 15240
| | - Joseph K. Leader
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Jiantao Pu
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
| |
Collapse
|
25
|
Mansouri RA, Ragab M. Equilibrium Optimization Algorithm with Ensemble Learning Based Cervical Precancerous Lesion Classification Model. Healthcare (Basel) 2022; 11:healthcare11010055. [PMID: 36611515 PMCID: PMC9819283 DOI: 10.3390/healthcare11010055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Revised: 12/17/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Recently, artificial intelligence (AI) with deep learning (DL) and machine learning (ML) has been used extensively to automate labor-intensive, time-consuming work and to assist in prognosis and diagnosis. AI's role in biomedical and biological imaging is an emerging field of research that reveals future trends. Cervical cell (CCL) classification is crucial for screening cervical cancer (CC) at an earlier stage. Unlike traditional classification methods, which depend on hand-engineered or crafted features, a convolutional neural network (CNN) usually categorizes CCLs through learned features. However, the latent correlation of images might be disregarded in CNN feature learning and thereby limit the representative capability of the CNN features. This study develops an equilibrium optimizer with ensemble learning-based cervical precancerous lesion classification on colposcopy images (EOEL-PCLCCI) technique, which focuses on identifying and classifying cervical cancer on colposcopy images. In the presented EOEL-PCLCCI technique, the DenseNet-264 architecture is used as the feature extractor, and the EO algorithm is applied as a hyperparameter optimizer. An ensemble of weighted-voting classifiers, namely a long short-term memory (LSTM) network and a gated recurrent unit (GRU), is used for the classification process. A comprehensive simulation analysis on a benchmark dataset depicts the superior performance of the EOEL-PCLCCI approach, and the results demonstrate its advantage over other DL models.
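The weighted-voting step over the LSTM and GRU heads can be sketched as soft voting over class probabilities. The probability vectors and weights below are illustrative stand-ins, not the paper's learned values:

```python
import numpy as np

def weighted_vote(prob_list, weights):
    """Combine per-model class-probability vectors by a normalized
    weighted average, then the caller takes the argmax as the prediction."""
    probs = np.asarray(prob_list, dtype=float)  # (n_models, n_classes)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # normalize the vote weights
    return w @ probs                            # (n_classes,)

# toy outputs standing in for the LSTM and GRU classification heads
lstm_p = [0.2, 0.8]
gru_p  = [0.6, 0.4]
combined = weighted_vote([lstm_p, gru_p], weights=[2, 1])
pred = int(np.argmax(combined))  # the LSTM's stronger vote wins: class 1
```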
Collapse
Affiliation(s)
- Rasha A. Mansouri
- Department of Biochemistry, Faculty of Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Department of Mathematics, Faculty of Science, Al-Azhar University, Naser City, Cairo 11884, Egypt
- Correspondence:
| |
Collapse
|
26
|
Wu N, Jia D, Zhang C, Li Z. Cervical cell classification based on strong feature CNN-LSVM network using Adaboost optimization. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-221604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Cervical cancer is one of the most common causes of death in women worldwide, and early screening is an effective means of diagnosis and treatment that can greatly improve the survival rate. A cervical cell classification model is an effective aid to screening. However, existing single models, including CNNs and machine learning methods, still have shortcomings such as unclear feature meaning, low accuracy, and insufficient supervision. To address these shortcomings, a novel framework based on a strong-feature Convolutional Neural Network (CNN)-Lagrangian Support Vector Machine (LSVM) model is proposed for the accurate classification of cervical cells. Strong features extracted by hybrid methods are fused with the abstract features from the hidden layers of LeNet-5; the fused features are then dimension-reduced and fed into the LSVM classifier, optimized by AdaBoost, for classification. The proposed model is evaluated on the augmented Herlev dataset and a private dataset with metrics including accuracy (Acc), sensitivity (Sn), and specificity (Sp), and it outperformed the baselines and state-of-the-art approaches with Acc of 99.5% and 94.2% in 2-class and 7-class classification, respectively.
Collapse
Affiliation(s)
- Nengkai Wu
- Beijing Jiaotong University, School of Electronics and Information Engineering, Beijing, China
| | - Dongyao Jia
- Beijing Jiaotong University, School of Electronics and Information Engineering, Beijing, China
| | - Chuanwang Zhang
- Beijing Jiaotong University, School of Electronics and Information Engineering, Beijing, China
| | - Ziqi Li
- Beijing Jiaotong University, School of Electronics and Information Engineering, Beijing, China
| |
Collapse
|
27
|
Gao W, Xu C, Li G, Zhang Y, Bai N, Li M. Cervical Cell Image Classification-Based Knowledge Distillation. Biomimetics (Basel) 2022; 7:biomimetics7040195. [PMID: 36412723 PMCID: PMC9680356 DOI: 10.3390/biomimetics7040195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 11/03/2022] [Accepted: 11/05/2022] [Indexed: 11/12/2022] Open
Abstract
Current deep-learning-based cervical cell classification methods suffer from parameter redundancy and poor model generalization, which creates challenges for the intelligent classification of cervical cytology smear images. In this paper, we establish a classification method that combines transfer learning and knowledge distillation. This new method not only transfers common features between different source-domain data, but also realizes model-to-model knowledge transfer, using the unnormalized probability outputs between models as knowledge. A multi-exit classification network is then introduced as the student network, with a global context module embedded in each exit branch. A self-distillation method is proposed to fuse contextual information: deep classifiers in the student network guide shallow classifiers to learn, and multiple classifier outputs are fused using an average integration strategy to form a classifier with strong generalization performance. The experimental results show that the developed method achieves good results on the SIPaKMeD dataset: the accuracy, sensitivity, specificity, and F-measure of the five-class classification are 98.52%, 98.53%, 98.68%, and 98.59%, respectively. The effectiveness of the method is further verified on a natural image dataset.
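The model-to-model transfer described above — matching a student to a teacher's softened output distribution — is conventionally scored with a temperature-softened KL divergence. A minimal NumPy sketch, with an assumed temperature of 4 (the paper's exact loss and temperature are not given here):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) over temperature-softened distributions:
    zero when the student reproduces the teacher's logits exactly."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_diff = distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])
# identical logits give (near-)zero loss; reversed logits give a larger one
```

In training, this term is typically mixed with an ordinary cross-entropy on the hard labels.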
Collapse
Affiliation(s)
- Wenjian Gao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Correspondence: (C.X.); (G.L.)
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (C.X.); (G.L.)
| | - Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| |
Collapse
|
28
|
Xu C, Li M, Li G, Zhang Y, Sun C, Bai N. Cervical Cell/Clumps Detection in Cytology Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:diagnostics12102477. [PMID: 36292166 PMCID: PMC9600700 DOI: 10.3390/diagnostics12102477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Revised: 10/07/2022] [Accepted: 10/10/2022] [Indexed: 12/04/2022] Open
Abstract
Cervical cancer is one of the most common and deadliest cancers among women and poses a serious health risk. Automated screening and diagnosis of cervical cancer will help improve the accuracy of cervical cell screening. In recent years, many studies have applied deep learning methods to automatic cervical cancer screening and diagnosis. Deep-learning-based Convolutional Neural Network (CNN) models require large amounts of data for training, but large cervical cell datasets with annotations are difficult to obtain. Some studies have used transfer learning to handle this problem; however, they applied the same transfer learning method — initializing the backbone network with an ImageNet pre-trained model — to two different types of tasks, the detection and the classification of cervical cells/clumps. Considering the differences between detection and classification tasks, this study proposes using COCO pre-trained models for cervical cell/clump detection tasks to better handle the limited-data problem at training time. To further improve detection performance, we conducted multi-scale training on top of transfer learning, according to the actual characteristics of the dataset. Considering the effect of the bounding-box loss on detection precision, we analyzed how different bounding-box losses affect the model's detection performance and demonstrated that using a loss function consistent with the type of pre-trained model helps improve performance. We also analyzed the effect of different datasets' mean and std (used for input normalization) on model performance, and demonstrated that detection performance was optimal when using the mean and std of the cervical cell dataset used in the current study. Ultimately, with a ResNet50 backbone, the network model achieves a mean Average Precision (mAP) of 61.6% and an Average Recall (AR) of 87.7%. Compared to the previous values of 48.8% and 64.0% on the same dataset, detection performance improves significantly, by 12.8 and 23.7 percentage points, respectively.
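Computing a dataset's own per-channel mean and std for input normalization, as this study recommends over reusing ImageNet defaults, can be sketched as follows (the tiny two-image "dataset" is only for illustration):

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean/std over a stack of images shaped (N, H, W, C)."""
    imgs = np.asarray(images, dtype=float)
    mean = imgs.mean(axis=(0, 1, 2))
    std = imgs.std(axis=(0, 1, 2))
    return mean, std

def normalize(img, mean, std):
    """Standardize one image with the dataset's channel statistics."""
    return (np.asarray(img, dtype=float) - mean) / std

# toy "dataset": one all-black and one all-white 2x2 RGB image
data = np.stack([np.zeros((2, 2, 3)), np.ones((2, 2, 3))])
mean, std = channel_stats(data)   # each channel: mean 0.5, std 0.5
```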
Collapse
Affiliation(s)
- Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (M.L.); (G.L.)
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (M.L.); (G.L.)
| | - Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Chengjie Sun
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| |
Collapse
|
29
|
Auxiliary classification of cervical cells based on multi-domain hybrid deep learning framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
30
|
Zak J, Grzeszczyk MK, Pater A, Roszkowiak L, Siemion K, Korzynska A. Cell image augmentation for classification task using GANs on Pap smear dataset. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
31
|
Weiss R, Karimijafarbigloo S, Roggenbuck D, Rödiger S. Applications of Neural Networks in Biomedical Data Analysis. Biomedicines 2022; 10:biomedicines10071469. [PMID: 35884772 PMCID: PMC9313085 DOI: 10.3390/biomedicines10071469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022] [Indexed: 12/04/2022] Open
Abstract
Neural networks for deep-learning applications, also called artificial neural networks, are important tools in science and industry. While their widespread use was limited because of inadequate hardware in the past, their popularity increased dramatically starting in the early 2000s when it became possible to train increasingly large and complex networks. Today, deep learning is widely used in biomedicine from image analysis to diagnostics. This also includes special topics, such as forensics. In this review, we discuss the latest networks and how they work, with a focus on the analysis of biomedical data, particularly biomarkers in bioimage data. We provide a summary on numerous technical aspects, such as activation functions and frameworks. We also present a data analysis of publications about neural networks to provide a quantitative insight into the use of network types and the number of journals per year to determine the usage in different scientific fields.
Collapse
Affiliation(s)
- Romano Weiss
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
| | - Sanaz Karimijafarbigloo
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
| | - Dirk Roggenbuck
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
| | - Stefan Rödiger
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
- Correspondence:
| |
Collapse
|
32
|
Pramanik R, Biswas M, Sen S, Souza Júnior LAD, Papa JP, Sarkar R. A fuzzy distance-based ensemble of deep models for cervical cancer detection. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106776. [PMID: 35398621 DOI: 10.1016/j.cmpb.2022.106776] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 03/22/2022] [Accepted: 03/23/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Cervical cancer is one of the leading causes of women's death. Like any other disease, early detection of cervical cancer and treatment under the best possible medical advice are the paramount steps to minimize its after-effects. Pap smear images are one of the most effective ways to detect the presence of this type of cancer. This article proposes a fuzzy distance-based ensemble approach composed of deep learning models for cervical cancer detection in Pap smear images. METHODS We employ three transfer learning models for this task: Inception V3, MobileNet V2, and Inception ResNet V2, with additional layers to learn data-specific features. To aggregate the outcomes of these models, we propose a novel ensemble method based on minimizing the error between the observed and ground-truth labels. For samples with multiple predictions, we first take three distance measures, i.e., Euclidean, Manhattan (city-block), and cosine, for each class from its corresponding best possible solution. We then defuzzify these distance measures using the product rule to calculate the final predictions. RESULTS In the current experiments, we achieved accuracies of 95.30%, 93.92%, and 96.44% when Inception V3, MobileNet V2, and Inception ResNet V2 ran individually. After applying the proposed ensemble technique, the performance reaches 96.96%, higher than any of the individual models. CONCLUSION Experimental outcomes on three publicly available datasets show that the proposed model presents competitive results compared to state-of-the-art methods. The proposed approach provides an end-to-end classification technique for detecting cervical cancer from Pap smear images, which may help medical professionals treat cervical cancer more effectively and thus increase the overall efficiency of the testing process. The source code of the proposed work can be found at github.com/rishavpramanik/CervicalFuzzyDistanceEnsemble.
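A simplified reading of the ensemble rule — three distance measures from each class's ideal one-hot target, combined by the product rule, with the smallest combined score winning — might look like the following. This is an interpretation for illustration; the paper's exact weighting and tie-handling may differ:

```python
import numpy as np

def product_rule_scores(prob_vectors, n_classes):
    """For each candidate class, measure each model's probability vector
    against the ideal one-hot target with Euclidean, city-block and cosine
    distances, and multiply everything together (product rule)."""
    scores = np.ones(n_classes)
    for c in range(n_classes):
        ideal = np.eye(n_classes)[c]
        for p in prob_vectors:
            p = np.asarray(p, dtype=float)
            d_euc = np.linalg.norm(p - ideal)
            d_man = np.abs(p - ideal).sum()
            d_cos = 1.0 - p @ ideal / (np.linalg.norm(p) * np.linalg.norm(ideal))
            scores[c] *= d_euc * d_man * d_cos
    return scores

# two models, both leaning toward class 0
probs = [[0.7, 0.3], [0.6, 0.4]]
scores = product_rule_scores(probs, n_classes=2)
pred = int(np.argmin(scores))   # smallest combined distance wins
```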
Collapse
Affiliation(s)
- Rishav Pramanik
- Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India.
| | - Momojit Biswas
- Department of Metallurgical and Material Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India.
| | - Shibaprasad Sen
- Department of Computer Science and Technology, University of Engineering and Management, Kolkata, 700160, West Bengal, India.
| | - Luis Antonio de Souza Júnior
- Department of Computing, São Carlos Federal University-UFScar, São Carlos, São Paulo, Brazil; Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany.
| | - João Paulo Papa
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany; Department of Computing, São Paulo State University, Av. Eng. Luiz Edmundo Carrijo Coube, 14-01, Bauru, São Paulo, Brazil.
| | - Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India.
| |
Collapse
|
33
|
Yang W, Wen G, Cao P, Yang J, Zaiane OR. Collaborative learning of graph generation, clustering and classification for brain networks diagnosis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106772. [PMID: 35395591 DOI: 10.1016/j.cmpb.2022.106772] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 03/20/2022] [Accepted: 03/21/2022] [Indexed: 06/14/2023]
Abstract
PURPOSE Accurate diagnosis of autism spectrum disorder (ASD) plays a key role in improving the condition and quality of life of patients. In this study, we focus on ASD diagnosis with functional brain networks (FBNs). The major challenges in brain network modeling are the high-dimensional connectivity of brain networks and the limited number of subjects, which hinder the classification capability of graph convolutional networks (GCNs). METHOD To alleviate the influence of limited data and high-dimensional connectivity, we introduce a unified three-stage graph learning framework for brain network classification, involving multi-graph clustering, graph generation, and graph classification. The framework, combining Graph Generation, Clustering and Classification Networks (GraphCGC-Net), enhances the critical connections by multi-graph clustering (MGC) with a supervision scheme, and generates realistic brain networks while simultaneously preserving the globally consistent distribution and local topological properties. RESULTS To demonstrate the effectiveness of our approach, we evaluate the proposed method on the Autism Brain Imaging Data Exchange (ABIDE) dataset and conduct extensive experiments on the ASD classification problem. Our proposed method achieves an average accuracy of 70.45% and an AUC of 72.76% on ABIDE. Compared with the traditional GCN model, the proposed GraphCGC-Net obtains improvements of 9.3% and 10.64% in accuracy and AUC, respectively. CONCLUSION The comprehensive experiments demonstrate that GraphCGC-Net is effective for graph classification in the diagnosis of brain disorders. Moreover, we find that MGC can generate biologically meaningful subnetworks, which is highly consistent with previous neuroimaging-derived biomarker evidence of ASD. More importantly, the promising results suggest that applying generative adversarial networks (GANs) to brain networks to improve classification performance is worth further investigation.
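The graph-classification stage rests on GCN layers; the standard propagation rule (Kipf and Welling's normalized form) on a toy "brain network" adjacency can be sketched as follows — the network, features, and weights here are invented for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_new, 0.0)              # ReLU

# a 3-node toy network: nodes 0-1 connected, node 2 isolated
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
H = np.eye(3)                 # one-hot node features
W = np.ones((3, 2))           # toy weight matrix
out = gcn_layer(A, H, W)      # (3 nodes, 2 output features)
```

For graph classification, the per-node outputs of the last such layer are typically pooled (e.g. mean-pooled) into one vector per graph before a final classifier.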
Collapse
Affiliation(s)
- Wenju Yang
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
| | - Guangqi Wen
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
| | - Peng Cao
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
| | - Jinzhu Yang
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
| | - Osmar R Zaiane
- Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Canada
| |
Collapse
|
34
|
Automatic Cancer Cell Taxonomy Using an Ensemble of Deep Neural Networks. Cancers (Basel) 2022; 14:cancers14092224. [PMID: 35565352 PMCID: PMC9100154 DOI: 10.3390/cancers14092224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 04/18/2022] [Accepted: 04/26/2022] [Indexed: 12/24/2022] Open
Abstract
Microscopic image-based analysis has been performed intensively for pathological studies and the diagnosis of diseases. However, mis-authentication of cell lines due to misjudgments by pathologists has been recognized as a serious problem. To address this problem, we propose a deep-learning-based approach for the automatic taxonomy of cancer cell types. A total of 889 bright-field microscopic images of four cancer cell lines were acquired using a benchtop microscope. Individual cells were further segmented and augmented to enlarge the image dataset. Afterward, deep transfer learning was adopted to accelerate the classification of cancer types. Experiments revealed that the deep-learning-based methods outperformed traditional machine-learning-based methods. Moreover, the Wilcoxon signed-rank test showed that deep ensemble approaches outperformed individual deep-learning-based models (p < 0.001) and achieved a classification accuracy of up to 97.735%. Additional investigation with the Wilcoxon signed-rank test was conducted to consider various network design choices, such as the type of optimizer, type of learning-rate scheduler, degree of fine-tuning, and use of data augmentation. Finally, it was found that using data augmentation and updating all the weights of a network during fine-tuning improves the overall performance of individual convolutional neural network models.
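A paired comparison like the Wilcoxon signed-rank test used here can be run with SciPy on per-fold (or per-split) accuracies of the two models being compared. The accuracy numbers below are hypothetical, purely to show the mechanics:

```python
from scipy.stats import wilcoxon

# hypothetical per-fold accuracies: an ensemble vs. a single model
ensemble = [0.97, 0.96, 0.98, 0.97, 0.96, 0.98, 0.97, 0.96, 0.97, 0.98]
single   = [0.94, 0.93, 0.95, 0.94, 0.92, 0.95, 0.93, 0.94, 0.94, 0.95]

# paired, non-parametric test on the fold-wise differences
stat, p = wilcoxon(ensemble, single)
# all ten paired differences favour the ensemble, so p comes out small
```

Because every fold favours the ensemble, the test rejects the null hypothesis of equal performance at conventional significance levels.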
Collapse
|
35
|
Chen W, Shen W, Gao L, Li X. Hybrid Loss-Constrained Lightweight Convolutional Neural Networks for Cervical Cell Classification. SENSORS 2022; 22:s22093272. [PMID: 35590961 PMCID: PMC9101629 DOI: 10.3390/s22093272] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 04/11/2022] [Accepted: 04/21/2022] [Indexed: 02/04/2023]
Abstract
Artificial intelligence (AI) technologies have produced remarkable achievements and conferred massive benefits on computer-aided systems in medical imaging. However, worldwide adoption of AI-based automation-assisted cervical cancer screening systems is hindered by computational cost and resource limitations, so a highly economical and efficient model with enhanced classification ability is much more desirable. This paper proposes a hybrid loss function with label smoothing to improve the distinguishing power of lightweight convolutional neural networks (CNNs) for cervical cell classification. The results strengthen our confidence in hybrid loss-constrained lightweight CNNs, which can achieve satisfactory accuracy at much lower computational cost on the SIPaKMeD dataset. In particular, ShuffleNetV2 obtained a comparable classification result (96.18% accuracy, 96.30% precision, 96.23% recall, and 99.08% specificity) with only one-seventh of the memory usage, one-sixth of the parameters, and one-fiftieth of the total FLOPs of DenseNet-121 (96.79% accuracy). GhostNet achieved an improved classification result (96.39% accuracy, 96.42% precision, 96.39% recall, and 99.09% specificity) with one-half of the memory usage, one-quarter of the parameters, and one-fiftieth of the total FLOPs of DenseNet-121. The proposed lightweight CNNs are likely to lead to an easily applicable and cost-efficient automation-assisted system for cervical cancer diagnosis and prevention.
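Label-smoothing cross-entropy, one ingredient of the hybrid loss described above, can be sketched as follows. The smoothing factor of 0.1 is an assumed value, and this shows the standard formulation rather than the paper's exact hybrid:

```python
import numpy as np

def smoothed_cross_entropy(logits, target, n_classes, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution:
    the true class gets 1 - eps, the other classes share eps uniformly."""
    z = np.asarray(logits, dtype=float)
    # stable log-softmax: z - logsumexp(z)
    log_probs = z - z.max() - np.log(np.exp(z - z.max()).sum())
    smooth = np.full(n_classes, eps / (n_classes - 1))
    smooth[target] = 1.0 - eps
    return float(-(smooth * log_probs).sum())

# a confident, correct prediction: smoothing raises the loss slightly,
# discouraging over-confident logits
hard = smoothed_cross_entropy([5.0, 0.0, 0.0], target=0, n_classes=3, eps=0.0)
soft = smoothed_cross_entropy([5.0, 0.0, 0.0], target=0, n_classes=3, eps=0.1)
```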
Collapse
|
36
|
Subarna T, Sukumar P. Detection and classification of cervical cancer images using CEENET deep learning approach. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-220173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Earlier detection of cervical cancer in women can save lives before chronic development. Accurate detection of cancerous tissue in the cervix is therefore very important. In this article, cervical images are classified as either affected or healthy using a deep learning architecture. The proposed approach comprises an edge detector, a complex wavelet transform, feature derivation, and a Convolutional Neural Network (CNN) architecture with segmentation. The edge pixels in the source cervical image are detected using Kirsch’s edge detector, and the Complex Wavelet Transform (CWT) is then used to decompose the edge-detected cervical images into a number of sub-bands. Local Derivative Pattern (LDP) and statistical features are computed from the decomposed sub-bands, and a feature map is constructed from the computed features. The feature map, along with the source cervical image, is fed into the Cervical Ensemble Network (CEENET) model to classify cervical images as healthy or cancerous (affected).
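Kirsch's edge detector named in the pipeline is a compass operator: eight directional 3 × 3 kernels (rotations of one base kernel), with the edge magnitude taken as the maximum response. A minimal sketch on a single patch:

```python
import numpy as np

# Kirsch's north kernel; the other seven are rotations of its edge weights
KIRSCH_N = np.array([[ 5,  5,  5],
                     [-3,  0, -3],
                     [-3, -3, -3]])

def kirsch_kernels():
    """Generate all eight Kirsch compass kernels by rotating the weights
    around the 3x3 ring (centre stays zero)."""
    base = [5, 5, 5, -3, -3, -3, -3, -3]      # ring weights, clockwise
    ring = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]   # ring positions, clockwise
    kernels = []
    for r in range(8):
        k = np.zeros((3, 3), dtype=int)
        for i, (y, x) in enumerate(ring):
            k[y, x] = base[(i - r) % 8]
        kernels.append(k)
    return kernels

def kirsch_response(patch):
    """Edge magnitude of a 3x3 patch: max response over the eight kernels."""
    return max(int(np.sum(k * patch)) for k in kirsch_kernels())

# a horizontal edge: bright top rows, dark bottom row
patch = np.array([[1, 1, 1],
                  [1, 1, 1],
                  [0, 0, 0]])
```

Sliding `kirsch_response` over every 3 × 3 neighbourhood of an image yields the edge map that the CWT stage would then decompose.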
Collapse
Affiliation(s)
- T.G. Subarna
- Department of Electronics and Communication Engineering, Nandha Engineering College, Erode, Tamil Nadu, India
| | - P. Sukumar
- Department of Electronics and Communication Engineering, Nandha Engineering College, Erode, Tamil Nadu, India
| |
Collapse
|
37
|
Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front Oncol 2022; 12:851367. [PMID: 35359358 PMCID: PMC8963491 DOI: 10.3389/fonc.2022.851367] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Accepted: 02/10/2022] [Indexed: 12/11/2022] Open
Abstract
Cervical cancer remains a leading cause of cancer death in women, seriously threatening their physical and mental health. With early screening and diagnosis, it is an easily preventable cancer. Although technical advancements have significantly improved the early diagnosis of cervical cancer, accurate diagnosis remains difficult owing to various factors. In recent years, artificial intelligence (AI)-based medical diagnostic applications have been on the rise and have excellent applicability in the screening and diagnosis of cervical cancer. Their benefits include reduced time consumption, reduced need for professional and technical personnel, and no bias owing to subjective factors. We, thus, aimed to discuss how AI can be used in cervical cancer screening and diagnosis, particularly to improve the accuracy of early diagnosis. The application and challenges of using AI in the diagnosis and treatment of cervical cancer are also discussed.
Collapse
Affiliation(s)
- Xin Hou
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
| | - Guangyang Shen
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
| | - Liqiang Zhou
- Cancer Centre and Center of Reproduction, Development and Aging, Faculty of Health Sciences, University of Macau, Macau, Macau SAR, China
| | - Yinuo Li
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
| | - Tian Wang
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
| | - Xiangyi Ma
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- *Correspondence: Xiangyi Ma
| |
Collapse
|
38
|
Yaman O, Tuncer T. Exemplar pyramid deep feature extraction based cervical cancer image classification model using pap-smear images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103428] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
39
|
Lightweight convolutional neural network with knowledge distillation for cervical cells classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103177] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
40
|
Zhao C, Shuai R, Ma L, Liu W, Wu M. Improving cervical cancer classification with imbalanced datasets combining taming transformers with T2T-ViT. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:24265-24300. [PMID: 35342326 PMCID: PMC8933771 DOI: 10.1007/s11042-022-12670-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 01/12/2022] [Accepted: 02/21/2022] [Indexed: 05/12/2023]
Abstract
UNLABELLED Cervical cell classification has important clinical significance in early-stage cervical cancer screening. However, public cervical cancer smear cell datasets are scarce, their class weights are imbalanced, image quality is uneven, and CNN-based classification studies tend to overfit. To address these problems, we propose a cervical cell image generation model based on taming transformers (CCG-taming transformers) to provide high-quality cervical cancer datasets with sufficient samples and balanced weights. We improve the encoder structure by introducing SE-block and MultiRes-block to strengthen the extraction of information from cervical cancer cell images; we introduce Layer Normalization to standardize the data, facilitating the subsequent non-linear processing by the ReLU activation function in the feed-forward layers; and we introduce SMOTE-Tomek Links to balance the number of samples and the class weights in the source dataset. Finally, we use Tokens-to-Token Vision Transformer (T2T-ViT) combined with transfer learning to classify the cervical cancer smear cell images and improve classification performance. Classification experiments with the proposed model are performed on three public cervical cancer datasets; the classification accuracies on the liquid-based cytology Pap smear dataset (4-class), SIPAKMeD (5-class), and Herlev (7-class) are 98.79%, 99.58%, and 99.88%, respectively. The quality of the images we generated on these three datasets is very close to that of the source data; the averaged inception score (IS), Fréchet inception distance (FID), Recall, and Precision are 3.75, 0.71, 0.32, and 0.65, respectively.
Our method improves the accuracy of cervical cancer smear cell classification, provides more cervical cell sample images for cervical cancer-related research, and assists gynecologists in judging and diagnosing different types of cervical cancer cells and in analyzing cervical cancer cells at different stages, which are difficult to distinguish. This paper applies the transformer to the generation and recognition of cervical cancer cell images for the first time. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s11042-022-12670-0.
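The SMOTE-style balancing step mentioned above interpolates new minority-class samples between existing ones. The NumPy sketch below shows only that core interpolation idea; the paper's actual pipeline uses SMOTE-Tomek Links (which additionally removes Tomek-link pairs), and the function name and parameters here are illustrative assumptions.

```python
import numpy as np

def smote_like(X, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating toward
    one of the k nearest same-class neighbours (core SMOTE idea).
    Illustrative sketch, not the imbalanced-learn implementation."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # Distances from X[i] to every other sample in the minority class.
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        neighbours = np.argsort(d)[:k]
        j = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two real samples, it stays inside the convex hull of the minority class.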
Collapse
Affiliation(s)
- Chen Zhao
- College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816 China
| | - Renjun Shuai
- College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816 China
| | - Li Ma
- Nanjing Health Information Center, Nanjing, 210003 China
| | - Wenjia Liu
- Changzhou No. 2 People’s Hospital affiliated with Nanjing Medical University, Changzhou, 213003 China
| | - Menglin Wu
- College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816 China
| |
Collapse
|
41
|
Li J, Dou Q, Yang H, Liu J, Fu L, Zhang Y, Zheng L, Zhang D. Cervical cell multi-classification algorithm using global context information and attention mechanism. Tissue Cell 2021; 74:101677. [PMID: 34814053 DOI: 10.1016/j.tice.2021.101677] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 11/01/2021] [Accepted: 11/09/2021] [Indexed: 11/30/2022]
Abstract
Cervical cancer is the second-deadliest cancer in women, after breast cancer. The cure rate for precancerous lesions found early is relatively high, so cervical cell classification has very important clinical value in the early screening of cervical cancer. This paper proposes a convolutional neural network (L-PCNN) that integrates global context information and an attention mechanism to classify cervical cells. The cell image is sent to an improved ResNet-50 backbone network to extract deep features. To extract deep features more effectively, each convolution block introduces a convolutional block attention mechanism to guide the network to focus on the cell area. The end of the backbone network then adds a pyramid pooling layer and a long short-term memory (LSTM) module to aggregate image features from different regions. Low-level and high-level features are integrated, so the whole network can learn more regional detail features, and the problem of vanishing gradients is mitigated. Experiments are conducted on the public SIPaKMeD dataset. The results show that the proposed L-PCNN achieves an accuracy of 98.89%, a sensitivity of 99.9%, a specificity of 99.8%, and an F-measure of 99.89%, outperforming most cervical cell classification models and demonstrating the effectiveness of the model.
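The convolutional block attention referred to above includes a channel-attention stage. A minimal NumPy sketch of squeeze-and-excite style channel gating follows; the weights `w1` and `w2` are illustrative stand-ins, not the paper's trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Channel attention on a (C, H, W) feature map:
    squeeze by global average pooling, excite with a small MLP,
    then rescale each channel by its learned gate weight."""
    squeeze = feat.mean(axis=(1, 2))                      # (C,) channel descriptor
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))    # (C,) gates in (0, 1)
    return feat * gate[:, None, None], gate
```

The gate vector lets the network emphasize channels whose responses correlate with the cell region and suppress the rest.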
Collapse
Affiliation(s)
- Jun Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
| | - Qiyan Dou
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Haima Yang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
| | - Jin Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
| | - Le Fu
- Department of Radiology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
| | - Yu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Lulu Zheng
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Dawei Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| |
Collapse
|
42
|
Liu W, Li C, Rahaman MM, Jiang T, Sun H, Wu X, Hu W, Chen H, Sun C, Yao Y, Grzegorzek M. Is the aspect ratio of cells important in deep learning? A robust comparison of deep learning methods for multi-scale cytopathology cell image classification: From convolutional neural networks to visual transformers. Comput Biol Med 2021; 141:105026. [PMID: 34801245 DOI: 10.1016/j.compbiomed.2021.105026] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 11/08/2021] [Indexed: 11/19/2022]
Abstract
Cervical cancer is a very common and fatal cancer in women. Cytopathology images are often used to screen for this cancer. Given that many errors can occur during manual screening, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image size, but the dimensions of clinical medical images are inconsistent, and resizing images directly distorts their aspect ratios. Clinically, the aspect ratios of cells in cytopathological images provide important information for diagnosing cancer, which makes direct resizing problematic. Nevertheless, many existing studies have resized images directly and still obtained highly robust classification results. To find a reasonable explanation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset were pre-processed to obtain standard and scaled datasets. Then, the datasets were resized to 224 × 224 pixels. Finally, 22 deep learning models were used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion was also validated on the Herlev dataset.
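For contrast with the direct 224 × 224 resize studied above, the arithmetic of an aspect-ratio-preserving ("letterbox") resize can be sketched as follows; this is a hedged illustration of the alternative, not a step from the paper.

```python
def letterbox_dims(h, w, target=224):
    """Scale so the longer side equals `target`, preserving aspect ratio,
    and report the padding needed to reach a square target x target canvas."""
    scale = target / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    pad_h, pad_w = target - new_h, target - new_w
    return new_h, new_w, pad_h, pad_w
```

A 100 × 200 crop, for example, scales to 112 × 224 and needs 112 rows of padding, whereas direct resizing to 224 × 224 would double its apparent height.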
Collapse
Affiliation(s)
- Wanli Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China.
| | - Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
| | - Tao Jiang
- School of Control Engineering, Chengdu University of Information Technology, Chengdu, 610225, China
| | - Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, 110001, China
| | - Xiangchen Wu
- Suzhou Ruiguan Technology Company Ltd., Suzhou, 215000, China
| | - Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
| | - Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
| | - Changhao Sun
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110169, China
| | - Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
| | - Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
| |
Collapse
|
43
|
Zhang XM, Liang L, Liu L, Tang MJ. Graph Neural Networks and Their Current Applications in Bioinformatics. Front Genet 2021; 12:690049. [PMID: 34394185 PMCID: PMC8360394 DOI: 10.3389/fgene.2021.690049] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 05/28/2021] [Indexed: 12/22/2022] Open
Abstract
Graph neural networks (GNNs), a branch of deep learning in non-Euclidean space, perform particularly well in tasks that process graph-structured data. With the rapid accumulation of biological network data, GNNs have also become an important tool in bioinformatics. In this research, a systematic survey of GNNs and their advances in bioinformatics is presented from multiple perspectives. We first introduce commonly used GNN models and their basic principles. Then, three representative tasks are described based on the three levels of structural information that GNNs can learn: node classification, link prediction, and graph generation. Meanwhile, according to the specific applications of various omics data, we categorize and discuss the related studies in three areas: disease prediction, drug discovery, and biomedical imaging. Based on this analysis, we provide an outlook on the shortcomings of current studies and outline their development prospects. Although GNNs have achieved excellent results in many biological tasks, they still face challenges in low-quality data processing, methodology, and interpretability, and have a long road ahead. We believe that GNNs are potentially an excellent method for solving various biological problems in bioinformatics research.
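The propagation rule common to many of the surveyed GNNs can be sketched in NumPy as a single GCN-style layer, computing ReLU of the symmetrically normalised adjacency (with self-loops) times features times weights; a minimal sketch, not any specific surveyed model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where D is the degree matrix of A + I (self-loops added)."""
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking such layers lets each node aggregate information from progressively larger neighbourhoods, which underlies the node-classification and link-prediction tasks discussed in the survey.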
Collapse
Affiliation(s)
- Xiao-Meng Zhang
- School of Information, Yunnan Normal University, Kunming, China
| | - Li Liang
- School of Information, Yunnan Normal University, Kunming, China
| | - Lin Liu
- School of Information, Yunnan Normal University, Kunming, China
- Key Laboratory of Educational Informatization for Nationalities Ministry of Education, Yunnan Normal University, Kunming, China
| | - Ming-Jing Tang
- Key Laboratory of Educational Informatization for Nationalities Ministry of Education, Yunnan Normal University, Kunming, China
- School of Life Sciences, Yunnan Normal University, Kunming, China
| |
Collapse
|
44
|
Rahaman MM, Li C, Yao Y, Kulwa F, Wu X, Li X, Wang Q. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput Biol Med 2021; 136:104649. [PMID: 34332347 DOI: 10.1016/j.compbiomed.2021.104649] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Revised: 07/08/2021] [Accepted: 07/09/2021] [Indexed: 01/01/2023]
Abstract
Cervical cancer, one of the most common fatal cancers among women, can be prevented by regular screening to detect precancerous lesions at early stages and treat them. The Pap smear test is a widely performed screening technique for early detection of cervical cancer, but this manual screening method suffers from high false-positive rates because of human error. To improve manual screening practice, machine learning (ML) and deep learning (DL) based computer-aided diagnostic (CAD) systems have been widely investigated for classifying cervical Pap cells. Most existing studies require pre-segmented images to obtain good classification results, whereas accurate cervical cell segmentation is challenging because of cell clustering. Some studies rely on handcrafted features, which cannot guarantee optimality of the classification stage. Moreover, DL performs poorly on multiclass classification tasks when the data distribution is uneven, as is common in cervical cell datasets. This investigation addresses those limitations by proposing DeepCervix, a hybrid deep feature fusion (HDFF) technique based on DL, to classify cervical cells accurately. Our proposed method uses various DL models to capture more potential information and enhance classification performance. The proposed HDFF method is tested on the publicly available SIPaKMeD dataset, and its performance is compared with base DL models and the late fusion (LF) method. For the SIPaKMeD dataset, we obtain state-of-the-art classification accuracies of 99.85%, 99.38%, and 99.14% for 2-class, 3-class, and 5-class classification. The method is also tested on the Herlev dataset and achieves an accuracy of 98.32% for 2-class and 90.32% for 7-class classification. The source code of the DeepCervix model is available at: https://github.com/Mamunur-20/DeepCervix.
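The contrast between hybrid feature fusion and late fusion can be sketched as follows: the former concatenates deep feature vectors from several backbones before a shared classifier, while the latter averages the per-model class probabilities. Helper names are illustrative, not the DeepCervix API.

```python
import numpy as np

def fuse_features(feature_vectors):
    """Hybrid fusion: concatenate per-model feature vectors into one
    descriptor for a downstream classifier."""
    return np.concatenate([np.asarray(f).ravel() for f in feature_vectors])

def late_fusion(probabilities):
    """Late fusion baseline: average the class-probability outputs
    of the individual models."""
    return np.mean(probabilities, axis=0)
```

Hybrid fusion lets the classifier weigh individual feature dimensions across backbones, whereas late fusion can only combine each model's final verdict.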
Collapse
Affiliation(s)
- Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China.
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China.
| | - Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
| | - Frank Kulwa
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
| | - Xiangchen Wu
- Suzhou Ruiguan Technology Company Ltd., Suzhou, 215000, China
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Hospital and Institute, Shenyang, 110042, China.
| | - Qian Wang
- Cancer Hospital of China Medical University, Liaoning Hospital and Institute, Shenyang, 110042, China
| |
Collapse
|
45
|
Zhu X, Li X, Ong K, Zhang W, Li W, Li L, Young D, Su Y, Shang B, Peng L, Xiong W, Liu Y, Liao W, Xu J, Wang F, Liao Q, Li S, Liao M, Li Y, Rao L, Lin J, Shi J, You Z, Zhong W, Liang X, Han H, Zhang Y, Tang N, Hu A, Gao H, Cheng Z, Liang L, Yu W, Ding Y. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat Commun 2021; 12:3541. [PMID: 34112790 PMCID: PMC8192526 DOI: 10.1038/s41467-021-23913-3] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 05/24/2021] [Indexed: 02/05/2023] Open
Abstract
Technical advancements have significantly improved the early diagnosis of cervical cancer, but accurate diagnosis remains difficult due to various factors. We develop an artificial intelligence assistive diagnostic solution, AIATBS, to improve cervical liquid-based thin-layer cell smear diagnosis according to clinical TBS criteria. We train AIATBS with >81,000 retrospective samples. It integrates YOLOv3 for target detection, Xception and Patch-based models to boost target classification, and U-net for nucleus segmentation. We integrate XGBoost and a logical decision tree with these models to optimize the parameters given by the learning process, and we develop a complete cervical liquid-based cytology smear TBS diagnostic system that also includes a quality control solution. We validate the optimized system with >34,000 multicenter prospective samples and achieve higher sensitivity than senior cytologists while retaining high specificity, at a speed of <180 s/slide. Our system adapts to sample preparation under different standards, staining protocols, and scanners.
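A pipeline of this kind ends with a slide-level decision that aggregates per-cell model outputs into a TBS category. The rule below is purely an illustrative assumption: the severity ordering, confidence threshold, and cell-count requirement are not taken from the paper, whose actual logic combines XGBoost with a decision tree.

```python
# Hypothetical slide-level aggregation, NOT the AIATBS decision logic.
# TBS category names are standard; their ordering and the thresholds
# below are illustrative assumptions.
SEVERITY = ["NILM", "ASC-US", "LSIL", "ASC-H", "HSIL", "SCC"]

def slide_level_tbs(cell_predictions, min_conf=0.5, min_count=2):
    """Return the most severe abnormal category supported by at least
    `min_count` cells predicted with confidence >= `min_conf`;
    otherwise report the slide as NILM (negative)."""
    counts = {}
    for label, conf in cell_predictions:
        if conf >= min_conf and label != "NILM":
            counts[label] = counts.get(label, 0) + 1
    supported = [lbl for lbl, n in counts.items() if n >= min_count]
    if not supported:
        return "NILM"
    return max(supported, key=SEVERITY.index)
```

Requiring multiple confident abnormal cells before escalating the slide-level call is one simple way to trade sensitivity against specificity.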
Collapse
Affiliation(s)
- Xiaohui Zhu
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Xiaoming Li
- Department of Pathology, Shenzhen Bao'an People's Hospital (group), Shenzhen, Guangdong Province, PR China
| | - Kokhaur Ong
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
- Bioinformatics Institute, A*STAR, Singapore, Singapore
| | - Wenli Zhang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Wencai Li
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
| | - Longjie Li
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - David Young
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - Yongjian Su
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Bin Shang
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Linggan Peng
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Wei Xiong
- Guangzhou Kaipu Biotechnology Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Yunke Liu
- Laboratory Department, Guangzhou Tianhe District Maternal and Child Health Care Hospital, Guangzhou, Guangdong Province, PR China
| | - Wenting Liao
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, PR China
| | - Jingjing Xu
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
| | - Feifei Wang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Qing Liao
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Shengnan Li
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Minmin Liao
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Yu Li
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Linshang Rao
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Jinquan Lin
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Jianyuan Shi
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Zejun You
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Wenlong Zhong
- Guangzhou Huayin medical inspection center Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Xinrong Liang
- Guangzhou Huayin medical inspection center Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Hao Han
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - Yan Zhang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Department of Pathology, Shenzhen Longhua District Maternity & Child Healthcare Hospital, Shenzhen, PR China
| | - Na Tang
- Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China
| | - Aixia Hu
- Department of Pathology, Henan Provincial People's Hospital, Zhengzhou, Henan Province, PR China
| | - Hongyi Gao
- Department of Pathology, Guangdong Provincial Women's and Children's Dispensary, Shenzhen, Guangdong Province, PR China
| | - Zhiqiang Cheng
- Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China.
| | - Li Liang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China.
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China.
| | - Weimiao Yu
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore.
- Bioinformatics Institute, A*STAR, Singapore, Singapore.
| | - Yanqing Ding
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China.
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China.
| |
Collapse
|
46
|
Chandran V, Sumithra MG, Karthick A, George T, Deivakani M, Elakkiya B, Subramaniam U, Manoharan S. Diagnosis of Cervical Cancer based on Ensemble Deep Learning Network using Colposcopy Images. BIOMED RESEARCH INTERNATIONAL 2021; 2021:5584004. [PMID: 33997017 PMCID: PMC8112909 DOI: 10.1155/2021/5584004] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/31/2021] [Accepted: 04/20/2021] [Indexed: 12/17/2022]
Abstract
Traditional screening and classification of cervical cancer types depend largely on the pathologist's experience and have limited accuracy. Colposcopy is a critical component of cervical cancer prevention; in conjunction with precancer screening and treatment, it has played an essential role in lowering the incidence of and mortality from cervical cancer over the last 50 years. However, as workloads increase, visual screening leads to misdiagnosis and low diagnostic efficiency. Medical image processing with convolutional neural network (CNN) models has shown its strength for classifying cervical cancer types in the field of deep learning. This paper proposes two deep learning CNN architectures to detect cervical cancer from colposcopy images: a VGG19 transfer learning (TL) model and CYENET. In the CNN architecture, VGG19 is adopted as the transfer learning backbone for the studies. A new model, termed the Colposcopy Ensemble Network (CYENET), is developed to classify cervical cancers from colposcopy images automatically. Accuracy, specificity, and sensitivity are estimated for the developed model. The classification accuracy for VGG19 was 73.3%; relatively satisfactory results were obtained for VGG19 (TL). From the kappa score of the VGG19 model, we can interpret that it falls into the moderate-agreement category. The experimental results show that the proposed CYENET exhibited high sensitivity, specificity, and kappa scores of 92.4%, 96.2%, and 88%, respectively. The classification accuracy of the CYENET model improved to 92.3%, 19 percentage points higher than the VGG19 (TL) model.
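The kappa score used above to grade agreement is Cohen's kappa, the observed agreement corrected for chance agreement. A small self-contained sketch:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement beyond chance between two label lists,
    kappa = (p_obs - p_chance) / (1 - p_chance)."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    # Observed agreement: fraction of positions where the labels match.
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Chance agreement: product of the marginal label frequencies.
    p_chance = sum((y_true.count(lbl) / n) * (y_pred.count(lbl) / n)
                   for lbl in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)
```

On the commonly used Landis-Koch scale, values of 0.41-0.60 are read as "moderate" agreement, which matches the interpretation given for the VGG19 model above.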
Collapse
Affiliation(s)
- Venkatesan Chandran
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Avinashi road, Coimbatore, 641407 Tamilnadu, India
| | - M. G. Sumithra
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Avinashi road, Coimbatore, 641407 Tamilnadu, India
| | - Alagar Karthick
- Renewable Energy Lab, Department of Electrical and Electronics Engineering, KPR Institute of Engineering and Technology, Avinashi road, Coimbatore, 641407 Tamilnadu, India
| | - Tony George
- Department of Electrical and Electronics Engineering, Adi Shankara Institute of Engineering and Technology Mattoor, Kalady, Kerala 683574, India
| | - M. Deivakani
- Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul, 624622 Tamilnadu, India
| | - Balan Elakkiya
- Department of Electronics and Communication Engineering, Vel Tech High Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Tamilnadu 600062, India
| | - Umashankar Subramaniam
- Department of Communications and Networks, Renewable Energy Lab, College of Engineering, Prince, Sultan University, Riyadh 12435, Saudi Arabia
| | - S. Manoharan
- Department of Computer Science, School of Informatics and Electrical Engineering, Institute of Technology, Ambo University, Ambo, Post Box No. 19, Ethiopia
| |
Collapse
|
47
|
Zheng Y, Jiang Z, Zhang H, Xie F, Hu D, Sun S, Shi J, Xue C. Stain Standardization Capsule for Application-Driven Histopathological Image Normalization. IEEE J Biomed Health Inform 2021; 25:337-347. [PMID: 32248128 DOI: 10.1109/jbhi.2020.2983206] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Color consistency is crucial to developing robust deep learning methods for histopathological image analysis. With the increasing use of digital histopathological slides, deep learning methods are often developed on data from multiple medical centers, which makes normalizing the color variance of histopathological images from different centers a challenging task. In this paper, we propose a novel color standardization module, the stain standardization capsule (SSC), based on the capsule network and the corresponding dynamic routing algorithm. The proposed module can learn and generate uniform stain separation outputs for histopathological images with varied color appearances, without reference to manually selected template images. The module is lightweight and can be jointly trained with the application-driven CNN model. The proposed method was validated on three histopathology datasets and a cytology dataset and compared with state-of-the-art methods. The experimental results demonstrate that the SSC module is effective in improving the performance of histopathological image analysis and achieved the best performance among the compared methods.
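For context, the classical (non-learned) stain-separation route that such learned modules replace starts from the Beer-Lambert optical-density transform followed by least-squares deconvolution against a stain matrix. The sketch below uses illustrative H&E-like stain vectors in the test, not values from the paper.

```python
import numpy as np

def rgb_to_optical_density(rgb, i0=255.0, eps=1.0):
    """Beer-Lambert optical-density transform, the usual first step of
    stain separation: OD = -log((I + eps) / I0)."""
    rgb = np.asarray(rgb, dtype=float)
    return -np.log((rgb + eps) / i0)

def stain_concentrations(od_pixels, stain_matrix):
    """Solve OD = C @ S for per-pixel stain concentrations C by least
    squares, where each row of `stain_matrix` S is one stain's OD vector."""
    C_t, *_ = np.linalg.lstsq(stain_matrix.T, od_pixels.T, rcond=None)
    return C_t.T
```

Learned approaches like the capsule module aim to produce such separations without fixing a stain matrix or a template slide in advance.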
Collapse
|