1
Yi J, Liu X, Cheng S, Chen L, Zeng S. Multi-scale window transformer for cervical cytopathology image recognition. Comput Struct Biotechnol J 2024; 24:314-321. [PMID: 38681132] [PMCID: PMC11046249] [DOI: 10.1016/j.csbj.2024.04.028]
Abstract
Cervical cancer is a major global health issue, particularly in developing countries where access to healthcare is limited. Early detection of pre-cancerous lesions is crucial for successful treatment and for reducing mortality rates. However, traditional screening and diagnostic processes require cytopathologists to manually interpret enormous numbers of cells, which is time-consuming, costly, and heavily dependent on the examiner's experience. In this paper, we propose a Multi-scale Window Transformer (MWT) for cervical cytopathology image recognition. We design multi-scale window multi-head self-attention (MW-MSA) to integrate cell features of different scales simultaneously. Small-window self-attention extracts local cell detail features, while large-window self-attention integrates the outputs of the smaller-scale window attention to achieve window-to-window information interaction. This design enables long-range feature integration while avoiding both the whole-image self-attention (SA) of ViT and the two successive local window SA operations of Swin Transformer. We also find that convolutional feed-forward networks (CFFN) are more efficient than the original MLP-based FFN for representing cytopathology images. The overall model adopts a pyramid architecture. We establish two multi-center cervical cell classification datasets: a two-category set of 192,123 images and a four-category set of 174,138 images. Extensive experiments demonstrate that our MWT outperforms state-of-the-art general classification networks and specialized cytopathology image classifiers on both internal and external test sets. The results on these large-scale datasets confirm the effectiveness and generalization of the proposed model. Our work provides a reliable cytopathology image recognition method and supports computer-aided screening for cervical cancer. Our code is available at https://github.com/nmyz669/MWT, and our web service tool can be accessed at https://huggingface.co/spaces/nmyz/MWTdemo.
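The two-scale window mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (MWT uses multi-head attention with learned query/key/value projections inside a pyramid backbone); it only shows how small-window attention followed by large-window attention over the same feature map produces window-to-window interaction:

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping (w*w, C) windows."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, w * w, C)

def window_reverse(windows, w, H, W):
    """Inverse of window_partition: reassemble windows into an (H, W, C) map."""
    C = windows.shape[-1]
    x = windows.reshape(H // w, W // w, w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(H, W, C)

def self_attention(tokens):
    """Plain single-head self-attention inside each window (identity projections)."""
    scores = tokens @ tokens.transpose(0, 2, 1) / np.sqrt(tokens.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

def multi_scale_window_attention(x, small=2, large=4):
    """Small windows capture local detail; a second pass with larger windows
    mixes information across the small windows' outputs."""
    H, W, _ = x.shape
    local = window_reverse(self_attention(window_partition(x, small)), small, H, W)
    return window_reverse(self_attention(window_partition(local, large)), large, H, W)

feat = np.random.rand(8, 8, 16)
out = multi_scale_window_attention(feat)
print(out.shape)  # (8, 8, 16)
```

Because each large window spans several small windows, tokens that never met in the first pass interact in the second, which is the window-to-window integration the abstract refers to.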
Affiliation(s)
- Jiaxiang Yi
  - Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Xiuli Liu
  - Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Shenghua Cheng
  - School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Li Chen
  - Department of Clinical Laboratory, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
  - Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
2
Shalata AT, Alksas A, Shehata M, Khater S, Ezzat O, Ali KM, Gondim D, Mahmoud A, El-Gendy EM, Mohamed MA, Alghamdi NS, Ghazal M, El-Baz A. Precise grading of non-muscle invasive bladder cancer with multi-scale pyramidal CNN. Sci Rep 2024; 14:25131. [PMID: 39448755] [PMCID: PMC11502747] [DOI: 10.1038/s41598-024-77101-6]
Abstract
The grading of non-muscle invasive bladder cancer (NMIBC) continues to face challenges due to subjective interpretation, which affects the assessment of its severity. To address this challenge, we developed an innovative artificial intelligence (AI) system aimed at objectively grading NMIBC. This system uses a novel convolutional neural network (CNN) architecture, a multi-scale pyramidal pretrained CNN, to analyze both local and global pathology markers extracted from digital pathology images. The proposed CNN takes as input three levels of patches, ranging from small patches (e.g., 128 × 128) to the largest patches (512 × 512). These levels are then fused by a random forest (RF) to estimate the severity grade of NMIBC. The optimal patch sizes and other model hyperparameters are determined with a grid search. For each patch size, the system was trained on 32K patches (16K low-grade and 16K high-grade) and tested on 8K patches (4K low-grade and 4K high-grade), all annotated by two pathologists. With light and efficient processing, the ShuffleNet-based AI system achieved notable metrics on the testing data: 94.25% ± 0.70% accuracy, 94.47% ± 0.93% sensitivity, 94.03% ± 0.95% specificity, and a 94.29% ± 0.70% F1-score. These results highlight its superior performance over traditional models such as ResNet-18. The system's robustness in accurately grading pathology demonstrates its potential as an advanced AI tool for diagnosing human diseases in digital pathology.
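The three-level patch input can be pictured as co-centered crops of the slide at increasing scales. Below is a minimal NumPy sketch of that cropping step only; the patch sizes follow the abstract, while the zero-padding at borders is an assumption, and the per-scale CNN features and RF fusion are omitted:

```python
import numpy as np

def patch_pyramid(image, center, sizes=(128, 256, 512)):
    """Crop square patches at several scales, all centered on the same pixel.

    Borders are zero-padded so crops near the edge keep their full size."""
    cy, cx = center
    patches = []
    for s in sizes:
        half = s // 2
        padded = np.pad(image, ((half, half), (half, half)), mode="constant")
        # In padded coordinates, row cy maps to original row cy - half,
        # so this slice is centered on (cy, cx) in the original image.
        patches.append(padded[cy:cy + s, cx:cx + s])
    return patches

img = np.random.rand(1024, 1024)
pyr = patch_pyramid(img, (400, 600))
print([p.shape for p in pyr])  # [(128, 128), (256, 256), (512, 512)]
```

Each level would then be fed to its own pretrained CNN, with the per-level outputs combined by the random forest described in the abstract.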
Affiliation(s)
- Aya T Shalata
  - Biomedical Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Ahmed Alksas
  - Department of Bioengineering, University of Louisville, Louisville, KY, USA
- Mohamed Shehata
  - Department of Bioengineering, University of Louisville, Louisville, KY, USA
- Sherry Khater
  - Urology and Nephrology Center, Mansoura University, Mansoura, Egypt
- Osama Ezzat
  - Urology and Nephrology Center, Mansoura University, Mansoura, Egypt
- Khadiga M Ali
  - Pathology Department, Faculty of Medicine, Mansoura University, Mansoura, Egypt
- Dibson Gondim
  - Department of Pathology and Laboratory Medicine, University of Louisville, Louisville, KY, USA
- Ali Mahmoud
  - Department of Bioengineering, University of Louisville, Louisville, KY, USA
- Eman M El-Gendy
  - Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mohamed A Mohamed
  - Electronics and Communication Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Norah S Alghamdi
  - Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammed Ghazal
  - Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Ayman El-Baz
  - Department of Bioengineering, University of Louisville, Louisville, KY, USA
3
Mascarenhas M, Alencoão I, Carinhas MJ, Martins M, Ribeiro T, Mendes F, Cardoso P, Almeida MJ, Mota J, Fernandes J, Ferreira J, Macedo G, Mascarenhas T, Zulmira R. Artificial Intelligence and Colposcopy: Automatic Identification of Vaginal Squamous Cell Carcinoma Precursors. Cancers (Basel) 2024; 16:3540. [PMID: 39456634] [PMCID: PMC11505987] [DOI: 10.3390/cancers16203540]
Abstract
Background/Objectives: While human papillomavirus (HPV) is well known for its role in cervical cancer, it is also implicated in vaginal cancers. Although colposcopy offers a comprehensive examination of the female genital tract, its diagnostic accuracy remains suboptimal. Integrating artificial intelligence (AI) could enhance the cost-effectiveness of colposcopy, but no AI models specifically differentiate low-grade (LSILs) from high-grade (HSILs) squamous intraepithelial lesions in the vagina. This study aims to develop and validate an AI model for differentiating HPV-associated dysplastic lesions in this region. Methods: A convolutional neural network (CNN) model was developed to differentiate HSILs from LSILs in vaginoscopy (during colposcopy) still images. The model was developed on a dataset of 57,250 frames (90% training/validation, including 5-fold cross-validation, and 10% testing) obtained from 71 procedures, and was evaluated on sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC). Results: For HSIL/LSIL differentiation in the vagina, during the training/validation phase the CNN demonstrated a mean sensitivity, specificity, and accuracy of 98.7% (95% CI 96.7-100.0%), 99.1% (95% CI 98.1-100.0%), and 98.9% (95% CI 97.9-99.8%), respectively, with a mean AUROC of 0.990 ± 0.004. During the testing phase, sensitivity was 99.6%, and both specificity and accuracy were 99.7%. Conclusions: This is the first AI model capable of HSIL/LSIL differentiation in the vaginal region, and it demonstrates high and robust performance metrics. Its effective application paves the way for AI-powered colposcopic assessment across the entire female genital tract, a significant advancement in women's healthcare.
Affiliation(s)
- Miguel Mascarenhas
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
  - Faculty of Medicine, University of Porto, 4150-180 Porto, Portugal
- Inês Alencoão
  - Department of Gynecology, Centro Materno-Infantil do Norte Dr. Albino Aroso (CMIN), Santo António University Hospital, 4099-001 Porto, Portugal
- Maria João Carinhas
  - Department of Gynecology, Centro Materno-Infantil do Norte Dr. Albino Aroso (CMIN), Santo António University Hospital, 4099-001 Porto, Portugal
- Miguel Martins
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Tiago Ribeiro
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
  - Faculty of Medicine, University of Porto, 4150-180 Porto, Portugal
- Francisco Mendes
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
  - Faculty of Medicine, University of Porto, 4150-180 Porto, Portugal
- Maria João Almeida
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Joana Mota
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Joana Fernandes
  - Department of Mechanical Engineering, Faculty of Engineering, University of Porto, 4150-180 Porto, Portugal
- João Ferreira
  - Department of Mechanical Engineering, Faculty of Engineering, University of Porto, 4150-180 Porto, Portugal
- Guilherme Macedo
  - Department of Gastroenterology, São João University Hospital, 4200-319 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
  - Faculty of Medicine, University of Porto, 4150-180 Porto, Portugal
- Teresa Mascarenhas
  - Department of Gynecology, São João University Hospital, 4200-319 Porto, Portugal
- Rosa Zulmira
  - Department of Gynecology, Centro Materno-Infantil do Norte Dr. Albino Aroso (CMIN), Santo António University Hospital, 4099-001 Porto, Portugal
4
Qiu R, Zhou M, Bai J, Lu Y, Wang H. PSFHSP-Net: an efficient lightweight network for identifying pubic symphysis-fetal head standard plane from intrapartum ultrasound images. Med Biol Eng Comput 2024; 62:2975-2986. [PMID: 38722478] [PMCID: PMC11379789] [DOI: 10.1007/s11517-024-03111-1]
Abstract
The accurate selection of the ultrasound plane containing the fetal head and pubic symphysis is critical for precisely measuring the angle of progression. The traditional approach depends on sonographers manually selecting the imaging plane, a process that is time-intensive, laborious, and variable with the clinician's expertise. Consequently, there is a significant need for an automated, artificial-intelligence-driven method. To improve the efficiency and accuracy of identifying the pubic symphysis-fetal head standard plane (PSFHSP), we proposed a streamlined neural network, PSFHSP-Net, based on a modified ResNet-18. The network comprises a single convolutional layer and three residual blocks designed to mitigate noise interference and strengthen feature extraction. The model's adaptability was further refined by expanding the shared feature layer into task-specific layers. We assessed its performance against both heavyweight and other lightweight models using F1-score, accuracy (ACC), recall, precision, area under the ROC curve (AUC), model parameter count, and frames per second (FPS). PSFHSP-Net recorded an ACC of 0.8995, an F1-score of 0.9075, a recall of 0.9191, and a precision of 0.9022, surpassing the other heavyweight and lightweight models on these metrics. Notably, it had the smallest model size (1.48 MB) and the highest processing speed (65.7909 FPS), meeting the real-time criterion of over 24 images per second. While its AUC of 0.930 was slightly lower than that of ResNet-34 (0.935), it improved markedly over ResNet-18 in testing, with gains in ACC and F1-score of 0.0435 and 0.0306, respectively; precision decreased slightly from 0.9184 to 0.9022, a reduction of 0.0162. Despite these trade-offs, compression reduced the model size from 42.64 MB to 1.48 MB and raised inference speed by 4.4753 FPS to 65.7909 FPS. The results confirm that PSFHSP-Net can swiftly and effectively identify the PSFHSP, facilitating accurate measurement of the angle of progression. This development represents a significant advance toward automated fetal imaging analysis, promising greater consistency and reduced operator dependency in clinical settings.
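A throughput figure such as 65.7909 FPS is typically obtained by timing repeated inference on a fixed batch. The sketch below shows one common way to measure it; the `infer` callable and batch are placeholders, not the paper's pipeline:

```python
import time

def measure_fps(infer, batch, n_iter=100):
    """Wall-clock throughput of an inference callable, in images per second."""
    start = time.perf_counter()
    for _ in range(n_iter):
        infer(batch)
    elapsed = time.perf_counter() - start
    return n_iter * len(batch) / elapsed

# Placeholder "model": doubles each value, standing in for a real forward pass.
fps = measure_fps(lambda b: [x * 2 for x in b], list(range(8)), n_iter=50)
print(fps > 24)  # real-time criterion from the abstract
```

In practice a few warm-up iterations are usually run first so that one-time costs (memory allocation, kernel compilation) do not distort the measurement.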
Affiliation(s)
- Ruiyu Qiu
  - Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, 510632, China
- Mengqiang Zhou
  - Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
  - Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, 510632, China
  - Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
  - Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, 510632, China
  - Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
  - Department of Computer Science, College of Information Science and Technology, Jinan University, Guangzhou, 510632, China
5
Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. [PMID: 38398374] [PMCID: PMC10889757] [DOI: 10.3390/jcm13041061]
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, particularly those with a strong imaging component. Gynecology relies heavily on imaging, which offers useful visual data on the female reproductive system and supports a deeper understanding of pathophysiology. So far, artificial intelligence has not been as visible in gynecologic imaging as in other medical fields; however, growing interest in the area has produced studies with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown great potential to improve the overall healthcare experience in women's reproductive health. In this review, we establish the current status of AI in gynecology, outline upcoming developments, and discuss the challenges facing clinical implementation, namely the technological and ethical concerns around technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Francisco Mendes
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
6
Li Z, Zeng CM, Dong YG, Cao Y, Yu LY, Liu HY, Tian X, Tian R, Zhong CY, Zhao TT, Liu JS, Chen Y, Li LF, Huang ZY, Wang YY, Hu Z, Zhang J, Liang JX, Zhou P, Lu YQ. A segmentation model to detect cervical lesions based on machine learning of colposcopic images. Heliyon 2023; 9:e21043. [PMID: 37928028] [PMCID: PMC10623278] [DOI: 10.1016/j.heliyon.2023.e21043]
Abstract
Background: Semantic segmentation is crucial in medical image diagnosis. Traditional deep convolutional neural networks excel at image classification and object detection but fall short in segmentation tasks. Improving the accuracy and efficiency of detecting high-grade cervical lesions and invasive cancer is a primary challenge in segmentation model development. Methods: Between 2018 and 2022, we retrospectively studied a total of 777 patients, comprising 339 patients with high-grade cervical lesions and 313 patients with microinvasive or invasive cervical cancer. In total, 1554 colposcopic images were used to train the DeepLabv3+ model. Accuracy, precision, specificity, and mIoU were employed to evaluate the model's prediction of high-grade cervical lesions and cancer. Results: Experiments showed that our segmentation model achieved higher diagnostic efficiency than colposcopy experts and other artificial intelligence models, reaching an accuracy of 93.29%, precision of 87.2%, specificity of 90.1%, and mIoU of 80.27%. Conclusion: The DeepLabv3+ model performed well in segmenting cervical lesions in post-acetic-acid colposcopic images and can assist colposcopists in improving diagnosis.
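The mIoU figure reported above averages per-class intersection-over-union between the predicted and ground-truth masks. A minimal NumPy sketch over flattened label maps (binary lesion/background here for illustration):

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    """Mean intersection-over-union across classes, from flat label arrays."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([0, 0, 1, 1, 1, 0])   # ground-truth labels (flattened mask)
pr = np.array([0, 1, 1, 1, 0, 0])   # predicted labels
print(mean_iou(gt, pr, 2))  # 0.5
```

Unlike pixel accuracy, mIoU penalizes both missed lesion pixels and spurious ones, which is why it is the usual headline metric for segmentation models such as DeepLabv3+.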
Affiliation(s)
- Zhen Li
  - Department of Gynecological Oncology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, 430071, China
- Chu-Mei Zeng
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Yan-Gang Dong
  - Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, Guangdong, 510631, China
- Ying Cao
  - Department of Obstetrics and Gynecology, Academician Expert Workstation, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430014, China
- Li-Yao Yu
  - Department of Obstetrics and Gynecology, Academician Expert Workstation, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430014, China
- Hui-Ying Liu
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Xun Tian
  - Department of Obstetrics and Gynecology, Academician Expert Workstation, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430014, China
- Rui Tian
  - Generulor Company Bio-X Lab, Zhuhai, Guangdong, 519060, China
- Chao-Yue Zhong
  - Generulor Company Bio-X Lab, Zhuhai, Guangdong, 519060, China
- Ting-Ting Zhao
  - Generulor Company Bio-X Lab, Zhuhai, Guangdong, 519060, China
- Jia-Shuo Liu
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Ye Chen
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Li-Fang Li
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Zhe-Ying Huang
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Yu-Yan Wang
  - Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, 510062, China
- Zheng Hu
  - Department of Gynecological Oncology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, 430071, China
- Jingjing Zhang
  - Department of Gynecological Oncology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, 430071, China
- Jiu-Xing Liang
  - Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, Guangdong, 510631, China
- Ping Zhou
  - Department of Gynecology, Dongguan Maternal and Child Hospital, Dongguan, Guangdong, 523057, China
- Yi-Qin Lu
  - Department of Gynecology, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing, 101121, China
7
Retinal image blood vessel classification using hybrid deep learning in cataract diseased fundus images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104776]
8
Mansouri RA, Ragab M. Equilibrium Optimization Algorithm with Ensemble Learning Based Cervical Precancerous Lesion Classification Model. Healthcare (Basel) 2022; 11:55. [PMID: 36611515] [PMCID: PMC9819283] [DOI: 10.3390/healthcare11010055]
Abstract
Recently, artificial intelligence (AI) with deep learning (DL) and machine learning (ML) has been used extensively to automate labor-intensive, time-consuming work and to assist in prognosis and diagnosis. AI's role in biomedical and biological imaging is an emerging field of research. Cervical cell classification is crucial for screening cervical cancer (CC) at an earlier stage. Unlike traditional classification methods, which depend on hand-engineered features, a convolutional neural network (CNN) typically categorizes cervical cells through learned features. However, the latent correlation of images may be disregarded during CNN feature learning, limiting the representative capability of CNN features. This study develops an equilibrium optimizer with ensemble learning-based cervical precancerous lesion classification on colposcopy images (EOEL-PCLCCI) technique, which focuses on identifying and classifying cervical cancer in colposcopy images. In the presented technique, the DenseNet-264 architecture is used as the feature extractor, and the equilibrium optimizer (EO) algorithm is applied as the hyperparameter optimizer. An ensemble of weighted-voting classifiers, namely a long short-term memory (LSTM) network and a gated recurrent unit (GRU), performs the classification. A comprehensive simulation analysis on a benchmark dataset demonstrates the superior performance of the EOEL-PCLCCI approach over other DL models.
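The weighted-voting step can be sketched independently of the LSTM/GRU members: each model emits class probabilities, and a weighted average decides the class. The weights below are illustrative placeholders, as the paper's learned voting weights are not given in the abstract:

```python
import numpy as np

def weighted_vote(prob_a, prob_b, w_a=0.5, w_b=0.5):
    """Combine two models' class-probability outputs by weighted averaging
    and return the winning class index per sample."""
    combined = w_a * np.asarray(prob_a) + w_b * np.asarray(prob_b)
    return combined.argmax(axis=1)

# Hypothetical per-sample class probabilities from the two ensemble members.
p_lstm = np.array([[0.7, 0.3], [0.2, 0.8]])
p_gru  = np.array([[0.6, 0.4], [0.4, 0.6]])
print(weighted_vote(p_lstm, p_gru))  # [0 1]
```

Averaging probabilities (soft voting) rather than hard labels lets a confident member outweigh an uncertain one, which is the usual motivation for this ensemble design.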
Affiliation(s)
- Rasha A. Mansouri
  - Department of Biochemistry, Faculty of Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mahmoud Ragab
  - Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
  - Department of Mathematics, Faculty of Science, Al-Azhar University, Naser City, Cairo 11884, Egypt