1. Rönnau MM, Lepper TW, Guedes IC, Espinosa ALF, Rados PV, Oliveira MM. Automatic segmentation and classification of Papanicolaou-stained cells and dataset for oral cancer detection. Comput Biol Med 2024; 180:108967. PMID: 39111154. DOI: 10.1016/j.compbiomed.2024.108967.
Abstract
BACKGROUND AND OBJECTIVE Papanicolaou staining has been successfully used to assist early detection of cervix cancer for several decades. We postulate that this staining technique can also be used to assist early detection of oral cancer, which is responsible for about 300,000 deaths every year. The rationale for this claim rests on two key observations: (i) nuclear atypia, i.e., changes in volume, shape, and staining properties of the cell nuclei, can be linked to rapid cell proliferation and genetic instability; and (ii) Papanicolaou staining allows one to reliably segment cells' nuclei and cytoplasms. While Papanicolaou staining is an attractive tool due to its low cost, its interpretation requires a trained pathologist. Our goal is to automate the segmentation and classification of the morphological features needed to evaluate the use of Papanicolaou staining for early detection of oral cancer. METHODS We built a convolutional neural network (CNN) for automatic segmentation and classification of cells in Papanicolaou-stained images. Our CNN was trained and evaluated on a new image dataset of cells from the oral mucosa consisting of 1,563 Full HD images from 52 patients, annotated by specialists. The effectiveness of our model was evaluated against a group of experts. Its robustness was also demonstrated on five public datasets of cervical images captured with different microscopes and cameras, and having different resolutions, colors, background intensities, and noise levels. RESULTS Our CNN model achieved expert-level performance in a comparison with a group of three human experts on a set of 400 Papanicolaou-stained images of the oral mucosa from 20 patients. The results of this experiment exhibited high intraclass correlation coefficient (ICC) values. Despite being trained on images from the oral mucosa, the model produced high-quality segmentation and plausible classification for five public datasets of cervical cells. Our Papanicolaou-stained image dataset is the most diverse publicly available image dataset for the oral mucosa in terms of number of patients. CONCLUSION Our solution provides the means for exploring the potential of Papanicolaou staining as a powerful and inexpensive tool for early detection of oral cancer. We are currently using our system to detect suspicious cells and cell clusters in oral mucosa slide images. Our trained model, code, and dataset are available and can help practitioners and stimulate research in early oral cancer detection.
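The morphological cues the authors rely on (nuclear volume, shape, and staining) can be quantified from a segmentation mask with standard tools. A minimal sketch, assuming scikit-image and a binary nucleus mask; this is an illustration, not the authors' released code:

```python
# Minimal sketch (not the authors' pipeline): per-nucleus shape descriptors
# often used to quantify nuclear atypia, given a binary segmentation mask.
import numpy as np
from skimage.measure import label, regionprops

def nucleus_features(mask: np.ndarray) -> list:
    """Compute per-nucleus shape descriptors from a binary mask (H, W)."""
    features = []
    for region in regionprops(label(mask)):
        features.append({
            "area": region.area,                  # proxy for nuclear volume
            "perimeter": region.perimeter,
            "eccentricity": region.eccentricity,  # deviation from a circle
            "solidity": region.solidity,          # irregular borders lower this
        })
    return features

# Toy usage: one elliptical "nucleus" in an empty field.
mask = np.zeros((64, 64), dtype=np.uint8)
rr, cc = np.ogrid[:64, :64]
mask[((rr - 32) / 10) ** 2 + ((cc - 32) / 6) ** 2 <= 1] = 1
print(nucleus_features(mask))
```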
Affiliation(s)
- Maikel M Rönnau: Instituto de Informática, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves, 9500, Porto Alegre, 91501-970, RS, Brazil.
- Tatiana W Lepper: Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
- Igor C Guedes: Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
- Ana L F Espinosa: Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
- Pantelis V Rados: Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
- Manuel M Oliveira: Instituto de Informática, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves, 9500, Porto Alegre, 91501-970, RS, Brazil.
2. Kupas D, Hajdu A, Kovacs I, Hargitai Z, Szombathy Z, Harangi B. Annotated Pap cell images and smear slices for cell classification. Sci Data 2024; 11:743. PMID: 38972893. PMCID: PMC11228026. DOI: 10.1038/s41597-024-03596-3.
Abstract
Machine learning-based systems have become instrumental in global efforts to combat cervical cancer. A burgeoning area of research leverages artificial intelligence to enhance the cervical screening process, primarily through the exhaustive examination of Pap smears, which traditionally relies on meticulous, labor-intensive analysis by specialized experts. Despite the existence of some comprehensive and readily accessible datasets, the field is presently constrained by the limited volume of publicly available images and smears. As a remedy, our work unveils APACC (Annotated PAp cell images and smear slices for Cell Classification), a comprehensive dataset designed to bridge this gap. APACC comprises 103,675 annotated cell images, carefully extracted from 107 whole smears, which are further divided into 21,371 sub-regions for more refined analysis. The dataset includes a vast number of cell images from conventional Pap smears together with their specific locations on each smear, offering a valuable resource for in-depth investigation and study.
Affiliation(s)
- David Kupas: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Andras Hajdu: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Ilona Kovacs: Department of Pathology, Kenezy Gyula University Hospital and Clinic, University of Debrecen, Debrecen, Hungary.
- Zoltan Hargitai: Department of Pathology, Kenezy Gyula University Hospital and Clinic, University of Debrecen, Debrecen, Hungary.
- Zita Szombathy: Department of Pathology, Kenezy Gyula University Hospital and Clinic, University of Debrecen, Debrecen, Hungary.
- Balazs Harangi: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
3. Harangi B, Bogacsovics G, Toth J, Kovacs I, Dani E, Hajdu A. Pixel-wise segmentation of cells in digitized Pap smear images. Sci Data 2024; 11:733. PMID: 38971865. PMCID: PMC11227563. DOI: 10.1038/s41597-024-03566-9.
Abstract
A simple and inexpensive way to recognize cervical cancer is light-microscopic analysis of Pap smear images. Training artificial intelligence-based systems becomes possible in this domain, e.g., to follow the European recommendation of rescreening negative smears to reduce false negative cases. The first step of such a process is segmenting the cells. This task requires a large, manually segmented dataset that can be used to train deep learning-based solutions. We describe such a dataset with accurate manual segmentations for the enclosed cells. Altogether, the APACS23 (Annotated PAp smear images for Cell Segmentation 2023) dataset contains about 37,000 manually segmented cells and is separated into dedicated training and test parts, which can be used for an official benchmark of scientific investigations or a grand challenge.
Affiliation(s)
- Balazs Harangi: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Gergo Bogacsovics: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Janos Toth: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Ilona Kovacs: Department of Pathology, Kenezy Gyula Hospital and Clinic, University of Debrecen, Debrecen, Hungary.
- Erzsebet Dani: Department of Library and Information Science, Faculty of Humanities, University of Debrecen, Debrecen, Hungary.
- Andras Hajdu: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
4. Ando Y, Cho J, Park NJY, Ko S, Han H. Toward Interpretable Cell Image Representation and Abnormality Scoring for Cervical Cancer Screening Using Pap Smears. Bioengineering (Basel) 2024; 11:567. PMID: 38927803. PMCID: PMC11200554. DOI: 10.3390/bioengineering11060567.
Abstract
Screening is critical for prevention and early detection of cervical cancer, but it is time-consuming and laborious. Supervised deep convolutional neural networks have been developed to automate Pap smear screening, and the results are promising. However, interest in using only normal samples to train deep neural networks has increased owing to the class imbalance problems and high labeling costs that are both prevalent in healthcare. In this study, we introduce a method to learn explainable deep cervical cell representations for Pap smear cytology images based on one-class classification using variational autoencoders. Findings demonstrate that a cell abnormality score can be calculated without training models on abnormal samples, and we localize abnormality to interpret our results with a novel metric based on the absolute difference in cross-entropy in agglomerative clustering. The best model that discriminates squamous cell carcinoma (SCC) from normal samples gives an area under the receiver operating characteristic curve (AUC) of 0.908±0.003, and the best model that discriminates high-grade squamous intraepithelial lesions (HSIL) gives an AUC of 0.920±0.002. Compared to other clustering methods, our method enhances the V-measure and yields higher homogeneity scores, which more effectively isolate different abnormality regions, aiding in the interpretation of our results. Evaluation using an external dataset shows that our model can discriminate abnormality without the need for additional training of deep models.
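The one-class idea above can be sketched with a small PyTorch variational autoencoder trained only on normal cells, scoring abnormality by the negative evidence lower bound; the architecture and exact score below are illustrative assumptions, not the paper's model:

```python
# Illustrative one-class VAE scoring (PyTorch): train on normal cells only,
# then use the negative ELBO as an abnormality score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CellVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def abnormality_score(model, x):
    """Negative ELBO per image: high for cells unlike the normal training set."""
    recon, mu, logvar = model(x)
    rec = F.mse_loss(recon, x, reduction="none").flatten(1).sum(1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1)
    return rec + kl

scores = abnormality_score(CellVAE(), torch.rand(4, 3, 64, 64))
print(scores.shape)  # torch.Size([4]): one score per input cell image
```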
Affiliation(s)
- Yu Ando: Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea.
- Junghwan Cho: Clinical Omics Institute, Kyungpook National University, Daegu 41405, Republic of Korea.
- Nora Jee-Young Park: Department of Pathology, School of Medicine, Kyungpook National University, Daegu 41944, Republic of Korea; Department of Pathology, Kyungpook National University Chilgok Hospital, Daegu 41404, Republic of Korea.
- Seokhwan Ko: Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea.
- Hyungsoo Han: Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea; Clinical Omics Institute, Kyungpook National University, Daegu 41405, Republic of Korea.
5. Abd-Almoniem E, Abd-Alsabour N, Elsheikh S, Mostafa RR, Elesawy YF. A Novel Validated Real-World Dataset for the Diagnosis of Multiclass Serous Effusion Cytology according to the International System and Ground-Truth Validation Data. Acta Cytol 2024; 68:160-170. PMID: 38522415. DOI: 10.1159/000538465.
Abstract
INTRODUCTION The application of artificial intelligence (AI) algorithms in serous fluid cytology is lacking due to the deficiency of standardized, publicly available datasets. Here, we develop a novel public serous effusion cytology dataset. Furthermore, we apply AI algorithms to it to test its diagnostic utility and safety in clinical practice. METHODS The work is divided into three phases. Phase 1 entails building the dataset based on the multitiered evidence-based classification system proposed by the International System (TIS) of serous fluid cytology, along with ground-truth tissue diagnosis for malignancy. To ensure reliable results of future AI research on this dataset, we carefully consider all steps of preparation and staining from a real-world cytopathology perspective. In phase 2, we pay special attention to the image acquisition pipeline to ensure image integrity. We then apply transfer learning, using the convolutional layers of the VGG16 deep learning model for feature extraction. Finally, in phase 3, we apply a random forest classifier to the constructed dataset. RESULTS The dataset comprises 3,731 images distributed among the four TIS diagnostic categories. The model achieves 74% accuracy in this multiclass classification problem. Using a one-versus-all classifier, the fallout rate for images that are misclassified as negative for malignancy despite being a higher-risk diagnosis is 0.13. Most of these misclassified images (77%) belong to the atypia of undetermined significance category, in concordance with real-life statistics. CONCLUSION This is the first and largest publicly available serous fluid cytology dataset based on a standardized diagnostic system. It is also the first dataset to include various types of effusions and pericardial fluid specimens, and the first to include the diagnostically challenging atypical categories. AI algorithms applied to this novel dataset show reliable results that can be incorporated into actual clinical practice with minimal risk of missing a diagnosis of malignancy. This work provides a foundation for researchers to develop and test further AI algorithms for the diagnosis of serous effusions.
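The described pipeline (frozen VGG16 convolutional layers for feature extraction, followed by a random forest) can be sketched as follows; the layer choice, pooling, and hyperparameters are assumptions rather than the paper's settings:

```python
# Hedged sketch of the transfer-learning pipeline: frozen VGG16 conv layers
# as a feature extractor, then a random forest classifier on the features.
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
extractor = torch.nn.Sequential(
    vgg.features,                      # convolutional layers only
    torch.nn.AdaptiveAvgPool2d(1),     # global average pool -> 512-dim vector
    torch.nn.Flatten(),
).eval()

@torch.no_grad()
def extract_features(batch):           # batch: (N, 3, 224, 224), normalized
    return extractor(batch).numpy()

# Toy usage with random tensors standing in for preprocessed cytology images.
X_train = extract_features(torch.rand(20, 3, 224, 224))
y_train = [0, 1] * 10                  # e.g., TIS categories as integer labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(clf.predict(extract_features(torch.rand(2, 3, 224, 224))))
```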
Affiliation(s)
- Esraa Abd-Almoniem: Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt.
- Nadia Abd-Alsabour: Department of Computer Science, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza, Egypt.
- Samar Elsheikh: Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt.
- Rasha R Mostafa: Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt.
- Yasmine Fathy Elesawy: Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt.
6. Fang M, Fu M, Liao B, Lei X, Wu FX. Deep integrated fusion of local and global features for cervical cell classification. Comput Biol Med 2024; 171:108153. PMID: 38364660. DOI: 10.1016/j.compbiomed.2024.108153.
Abstract
Cervical cytology image classification is of great significance to cervical cancer diagnosis and prognosis. Recently, convolutional neural networks (CNNs) and vision transformers have been adopted as two branches to learn features for image classification by simply adding local and global features. However, such simple addition may not integrate these features effectively. In this study, we explore the synergy of local and global features of cytology images for classification tasks. Specifically, we design a Deep Integrated Feature Fusion (DIFF) block to synergize local and global features of cytology images from a CNN branch and a transformer branch. Our proposed method is evaluated on three cervical cell image datasets (SIPaKMeD, CRIC, Herlev) and a large blood cell dataset, BCCD, for several multi-class and binary classification tasks. Experimental results demonstrate the effectiveness of the proposed method in cervical cell classification, which could assist medical specialists in better diagnosing cervical cancer.
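The contrast drawn above between plain addition and a learned integration of local and global features can be illustrated with a generic gated-fusion module; this is an illustration of the idea, not the paper's DIFF block:

```python
# Generic sketch: fuse local (CNN) and global (transformer) feature vectors
# with a learned per-channel gate instead of plain addition.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # The gate decides, per channel, how much of each branch to keep.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, local_feat, global_feat):
        g = self.gate(torch.cat([local_feat, global_feat], dim=-1))
        return g * local_feat + (1 - g) * global_feat  # vs. local + global

fuse = GatedFusion(dim=256)
local_feat = torch.rand(8, 256)   # e.g., pooled CNN-branch features
global_feat = torch.rand(8, 256)  # e.g., transformer [CLS] token features
print(fuse(local_feat, global_feat).shape)  # torch.Size([8, 256])
```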
Affiliation(s)
- Ming Fang: Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada.
- Minghan Fu: Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada.
- Bo Liao: School of Mathematics and Statistics, Hainan Normal University, 99 Longkun South Road, Haikou, 571158, Hainan, China.
- Xiujuan Lei: School of Computer Science, Shaanxi Normal University, 620 West Chang'an Avenue, Xi'an, 710119, Shaanxi, China.
- Fang-Xiang Wu: Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada; Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada; Department of Computer Science, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada.
7. Kalbhor M, Shinde S, Wajire P, Jude H. CerviCell-detector: An object detection approach for identifying the cancerous cells in pap smear images of cervical cancer. Heliyon 2023; 9:e22324. PMID: 38058644. PMCID: PMC10696000. DOI: 10.1016/j.heliyon.2023.e22324.
Abstract
Cervical cancer is the second most commonly seen cancer in women. It affects the cervix, the lower portion of the uterus. The preferred diagnostic test for cervical cancer screening is the Pap smear test. The Pap smear is time-consuming, as it requires detailed analysis by expert cytologists. Cytologists can screen around 100 to 1,000 slides depending upon the availability of advanced equipment. It requires substantial time and effort to carefully examine each slide, identify and classify cells, and make accurate diagnoses. Prolonged periods of visual inspection increase the likelihood of human errors, such as overlooking abnormalities or misclassifying cells, and the sheer volume of slides to be screened can exacerbate fatigue and impact diagnostic accuracy. For this reason, an artificial intelligence (AI)-based computer-aided diagnosis system for the classification and detection of cells in Pap smear images is needed. Some AI-based solutions have been proposed in the literature, but an effective and accurate system is still under research. In this paper, we implement state-of-the-art object detection models on the newly available CRIC dataset, which follows the Bethesda system of nomenclature. The models implemented are YOLOv5, which uses a CSPNet backbone; Faster R-CNN, which has a Region Proposal Network (RPN); and a ResNeXt model from the Detectron2 framework created by the Facebook AI Research (FAIR) group. The CRIC dataset is preprocessed and augmented using the Roboflow tool. The performance measures of average precision (AP) and mean average precision (mAP) over intersection over union (IoU) thresholds are used to evaluate the effectiveness of the models. The models performed better for two classes, normal and abnormal, than for the six classes of the Bethesda system. The highest mAP, 83% over IoU thresholds of 0.50-0.95, is observed for the YOLOv5 model on the augmented dataset for binary classification.
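The quoted mAP averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05 (the COCO convention). A minimal sketch of the underlying IoU computation for axis-aligned boxes:

```python
# IoU for axis-aligned boxes in (x1, y1, x2, y2) format; a detection counts
# as a true positive at threshold t if IoU >= t, and mAP@[0.50:0.95] averages
# AP over t in np.arange(0.50, 1.00, 0.05).
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```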
Affiliation(s)
- Swati Shinde: Pimpri Chinchwad College of Engineering, Pune, India.
- Pankaj Wajire: Pimpri Chinchwad College of Engineering, Pune, India.
- Hemanth Jude: Karunya Institute of Technology and Sciences, India.
8. Zhao J, He YJ, Zhou SH, Qin J, Xie YN. CNSeg: A dataset for cervical nuclear segmentation. Comput Methods Programs Biomed 2023; 241:107732. PMID: 37544166. DOI: 10.1016/j.cmpb.2023.107732.
Abstract
BACKGROUND AND OBJECTIVE Nuclear segmentation in cervical cell images is a crucial technique for automatic cytopathology diagnosis. Experimental evaluation of nuclear segmentation methods with datasets is helpful in promoting the advancement of nuclear segmentation techniques. However, public datasets are not sufficient for a reasonable and comprehensive evaluation because of their limited quantity, single data source, and low segmentation difficulty. METHODS Therefore, we provide the largest dataset for cervical nuclear segmentation (CNSeg). It contains 124,000 annotated nuclei collected from 1,530 patients under different conditions. The image styles in this dataset cover most practical application scenarios, including microbial infection, cytopathic heterogeneity, overlapping nuclei, etc. To evaluate the performance of segmentation methods from different aspects, we divided the CNSeg dataset into three subsets, namely the patch segmentation dataset (PatchSeg) with nuclei images collected under complex conditions, the cluster segmentation dataset (ClusterSeg) with clustered nuclei, and the domain segmentation dataset (DomainSeg) with data from different domains. Furthermore, we propose a post-processing method that splits overlapping nuclei into single ones. RESULTS AND CONCLUSION Experiments show that our dataset can comprehensively evaluate cervical nuclear segmentation methods from different aspects. Guidelines for using the dataset are available at https://github.com/jingzhaohlj/AL-Net.
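A classic stand-in for the kind of post-processing described (splitting overlapping nuclei into single ones) is marker-based watershed on the distance transform of the predicted mask; the sketch below uses SciPy and scikit-image and is not the paper's AL-Net method:

```python
# Classic overlapping-nuclei splitting: distance transform + marker-based
# watershed. Illustrative only; the paper proposes its own post-processing.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_overlapping(mask: np.ndarray) -> np.ndarray:
    """Split touching nuclei in a binary mask into separately labeled ones."""
    distance = ndi.distance_transform_edt(mask)
    # Local maxima of the distance map act as one marker per nucleus.
    coords = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)

# Toy usage: two overlapping discs should come out as labels 1 and 2.
mask = np.zeros((60, 60), dtype=np.uint8)
rr, cc = np.ogrid[:60, :60]
mask[(rr - 30) ** 2 + (cc - 20) ** 2 <= 100] = 1
mask[(rr - 30) ** 2 + (cc - 38) ** 2 <= 100] = 1
print(np.unique(split_overlapping(mask)))  # [0 1 2]
```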
Affiliation(s)
- Jing Zhao: Northeast Forestry University, Mechanical and Electrical Engineering, Harbin 150006, China.
- Yong-Jun He: Harbin Institute of Technology, School of Computer Science, Harbin 150001, China.
- Shu-Hang Zhou: Wenzhou Business College, Information Engineering, Wenzhou 325035, China.
- Jian Qin: Harbin University of Science and Technology, School of Computer Science and Technology, No. 52 Xuefu Road, 150080 Harbin, China.
- Yi-Ning Xie: Northeast Forestry University, Mechanical and Electrical Engineering, Harbin 150006, China.
9. Khan A, Han S, Ilyas N, Lee YM, Lee B. CervixFormer: A multi-scale Swin transformer-based cervical Pap-smear WSI classification framework. Comput Methods Programs Biomed 2023; 240:107718. PMID: 37451230. DOI: 10.1016/j.cmpb.2023.107718.
Abstract
BACKGROUND AND OBJECTIVES Cervical cancer affects around 0.5 million women per year, resulting in over 0.3 million fatalities. Therefore, repetitive screening for cervical cancer is of utmost importance. Computer-assisted diagnosis is key for scaling up cervical cancer screening. Current recognition algorithms, however, perform poorly on whole-slide image (WSI) analysis, fail to generalize across different staining methods and uneven distributions of subtype imaging, and provide sub-optimal clinical-level interpretations. Herein, we developed CervixFormer, an end-to-end, multi-scale Swin transformer-based adversarial ensemble learning framework to assess pre-cancerous and cancer-specific cervical malignant lesions on WSIs. METHODS The proposed framework consists of (1) a self-attention generative adversarial network (SAGAN) for generating synthetic images during patch-level training to address the class imbalance problem; (2) a multi-scale transformer-based ensemble learning method for cell identification at various stages, including atypical squamous cells (ASC) and atypical squamous cells of undetermined significance (ASCUS), which have not been demonstrated in previous studies; and (3) a fusion model for concatenating ensemble-based results and producing the final outcome. RESULTS In the evaluation, the proposed method was first evaluated on a private dataset of 717 annotated samples from six classes, obtaining a high recall and precision of 0.940 and 0.934, respectively, in roughly 1.2 minutes. To further examine the generalizability of CervixFormer, we evaluated it on four independent, publicly available datasets, namely the CRIC cervix, Mendeley LBC, SIPaKMeD Pap Smear, and Cervix93 Extended Depth of Field image datasets. CervixFormer obtained better performance on two-, three-, four-, and six-class classification of smear- and cell-level datasets. For clinical interpretation, we used GradCAM to visualize a coarse localization map, highlighting important regions in the WSI. Notably, CervixFormer extracts features mostly from the cell nucleus and only partially from the cytoplasm. CONCLUSIONS CervixFormer outperforms the existing state-of-the-art benchmark methods in terms of recall, accuracy, and computing time.
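The coarse localization maps mentioned above can be reproduced in spirit with a minimal Grad-CAM implementation via forward and backward hooks; the sketch below uses a torchvision ResNet18 as a stand-in network, since CervixFormer itself is not reproduced here:

```python
# Minimal Grad-CAM sketch (PyTorch): weight the last conv stage's activations
# by the global-average-pooled gradients of the top-class logit.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # stand-in network, random weights
acts, grads = {}, {}
target = model.layer4  # last conv stage: coarse but semantically strong

target.register_forward_hook(lambda m, inp, out: acts.update(v=out))
target.register_full_backward_hook(lambda m, gin, gout: grads.update(v=gout[0]))

x = torch.rand(1, 3, 224, 224)
model(x)[0].max().backward()  # backpropagate the top-class logit

w = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))  # weighted activations
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
print(cam.shape)  # torch.Size([1, 1, 224, 224]): a heatmap over the input
```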
Affiliation(s)
- Anwar Khan: Center for Cancer Biology, Vlaams Instituut voor Biotechnologie (VIB), Belgium; Department of Oncology, Katholieke Universiteit (KU) Leuven, Belgium; Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea.
- Seunghyeon Han: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea.
- Naveed Ilyas: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea; Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, UAE.
- Yong-Moon Lee: Department of Pathology, College of Medicine, Dankook University, South Korea.
- Boreom Lee: Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea.
10. Ji J, Zhang W, Dong Y, Lin R, Geng Y, Hong L. Automated cervical cell segmentation using deep ensemble learning. BMC Med Imaging 2023; 23:137. PMID: 37735354. PMCID: PMC10514950. DOI: 10.1186/s12880-023-01096-1.
Abstract
BACKGROUND Cervical cell segmentation is a fundamental step in automated cervical cancer cytology screening. The aim of this study was to develop and evaluate a deep ensemble model for cervical cell segmentation, including both cytoplasm and nucleus segmentation. METHODS The Cx22 dataset was used to develop the automated cervical cell segmentation algorithm. U-Net, U-Net++, DeepLabV3, DeepLabV3Plus, TransUNet, and SegFormer were used as candidate model architectures, and each of the first four architectures adopted two different encoders chosen from ResNet34, ResNet50, and DenseNet121. Models were trained under two settings: trained from scratch, or with encoders initialized from ImageNet pre-trained models and all layers then fine-tuned. For every segmentation task, four models were chosen as base models, and unweighted averaging was adopted as the model ensemble method. RESULTS U-Net and U-Net++ with ResNet34 and DenseNet121 encoders trained using transfer learning consistently performed better than the other models, so they were chosen as base models. The ensemble model obtained a Dice similarity coefficient, sensitivity, and specificity of 0.9535 (95% CI: 0.9534-0.9536), 0.9621 (0.9619-0.9622), and 0.9835 (0.9834-0.9836) for cytoplasm segmentation, and 0.7863 (0.7851-0.7876), 0.9581 (0.9573-0.9590), and 0.9961 (0.9961-0.9962) for nucleus segmentation. The Dice, sensitivity, and specificity of the baseline models were 0.948, 0.954, and 0.9823 for cytoplasm segmentation and 0.750, 0.713, and 0.9988 for nucleus segmentation. Except for the specificity of cytoplasm segmentation, all metrics outperformed the best baseline models (P < 0.05) by a moderate margin. CONCLUSIONS The proposed algorithm achieved better performance on cervical cell segmentation than the baseline models and can potentially be used in automated cervical cancer cytology screening systems.
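The two ingredients named in the methods, the Dice similarity coefficient and unweighted averaging of member-model outputs, can be sketched directly; NumPy arrays stand in for the four base models' probability maps:

```python
# Sketch of the evaluation metric and the ensemble rule described above.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def ensemble(prob_maps, threshold: float = 0.5) -> np.ndarray:
    """Unweighted average of per-model foreground probabilities, then binarize."""
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)

# Toy usage: four base models' probability maps for one image.
rng = np.random.default_rng(0)
prob_maps = [rng.random((128, 128)) for _ in range(4)]
truth = (rng.random((128, 128)) > 0.5).astype(np.uint8)
print(dice(ensemble(prob_maps), truth))
```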
Affiliation(s)
- Jie Ji: Network & Information Center, Shantou University, Shantou, 515041, Guangdong, China.
- Weifeng Zhang: Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, 515041, China.
- Yuejiao Dong: Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China.
- Ruilin Lin: Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China.
- Yiqun Geng: Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, 515041, China.
- Liangli Hong: Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China.
11. de Oliveira JA, Souza MDC, Cunha LFD, Mota BEF, Rezende MT, Carneiro CM, Pereira MG, Mocaiber I, Souza GGL. Emotionally subjective reactivity to cervical cytology pictures is modulated by expertise. J Health Psychol 2023; 28:176-188. PMID: 35733410. DOI: 10.1177/13591053221106023.
Abstract
Our aims were to create a catalog of cytological pictures and to evaluate the valence (level of pleasantness/unpleasantness) and arousal (level of calm/excitement) of these pictures in individuals with different occupations. The sample consisted of medical and law college students and cytopathologists. Valence and arousal scores for general pictures were not modulated by expertise in cytology. However, students judged the cytological pictures to be lower in valence and arousal than the cytopathologists did. The cytopathologists classified cytological pictures with lesions as lower in valence and higher in arousal than cytological pictures without lesions.
12. Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. PMID: 36455333. DOI: 10.1016/j.media.2022.102691.
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images with computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications on cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
13. Zak J, Grzeszczyk MK, Pater A, Roszkowiak L, Siemion K, Korzynska A. Cell image augmentation for classification task using GANs on Pap smear dataset. Biocybern Biomed Eng 2022. DOI: 10.1016/j.bbe.2022.07.003.
14. Yaman O, Tuncer T. Exemplar pyramid deep feature extraction based cervical cancer image classification model using pap-smear images. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103428.
15. Li X, Xu Z, Shen X, Zhou Y, Xiao B, Li TQ. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr Oncol 2021; 28:3585-3601. PMID: 34590614. PMCID: PMC8482136. DOI: 10.3390/curroncol28050307.
Abstract
Cervical cancer is a worldwide public health problem with a high rate of illness and mortality among women. In this study, we proposed a novel framework based on the Faster RCNN-FPN architecture for the detection of abnormal cervical cells in cytology images from a cancer screening test. We extended the Faster RCNN-FPN model by infusing deformable convolution layers into the feature pyramid network (FPN) to improve scalability. Furthermore, we introduced a global context aware module alongside the Region Proposal Network (RPN) to enhance the spatial correlation between the background and the foreground. Extensive experiments with the proposed deformable and global context aware (DGCA) RCNN were carried out using the cervical image dataset of the "Digital Human Body" Vision Challenge from the Alibaba Cloud TianChi company. Performance evaluation based on mean average precision (mAP) and the receiver operating characteristic (ROC) curve demonstrated considerable advantages of the proposed framework. In particular, when combined with tagging of negative image samples using traditional computer-vision techniques, a 6-9% increase in mAP was achieved. The proposed DGCA-RCNN model has the potential to become a clinically useful AI tool for automated detection of cervical cancer cells in whole slide images of Pap smears.
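The deformable-convolution infusion described above can be sketched with torchvision's DeformConv2d, where a small convolution predicts per-location sampling offsets; the wiring below is a generic example, not the paper's exact detector:

```python
# Generic deformable-convolution block: a plain conv predicts (dx, dy) offsets
# for every kernel tap, and DeformConv2d samples the input at those offsets.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel tap, predicted per spatial location.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

block = DeformBlock(256, 256)          # e.g., one FPN lateral level
feat = torch.rand(1, 256, 50, 68)      # a feature map from the backbone
print(block(feat).shape)               # torch.Size([1, 256, 50, 68])
```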
Affiliation(s)
- Xia Li: Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China.
- Zhenhao Xu: Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China.
- Xi Shen: Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China.
- Yongxia Zhou: Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China.
- Binggang Xiao: Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China.
- Tie-Qiang Li: Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; Department of Clinical Science, Intervention and Technology, Karolinska Institutet, S-17177 Stockholm, Sweden; Department of Medical Radiation and Nuclear Medicine, Karolinska University Hospital, S-14186 Stockholm, Sweden.
16. Diniz DN, Rezende MT, Bianchi AGC, Carneiro CM, Luz EJS, Moreira GJP, Ushizima DM, de Medeiros FNS, Souza MJF. A Deep Learning Ensemble Method to Assist Cytopathologists in Pap Test Image Classification. J Imaging 2021; 7:111. PMID: 39080899. PMCID: PMC8321382. DOI: 10.3390/jimaging7070111.
Abstract
In recent years, deep learning methods have outperformed previous state-of-the-art machine learning techniques for several problems, including image classification. Classifying cells in Pap smear images is very challenging, and it is still of paramount importance for cytopathologists. The Pap test is a cervical cancer prevention test that tracks preneoplastic changes in cervical epithelial cells. Carrying out this exam is important because early detection is directly related to a greater chance of cure and to reducing the number of deaths caused by the disease. The analysis of Pap smears is exhaustive and repetitive, as it is performed manually by cytopathologists, so a tool that assists them is needed. This work considers 10 deep convolutional neural networks and proposes an ensemble of the three best architectures to classify cervical cancer based on cell nuclei and reduce the professionals' workload. The dataset used in the experiments is available in the Center for Recognition and Inspection of Cells (CRIC) Searchable Image Database. Considering the metrics of precision, recall, F1-score, accuracy, and sensitivity, the proposed ensemble improves on previous methods in the literature for two- and three-class classification. We also introduce the six-class classification outcome.
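An ensemble in the spirit described, averaging the softmax outputs of several CNN architectures, can be sketched as follows; the three torchvision models are stand-ins for the paper's three best of ten networks:

```python
# Sketch of a softmax-averaging ensemble of three CNN classifiers.
import torch
from torchvision.models import densenet121, efficientnet_b0, resnet50

NUM_CLASSES = 6  # two-, three-, and six-class settings appear in the paper
members = [resnet50(num_classes=NUM_CLASSES),
           densenet121(num_classes=NUM_CLASSES),
           efficientnet_b0(num_classes=NUM_CLASSES)]
for m in members:
    m.eval()  # inference mode for batch-norm/dropout layers

@torch.no_grad()
def ensemble_predict(batch):
    """Average the member models' softmax probabilities, then take argmax."""
    probs = torch.stack([m(batch).softmax(dim=1) for m in members]).mean(dim=0)
    return probs.argmax(dim=1)

print(ensemble_predict(torch.rand(4, 3, 224, 224)))  # four predicted labels
```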
Affiliation(s)
- Débora N. Diniz: Departamento de Computação, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.
- Mariana T. Rezende: Departamento de Análises Clínicas, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.
- Andrea G. C. Bianchi: Departamento de Computação, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.
- Claudia M. Carneiro: Departamento de Análises Clínicas, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.
- Eduardo J. S. Luz: Departamento de Computação, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.
- Gladston J. P. Moreira: Departamento de Computação, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.
- Daniela M. Ushizima: Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA; Bakar Computational Health Sciences Institute, University of California, San Francisco, CA 94143, USA.
- Fátima N. S. de Medeiros: Departamento de Engenharia de Teleinformática, Universidade Federal do Ceará (UFC), Fortaleza 60455-970, Brazil.
- Marcone J. F. Souza: Departamento de Computação, Universidade Federal de Ouro Preto (UFOP), Ouro Preto 35400-000, Brazil.