1
Yang X, Ding B, Qin J, Guo L, Zhao J, He Y. HVS-Unsup: Unsupervised cervical cell instance segmentation method based on human visual simulation. Comput Biol Med 2024;171:108147. [PMID: 38387385] [DOI: 10.1016/j.compbiomed.2024.108147]
Abstract
Instance segmentation plays an important role in the automatic diagnosis of cervical cancer. Although deep learning-based instance segmentation methods can achieve outstanding performance, they need large amounts of labeled data, which consumes considerable manpower and material resources. To solve this problem, we propose an unsupervised cervical cell instance segmentation method based on human visual simulation, named HVS-Unsup. Our method simulates the process of human cell recognition and incorporates prior knowledge of cervical cells. Specifically, we first utilize prior knowledge to generate three types of pseudo labels for cervical cells, transforming unsupervised instance segmentation into a supervised task. Second, we design a Nucleus Enhanced Module (NEM) and a Mask-Assisted Segmentation module (MAS) to address cell overlapping, adhesion, and even visually indistinguishable cases. NEM accurately locates nuclei via nuclei attention feature maps generated from point-level pseudo labels, and MAS reduces interference from impurities by updating the weights of the shallow network through the Dice loss. Next, we propose a Category-Wise droploss (CW-droploss) to reduce cell omissions in lower-contrast images. Finally, we employ an iterative self-training strategy to rectify mislabeled instances. Experimental results on our MS-cellSeg dataset and the public Cx22 and ISBI2015 datasets demonstrate that HVS-Unsup outperforms existing mainstream unsupervised cervical cell segmentation methods.
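The Dice loss that MAS uses to update the shallow network can be illustrated with a minimal numpy sketch of a soft Dice loss on binary masks; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice coefficient.

    pred:   predicted foreground probabilities in [0, 1]
    target: binary (pseudo-)label mask of the same shape
    """
    pred = pred.ravel()
    target = target.ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice

# A perfect prediction gives a loss near 0; a disjoint one a loss near 1.
perfect = dice_loss(np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]))
disjoint = dice_loss(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

Because the loss is driven by mask overlap rather than per-pixel counts, regions that match the pseudo labels dominate the gradient, which is one reason overlap losses are popular for suppressing impurity responses.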
Affiliation(s)
- Xiaona Yang
- Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Bo Ding
- Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Jian Qin
- Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Luyao Guo
- Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Jing Zhao
- Northeast Forestry University, School of Mechanical and Electrical Engineering, Harbin, 150040, China
- Yongjun He
- Harbin Institute of Technology, School of Computer Science and Technology, Harbin, 150001, China
2
Cervical Cell Segmentation Method Based on Global Dependency and Local Attention. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157742]
Abstract
The refined segmentation of nuclei and cytoplasm is the most challenging task in the automation of cervical cell screening. The U-shaped network structure has demonstrated great superiority in the field of biomedical imaging. However, the classical U-Net cannot effectively utilize mixed-domain and contextual information, and fails to achieve satisfactory results in this task. To address these problems, this study proposes a module based on global dependency and local attention (GDLA) for contextual-information modeling and feature refinement. It consists of three components computed in parallel: a global dependency module, a spatial attention module, and a channel attention module. The global dependency module models global contextual information to capture prior knowledge of cervical cells, such as the positional dependence of the nuclei and cytoplasm and the closure and uniqueness of the nuclei. The spatial attention module combines contextual information to extract cell boundary information and refine target boundaries. The channel and spatial attention modules adapt to the input information, making it easier to identify subtle but dominant differences between similar objects. Comparative and ablation experiments on the Herlev dataset demonstrate the effectiveness of the proposed method, which surpasses the most popular existing channel attention, hybrid attention, and context networks on nuclei and cytoplasm segmentation metrics, achieving better segmentation performance than most previous advanced methods.
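The channel and spatial attention components described above can be sketched in simplified numpy form, roughly following squeeze-and-excitation-style gating; the learned convolutions are replaced by random weights and pooled statistics purely for illustration, so this is a sketch of the general mechanism rather than the GDLA module itself:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) stand in for learned FC weights."""
    squeeze = feat.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # per-channel gates in (0, 1)
    return feat * excite[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by pooled channel statistics."""
    avg = feat.mean(axis=0)        # (H, W)
    mx = feat.max(axis=0)          # (H, W)
    gate = sigmoid(avg + mx)       # stand-in for a learned conv over [avg; max]
    return feat * gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = spatial_attention(channel_attention(feat, rng.standard_normal((2, 8)),
                                          rng.standard_normal((8, 2))))
```

Both branches only rescale the input feature map, which is why such modules can be dropped into a U-Net encoder or decoder without changing tensor shapes.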
3
Zhao Y, Fu C, Xu S, Cao L, Ma HF. LFANet: Lightweight feature attention network for abnormal cell segmentation in cervical cytology images. Comput Biol Med 2022;145:105500. [PMID: 35421793] [DOI: 10.1016/j.compbiomed.2022.105500]
Abstract
With computer-aided diagnosis techniques widely applied in cervical cancer screening, cell segmentation has become a necessary step in determining the progression of cervical cancer. Traditional manual methods alleviate the shortage of medical resources to a certain extent, but their low segmentation accuracy for abnormal cells and complex workflow prevent fully automatic diagnosis. Deep learning methods can automatically extract image features with high accuracy and small error, making artificial intelligence increasingly popular in computer-aided diagnosis; however, complicated models with many redundant parameters are ill-suited to clinical practice. To address these problems, this study proposes a lightweight feature attention network (LFANet) that extracts differentially abundant feature information from objects at various resolutions. The model can accurately segment both nucleus and cytoplasm regions in cervical images. Specifically, a lightweight feature extraction module, combining depth-wise separable convolution, residual connections, and an attention mechanism, is designed as an encoder to extract abundant features from input images. In addition, a feature-layer attention module is added to precisely recover pixel locations, employing global high-level information as a guide for low-level features and capturing dependencies among channel features. Finally, LFANet is evaluated on four independent datasets. The experimental results demonstrate that, compared with other advanced methods, the proposed network achieves state-of-the-art performance with low computational complexity.
Affiliation(s)
- Yanli Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; School of Electrical Information Engineering, Ningxia Institute of Technology, Shizuishan, 753000, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China
- Sen Xu
- General Hospital of Northern Theatre Command, Shenyang, 110016, China
- Lin Cao
- School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing, 100101, China
- Hong-Feng Ma
- Dopamine Group Ltd., Auckland, 1542, New Zealand
4
Prabhu S, Prasad K, Robels-Kelly A, Lu X. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput Biol Med 2022;142:105209. [DOI: 10.1016/j.compbiomed.2022.105209]
5
Classification of cervical cells leveraging simultaneous super-resolution and ordinal regression. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108208]
6
Li X, Xu Z, Shen X, Zhou Y, Xiao B, Li TQ. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr Oncol 2021;28:3585-3601. [PMID: 34590614] [PMCID: PMC8482136] [DOI: 10.3390/curroncol28050307]
Abstract
Cervical cancer is a worldwide public health problem with high rates of illness and mortality among women. In this study, we propose a novel framework based on the Faster RCNN-FPN architecture for detecting abnormal cervical cells in cytology images from a cancer screening test. We extend the Faster RCNN-FPN model by infusing deformable convolution layers into the feature pyramid network (FPN) to improve scalability. Furthermore, we introduce a global context aware module alongside the Region Proposal Network (RPN) to enhance the spatial correlation between background and foreground. Extensive experiments with the proposed deformable and global context aware (DGCA) RCNN were carried out on the cervical image dataset of the "Digital Human Body" Vision Challenge from Alibaba Cloud TianChi. Performance evaluation based on mean average precision (mAP) and the receiver operating characteristic (ROC) curve demonstrates considerable advantages of the proposed framework. In particular, when combined with tagging of negative image samples using traditional computer-vision techniques, a 6-9% increase in mAP was achieved. The proposed DGCA-RCNN model has the potential to become a clinically useful AI tool for automated detection of cervical cancer cells in whole-slide images of Pap smears.
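The mAP figure quoted above rests on per-class average precision. A minimal sketch, assuming detections have already been matched to ground truth as 0/1 labels (the scores below are made-up numbers):

```python
import numpy as np

def average_precision(scores, labels):
    """AP as area under the precision-recall curve, computed from
    detection confidence scores and 0/1 ground-truth match labels."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels, dtype=float)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / labels.sum()
    # Accumulate precision over each step in recall (all-point form).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1])
```

mAP is then the mean of this quantity over classes; detector benchmarks differ mainly in the IoU threshold used for the 0/1 matching step.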
Affiliation(s)
- Xia Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Zhenhao Xu
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Xi Shen
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Yongxia Zhou
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Binggang Xiao
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Tie-Qiang Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, S-17177 Stockholm, Sweden
- Department of Medical Radiation and Nuclear Medicine, Karolinska University Hospital, S-14186 Stockholm, Sweden
7
Guo Y, Bi L, Zhu Z, Feng DD, Zhang R, Wang Q, Kim J. Automatic left ventricular cavity segmentation via deep spatial sequential network in 4D computed tomography. Comput Med Imaging Graph 2021;91:101952. [PMID: 34144318] [DOI: 10.1016/j.compmedimag.2021.101952]
Abstract
Automated segmentation of left ventricular cavity (LVC) in temporal cardiac image sequences (consisting of multiple time-points) is a fundamental requirement for quantitative analysis of cardiac structural and functional changes. Deep learning methods for segmentation are the state-of-the-art in performance; however, these methods are generally formulated to work on a single time-point, and thus disregard the complementary information available from the temporal image sequences that can aid in segmentation accuracy and consistency across the time-points. In particular, single time-point segmentation methods perform poorly in segmenting the end-systole (ES) phase image in the cardiac sequence, where the left ventricle deforms to the smallest irregular shape, and the boundary between the blood chamber and the myocardium becomes inconspicuous and ambiguous. To overcome these limitations in automatically segmenting temporal LVCs, we present a spatial sequential network (SS-Net) to learn the deformation and motion characteristics of the LVCs in an unsupervised manner; these characteristics are then integrated with sequential context information derived from bi-directional learning (BL) where both chronological and reverse-chronological directions of the image sequence are used. Our experimental results on a cardiac computed tomography (CT) dataset demonstrate that our spatial-sequential network with bi-directional learning (SS-BL-Net) outperforms existing methods for spatiotemporal LVC segmentation.
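The bi-directional learning idea, combining a chronological and a reverse-chronological pass over the sequence, can be caricatured with a toy numpy sketch in which exponential smoothing stands in for the learned sequential model (an assumption for illustration only, not the SS-BL-Net architecture):

```python
import numpy as np

def smooth(seq, alpha=0.5):
    """Causal exponential smoothing along the time axis of a (T, D) sequence."""
    out = np.empty_like(seq)
    state = seq[0]
    for t, frame in enumerate(seq):
        state = alpha * frame + (1.0 - alpha) * state
        out[t] = state
    return out

def bidirectional_context(seq, alpha=0.5):
    """Average a chronological and a reverse-chronological pass, so each
    time-point sees context from both directions of the cardiac cycle."""
    fwd = smooth(seq, alpha)
    bwd = smooth(seq[::-1], alpha)[::-1]
    return 0.5 * (fwd + bwd)

# A palindromic toy sequence of 1-D "features" per time-point.
seq = np.array([[0.0], [1.0], [1.0], [0.0]])
ctx = bidirectional_context(seq)
```

The point of the two passes is that time-points near the hard ES phase receive context from both earlier and later frames, rather than only from one side.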
Affiliation(s)
- Yuyu Guo
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; School of Computer Science, University of Sydney, NSW 2006, Australia
- Lei Bi
- School of Computer Science, University of Sydney, NSW 2006, Australia
- Zhengbin Zhu
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- David Dagan Feng
- School of Computer Science, University of Sydney, NSW 2006, Australia
- Ruiyan Zhang
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Qian Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Jinman Kim
- School of Computer Science, University of Sydney, NSW 2006, Australia
8
Cric searchable image database as a public platform for conventional pap smear cytology data. Sci Data 2021;8:151. [PMID: 34112812] [PMCID: PMC8192784] [DOI: 10.1038/s41597-021-00933-8]
Abstract
Amidst the current health crisis and social distancing, telemedicine has become an important part of mainstream healthcare, and building and deploying computational tools to support screening more efficiently is an increasing medical priority. The early identification of cervical cancer precursor lesions by the Pap smear test can identify candidates for subsequent treatment. However, one of the main challenges is the accuracy of the conventional method, which is often subject to high rates of false negatives. While machine learning has been highlighted as a way to reduce the limitations of the test, the absence of high-quality curated datasets has prevented the development of strategies to improve cervical cancer screening. The Center for Recognition and Inspection of Cells (CRIC) platform enables the creation of the CRIC Cervix collection, currently comprising 400 images (1,376 × 1,020 pixels) curated from conventional Pap smears, with manual classification of 11,534 cells. This collection has the potential to advance current efforts in training and testing machine learning algorithms for automating tasks in the routine cytopathological work of laboratories.
9
Victória Matias A, Atkinson Amorim JG, Buschetto Macarini LA, Cerentini A, Casimiro Onofre AS, De Miranda Onofre FB, Daltoé FP, Stemmer MR, von Wangenheim A. What is the state of the art of computer vision-assisted cytology? A Systematic Literature Review. Comput Med Imaging Graph 2021;91:101934. [PMID: 34174544] [DOI: 10.1016/j.compmedimag.2021.101934]
Abstract
Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Cells are harvested from tissues by aspiration or scraping, and the analysis is still predominantly performed manually by medical or laboratory professionals extensively trained for this purpose. It is a time-consuming and repetitive process in which many diagnostic criteria are subjective and vulnerable to human interpretation. Computer vision technologies, by automatically generating quantitative and objective descriptions of examinations' contents, can help minimize the chances of misdiagnosis and shorten the time required for analysis. To identify the state of the art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review, searching for approaches for the segmentation, detection, quantification, and classification of cells and organelles using computer vision on cytology slides. We analyzed papers published in the last 4 years. The initial search was executed in September 2020 and resulted in 431 articles. After applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems present in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the used datasets and computer code. Among the analyzed works, deep learning-based methods appear in 70 papers, while 101 papers employ classic computer vision only. The most recurrent metric for classification and object detection was accuracy (33 and 5 papers, respectively), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed (130 papers), followed by H&E (20 papers) and Feulgen (5 papers).
Twelve of the datasets used in the papers are publicly available, with the DTU/Herlev dataset being the most used one. We conclude that there is still a lack of high-quality datasets for many types of stains and that most of the works are not mature enough to be applied in a daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice.
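The review's finding that segmentation papers favor the Dice Similarity Coefficient over accuracy is easy to motivate: on masks dominated by background, accuracy stays high even when the object is missed entirely. A small numpy illustration with a made-up mask:

```python
import numpy as np

def accuracy(pred, target):
    """Fraction of pixels labeled correctly."""
    return (pred == target).mean()

def dice(pred, target, eps=1e-6):
    """Dice Similarity Coefficient between binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# A 10x10 mask with a 2x2 foreground object the model misses entirely.
target = np.zeros((10, 10), dtype=bool)
target[:2, :2] = True
pred = np.zeros((10, 10), dtype=bool)  # predicts all background

acc = accuracy(pred, target)  # high, despite total failure on the object
dsc = dice(pred, target)      # near zero: exposes the missed object
```

Because Dice ignores the true-negative background, it reflects how well the foreground object itself was recovered.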
Affiliation(s)
- André Victória Matias
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil
- Allan Cerentini
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil
- Felipe Perozzo Daltoé
- Department of Pathology, Federal University of Santa Catarina, Florianópolis, Brazil
- Marcelo Ricardo Stemmer
- Automation and Systems Department, Federal University of Santa Catarina, Florianópolis, Brazil
- Aldo von Wangenheim
- Brazilian Institute for Digital Convergence, Federal University of Santa Catarina, Florianópolis, Brazil
10
Liang Y, Tang Z, Yan M, Chen J, Liu Q, Xiang Y. Comparison detector for cervical cell/clumps detection in the limited data scenario. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.006]
11
Improving Computer-Aided Cervical Cells Classification Using Transfer Learning Based Snapshot Ensemble. Appl Sci (Basel) 2020. [DOI: 10.3390/app10207292]
Abstract
Cervical cell classification is a crucial component of computer-aided cervical cancer detection. Fine-grained classification is of great clinical importance when guiding decisions on diagnosis and treatment, yet it remains very challenging. Recently, convolutional neural networks (CNNs) have provided a novel way to classify cervical cells using automatically learned features. Although an ensemble of CNN models can increase model diversity and potentially boost classification accuracy, it is a multi-step process: several CNN models must be trained separately and then selected for the ensemble. On the other hand, with small training samples, the advantages of powerful CNN models may not be effectively leveraged. To address this challenge, this paper proposes a transfer learning based snapshot ensemble (TLSE) method that integrates snapshot ensemble learning with transfer learning in a unified and coordinated way. Snapshot ensembling provides ensemble benefits within a single model-training procedure, while transfer learning addresses the small-sample problem in cervical cell classification. Furthermore, a new training strategy is proposed to guarantee their combination. The TLSE method is evaluated on the Herlev pap-smear dataset and shown to outperform existing methods, demonstrating that TLSE can improve accuracy in an ensemble manner with only a single training process for fine-grained cervical cell classification on small samples.
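Snapshot ensembling obtains several models from one training run by cycling a cosine-annealed learning rate: the rate restarts high at the start of each cycle and decays to near zero, where a snapshot is saved. A sketch of the schedule (cycle length and peak rate below are hypothetical, not values from the paper):

```python
import math

def snapshot_lr(step, steps_per_cycle, lr_max):
    """Cyclic cosine-annealed learning rate for snapshot ensembling:
    restarts at lr_max each cycle and decays toward 0, where a model
    snapshot is saved for the ensemble."""
    t = step % steps_per_cycle
    return lr_max / 2.0 * (math.cos(math.pi * t / steps_per_cycle) + 1.0)

# Three cycles of 100 steps: snapshots would be saved near steps 99, 199, 299.
rates = [snapshot_lr(s, 100, 0.1) for s in range(300)]
```

Each restart kicks the model out of its current minimum, so the saved snapshots tend to be diverse enough to ensemble usefully.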
12
Polar coordinate sampling-based segmentation of overlapping cervical cells using attention U-Net and random walk. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.036]
13
Rajarao C, Singh RP. Improved normalized graph cut with generalized data for enhanced segmentation in cervical cancer detection. Evol Intell 2020. [DOI: 10.1007/s12065-019-00226-5]
14
Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.06.086]
15
Conceição T, Braga C, Rosado L, Vasconcelos MJM. A Review of Computational Methods for Cervical Cells Segmentation and Abnormality Classification. Int J Mol Sci 2019;20:E5114. [PMID: 31618951] [PMCID: PMC6834130] [DOI: 10.3390/ijms20205114]
Abstract
Cervical cancer is one of the most common cancers in women worldwide, affecting around 570,000 new patients each year. Although there have been great improvements over the years, current screening procedures can still suffer from long, tedious workflows and ambiguities. The increasing interest in developing computer-aided solutions for cervical cancer screening aims to address these common practical difficulties, which are especially frequent in the low-income countries where most deaths caused by cervical cancer occur. This review first introduces an overview of the disease and its current screening procedures, then presents an in-depth analysis of the most relevant computational methods available in the literature for cervical cell analysis. In particular, this work focuses on automated quality assessment, segmentation, and classification, including an extensive literature review and critical discussion. Since the major goal of this timely review is to support the development of new automated tools that can facilitate cervical screening procedures, it also provides some considerations regarding the next generation of computer-aided diagnosis systems and future research directions.
Affiliation(s)
- Luís Rosado
- Fraunhofer Portugal AICOS, 4200-135 Porto, Portugal
16
Sarwar A, Sheikh AA, Manhas J, Sharma V. Segmentation of cervical cells for automated screening of cervical cancer: a review. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09735-2]
17
Zhang L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: Deep Convolutional Networks for Cervical Cell Classification. IEEE J Biomed Health Inform 2017;21:1633-1643. [PMID: 28541229] [DOI: 10.1109/jbhi.2017.2705583]
Abstract
Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell-imaging-based cancer detection tool, in which cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on accurate cell segmentation. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are built only upon hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99), and especially specificity (98.3%) when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross-validation. Similarly superior performance is achieved on the HEMLBC (H&E-stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
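The test-phase aggregation step, averaging prediction scores over resampled patches of the same cell, can be sketched as follows; the probabilities are made-up numbers for illustration:

```python
import numpy as np

def aggregate_patch_scores(patch_scores):
    """Average per-patch class probabilities into one cell-level
    prediction, as in test-time aggregation over resampled patches.
    Returns (mean class scores, predicted class index)."""
    patch_scores = np.asarray(patch_scores)
    mean_scores = patch_scores.mean(axis=0)
    return mean_scores, int(mean_scores.argmax())

# Five patches' [normal, abnormal] probabilities for one cell image.
patches = [[0.2, 0.8], [0.4, 0.6], [0.1, 0.9], [0.6, 0.4], [0.3, 0.7]]
scores, label = aggregate_patch_scores(patches)
```

Averaging over patches smooths out per-patch noise from the coarse nucleus centering, which is part of why the aggregated prediction is more robust than any single patch's score.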