1. Li J, Hu P, Gao H, Shen N, Hua K. Classification of cervical lesions based on multimodal features fusion. Comput Biol Med 2024;177:108589. [PMID: 38781641] [DOI: 10.1016/j.compbiomed.2024.108589]
Abstract
Cervical cancer is a severe threat to women's health worldwide, with a long precancerous cycle and a clear etiology, making early screening vital for prevention and treatment. Based on the dataset provided by the Obstetrics and Gynecology Hospital of Fudan University, a four-category classification model for cervical lesions, covering Normal, low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL) and cancer (Ca), is developed. Considering the dataset characteristics, and to fully utilize the research data while ensuring an adequate dataset size, the model inputs include original and acetic colposcopy images, lesion segmentation masks, human papillomavirus (HPV) status, ThinPrep cytologic test (TCT) results and age, but exclude iodine images, whose lesion regions overlap substantially with those seen in acetic images. Firstly, the change information between original and acetic images is introduced by calculating the acetowhite opacity, mining the correlation between acetowhite thickness and lesion grade. Secondly, the lesion segmentation masks are used to inject prior knowledge of lesion location and shape into the classification model. Lastly, a cross-modal feature fusion module based on the self-attention mechanism fuses image information with clinical text information, revealing the correlations between features. On the dataset used in this study, the proposed model is comprehensively compared with five strong models from the past three years, showing superior classification performance and a better balance between performance and complexity. Module ablation experiments further show that each proposed module independently improves model performance.
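The cross-modal fusion step described above can be illustrated with a minimal sketch. This is not the paper's implementation; the single-head attention form, the names, and the dimensions are illustrative assumptions (the actual module is trained end-to-end inside a larger network):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fusion(img_tokens, text_tokens):
    """Image features attend to clinical-text features via scaled
    dot-product attention; the attended text context is then
    concatenated onto each image token."""
    d = img_tokens.shape[-1]
    scores = img_tokens @ text_tokens.T / np.sqrt(d)   # (n_img, n_text)
    weights = softmax(scores, axis=-1)                 # attention over text tokens
    context = weights @ text_tokens                    # (n_img, d)
    return np.concatenate([img_tokens, context], axis=-1)

# toy example: 4 image tokens and 3 text tokens, both 8-dimensional
rng = np.random.default_rng(0)
fused = cross_modal_fusion(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)))
```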
Affiliation(s)
- Jing Li
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Peng Hu
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Huayu Gao
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Nanyan Shen
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Keqin Hua
- Obstetrics and Gynecology Hospital of Fudan University, Shanghai, 200011, China.
2. Jena L, Behera SK, Dash S, Sethy PK. Deep feature extraction and fine k-nearest neighbour for enhanced human papillomavirus detection in cervical cancer - a comprehensive analysis of colposcopy images. Contemp Oncol (Pozn) 2024;28:37-44. [PMID: 38800533] [PMCID: PMC11117158] [DOI: 10.5114/wo.2024.139091]
Abstract
Introduction: This study introduces a novel methodology for classifying human papillomavirus (HPV) status from colposcopy images, focusing on its potential in diagnosing cervical cancer, the second most prevalent malignancy among women globally. Addressing a crucial gap in the literature, this study highlights the unexplored territory of HPV-based colposcopy image diagnosis for cervical cancer. Colposcopy screening is particularly suitable for underdeveloped and low-income regions owing to its small, cost-effective setup, which eliminates the need for biopsy specimens. The methodological framework includes robust dataset augmentation and feature extraction using the EfficientNetB0 architecture. Material and methods: The optimal convolutional neural network model was selected through experimentation with 19 architectures, and fine-tuning with the fine k-nearest neighbour algorithm enhanced the classification precision, enabling detailed distinctions with a single neighbour. Results: The proposed methodology achieved outstanding results, with a validation accuracy of 99.9% and an area under the curve (AUC) of 99.86%, and robust performance on test data: 91.4% accuracy and an AUC of 91.76%. These findings underscore the effectiveness of the integrated approach, which offers a highly accurate and reliable system for HPV classification. Conclusions: This research sets the stage for advancements in medical imaging applications, prompting future refinement and validation in diverse clinical settings.
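The "fine k-nearest neighbour" classifier described above reduces, at its core, to 1-NN over deep feature vectors. A minimal sketch under that assumption, with toy features standing in for EfficientNetB0 embeddings:

```python
import numpy as np

def one_nn_predict(train_feats, train_labels, query_feats):
    """'Fine' k-NN with k = 1: each query image (represented by its deep
    feature vector) takes the label of the closest training vector."""
    # squared Euclidean distances, shape (n_query, n_train)
    d2 = ((query_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=-1)
    return train_labels[d2.argmin(axis=1)]

# toy 2-D features standing in for deep embeddings
train = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
preds = one_nn_predict(train, labels, np.array([[0.1, -0.1], [0.9, 1.2]]))
# preds -> array([0, 1])
```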
Affiliation(s)
- Lipsarani Jena
- Veer Surendra Sai University of Technology, Burla, India
- GITA Autonomous College, Bhubaneswar, India
- Prabira Kumar Sethy
- Sambalpur University, India
- Guru Ghasidas Vishwavidyalaya, Bilaspur, C.G., India
3. Civit-Masot J, Luna-Perejon F, Muñoz-Saavedra L, Domínguez-Morales M, Civit A. A lightweight xAI approach to cervical cancer classification. Med Biol Eng Comput 2024. [PMID: 38507122] [DOI: 10.1007/s11517-024-03063-6]
Abstract
Cervical cancer is caused in the vast majority of cases by the human papillomavirus (HPV), transmitted through sexual contact, and requires a specific molecular-based analysis to be detected. Although an HPV vaccine is available, the incidence of cervical cancer is up to ten times higher in areas without adequate healthcare resources. In recent years, liquid cytology has been used to overcome these shortcomings and perform mass screening. In addition, classifiers based on convolutional neural networks can be developed to help pathologists diagnose the disease. However, these systems always require final verification by a pathologist to reach a diagnosis. For this reason, explainable AI techniques are required to highlight the most significant data to the healthcare professional: they can be used to gauge confidence in the results and to show which areas of the image were used for classification, allowing the professional to mark the areas he/she considers most important and cross-check them against those detected by the system in order to build incremental learning systems. In this work, a 4-phase optimization process is used to obtain a custom deep-learning classifier that distinguishes between 4 severity classes of cervical cancer in liquid-cytology images. The final classifier achieves an accuracy over 97% for 4 classes and 100% for 2 classes, with execution times under 1 s (including final report generation). Compared to previous works, the proposed classifier obtains better accuracy at a lower computational cost.
Affiliation(s)
- Javier Civit-Masot
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Francisco Luna-Perejon
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Luis Muñoz-Saavedra
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Manuel Domínguez-Morales
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Computer Engineering Research Institute, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Anton Civit
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Computer Engineering Research Institute, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
4. Artificial Intelligence-Based Cervical Cancer Screening on Images Taken during Visual Inspection with Acetic Acid: A Systematic Review. Diagnostics (Basel) 2023;13:836. [PMID: 36899979] [PMCID: PMC10001377] [DOI: 10.3390/diagnostics13050836]
Abstract
Visual inspection with acetic acid (VIA) is one of the methods recommended by the World Health Organization for cervical cancer screening. VIA is simple and low-cost; however, it is highly subjective. We conducted a systematic literature search in PubMed, Google Scholar and Scopus to identify automated algorithms for classifying images taken during VIA as negative (healthy/benign) or precancerous/cancerous. Of the 2608 studies identified, 11 met the inclusion criteria. The algorithm with the highest accuracy in each study was selected, and some of its key features were analyzed. The algorithms were compared in terms of sensitivity and specificity, which ranged from 0.22 to 0.93 and from 0.67 to 0.95, respectively. The quality and risk of bias of each study were assessed following the QUADAS-2 guidelines. Artificial-intelligence-based cervical cancer screening algorithms have the potential to become a key tool for supporting cervical cancer screening, especially in settings that lack healthcare infrastructure and trained personnel. The presented studies, however, assess their algorithms on small datasets of highly selected images that do not reflect whole screened populations. Large-scale testing under real conditions is required to assess the feasibility of integrating these algorithms into clinical settings.
5. Pedestrian gender classification on imbalanced and small sample datasets using deep and traditional features. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08331-4]
6. Dash S, Sethy PK, Behera SK. Cervical Transformation Zone Segmentation and Classification based on Improved Inception-ResNet-V2 Using Colposcopy Images. Cancer Inform 2023;22:11769351231161477. [PMID: 37008072] [PMCID: PMC10064461] [DOI: 10.1177/11769351231161477]
Abstract
Cervical cancer is the second most frequent malignancy in women worldwide. In the transformation (transitional) zone, a region of the cervix, columnar cells are continuously converting into squamous cells, and this zone of transforming cells is the most typical location on the cervix for the development of aberrant cells. This article proposes a 2-phase method that segments and then classifies the transformation zone to identify the type of cervical lesion. In the initial stage, the transformation zone is segmented from the colposcopy images. The segmented images are then subjected to augmentation and identified with an improved Inception-ResNet-v2. Here, a multi-scale feature fusion framework that utilizes the 3 × 3 convolution kernels from Reduction-A and Reduction-B of Inception-ResNet-v2 is introduced. The features extracted from Reduction-A and Reduction-B are concatenated and fed to an SVM for classification. This way, the model combines the benefits of residual networks and Inception convolutions, increasing network width and mitigating the training difficulty of deep networks. Thanks to the multi-scale feature fusion, the network can extract contextual information at several scales, which increases accuracy. The experimental results reveal 81.24% accuracy, 81.24% sensitivity, 90.62% specificity, 87.52% precision, 9.38% FPR, 81.68% F1 score, 75.27% MCC, and 57.79% Kappa coefficient.
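The multi-scale concatenation described above can be sketched as global pooling of two feature maps followed by concatenation. This is a sketch, not the authors' exact pipeline; the spatial and channel sizes below follow the standard Inception-ResNet-v2 Reduction-A and Reduction-B outputs and are assumptions about the exact tensors used:

```python
import numpy as np

def multiscale_descriptor(feat_a, feat_b):
    """Global-average-pool two feature maps taken at different depths of
    the backbone, then concatenate them into one descriptor that can be
    fed to an SVM classifier."""
    return np.concatenate([feat_a.mean(axis=(0, 1)), feat_b.mean(axis=(0, 1))])

# assumed shapes: Reduction-A output 17x17x1088, Reduction-B output 8x8x2080
desc = multiscale_descriptor(np.ones((17, 17, 1088)), np.ones((8, 8, 2080)))
# desc is a 1088 + 2080 = 3168-dimensional feature vector
```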
Affiliation(s)
- Srikanta Dash
- Department of Electronics, Sambalpur University, Sambalpur, Odisha, India
- Prabira Kumar Sethy
- Department of Electronics, Sambalpur University, Jyoti Vihar, Sambalpur, Odisha 768019, India
7. Kim M, Park SK, Kubota Y, Lee S, Park K, Kong DS. Applying a deep convolutional neural network to monitor the lateral spread response during microvascular surgery for hemifacial spasm. PLoS One 2022;17:e0276378. [PMID: 36322573] [PMCID: PMC9629649] [DOI: 10.1371/journal.pone.0276378]
Abstract
BACKGROUND: Intraoperative neurophysiological monitoring is essential in neurosurgical procedures. In this study, we built and evaluated a deep neural network that differentiates between the presence and absence of a lateral spread response, which provides critical information during microvascular decompression surgery for hemifacial spasm, using intraoperatively acquired electromyography images. METHODS AND FINDINGS: A total of 3,674 image screenshots of monitoring devices from 50 patients were prepared, preprocessed, and split into training and validation sets. A deep neural network was constructed using current-standard, off-the-shelf tools. The network correctly differentiated 50 test images (accuracy, 100%; area under the curve, 0.96) collected from 25 patients whose data were never exposed to the network during training or validation. Its accuracy was equivalent to that of neuromonitoring technologists (p = 0.3013) and higher than that of neurosurgeons experienced in hemifacial spasm (p < 0.0001). Heatmaps highlighting the key regions of interest reached a level similar to that of trained human professionals. Provisional clinical application showed that the network is well suited as an auxiliary tool. CONCLUSIONS: A deep neural network trained on intraoperatively collected electromyography data classified the presence and absence of the lateral spread response with performance equivalent to human professionals. Well-designed applications built on the network may provide useful auxiliary tools for surgical teams during operations.
Affiliation(s)
- Minsoo Kim
- Department of Neurosurgery, Gangneung Asan Hospital, Gangneung, Korea
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Department of Medicine, Graduate School, Yonsei University College of Medicine, Seoul, Korea
- Sang-Ku Park
- Department of Neurosurgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Seunghoon Lee
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Kwan Park
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Department of Neurosurgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Doo-Sik Kong
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
8. Raimond K, Rao GB, Juliet S, Tamilarasi SRG, Evangelin PS, Mathew L. An emerging paradigms on cervical cancer screening methods and devices for clinical trails. Front Public Health 2022;10:1030304. [PMID: 36388384] [PMCID: PMC9651910] [DOI: 10.3389/fpubh.2022.1030304]
Affiliation(s)
- Kumudha Raimond
- Department of Computer Science Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- Gadudasu Babu Rao
- Department of Mechanical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- Sujitha Juliet
- Department of Computer Science Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- S. Rubeena Grace Tamilarasi
- Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- P. S. Evangelin
- Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- Limson Mathew
- Department of Mechanical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
9. Skerrett E, Miao Z, Asiedu MN, Richards M, Crouch B, Sapiro G, Qiu Q, Ramanujam N. Multicontrast Pocket Colposcopy Cervical Cancer Diagnostic Algorithm for Referral Populations. BME Front 2022;2022:9823184. [PMID: 37850189] [PMCID: PMC10521679] [DOI: 10.34133/2022/9823184]
Abstract
Objective and Impact Statement: We use deep learning models to classify cervix images, collected with a low-cost, portable Pocket colposcope, with biopsy-confirmed high-grade precancer and cancer. We boost classification performance on a screened-positive population by using a class-balanced loss and incorporating green-light colposcopy image pairs, which come at no additional cost to the provider. Introduction: Because the majority of the 300,000 annual deaths due to cervical cancer occur in countries with low or medium Human Development Indices, an automated classification algorithm could overcome limitations caused by the scarcity of trained professionals and the variability in providers' visual interpretations. Methods: Our dataset consists of cervical images (n = 1,760) from 880 patient visits. After optimizing the network architecture and incorporating a weighted loss function, we explore two methods of incorporating green-light image pairs into the network to boost the classification performance and sensitivity of our model on a test set. Results: We achieve an area under the receiver operating characteristic curve, sensitivity, and specificity of 0.87, 75%, and 88%, respectively. Adding the class-balanced loss and green-light cervical contrast to a ResNet-18 backbone yields a 2.5-fold improvement in sensitivity. Conclusion: Our methodology, which has already been tested on a prescreened population, can boost classification performance and, in the future, be coupled with Pap smear or HPV triaging, thereby broadening access to early detection of precursor lesions before they advance to cancer.
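A class-balanced (weighted) cross-entropy of the kind the abstract mentions can be sketched as follows; the weighting scheme and the 4:1 weight values are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def class_balanced_ce(probs, labels, class_weights):
    """Cross-entropy where each sample's loss is scaled by its class
    weight, so the rarer (positive) class contributes more per sample."""
    w = class_weights[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels])  # per-sample NLL
    return (w * nll).sum() / w.sum()                      # weighted mean

# two samples: one negative (class 0), one positive (class 1);
# positives up-weighted 4:1 to counter class imbalance
probs = np.array([[0.9, 0.1], [0.4, 0.6]])
labels = np.array([0, 1])
loss = class_balanced_ce(probs, labels, class_weights=np.array([1.0, 4.0]))
```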
Affiliation(s)
- Erica Skerrett
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Zichen Miao
- Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Mercy N. Asiedu
- Department of Computer Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Megan Richards
- Department of Electrical and Computer Engineering, Department of Biomedical Engineering, Department of Computer Science, and Department of Mathematics, Duke University, Durham, NC, USA
- Brian Crouch
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Guillermo Sapiro
- Department of Electrical and Computer Engineering, Department of Biomedical Engineering, Department of Computer Science, and Department of Mathematics, Duke University, Durham, NC, USA
- Qiang Qiu
- Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Nirmala Ramanujam
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
10. Ma JH, You SF, Xue JS, Li XL, Chen YY, Hu Y, Feng Z. Computer-aided diagnosis of cervical dysplasia using colposcopic images. Front Oncol 2022;12:905623. [PMID: 35992807] [PMCID: PMC9389460] [DOI: 10.3389/fonc.2022.905623]
Abstract
Background: Computer-aided diagnosis of medical images is becoming increasingly significant in intelligent medicine. Colposcopy-guided biopsy with pathological diagnosis is the gold standard for diagnosing CIN and invasive cervical cancer. However, colposcopy suffers from low sensitivity in differentiating cancer/HSIL from LSIL/normal, particularly in areas that lack skilled colposcopists and adequate medical resources. Methods: The model used auto-segmented colposcopic images to extract color and texture features selected with the T-test method. It then augmented the minority-class data using the SMOTE method to balance the skewed class distribution, and used an RBF-SVM to generate a preliminary output. These results, together with the TCT and HPV tests and age, were combined in a naïve Bayes classifier for cervical lesion diagnosis. Results: The multimodal machine learning model achieved physician-level performance (sensitivity: 51.2%, specificity: 86.9%, accuracy: 81.8%), and it could be interpreted through feature extraction and visualization. With the aid of the model, colposcopists improved their sensitivity from 53.7% to 70.7%, with an acceptable specificity of 81.1% and accuracy of 79.6%. Conclusion: Using the computer-aided diagnosis system, physicians could identify cancer/HSIL with greater sensitivity, guiding biopsy toward timely treatment.
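The SMOTE oversampling step named in the methods can be sketched minimally; this is the textbook interpolation idea, not the authors' exact configuration (k, sample counts, and seed below are illustrative):

```python
import numpy as np

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE: each synthetic sample lies on the segment between
    a random minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d2 = ((minority - minority[i]) ** 2).sum(axis=1)
        nbr = rng.choice(np.argsort(d2)[1:k + 1])  # skip the point itself
        lam = rng.random()                         # interpolation factor in [0, 1)
        out.append(minority[i] + lam * (minority[nbr] - minority[i]))
    return np.array(out)

# oversample a toy 4-feature minority class of 10 points with 5 synthetic points
minority = np.random.default_rng(1).normal(size=(10, 4))
synthetic = smote(minority, n_new=5)
```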
11. Multi-class nucleus detection and classification using deep convolutional neural network with enhanced high dimensional dissimilarity translation model on cervical cells. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.003]
12. Fan Y, Ma H, Fu Y, Liang X, Yu H, Liu Y. Colposcopic multimodal fusion for the classification of cervical lesions. Phys Med Biol 2022;67. [PMID: 35617940] [DOI: 10.1088/1361-6560/ac73d4]
Abstract
Objective: Cervical cancer is one of the two biggest cancer killers of women, and early detection of cervical precancerous lesions can effectively improve patient survival. Manual diagnosis combining colposcopic images and clinical examination results is currently the main clinical diagnostic method. Developing an intelligent diagnosis algorithm based on artificial intelligence is an inevitable trend toward objective diagnosis and better diagnostic quality and efficiency. Approach: A colposcopic multimodal fusion convolutional neural network (CMF-CNN) is proposed for the classification of cervical lesions. A Mask R-CNN is used to detect the cervical region, while the encoding network EfficientNet-B3 extracts multimodal image features from the acetic image and iodine image. Finally, Squeeze-and-Excitation, Atrous Spatial Pyramid Pooling, and convolution blocks are adopted to encode and fuse the patient's clinical text information. Main results: In 7,106 colposcopy cases, the accuracy, macro F1-score, and macro area under the curve of the proposed model were 92.70%, 92.74%, and 98.56%, respectively, superior to mainstream unimodal image classification models. Significance: CMF-CNN combines multimodal information and achieves high performance in the classification of cervical lesions in colposcopy, so it can provide comprehensive diagnostic aid.
Affiliation(s)
- Yinuo Fan
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Huizhan Ma
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Yuanbin Fu
- The College of Intelligence and Computing, Tianjin University, Tianjin 300072, People's Republic of China
- Xiaoyun Liang
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Hui Yu
- The Academy of Medical Engineering and Translational Medicine, and the School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Yuzhen Liu
- The Department of Obstetrics and Gynecology, Affiliated Hospital of Weifang Medical University, Weifang 261042, People's Republic of China
13. Cervical Lesion Classification Method Based on Cross-Validation Decision Fusion Method of Vision Transformer and DenseNet. J Healthc Eng 2022;2022:3241422. [PMID: 35607393] [PMCID: PMC9124126] [DOI: 10.1155/2022/3241422]
Abstract
Objective: To better adapt to clinical applications, this paper proposes a cross-validation decision fusion method combining Vision Transformer and DenseNet161. Methods: The dataset comprises the acetic acid images most critical for clinical diagnosis, and the SR areas are processed by a specific method. The Vision Transformer and DenseNet161 models are trained with fivefold cross-validation, and the per-fold predictions of the two models are fused with different weights. Finally, the five fused results are averaged and the class with the highest probability is selected. Results: The proposed fusion method reaches an accuracy of 68% on the four-way classification of cervical lesions. Conclusions: The method is better suited to clinical environments, helping to reduce the missed-detection rate.
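The weighted decision fusion described above amounts to blending per-fold class probabilities from the two models and averaging over folds. A sketch with illustrative weights, two folds (rather than five) for brevity, and toy probabilities:

```python
import numpy as np

def fuse_predictions(vit_probs, dense_probs, w=0.6):
    """Weighted decision fusion: blend per-fold class probabilities from
    two models, average over folds, and take the argmax class."""
    fused = w * vit_probs + (1.0 - w) * dense_probs   # (folds, n, classes)
    return fused.mean(axis=0).argmax(axis=1)

# 2 folds, 2 samples, 3 classes (toy values)
vit = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
                [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]])
dense = np.array([[[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]],
                  [[0.4, 0.4, 0.2], [0.1, 0.8, 0.1]]])
classes = fuse_predictions(vit, dense)
# classes -> array([0, 1])
```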
14. Earth Mover's Distance-Based Tool for Rapid Screening of Cervical Cancer Using Cervigrams. Appl Sci (Basel) 2022. [DOI: 10.3390/app12094661]
Abstract
Cervical cancer is a major public health challenge that can be cured with early diagnosis and timely treatment. This challenge formed the rationale behind our design and development of an intelligent and robust image analysis and diagnostic tool/scale, “OM—The OncoMeter”, built with R (version 3.6.3) on Linux (Ubuntu 20.04) to tag and triage patients in order of disease severity. The socio-demographic profiles and cervigrams of 398 patients evaluated at the outpatient departments of Batra Hospital & Medical Research Centre, New Delhi, India, and Delhi State Cancer Institute (East), New Delhi, India, were acquired during the study. Tested on these 398 India-specific cervigrams, the scale achieved 80.15% accuracy, a sensitivity of 84.79%, and a specificity of 66.66%. Statistical analysis of the sociodemographic profiles showed significant associations of age, education, annual income, occupation, and menstrual health with cervical health (a p-value < 0.05 was considered statistically significant). Deploying cervical cancer screening tools such as “OM—The OncoMeter” in live clinical settings with resource-limited healthcare infrastructure will facilitate non-invasive early diagnosis, enabling timely clinical intervention even at the primary healthcare (PHC) level.
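For intuition, the earth mover's distance the tool is named after reduces, for 1-D histograms of equal mass, to the L1 distance between cumulative sums. This is the standard identity, not the paper's code:

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth mover's distance between two 1-D histograms with equal mass:
    the L1 distance between their cumulative sums."""
    return np.abs(np.cumsum(h1) - np.cumsum(h2)).sum()

# moving one unit of mass two bins to the right costs 2 units of work
d = emd_1d(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
# d -> 2.0
```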
15. Chen J, Li P, Xu T, Xue H, Wang X, Li Y, Lin H, Liu P, Dong B, Sun P. Detection of cervical lesions in colposcopic images based on the RetinaNet method. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103589]
16. Liu J, Sun X, Li R, Peng Y. Recognition of cervical precancerous lesions based on probability distribution feature guidance. Curr Med Imaging 2022;18:1204-1213. [DOI: 10.2174/1573405618666220428104541]
Abstract
INTRODUCTION:
Cervical cancer is a high incidence of cancer in women and cervical precancerous screening plays an important role in reducing the mortality rate.
METHOD:
- In this study, we proposed a multichannel feature extraction method based on the probability distribution features of the acetowhite (AW) region to identify cervical precancerous lesions, with the overarching goal to improve the accuracy of cervical precancerous screening. A k-means clustering algorithm was first used to extract the cervical region images from the original colposcopy images. We then used a deep learning model called DeepLab V3+ to segment the AW region of the cervical image after the acetic acid experiment, from which the probability distribution map of the AW region after segmentation was obtained. This probability distribution map was fed into a neural network classification model for multichannel feature extraction, which resulted in the final classification performance.
RESULT:
Results of the experimental evaluation showed that the proposed method achieved an average accuracy of 87.7%, an average sensitivity of 89.3%, and an average specificity of 85.6%. Compared with the methods that did not add segmented probability features, the proposed method increased the average accuracy rate, sensitivity, and specificity by 8.3%, 8%, and 8.4%, respectively.
CONCLUSION:
Overall, the proposed method holds great promise for enhancing the screening of cervical precancerous lesions in the clinic by providing the physician with more reliable screening results that might reduce their workload.
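The two-stage pipeline described above (k-means clustering to isolate the cervical region, followed by DeepLab V3+ segmentation and multichannel classification) can be illustrated with a toy version of its first stage. The function name, image sizes, and the deterministic darkest/brightest initialization below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans2_pixels(img, iters=10):
    """Two-cluster k-means over pixel colors: a toy stand-in for the
    paper's first stage, which isolates the cervical region by
    clustering colposcopy pixels (the real pipeline then segments the
    acetowhite region with DeepLab V3+)."""
    h, w, c = img.shape
    x = img.reshape(-1, c).astype(float)
    # deterministic init (an assumption): darkest and brightest pixels
    centers = np.stack([x[x.sum(1).argmin()], x[x.sum(1).argmax()]])
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # recompute centers, keeping the old center if a cluster empties
        for j in range(2):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels.reshape(h, w)

# synthetic frame: a bright disc (cervix-like) on a dark background
img = np.zeros((32, 32, 3))
yy, xx = np.mgrid[:32, :32]
img[(yy - 16) ** 2 + (xx - 16) ** 2 < 100] = [200.0, 120.0, 120.0]
labels = kmeans2_pixels(img)
```

On this synthetic frame the bright disc and the dark background separate into the two clusters after a few iterations.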
Affiliation(s)
- Jun Liu: College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Xiaoxue Sun: College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Rihui Li: Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Yuanxiu Peng: College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
|
17
|
HLDnet: Novel deep learning based Artificial Intelligence tool fuses acetic acid and Lugol’s iodine cervicograms for accurate pre-cancer screening. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103163] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
18
|
Assessment of HPV Risk Type in H&E-stained Biopsy Specimens of the Cervix by Microscopy Image Analysis. Appl Immunohistochem Mol Morphol 2021; 28:702-710. [PMID: 31876603 DOI: 10.1097/pai.0000000000000823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES The objective of this study was (a) to identify, by computer processing of digitized images of hematoxylin and eosin (H&E)-stained biopsy material of the cervix, differences in the structure of nuclei between high-risk (HR) and low-risk (LR) human papillomavirus (HPV) types and (b) to assess the HPV risk type by designing a decision-support system (DSS). MATERIALS AND METHODS Clinical material comprised H&E-stained biopsies from squamous intraepithelial lesions of 55 patients with polymerase chain reaction-verified HR-HPV (26 patients) or LR-HPV (29 patients) infection. From each patient's biopsy specimen, we digitized 1 region of interest, guided by the expert physician. After the segmentation of nuclei, we quantified from each nucleus 77 textural and morphologic features. We represented each patient by a 77-feature vector, the feature means of all nuclei, and we created 2 classes for HR-HPV and LR-HPV types. We carried out (a) a statistical analysis to determine features with statistically significant differences between the 2 classes and (b) a discriminant analysis, by designing a DSS, to estimate the HPV risk type. RESULTS Statistical analysis revealed 40 features with between-classes statistically significant differences and discriminant analysis showed that the best DSS design achieved a high accuracy of about 93% in identifying the HPV risk type on data not used in the design of the DSS. CONCLUSIONS Nuclei of HR-HPV types were of higher intensity, contained larger structures, had higher edges, were coarser, rougher, had higher contrast, were larger, and attained more irregular shapes. The proposed DSS indicates that discrimination of HPV risk type from images of H&E-stained biopsy material of the cervix is promising.
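The per-patient representation described above (the mean of per-nucleus feature vectors, followed by a discriminant rule) can be sketched as follows. The 3-feature vectors and the nearest-class-mean rule are simplifications standing in for the paper's 77 features and its discriminant-analysis DSS.

```python
import numpy as np

def patient_vector(nuclei_features):
    """Per-patient descriptor: mean of the per-nucleus feature vectors
    (77 textural/morphologic features in the paper; 3 here)."""
    return np.asarray(nuclei_features, float).mean(axis=0)

def nearest_mean_classify(x, class_means):
    """Minimal discriminant-style rule: assign to the closest class
    mean (a stand-in for the study's decision-support system)."""
    dists = {c: np.linalg.norm(x - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)

# hypothetical nuclei measurements for one HR-HPV and one LR-HPV patient
hr = patient_vector([[2.0, 5.0, 1.0], [4.0, 7.0, 3.0]])  # mean [3, 6, 2]
lr = patient_vector([[0.0, 1.0, 0.0], [2.0, 1.0, 2.0]])  # mean [1, 1, 1]
means = {"HR-HPV": hr, "LR-HPV": lr}
```

A new patient vector is then assigned to whichever class mean it lies closest to.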
|
19
|
Elakkiya R, Subramaniyaswamy V, Vijayakumar V, Mahanti A. Cervical Cancer Diagnostics Healthcare System Using Hybrid Object Detection Adversarial Networks. IEEE J Biomed Health Inform 2021; 26:1464-1471. [PMID: 34214045 DOI: 10.1109/jbhi.2021.3094311] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Cervical cancer is one of the most common cancers among women and causes significant mortality in many developing countries. Diagnosis of cervical lesions is done using a Pap smear test or visual inspection with acetic acid (staining). Digital colposcopy, an inexpensive methodology, provides painless and efficient screening. Therefore, automating cervical cancer screening using colposcopy images would be highly useful in saving many lives. Many automation techniques using computer vision and machine learning have gained attention in cervical screening, paving the way for diagnosing cervical cancer. However, most of these methods rely entirely on annotation for cervical spotting and segmentation. This paper introduces the Faster Small-Object Detection Neural Networks (FSOD-GAN) to address cervical screening and the diagnosis of cervical cancer and its type using digital colposcopy images. The proposed approach automatically detects the cervical spot using a Faster Region-Based Convolutional Neural Network (FR-CNN) and performs hierarchical multiclass classification of three types of cervical cancer lesions. Experiments were conducted on colposcopy data collected from available open sources, consisting of 1,993 patients with three cervical categories, and the proposed approach shows 99% accuracy in diagnosing the stages of cervical cancer.
|
20
|
Yan L, Li S, Guo Y, Ren P, Song H, Yang J, Shen X. Multi-state colposcopy image fusion for cervical precancerous lesion diagnosis using BF-CNN. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102700] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
21
|
|
22
|
Liu J, Liang T, Peng Y, Peng G, Sun L, Li L, Dong H. Segmentation of acetowhite region in uterine cervical image based on deep learning. Technol Health Care 2021; 30:469-482. [PMID: 34180439 DOI: 10.3233/thc-212890] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
BACKGROUND The acetowhite (AW) region is a critical physiological phenomenon of precancerous lesions of cervical cancer. Accurate segmentation of the AW region can provide a useful diagnostic tool for gynecologic oncologists in screening for cervical cancer. Traditional approaches to segmenting AW regions relied heavily on manual or semi-automatic methods. OBJECTIVE To automatically segment AW regions from colposcope images. METHODS First, the cervical region was extracted from the original colposcope images by the k-means clustering algorithm. Second, a deep learning-based image semantic segmentation model named DeepLab V3+ was used to segment the AW region from the cervical image. RESULTS Compared to a fuzzy clustering segmentation algorithm and a level set segmentation algorithm, the proposed method achieved a mean Jaccard Index (JI) of 63.6% (improved by 27.9% and 27.5%, respectively), a mean specificity of 94.9% (improved by 55.8% and 32.3%, respectively) and a mean accuracy of 91.2% (improved by 38.6% and 26.4%, respectively). The proposed method achieved a mean sensitivity of 78.2%, which was 17.4% and 10.1% lower than the two baselines, respectively. Compared to the image semantic segmentation models U-Net and PSPNet, the proposed method yielded higher mean JI, mean sensitivity and mean accuracy. CONCLUSION The improved segmentation performance suggests that the proposed method may serve as a useful complementary tool in screening for cervical cancer.
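The evaluation metrics quoted above (Jaccard Index, sensitivity, specificity, accuracy) follow directly from the confusion counts of two binary masks; a minimal sketch on a toy prediction:

```python
import numpy as np

def mask_metrics(pred, gt):
    """Segmentation metrics from binary masks: Jaccard Index (JI),
    sensitivity, specificity, and pixel accuracy."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()    # predicted AW, truly AW
    fp = (pred & ~gt).sum()   # predicted AW, background
    fn = (~pred & gt).sum()   # missed AW pixels
    tn = (~pred & ~gt).sum()  # correctly rejected background
    return {
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / pred.size,
    }

gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1      # 16 acetowhite pixels
pred = np.zeros((8, 8), int); pred[2:6, 2:4] = 1  # recovers half of them
m = mask_metrics(pred, gt)
```

For this toy pair the prediction covers half of the true region with no false positives, so JI and sensitivity are 0.5 while specificity is 1.0.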
Affiliation(s)
- Jun Liu: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Tong Liang: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Yun Peng: San Diego, CA 91355, USA
- Gengyou Peng: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Lechan Sun: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Ling Li: Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Jiangxi 330006, China
- Hua Dong: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
|
23
|
Nikookar E, Naderi E, Rahnavard A. Cervical Cancer Prediction by Merging Features of Different Colposcopic Images and Using Ensemble Classifier. JOURNAL OF MEDICAL SIGNALS & SENSORS 2021; 11:67-78. [PMID: 34268095 PMCID: PMC8253312 DOI: 10.4103/jmss.jmss_16_20] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Revised: 03/15/2020] [Accepted: 05/02/2020] [Indexed: 11/04/2022]
Abstract
Background Cervical cancer is a significant cause of cancer mortality in women, particularly in low-income countries. In regular cervical screening methods, such as colposcopy, an image is taken of the patient's cervix. This image can be used by computer-aided diagnosis (CAD) systems that are trained with artificial intelligence algorithms to predict the possibility of cervical cancer. Artificial intelligence models have been highlighted in a number of cervical cancer studies; however, only a limited number of studies investigate the simultaneous use of the three colposcopic screening modalities, Greenlight, Hinselmann, and Schiller. Methods We propose a cervical cancer predictor model which incorporates the results of different classification algorithms and ensemble classifiers. Our approach merges features of a patient's different colposcopic images. The feature vector of each image includes semantic medical features, subjective judgments, and a consensus. The class label of each sample is calculated by an aggregation function over expert judgments and consensuses. Results We investigated different aggregation strategies to find the best formula for the aggregation function, and evaluated our method on the Quality Assessment of Digital Colposcopies dataset; with 96% sensitivity and 94% specificity, our approach yields a significant improvement in the field. Conclusion Our model can be used as a supportive clinical decision-making strategy by giving more reliable information to clinical decision makers. Our proposed model is also more applicable in cervical cancer CAD systems than the available methods.
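One simple way to read the idea of merging per-modality results with an aggregation function is as a weighted vote across the predictions from the three modalities. The function below is a hypothetical sketch, not the paper's exact aggregation formula, and the default uniform weights are an assumption.

```python
from collections import Counter

def fuse_modalities(preds, weights=None):
    """Weighted majority vote across per-modality class predictions
    (e.g. Greenlight, Hinselmann, Schiller). `weights` expresses a
    hypothetical per-modality trust; uniform by default."""
    weights = weights if weights is not None else [1.0] * len(preds)
    score = Counter()
    for p, w in zip(preds, weights):
        score[p] += w
    # tie-break deterministically toward the higher class label
    return max(score.items(), key=lambda kv: (kv[1], kv[0]))[0]
```

With unequal weights, a trusted modality can override the other two even when outvoted.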
Affiliation(s)
- Elham Nikookar: Department of Computer Engineering, Faculty of Engineering, Shahid Chamran University of Ahvaz, Ahvaz, Iran
- Ebrahim Naderi: Department of Computer Engineering, University of Applied Science and Technology, Ahvaz, Iran
- Ali Rahnavard: Computational Biology Institute, Department of Biostatistics and Bioinformatics, Milken Institute School of Public Health, The George Washington University, Washington D.C., United States
|
24
|
Chandran V, Sumithra MG, Karthick A, George T, Deivakani M, Elakkiya B, Subramaniam U, Manoharan S. Diagnosis of Cervical Cancer based on Ensemble Deep Learning Network using Colposcopy Images. BIOMED RESEARCH INTERNATIONAL 2021; 2021:5584004. [PMID: 33997017 PMCID: PMC8112909 DOI: 10.1155/2021/5584004] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/31/2021] [Accepted: 04/20/2021] [Indexed: 12/17/2022]
Abstract
Traditional screening and classification of cervical cancer types depends largely on the pathologist's experience and offers limited accuracy. Colposcopy is a critical component of cervical cancer prevention; in conjunction with precancer screening and treatment, it has played an essential role in lowering the incidence of and mortality from cervical cancer over the last 50 years. However, due to the increasing workload, visual screening leads to misdiagnosis and low diagnostic efficiency. Medical image processing with convolutional neural network (CNN) models has shown its value for classifying cervical cancer types. This paper proposes two deep learning CNN architectures to detect cervical cancer from colposcopy images: a VGG19 (TL) model, in which VGG19 is adopted via transfer learning, and a new model termed the Colposcopy Ensemble Network (CYENET), developed to classify cervical cancers from colposcopy images automatically. Accuracy, specificity, and sensitivity are estimated for the developed models. The classification accuracy for VGG19 (TL) was 73.3%, a relatively satisfactory result; its kappa score places it in the moderate-agreement category. The experimental results show that the proposed CYENET exhibits high sensitivity, specificity, and kappa scores of 92.4%, 96.2%, and 88%, respectively. The classification accuracy of the CYENET model is 92.3%, which is 19% higher than that of the VGG19 (TL) model.
Affiliation(s)
- Venkatesan Chandran: Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Avinashi Road, Coimbatore, 641407 Tamilnadu, India
- M. G. Sumithra: Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Avinashi Road, Coimbatore, 641407 Tamilnadu, India
- Alagar Karthick: Renewable Energy Lab, Department of Electrical and Electronics Engineering, KPR Institute of Engineering and Technology, Avinashi Road, Coimbatore, 641407 Tamilnadu, India
- Tony George: Department of Electrical and Electronics Engineering, Adi Shankara Institute of Engineering and Technology, Mattoor, Kalady, Kerala 683574, India
- M. Deivakani: Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul, 624622 Tamilnadu, India
- Balan Elakkiya: Department of Electronics and Communication Engineering, Vel Tech High Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Tamilnadu 600062, India
- Umashankar Subramaniam: Department of Communications and Networks, Renewable Energy Lab, College of Engineering, Prince Sultan University, Riyadh 12435, Saudi Arabia
- S. Manoharan: Department of Computer Science, School of Informatics and Electrical Engineering, Institute of Technology, Ambo University, Ambo, Post Box No. 19, Ethiopia
|
25
|
Pal A, Xue Z, Befano B, Rodriguez AC, Long LR, Schiffman M, Antani S. Deep Metric Learning for Cervical Image Classification. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:53266-53275. [PMID: 34178558 PMCID: PMC8224396 DOI: 10.1109/access.2021.3069346] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Cervical cancer is caused by persistent infection with certain types of the Human Papillomavirus (HPV) and is a leading cause of female mortality, particularly in low- and middle-income countries (LMIC). Visual inspection of the cervix with acetic acid (VIA) is a commonly used technique in cervical screening. While this technique is inexpensive, clinical assessment is highly subjective, and relatively poor reproducibility has been reported. A deep learning-based algorithm for automatic visual evaluation (AVE) of aceto-whitened cervical images was shown to be effective in detecting confirmed precancer (i.e., the direct precursor to invasive cervical cancer). The images were selected from a large longitudinal study conducted by the National Cancer Institute in the Guanacaste province of Costa Rica. The training of AVE used annotations of the cervix boundary, and the data scarcity challenge was addressed with manually optimized data augmentation. In contrast, we present a novel approach for cervical precancer detection using a deep metric learning (DML) framework which requires no cervix boundary marking. DML is an advanced learning strategy that can better handle data scarcity and biased training due to class-imbalanced data. Three widely used state-of-the-art DML techniques are evaluated: (a) contrastive loss minimization, (b) N-pair embedding loss minimization, and (c) batch-hard loss minimization. Three popular deep convolutional neural networks (ResNet-50, MobileNet, NasNet) are configured for training with DML to produce class-separated (i.e., linearly separable) image feature descriptors. Finally, a K-Nearest Neighbor (KNN) classifier is trained on the extracted deep features. Both the feature quality and the classification performance are quantitatively evaluated on the same data set as used in AVE. The results show that, unlike AVE, and without using any data augmentation, the best model from our research improves specificity in disease detection without compromising sensitivity. This work thus opens new research directions for the field.
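Of the three DML losses compared, the contrastive loss is the simplest to write down, and the final step is a KNN classifier on the learned embeddings. The tiny numpy versions below are illustrative only: there is no training loop, and the 2-D embedding bank is a hypothetical placeholder for real deep features.

```python
import numpy as np

def contrastive_loss(a, b, same, margin=1.0):
    """Pairwise contrastive loss: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    d = np.linalg.norm(a - b)
    return d ** 2 if same else max(0.0, margin - d) ** 2

def knn_predict(query, bank, labels, k=3):
    """KNN over a bank of image embeddings, as in the paper's final
    classification stage on the extracted deep features."""
    dists = np.linalg.norm(bank - query, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

# hypothetical 2-D embeddings: two well-separated classes
bank = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels = np.array([0, 0, 1, 1])
```

Minimizing the contrastive loss over many pairs is what produces the class-separated descriptors on which the KNN vote then works.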
Affiliation(s)
- Anabik Pal: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Zhiyun Xue: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Brian Befano: Information Management Services, Calverton, MD 20705, USA
- L Rodney Long: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Mark Schiffman: National Cancer Institute, National Institutes of Health, Rockville, MD 20850, USA
- Sameer Antani: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
|
26
|
Li Y, Liu ZH, Xue P, Chen J, Ma K, Qian T, Zheng Y, Qiao YL. GRAND: A large-scale dataset and benchmark for cervical intraepithelial Neoplasia grading with fine-grained lesion description. Med Image Anal 2021; 70:102006. [PMID: 33690025 DOI: 10.1016/j.media.2021.102006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Revised: 12/10/2020] [Accepted: 02/16/2021] [Indexed: 12/24/2022]
Abstract
Cervical cancer is the fourth leading cause of cancer-related deaths among women worldwide. Early detection of cervical intraepithelial neoplasia (CIN) can significantly increase the survival rate of patients. The World Health Organization (WHO) divides CIN into three grades (CIN1, CIN2 and CIN3), and in clinical practice different CIN grades require different treatments. Although existing studies have proposed computer-aided diagnosis (CAD) systems for cervical cancer, most fail to accurately separate CIN1 from CIN2/3 because of their similar appearance under colposcopy. To boost the accuracy of CAD systems, we construct a colposcopic image dataset for GRAding cervical intraepithelial Neoplasia with fine-grained lesion Description (GRAND). The dataset consists of colposcopic images collected from 8,604 patients along with their pathological reports. Additionally, we invited experienced colposcopists to annotate two main clues usually adopted for clinical diagnosis of CIN grade, i.e., texture of acetowhite epithelium (TAE) and appearance of blood vessels (ABV). A multi-rater model using the annotated clues is benchmarked on our dataset. The proposed framework contains several sub-networks (raters) that exploit the fine-grained lesion features TAE and ABV, respectively, by contrastive learning, and a backbone network that extracts global information from the colposcopic images. A comprehensive experiment is conducted on our GRAND dataset. The experimental results demonstrate the benefit of using the additional lesion descriptions (TAE and ABV), which increase CIN grading accuracy by over 10%. Furthermore, we conduct a human-machine comparison to evaluate the potential of the proposed benchmark framework for clinical applications. Specifically, three colposcopists at different professional levels (intern, in-service and professional) were invited to compete with our benchmark framework on the same extra test set; our framework achieves CIN grading accuracy comparable to that of a professional colposcopist.
Affiliation(s)
- Zhi-Hua Liu: Diagnosis and Treatment for Cervical Lesions Center, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Peng Xue: Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kai Ma: Tencent Jarvis Lab, Shenzhen, China
- You-Lin Qiao: Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
|
27
|
Peng G, Dong H, Liang T, Li L, Liu J. Diagnosis of cervical precancerous lesions based on multimodal feature changes. Comput Biol Med 2021; 130:104209. [PMID: 33440316 DOI: 10.1016/j.compbiomed.2021.104209] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 12/11/2020] [Accepted: 12/31/2020] [Indexed: 12/24/2022]
Abstract
To realize the automatic diagnosis of cervical intraepithelial neoplasia (CIN) cases from pre-acetic-acid and post-acetic-acid colposcopy images, this paper proposes a method of cervical precancerous lesion diagnosis based on multimodal feature changes. First, the pre- and post-acetic-acid colposcopy images were registered based on cross-correlation and projection transformation, and the cervical region was then extracted by the k-means clustering algorithm. Finally, a deep learning network was used to extract features from, and classify, the registered pre- and post-acetic-acid cervical images. The proposed method achieves a classification accuracy of 86.3%, a sensitivity of 84.1%, and a specificity of 89.8% on 60 test cases. Experimental results show that this method makes better use of the multimodal features of colposcopy images and places lower demands on medical staff during data acquisition. It has clinical significance for cervical precancerous lesion screening systems.
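The registration step above (aligning pre- and post-acetic-acid images by cross-correlation before extracting feature changes) can be sketched for pure translation using an FFT-based circular cross-correlation; the projective-transform fitting of the full method is omitted, and the synthetic images below are assumptions for illustration.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the (dy, dx) translation of `mov` relative to `ref`
    via circular cross-correlation computed with FFTs."""
    corr = np.fft.ifft2(np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # wrap indices past the midpoint to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# a synthetic "pre-acetic" frame and a translated "post-acetic" copy
ref = np.zeros((32, 32)); ref[8:12, 8:12] = 1.0
mov = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # shifted by (+3, -2)
shift = estimate_shift(ref, mov)
```

The recovered shift can then be undone before comparing the two frames pixel-to-pixel.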
Affiliation(s)
- Gengyou Peng: College of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Hua Dong: College of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Tong Liang: College of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Ling Li: Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Nanchang, Jiangxi, China
- Jun Liu: College of Information Engineering, Nanchang Hangkong University, Nanchang, China
|
28
|
Yu Y, Ma J, Zhao W, Li Z, Ding S. MSCI: A multistate dataset for colposcopy image classification of cervical cancer screening. Int J Med Inform 2020; 146:104352. [PMID: 33360117 DOI: 10.1016/j.ijmedinf.2020.104352] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Revised: 11/05/2020] [Accepted: 11/21/2020] [Indexed: 11/26/2022]
Abstract
BACKGROUND Cervical cancer is the second most common female cancer globally, and it is vital to detect cervical cancer with low cost at an early stage using automated screening methods of high accuracy, especially in areas with insufficient medical resources. Automatic detection of cervical intraepithelial neoplasia (CIN) can effectively prevent cervical cancer. OBJECTIVES Due to the deficiency of standard and accessible colposcopy image datasets, we present a dataset containing 4753 colposcopy images acquired from 679 patients in three states (acetic acid reaction, green filter, and iodine test) for detection of cervical intraepithelial neoplasia. Based on this dataset, a new computer-aided method for cervical cancer screening was proposed. METHODS We employed a wide range of methods to comprehensively evaluate our proposed dataset. Hand-crafted feature extraction methods and deep learning methods were used for the performance verification of the multistate colposcopy image (MSCI) dataset. Importantly, we propose a gated recurrent convolutional neural network (C-GCNN) for colposcopy image analysis that considers time series and combined multistate cervical images for CIN grading. RESULTS The experimental results showed that the proposed C-GCNN model achieves the best classification performance in CIN grading compared with hand-crafted feature extraction methods and classic deep learning methods. The results showed an accuracy of 96.87 %, a sensitivity of 95.68 %, and a specificity of 98.72 %. CONCLUSION A multistate colposcopy image dataset (MSCI) is proposed. A CIN grading model (C-GCNN) based on the MSCI dataset is established, which provides a potential method for automated cervical cancer screening.
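The C-GCNN's gated recurrent fusion of the three states (acetic acid reaction, green filter, iodine test) can be sketched as a GRU cell stepped over per-state feature vectors. The 4-dimensional features and random weights below are placeholders for real CNN features and learned parameters, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, W):
    """One GRU update: gates decide how much of the previous state to
    keep when folding in the next colposcopy state's features."""
    z = sigmoid(W["Wz"] @ x + W["Uz"] @ h)        # update gate
    r = sigmoid(W["Wr"] @ x + W["Ur"] @ h)        # reset gate
    n = np.tanh(W["Wn"] @ x + W["Un"] @ (r * h))  # candidate state
    return (1 - z) * n + z * h

rng = np.random.default_rng(0)
d = 4
W = {k: 0.1 * rng.standard_normal((d, d))
     for k in ["Wz", "Uz", "Wr", "Ur", "Wn", "Un"]}
h = np.zeros(d)
# fold in per-state features: acetic reaction, green filter, iodine test
for x in rng.standard_normal((3, d)):
    h = gru_step(h, x, W)
```

The final hidden state `h` summarizes the multistate sequence and would feed the CIN grading head.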
Affiliation(s)
- Yao Yu: The School of Management, Hefei University of Technology, China
- Jie Ma: The First Affiliated Hospital of USTC, China
- Zhenmin Li: The School of Microelectronics, Hefei University of Technology, China
- Shuai Ding: The School of Management, Hefei University of Technology, China
|
29
|
Li Y, Chen J, Xue P, Tang C, Chang J, Chu C, Ma K, Li Q, Zheng Y, Qiao Y. Computer-Aided Cervical Cancer Diagnosis Using Time-Lapsed Colposcopic Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3403-3415. [PMID: 32406830 DOI: 10.1109/tmi.2020.2994778] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Cervical cancer is the fourth leading cause of cancer-related deaths among women worldwide. Early detection of cervical intraepithelial neoplasia (CIN) can significantly increase the survival rate of patients. In this paper, we propose a deep learning framework for the accurate identification of LSIL+ (including CIN and cervical cancer) using time-lapsed colposcopic images. The proposed framework involves two main components: key-frame feature encoding networks and a feature fusion network. The features of the original (pre-acetic-acid) image and of the colposcopic images captured at around 60s, 90s, 120s and 150s during the acetic acid test are encoded by the feature encoding networks. Several fusion approaches are compared, all of which outperform existing automated cervical cancer diagnosis systems that use a single time slot. A graph convolutional network with edge features (E-GCN) is found to be the most suitable fusion approach in our study, due to its excellent explainability, consistent with clinical practice. A large-scale dataset containing time-lapsed colposcopic images from 7,668 patients was collected from the collaborative hospital to train and validate our deep learning framework. Colposcopists were invited to compete with our computer-aided diagnosis system. The proposed deep learning framework achieves a classification accuracy of 78.33%, comparable to that of an in-service colposcopist, which demonstrates its potential to provide assistance in realistic clinical scenarios.
|
30
|
Artificial intelligence-assisted cytology for detection of cervical intraepithelial neoplasia or invasive cancer: A multicenter, clinical-based, observational study. Gynecol Oncol 2020; 159:171-178. [PMID: 32814641 DOI: 10.1016/j.ygyno.2020.07.099] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Accepted: 07/23/2020] [Indexed: 12/24/2022]
Abstract
OBJECTIVE Artificial intelligence (AI) can automatically detect abnormalities in digital cytological images; however, its effect in cervical cancer screening is inconclusive. We aim to evaluate the performance of AI-assisted cytology for the detection of histologically confirmed cervical intraepithelial neoplasia (CIN) or cancer. METHODS We trained a supervised deep learning algorithm on 188,542 digital cytological images. Between Mar 13, 2017, and Oct 20, 2018, 2145 women referred from organized screening were enrolled in a multicenter, clinical-based, observational study. Cervical specimens were sampled to generate two liquid-based slides: one randomly chosen slide was allocated to AI-assisted reading, and the other to manual reading conducted by skilled cytologists from a senior hospital and cytology doctors from primary hospitals. HPV testing and colposcopy-directed biopsy were performed, and the histological result was regarded as the reference. We calculated the relative sensitivity and relative specificity of AI-assisted reading compared to manual reading for CIN2+. This trial was registered, number ChiCTR2000034131. RESULTS In the referral population, AI-assisted reading detected 92.6% of CIN 2 and 96.1% of CIN 3+, significantly higher than or similar to manual reading. AI-assisted reading had equivalent sensitivity (relative sensitivity 1.01, 95% CI 0.97-1.05) and higher specificity (relative specificity 1.26, 1.20-1.32) compared to skilled cytologists, and higher sensitivity (1.12, 1.05-1.20) and specificity (1.36, 1.25-1.48) compared to cytology doctors. In HPV-positive women, AI-assisted reading improved specificity for CIN1 or less with no reduction in sensitivity compared to manual reading. CONCLUSIONS AI-assisted cytology may contribute to primary cytology screening or triage. Further studies are needed in the general population.
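The primary endpoint above, the relative sensitivity of AI-assisted versus manual reading, is simply the ratio of the two sensitivities on the same histologically verified cases. A point-estimate sketch with hypothetical counts (the study also reports 95% confidence intervals, omitted here):

```python
def sensitivity(tp, fn):
    """Sensitivity (recall) from true positives and false negatives."""
    return tp / (tp + fn)

def relative_sensitivity(tp_a, fn_a, tp_b, fn_b):
    """Relative sensitivity of reading A vs reading B for CIN2+ on the
    same verified cases; all counts here are hypothetical."""
    return sensitivity(tp_a, fn_a) / sensitivity(tp_b, fn_b)
```

A value above 1 means reading A detects a larger share of the verified CIN2+ cases than reading B.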
|
31
|
Bao H, Sun X, Zhang Y, Pang B, Li H, Zhou L, Wu F, Cao D, Wang J, Turic B, Wang L. The artificial intelligence-assisted cytology diagnostic system in large-scale cervical cancer screening: A population-based cohort study of 0.7 million women. Cancer Med 2020; 9:6896-6906. [PMID: 32697872 PMCID: PMC7520355 DOI: 10.1002/cam4.3296] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 05/20/2020] [Accepted: 06/22/2020] [Indexed: 02/06/2023] Open
Abstract
Background Adequate cytology is limited by insufficient cytologists in large-scale cervical cancer screening. We aimed to develop an artificial intelligence (AI)-assisted cytology system for a cervical cancer screening program. Methods We conducted a prospective cohort study within a population-based cervical cancer screening program for 0.7 million women, using a validated AI-assisted cytology system. For comparison, cytologists examined all slides classified by AI as abnormal and a randomly selected 10% of normal slides. Each woman with slides classified as abnormal by either AI-assisted or manual reading was diagnosed by colposcopy and biopsy. The outcome was histologically confirmed cervical intraepithelial neoplasia grade 2 or worse (CIN2+). Results In total, we recruited 703 103 women, of whom 98 549 were independently screened by AI and manual reading. The overall agreement rate between AI and manual reading was 94.7% (95% confidence interval [CI], 94.5%-94.8%), and kappa was 0.92 (0.91-0.92). The detection rates of CIN2+ increased with the severity of cytology abnormality for both AI and manual reading (Ptrend < 0.001). Generalized estimating equations showed that detection of CIN2+ among women classified as ASC-H or HSIL by AI was significantly higher than in the corresponding groups classified by cytologists (for ASC-H: odds ratio [OR] = 1.22, 95% CI 1.11-1.34, P < .001; for HSIL: OR = 1.41, 1.28-1.55, P < .001). AI-assisted cytology was 5.8% (3.0%-8.6%) more sensitive for detection of CIN2+ than manual reading, with a slight reduction in specificity. Conclusions The AI-assisted cytology system could exclude most normal cytology and improve sensitivity with clinically equivalent specificity for detection of CIN2+ compared with manual cytology reading. Overall, the results support an AI-based cytology system for primary cervical cancer screening in large-scale populations.
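The agreement statistics above (94.7% crude agreement, kappa 0.92) follow from paired AI/manual read-outs via Cohen's kappa; a minimal sketch on toy labels:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement between two readings,
    corrected for the agreement expected by chance from each
    reader's marginal class frequencies."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()                        # observed agreement
    pe = sum((a == c).mean() * (b == c).mean()  # chance agreement
             for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)
```

Identical readings give kappa 1, while readings that agree only as often as chance predicts give kappa 0.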
Affiliation(s)
- Heling Bao
- Department of Maternal and Child Health, School of Public Health, Peking University, Beijing, China; National Center for Chronic and Non-communicable Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, China
- Xiaorong Sun
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Yi Zhang
- Electronic and Information Engineering Department, Wenhua College, Wuhan, China
- Baochuan Pang
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan University, Wuhan, China
- Hua Li
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Liang Zhou
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Fengpin Wu
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Dehua Cao
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Jian Wang
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Bojana Turic
- Landing Cloud Medical Laboratory Co., Wuhan, China
- Linhong Wang
- Landing Cloud Medical Laboratory Co., Wuhan, China
32
Hu L, Bell D, Antani S, Xue Z, Yu K, Horning MP, Gachuhi N, Wilson B, Jaiswal MS, Befano B, Long LR, Herrero R, Einstein MH, Burk RD, Demarco M, Gage JC, Rodriguez AC, Wentzensen N, Schiffman M. An Observational Study of Deep Learning and Automated Evaluation of Cervical Images for Cancer Screening. J Natl Cancer Inst 2019; 111:923-932. [PMID: 30629194] [DOI: 10.1093/jnci/djy225]
Abstract
BACKGROUND Human papillomavirus vaccination and cervical screening are lacking in most lower-resource settings, where approximately 80% of the more than 500 000 cancer cases occur annually. Visual inspection of the cervix following acetic acid application is practical but neither reproducible nor accurate. The objective of this study was to develop a deep learning-based visual evaluation algorithm that automatically recognizes cervical precancer/cancer. METHODS A population-based longitudinal cohort of 9406 women ages 18-94 years in Guanacaste, Costa Rica, was followed for 7 years (1993-2000), incorporating multiple cervical screening methods and histopathologic confirmation of precancers. Tumor registry linkage identified cancers up to 18 years after enrollment. Archived, digitized cervical images from screening, taken with a fixed-focus camera ("cervicography"), were used for training/validation of the deep learning-based algorithm. The resultant image prediction score (0-1) could be categorized to balance sensitivity and specificity for detection of precancer/cancer. All statistical tests were two-sided. RESULTS Automated visual evaluation of enrollment cervigrams identified cumulative precancer/cancer cases with greater accuracy (area under the curve [AUC] = 0.91, 95% confidence interval [CI] = 0.89 to 0.93) than original cervigram interpretation (AUC = 0.69, 95% CI = 0.63 to 0.74; P < .001) or conventional cytology (AUC = 0.71, 95% CI = 0.65 to 0.77; P < .001). A single visual screening round restricted to women at the prime screening ages of 25-49 years could identify 127 (55.7%) of 228 precancers (cervical intraepithelial neoplasia 2/cervical intraepithelial neoplasia 3/adenocarcinoma in situ [AIS]) diagnosed cumulatively in the entire adult population (ages 18-94 years) while referring 11.0% for management. CONCLUSIONS The results support consideration of automated visual evaluation of cervical images from contemporary digital cameras. If achieved, this might permit dissemination of effective point-of-care cervical screening.
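The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen precancer/cancer case receives a higher prediction score than a randomly chosen control. A minimal sketch of that rank-based (Mann-Whitney) formulation, using invented scores rather than study data:

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the rank formulation: the probability that a
    random case outscores a random control, with ties counting one half."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Perfectly separated hypothetical prediction scores give AUC = 1.0;
# an uninformative scorer hovers around 0.5.
perfect = auc([0.92, 0.85, 0.78], [0.10, 0.22])
```

This quadratic-time version is fine for illustration; production code would sort once and use rank sums.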
33
Konstandinou C, Kostopoulos S, Glotsos D, Pappa D, Ravazoula P, Michail G, Kalatzis I, Asvestas P, Lavdas E, Cavouras D, Sakellaropoulos G. GPU-enabled design of an adaptable pattern recognition system for discriminating squamous intraepithelial lesions of the cervix. Biomed Tech (Berl) 2020; 65:315-325. [PMID: 31747374] [DOI: 10.1515/bmt-2019-0040]
Abstract
The aim of the present study was to design an adaptable pattern recognition (PR) system to discriminate low- from high-grade squamous intraepithelial lesions (LSIL and HSIL, respectively) of the cervix using microscopy images of hematoxylin and eosin (H&E)-stained biopsy material from two different medical centers. Clinical material comprised H&E-stained biopsies of 66 patients diagnosed with LSIL (34 cases) or HSIL (32 cases). Regions of interest were selected from each patient's digitized microscopy images. Seventy-seven features were generated, describing the texture, morphology and spatial distribution of nuclei. The probabilistic neural network (PNN) classifier, the exhaustive-search feature selection method, and the leave-one-out (LOO) and bootstrap validation methods were used to design the PR system and to assess its precision. Optimal PR system design and evaluation were made feasible by the use of graphics processing unit (GPU) and Compute Unified Device Architecture (CUDA) technologies. The accuracy of the PR system was 93% and 88.6% under the LOO and bootstrap validation methods, respectively. The proposed PR system for discriminating LSIL from HSIL of the cervix was designed to operate in a clinical environment, and it can be redesigned when new verified cases are added to its repository or when data from other medical centers, prepared with similar biopsy-material procedures, are included.
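The leave-one-out validation used above trains on all patients but one and tests on the held-out patient, cycling through the whole cohort. A minimal sketch of that loop with a 1-nearest-neighbour stand-in for the PNN (the PNN itself and the study's 77 features are not reproduced here; the feature vectors below are toy values):

```python
def one_nn(train_X, train_y, x):
    """Stand-in classifier: label of the closest training vector (squared Euclidean)."""
    dists = [(sum((a - b) ** 2 for a, b in zip(xi, x)), yi)
             for xi, yi in zip(train_X, train_y)]
    return min(dists)[1]

def loo_accuracy(X, y, classify=one_nn):
    """Leave-one-out: predict each sample with a model trained on all the others."""
    correct = 0
    for i in range(len(X)):
        pred = classify(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i])
        correct += (pred == y[i])
    return correct / len(X)

# Toy, linearly separable feature vectors (hypothetical, two features per patient):
X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
y = ["LSIL", "LSIL", "HSIL", "HSIL"]
acc = loo_accuracy(X, y)  # 1.0 on this separable toy data
```

LOO uses nearly all data for training at every step, which suits small cohorts like this one (66 patients), at the cost of running the classifier once per patient.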
Affiliation(s)
- Christos Konstandinou
- Department of Medical Physics, School of Health Sciences, Faculty of Medicine, University of Patras, Rio, Patras, Greece
- Spiros Kostopoulos
- Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Ag. Spyridonos Street, Egaleo, 122 43 Athens, Greece
- Dimitris Glotsos
- Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Dimitra Pappa
- Department of Pathology, IASO Thessalias, Larissa, Greece
- George Michail
- Department of Obstetrics and Gynecology, University Hospital of Patras, Rio, Greece
- Ioannis Kalatzis
- Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Pantelis Asvestas
- Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Eleftherios Lavdas
- Department of Biomedical Sciences, University of West Attica, Athens, Greece
- Dionisis Cavouras
- Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- George Sakellaropoulos
- Department of Medical Physics, School of Health Sciences, Faculty of Medicine, University of Patras, Rio, Patras, Greece
34
Xue Z, Novetsky AP, Einstein MH, Marcus JZ, Befano B, Guo P, Demarco M, Wentzensen N, Long LR, Schiffman M, Antani S. A demonstration of automated visual evaluation of cervical images taken with a smartphone camera. Int J Cancer 2020; 147:2416-2423. [PMID: 32356305] [DOI: 10.1002/ijc.33029]
Abstract
We examined whether automated visual evaluation (AVE), a deep learning computer application for cervical cancer screening, can be used on cervix images taken by a contemporary smartphone camera. A large number of cervix images acquired by the commercial MobileODT EVA system were filtered for acceptable visual quality, and 7587 filtered images from 3221 women were then annotated by a group of gynecologic oncologists (so the gold standard is an expert impression, not histopathology). We trained and tested on multiple random splits of the images using two deep-learning object-detection networks. For all receiver operating characteristic curves, the area-under-the-curve values for discriminating the most likely precancer cases from the least likely cases (most likely controls) were above 0.90. These results showed that AVE can classify cervix images with confidence scores that are strongly associated with expert evaluations of severity for the same images. The results on a small subset of images with histopathologic diagnoses further supported the capability of AVE for predicting cervical precancer. We examined the associations of the AVE severity score with gynecologic-oncologist impression in all regions where we had a sufficient number of cases and controls, and the influence of a woman's age. The method was found to be generally resilient to regional variation in the appearance of the cervix. This work suggests that using AVE on smartphones could be a useful adjunct to health-worker visual assessment with acetic acid, a cervical cancer screening method commonly used in low- and middle-resource settings.
Affiliation(s)
- Zhiyun Xue
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Akiva P Novetsky
- Rutgers New Jersey Medical School, Cancer Institute of New Jersey (CINJ), Newark, New Jersey, USA
- Mark H Einstein
- Rutgers New Jersey Medical School, Cancer Institute of New Jersey (CINJ), Newark, New Jersey, USA
- Jenna Z Marcus
- Rutgers New Jersey Medical School, Cancer Institute of New Jersey (CINJ), Newark, New Jersey, USA
- Brian Befano
- Information Management Services, Inc, Rockville, Maryland, USA
- Peng Guo
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Maria Demarco
- National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Nicolas Wentzensen
- National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Leonard Rodney Long
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Mark Schiffman
- National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Sameer Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
35
Bai B, Du Y, Liu P, Sun P, Li P, Lv Y. Detection of cervical lesion region from colposcopic images based on feature reselection. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101785]
36
Yue Z, Ding S, Zhao W, Wang H, Ma J, Zhang Y, Zhang Y. Automatic CIN Grades Prediction of Sequential Cervigram Image Using LSTM With Multistate CNN Features. IEEE J Biomed Health Inform 2020; 24:844-854. [DOI: 10.1109/jbhi.2019.2922682]
37
Saini SK, Bansal V, Kaur R, Juneja M. ColpoNet for automated cervical cancer screening using colposcopy images. Mach Vis Appl 2020; 31:15. [DOI: 10.1007/s00138-020-01063-8]
38
Zhang T, Luo YM, Li P, Liu PZ, Du YZ, Sun P, Dong B, Xue H. Cervical precancerous lesions classification using pre-trained densely connected convolutional networks with colposcopy images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101566]
39
Chen H, Yang L, Li L, Li M, Chen Z. An efficient cervical disease diagnosis approach using segmented images and cytology reporting. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2019.07.008]
40
Liu J, Peng Y, Li L, Chen Z, Zhang Y. Better resource utilization and quality of care for cervical cancer screening in low-resourced districts using an internet-based expert system. Technol Health Care 2019; 27:289-299. [DOI: 10.3233/thc-181577]
Affiliation(s)
- Jun Liu
- Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Yun Peng
- Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Ling Li
- Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Nanchang, Jiangxi, China
- Zhen Chen
- Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Yingchun Zhang
- Department of Biomedical Engineering, University of Houston, Houston, TX, USA
41
Multifeature Quantification of Nuclear Properties from Images of H&E-Stained Biopsy Material for Investigating Changes in Nuclear Structure with Advancing CIN Grade. J Healthc Eng 2018; 2018:6358189. [PMID: 30073048] [PMCID: PMC6057323] [DOI: 10.1155/2018/6358189]
Abstract
Background Cervical dysplasia is a precancerous condition, and if left untreated, it may lead to cervical cancer, which is the second most common cancer in women. The purpose of this study was to investigate differences in nuclear properties of the H&E-stained biopsy material between low-CIN and high-CIN cases and to associate those properties with the CIN grade. Methods The clinical material comprised hematoxylin and eosin- (H&E-) stained biopsy specimens from lesions of 44 patients diagnosed with cervical intraepithelial neoplasia (CIN). Four or five nonoverlapping microscopy images were digitized from each patient's H&E specimens, from regions indicated by the expert physician. Sixty-three textural and morphological nuclear features were generated for each patient's images. The Wilcoxon statistical test and the point biserial correlation were used to estimate each feature's discriminatory power between low-CIN and high-CIN cases and its correlation with the advancing CIN grade, respectively. Results Statistical analysis identified 19 features quantifying nuclear shape, size, and texture that showed statistically significant differences between low-CIN and high-CIN cases. These findings revealed that nuclei in high-CIN cases, compared to nuclei in low-CIN cases, have more irregular shape, are larger, are coarser in texture, have sharper edges and higher local contrast, are more inhomogeneous, and comprise structures of differing intensities. Conclusion A systematic statistical analysis of nucleus features, quantified from the H&E-stained biopsy material, showed that there are significant differences in the shape, size, and texture of nuclei between low-CIN and high-CIN cases.
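The point-biserial correlation used above is simply the Pearson correlation between a 0/1 group label (low vs. high CIN) and a continuous nuclear feature. A minimal sketch with invented feature values (not the study's data):

```python
def point_biserial(group, values):
    """Pearson correlation between a binary grouping (0 = low CIN, 1 = high CIN)
    and a continuous feature, i.e. the point-biserial coefficient."""
    n = len(values)
    mg = sum(group) / n
    mv = sum(values) / n
    cov = sum((g - mg) * (v - mv) for g, v in zip(group, values)) / n
    sg = (sum((g - mg) ** 2 for g in group) / n) ** 0.5   # std of the 0/1 labels
    sv = (sum((v - mv) ** 2 for v in values) / n) ** 0.5  # std of the feature
    return cov / (sg * sv)

# Hypothetical nuclear-area values, larger in the high-CIN group:
r = point_biserial([0, 0, 1, 1], [38.0, 41.0, 55.0, 60.0])  # strongly positive
```

A coefficient near +1 here means the feature increases almost deterministically with CIN grade, which is how the study ranks its 63 candidate features.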
42
Liu J, Li L, Wang L. Acetowhite region segmentation in uterine cervix images using a registered ratio image. Comput Biol Med 2018; 93:47-55. [DOI: 10.1016/j.compbiomed.2017.12.009]
43

44
Zhang L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: Deep Convolutional Networks for Cervical Cell Classification. IEEE J Biomed Health Inform 2017; 21:1633-1643. [PMID: 28541229] [DOI: 10.1109/jbhi.2017.2705583]
Abstract
Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell imaging based cancer detection tool, where cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are only built upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells-without prior segmentation-based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99) values, and especially specificity (98.3%), when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross validation. Similar superior performances are also achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
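The test-time aggregation step described above can be sketched as averaging the per-patch ConvNet scores for one cell and thresholding the mean; the score values and the 0.5 threshold below are hypothetical, not taken from the paper:

```python
def classify_cell(patch_scores, threshold=0.5):
    """Aggregate a cell's per-patch abnormality scores by averaging,
    then threshold the mean to get a binary label."""
    mean_score = sum(patch_scores) / len(patch_scores)
    label = "abnormal" if mean_score >= threshold else "normal"
    return label, mean_score

# Hypothetical ConvNet scores for the resampled patches of one cell:
label, score = classify_cell([0.91, 0.87, 0.78, 0.95])
```

Averaging over nucleus-centered patches makes the prediction robust to any single badly cropped patch, which is the rationale for resampling rather than scoring one crop per cell.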