1
Vargas-Cardona HD, Rodriguez-Lopez M, Arrivillaga M, Vergara-Sanchez C, García-Cifuentes JP, Bermúdez PC, Jaramillo-Botero A. Artificial intelligence for cervical cancer screening: Scoping review, 2009-2022. Int J Gynaecol Obstet 2024; 165:566-578. [PMID: 37811597 DOI: 10.1002/ijgo.15179]
Abstract
BACKGROUND The intersection of artificial intelligence (AI) with cancer research is increasing, and many of the advances have focused on the analysis of cancer images. OBJECTIVES To describe and synthesize the literature on the diagnostic accuracy of AI in early imaging diagnosis of cervical cancer, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). SEARCH STRATEGY The Arksey and O'Malley methodology was used, and the PubMed, Scopus, and Google Scholar databases were searched using a combination of English and Spanish keywords. SELECTION CRITERIA Identified titles and abstracts were screened to select original reports and cross-checked for overlap of cases. DATA COLLECTION AND ANALYSIS A descriptive summary was organized by the AI algorithm used, total number of images analyzed, data source, clinical comparison criteria, and diagnostic performance. MAIN RESULTS We identified 32 studies published between 2009 and 2022. The primary sources of images were digital colposcopy, cervicography, and mobile devices. The machine learning/deep learning (DL) algorithms applied in the articles included support vector machine (SVM), random forest classifier, k-nearest neighbors, multilayer perceptron, C4.5, Naïve Bayes, AdaBoost, XGBoost, conditional random fields, Bayes classifier, convolutional neural network (CNN; and variations), ResNet (several versions), YOLO+EfficientNetB0, and visual geometry group (VGG; several versions). SVM and DL methods (CNN, ResNet, VGG) showed the best diagnostic performance, with an accuracy of over 97%. CONCLUSION The use of AI for cervical cancer screening has increased over the years, and some results (mainly from DL) are very promising. However, further research is necessary to validate these findings.
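Several abstracts in this list report accuracy, sensitivity, and specificity figures for screening algorithms. As a reminder of how these metrics relate to confusion-matrix counts, a minimal sketch (the counts are hypothetical, not data from any cited study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                # true-positive rate (recall)
        "specificity": tn / (tn + fp),                # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # overall agreement
        "ppv": tp / (tp + fp),                        # positive predictive value
    }

# Hypothetical counts, for illustration only
m = diagnostic_metrics(tp=90, fp=5, tn=95, fn=10)
print(m["accuracy"], m["sensitivity"], m["specificity"])  # 0.925 0.9 0.95
```

Note that a single accuracy figure can hide a poor sensitivity/specificity trade-off, which is why the reviews below report the metrics separately.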
Affiliation(s)
- Mérida Rodriguez-Lopez
- Faculty of Health Sciences, Universidad Icesi, Cali, Colombia
- Fundación Valle del Lili, Centro de Investigaciones Clínicas, Cali, Colombia
- Andres Jaramillo-Botero
- OMICAS Research Institute (iOMICAS), Pontificia Universidad Javeriana Cali, Cali, Colombia
- Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California, USA
2
Abate A, Munshea A, Nibret E, Alemayehu DH, Alemu A, Abdissa A, Mihret A, Abebe M, Mulu A. Characterization of human papillomavirus genotypes and their coverage in vaccine delivered to Ethiopian women. Sci Rep 2024; 14:7976. [PMID: 38575600 PMCID: PMC10995144 DOI: 10.1038/s41598-024-57085-z]
Abstract
Cervical cancer is a significant public health concern in Ethiopia. It is mainly caused by persistent infection with human papillomavirus (HPV). The aim of this study was to assess the relationship between the carcinogenic risk of probable-, possible-, and low-risk HPV infection and cervical intraepithelial neoplasia (CIN) and cervical cancer. A cross-sectional study nested in a prospective cohort study was conducted in Bahir Dar, northwest Ethiopia. Statistical analyses were performed using SPSS version 26.0. HPV-16 was associated with a relatively higher risk of CIN II+ (AOR = 15.42; 95% CI 6.81-34.91). In addition, HPV-52, -18, -53, and -58 were significantly associated with an increased risk of CIN II+ (AOR = 7.38 (1.73-31.54), 5.42 (1.61-18.31), 4.08 (1.53-10.87), and 3.17 (1.00-10.03), respectively). The current study shows a high rate of HPV infection, with predominance of HPV-16, -53, -58, -18, -35, and -52. The quadrivalent and nonavalent vaccines covered only 27.1% and 45% of the circulating HPV genotypes, respectively. Ethiopia may need to consider introducing the nonavalent vaccine into its national public health strategy. A polyvalent vaccine that includes the genotypes not covered by existing approved vaccines should also be considered.
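The adjusted odds ratios (AOR) with 95% confidence intervals quoted above come from multivariable modeling; as a simpler illustration of the underlying quantity, a crude (unadjusted) odds ratio and its Woolf (logit) confidence interval can be computed from a 2x2 table. The counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a 95% Woolf (logit) confidence interval.
    2x2 table: a = exposed cases,   b = exposed non-cases,
               c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts, for illustration only
print(odds_ratio_ci(10, 20, 5, 40))
```

A confidence interval whose lower bound exceeds 1.00 (as for the HPV types above) indicates a statistically significant association at the 5% level.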
Affiliation(s)
- Alemayehu Abate
- Department of Health Biotechnology, Institute of Biotechnology, Bahir Dar University, P. O. Box 79, Bahir Dar, Ethiopia
- Amhara Public Health Institute, Bahir Dar, Ethiopia
- Abaineh Munshea
- Department of Health Biotechnology, Institute of Biotechnology, Bahir Dar University, P. O. Box 79, Bahir Dar, Ethiopia
- Department of Biology, College of Science, Bahir Dar University, P. O. Box 79, Bahir Dar, Ethiopia
- Endalkachew Nibret
- Department of Health Biotechnology, Institute of Biotechnology, Bahir Dar University, P. O. Box 79, Bahir Dar, Ethiopia
- Department of Biology, College of Science, Bahir Dar University, P. O. Box 79, Bahir Dar, Ethiopia
- Ashenafi Alemu
- Armauer Hansen Research Institute, Addis Ababa, Ethiopia
- Adane Mihret
- Armauer Hansen Research Institute, Addis Ababa, Ethiopia
- Markos Abebe
- Armauer Hansen Research Institute, Addis Ababa, Ethiopia
3
Chen P, Liu F, Zhang J, Wang B. MFEM-CIN: A Lightweight Architecture Combining CNN and Transformer for the Classification of Pre-Cancerous Lesions of the Cervix. IEEE Open J Eng Med Biol 2024; 5:216-225. [PMID: 38606400 PMCID: PMC11008799 DOI: 10.1109/ojemb.2024.3367243]
Abstract
Goal: Cervical cancer is one of the most common cancers in women worldwide, ranking among the top four. Unfortunately, it is also the fourth leading cause of cancer-related deaths among women, particularly in developing countries, where incidence and mortality rates are higher than in developed nations. Colposcopy can aid in the early detection of cervical lesions, but its effectiveness is limited in areas with scarce medical resources and a lack of specialized physicians. Consequently, many cases are diagnosed at later stages, putting patients at significant risk. Methods: This paper proposes an automated colposcopic image analysis framework to address these challenges. The framework aims to reduce the labor costs associated with cervical precancer screening in underserved regions and to assist doctors in diagnosing patients. The core of the framework is the MFEM-CIN hybrid model, which combines convolutional neural networks (CNN) and Transformer to aggregate the correlation between local and global features. This combined analysis of local and global information is useful in clinical diagnosis. In the model, MSFE and MSFF modules are utilized to extract and fuse multi-scale semantics, preserving important shallow feature information and allowing it to interact with deeper features, enriching the semantics. Conclusions: The experimental results demonstrate an accuracy of 89.2% in identifying cervical intraepithelial neoplasia while maintaining a lightweight model. This performance exceeds the average accuracy achieved by professional physicians, indicating promising potential for practical application. Utilizing automated colposcopic image analysis and the MFEM-CIN model, this research offers a practical solution to reduce the burden on healthcare providers and improve the efficiency and accuracy of cervical cancer diagnosis in resource-constrained areas.
Affiliation(s)
- Peng Chen
- National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Fin China-Anhui University Joint Laboratory for Financial Big Data Research, Hefei Financial China Information and Technology Company, Ltd., Hefei 230022, China
- Fobao Liu
- National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Jun Zhang
- National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Bing Wang
- School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu 233030, China
4
Ouh YT, Kim TJ, Ju W, Kim SW, Jeon S, Kim SN, Kim KG, Lee JK. Development and validation of artificial intelligence-based analysis software to support screening system of cervical intraepithelial neoplasia. Sci Rep 2024; 14:1957. [PMID: 38263154 PMCID: PMC10806233 DOI: 10.1038/s41598-024-51880-4]
Abstract
Cervical cancer, the fourth most common cancer among women worldwide, often proves fatal and stems from precursor lesions caused by high-risk human papillomavirus (HR-HPV) infection. Accurate and early diagnosis is crucial for effective treatment. Current screening methods, such as the Pap test, liquid-based cytology (LBC), visual inspection with acetic acid (VIA), and HPV DNA testing, have limitations, requiring confirmation through colposcopy. This study introduces CerviCARE AI, an artificial intelligence (AI) analysis software, to address colposcopy challenges. It automatically analyzes Tele-cervicography images, distinguishing between low-grade and high-grade lesions. In a multicenter retrospective study, CerviCARE AI achieved a remarkable sensitivity of 98% for high-risk groups (P2, P3, HSIL or higher, CIN2 or higher) and a specificity of 95.5%. These findings underscore CerviCARE AI's potential as a valuable diagnostic tool for highly accurate identification of cervical precancerous lesions. While further prospective research is needed to validate its clinical utility, this AI system holds promise for improving cervical cancer screening and lessening the burden of this deadly disease.
Affiliation(s)
- Yung-Taek Ouh
- Department of Obstetrics and Gynecology, Korea University Ansan Hospital, 123, Jeokgeum-ro, Danwon-gu, Ansan-si, Gyeonggi-do, Republic of Korea
- Tae Jin Kim
- Department of Obstetrics and Gynecology, Konkuk University School of Medicine, 120-1, Neungdong-ro, Gwangjin-gu, Seoul, Republic of Korea
- Woong Ju
- Department of Obstetrics and Gynecology, Ewha Womans University Seoul Hospital, 25, Magokdong-ro 2-gil, Gangseo-gu, Seoul, Republic of Korea
- Sang Wun Kim
- Department of Obstetrics and Gynecology, Institute of Women's Life Medical Science, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul, Republic of Korea
- Seob Jeon
- Department of Obstetrics and Gynecology, College of Medicine, Soonchunhyang University Cheonan Hospital, 31, Suncheonhyang 6-gil, Dongnam-gu, Cheonan-si, Chungcheongnam-do, Republic of Korea
- Soo-Nyung Kim
- R&D Center, NTL Medical Institute, Yongin, Republic of Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University College of Medicine, Gil Medical Center, 24, Namdong-daero 774beon-gil, Namdong-gu, Incheon, Republic of Korea
- Jae-Kwan Lee
- Department of Obstetrics and Gynecology, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, Republic of Korea
5
Wu A, Xue P, Abulizi G, Tuerxun D, Rezhake R, Qiao Y. Artificial intelligence in colposcopic examination: A promising tool to assist junior colposcopists. Front Med (Lausanne) 2023; 10:1060451. [PMID: 37056736 PMCID: PMC10088560 DOI: 10.3389/fmed.2023.1060451]
Abstract
Introduction: Well-trained colposcopists are in severe shortage worldwide, especially in low-resource areas. Here, we aimed to evaluate the Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) for detecting abnormalities in digital colposcopy images, focusing especially on its role in assisting junior colposcopists to correctly identify the lesion areas where biopsy should be performed. Materials and methods: This hospital-based retrospective study recruited women who visited colposcopy clinics between September 2021 and January 2022. A total of 366 of 1,146 women with complete medical information recorded by a senior colposcopist and valid histology results were included. Anonymized colposcopy images were reviewed by CAIADS and a junior colposcopist separately, and the junior colposcopist also reviewed the images together with the CAIADS results (CAIADS-Junior). The diagnostic accuracy and biopsy efficiency of CAIADS and CAIADS-Junior in detecting cervical intraepithelial neoplasia grade 2 or worse (CIN2+), CIN3+, and cancer were assessed in comparison with the senior and junior colposcopists. Factors influencing the accuracy of CAIADS were explored. Results: For CIN2+ and CIN3+ detection, CAIADS showed a sensitivity of ~80%, which was not significantly lower than that of the senior colposcopist (CIN2+: 80.6 vs. 91.3%, p = 0.061; CIN3+: 80.0 vs. 90.0%, p = 0.189). The sensitivity of the junior colposcopist increased significantly with the assistance of CAIADS (CIN2+: 95.1 vs. 79.6%, p = 0.002; CIN3+: 97.1 vs. 85.7%, p = 0.039) and was comparable to that of the senior colposcopist (CIN2+: 95.1 vs. 91.3%, p = 0.388; CIN3+: 97.1 vs. 90.0%, p = 0.125). In detecting cervical cancer, CAIADS achieved the highest sensitivity, at 100%. For all endpoints, CAIADS showed the highest specificity (55-64%) and positive predictive values compared with both the senior and junior colposcopists. As CIN grades became higher, the average number of biopsies decreased for the subspecialists, and CAIADS required the minimum number of biopsies per detected case (2.2-2.6). Meanwhile, the biopsy sensitivity of the junior colposcopist was the lowest, but the CAIADS-assisted junior colposcopist achieved a higher biopsy sensitivity. Conclusion: CAIADS could assist junior colposcopists in improving diagnostic accuracy and biopsy efficiency, and may be a promising solution for improving the quality of cervical cancer screening in low-resource settings.
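The sensitivity comparisons above are paired (the same cases are read with and without AI assistance), and McNemar's exact test is a standard way to obtain p-values for such paired proportions. A generic sketch follows, with hypothetical discordant counts; this is an illustration of the method, not necessarily the authors' exact analysis:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test on discordant pairs:
    b = cases detected under reader/method A only,
    c = cases detected under reader/method B only.
    Concordant pairs do not enter the test."""
    n = b + c
    if n == 0:
        return 1.0
    # Under H0 the discordant pairs split 50/50; double the smaller binomial tail
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts, for illustration only
print(mcnemar_exact_p(1, 8))  # 0.0390625
```

With few discordant pairs the exact binomial form is preferable to the chi-squared approximation, which is why small paired studies often report it.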
Affiliation(s)
- Aiyuan Wu
- The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Peng Xue
- School of Population Medicine and Public Health, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Guzhalinuer Abulizi
- The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Dilinuer Tuerxun
- The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Remila Rezhake
- The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Correspondence: Remila Rezhake
- Youlin Qiao
- The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- School of Population Medicine and Public Health, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Correspondence: Youlin Qiao
6
Allahqoli L, Laganà AS, Mazidimoradi A, Salehiniya H, Günther V, Chiantera V, Karimi Goghari S, Ghiasvand MM, Rahmani A, Momenimovahed Z, Alkatout I. Diagnosis of Cervical Cancer and Pre-Cancerous Lesions by Artificial Intelligence: A Systematic Review. Diagnostics (Basel) 2022; 12:2771. [PMID: 36428831 PMCID: PMC9689914 DOI: 10.3390/diagnostics12112771]
Abstract
OBJECTIVE The likelihood of timely treatment for cervical cancer increases with timely detection of abnormal cervical cells. Automated methods of detecting abnormal cervical cells were established because manual identification requires skilled pathologists and is time-consuming and prone to error. The purpose of this systematic review is to evaluate the diagnostic performance of artificial intelligence (AI) technologies for the prediction, screening, and diagnosis of cervical cancer and pre-cancerous lesions. MATERIALS AND METHODS Comprehensive searches were performed on three databases: Medline, Web of Science Core Collection (SCI-EXPANDED, SSCI, and A&HCI indexes), and Scopus, to find papers published until July 2022. Articles that applied any AI technique for the prediction, screening, or diagnosis of cervical cancer were included in the review. No time restriction was applied. Articles were searched, screened, incorporated, and analyzed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS The primary search yielded 2538 articles. After screening and evaluation of eligibility, 117 studies were incorporated in the review. AI techniques were found to play a significant role in screening systems for pre-cancerous and cancerous cervical lesions. The accuracy of the algorithms in predicting cervical cancer varied from 70% to 100%. AI techniques distinguished cancerous from normal Pap smears with 80-100% accuracy. AI is expected to serve as a practical tool for doctors in making accurate clinical diagnoses. The reported sensitivity and specificity of AI in colposcopy for the detection of CIN2+ were 71.9-98.22% and 51.8-96.2%, respectively. CONCLUSION The present review highlights the acceptable performance of AI systems in the prediction, screening, or detection of cervical cancer and pre-cancerous lesions, especially when faced with a paucity of specialized centers or medical resources. In combination with human evaluation, AI could serve as a helpful tool in the interpretation of cervical smears or images.
Affiliation(s)
- Leila Allahqoli
- Midwifery Department, Ministry of Health and Medical Education, Tehran 1467664961, Iran
- Antonio Simone Laganà
- Unit of Gynecologic Oncology, ARNAS “Civico-Di Cristina-Benfratelli”, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
- Afrooz Mazidimoradi
- Neyriz Public Health Clinic, Shiraz University of Medical Sciences, Shiraz 7134814336, Iran
- Hamid Salehiniya
- Social Determinants of Health Research Center, Birjand University of Medical Sciences, Birjand 9717853577, Iran
- Veronika Günther
- University Hospitals Schleswig-Holstein, Campus Kiel, Kiel School of Gynaecological Endoscopy, Arnold-Heller-Str. 3, Haus 24, 24105 Kiel, Germany
- Vito Chiantera
- Unit of Gynecologic Oncology, ARNAS “Civico-Di Cristina-Benfratelli”, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
- Shirin Karimi Goghari
- School of Industrial and Systems Engineering, Tarbiat Modares University (TMU), Tehran 1411713114, Iran
- Mohammad Matin Ghiasvand
- Department of Computer Engineering, Amirkabir University of Technology (AUT), Tehran 1591634311, Iran
- Azam Rahmani
- Nursing and Midwifery Care Research Centre, School of Nursing and Midwifery, Tehran University of Medical Sciences, Tehran 141973317, Iran
- Zohre Momenimovahed
- Reproductive Health Department, Qom University of Medical Sciences, Qom 3716993456, Iran
- Ibrahim Alkatout
- University Hospitals Schleswig-Holstein, Campus Kiel, Kiel School of Gynaecological Endoscopy, Arnold-Heller-Str. 3, Haus 24, 24105 Kiel, Germany
7
Song J, Im S, Lee SH, Jang HJ. Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images. Diagnostics (Basel) 2022; 12:2623. [PMID: 36359467 PMCID: PMC9689570 DOI: 10.3390/diagnostics12112623]
Abstract
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes. Therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, an endometrial and endocervical origin for an adenocarcinoma should also be distinguished. Although the discrimination can be helped with various immunohistochemical markers, there is no definitive marker. Therefore, we tested the feasibility of deep learning (DL)-based classification for the subtypes of cervical and endometrial cancers and the site of origin of adenocarcinomas from whole slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification. Then, the average of patch classification results was used for the final classification. The area under the receiver operating characteristic curves (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrated the feasibility of DL-based classifiers for the discrimination of cancers from the cervix and uterus. We expect that the performance of the classifiers will be much enhanced with an accumulation of WSI data. Then, the information from the classifiers can be integrated with other data for more precise discrimination of cervical and endometrial cancers.
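The pipeline described above classifies 360 × 360-pixel patches, averages the patch results into a slide-level score, and evaluates the result by AUROC. That evaluation step can be sketched in plain Python; the patch probabilities and labels below are toy values, not the study's data:

```python
def slide_score(patch_probs):
    """Slide-level score as the mean of patch-level probabilities."""
    return sum(patch_probs) / len(patch_probs)

def auroc(labels, scores):
    """Rank-based AUROC: the probability that a randomly chosen positive
    slide outscores a randomly chosen negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: four slides, each with two patch probabilities (illustrative only)
scores = [slide_score(p) for p in ([0.1, 0.1], [0.3, 0.5], [0.2, 0.5], [0.7, 0.9])]
print(auroc([0, 0, 1, 1], scores))  # 0.75
```

Averaging patch predictions is the simplest aggregation rule; it is robust to a few mislabeled patches but can dilute a small focal lesion, which is one reason slide-level performance improves as more WSI data accumulate.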
Affiliation(s)
- JaeYen Song
- Department of Obstetrics and Gynecology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Soyoung Im
- Department of Hospital Pathology, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 16247, Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
8
Fan Y, Ma H, Fu Y, Liang X, Yu H, Liu Y. Colposcopic multimodal fusion for the classification of cervical lesions. Phys Med Biol 2022; 67. [PMID: 35617940 DOI: 10.1088/1361-6560/ac73d4]
Abstract
Objective: Cervical cancer is one of the two biggest killers of women, and early detection of cervical precancerous lesions can effectively improve the survival rate of patients. Manual diagnosis combining colposcopic images and clinical examination results is the main clinical diagnostic method at present. Developing an intelligent diagnosis algorithm based on artificial intelligence is an inevitable trend toward making diagnosis more objective and improving its quality and efficiency. Approach: A colposcopic multimodal fusion convolutional neural network (CMF-CNN) was proposed for the classification of cervical lesions. Mask R-CNN was used to detect the cervical region, while the encoding network EfficientNet-B3 was introduced to extract multimodal image features from the acetic-acid and iodine images. Finally, Squeeze-and-Excitation, Atrous Spatial Pyramid Pooling, and convolution blocks were adopted to encode and fuse the patient's clinical text information. Main results: In 7,106 colposcopy cases, the accuracy, macro F1-score, and macro area under the curve of the proposed model were 92.70%, 92.74%, and 98.56%, respectively, superior to mainstream unimodal image classification models. Significance: CMF-CNN combines multimodal information and achieves high performance in the classification of cervical lesions in colposcopy, so it can provide comprehensive diagnostic aid.
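The fusion idea above, combining per-modality features (acetic-acid image, iodine image, clinical text) into one representation before classification, can be illustrated with a deliberately simplified late-fusion sketch. Plain concatenation plus a toy linear head stands in for the paper's learned CNN/attention fusion; all names and values here are illustrative assumptions:

```python
def late_fusion(acetic_vec, iodine_vec, clinical_vec):
    """Concatenate per-modality feature vectors into one joint representation.
    (The paper fuses learned deep features; plain concatenation is shown
    here only to illustrate the late-fusion idea.)"""
    return list(acetic_vec) + list(iodine_vec) + list(clinical_vec)

def linear_score(features, weights, bias=0.0):
    """Toy linear classification head over the fused feature vector."""
    return sum(w * f for w, f in zip(weights, features)) + bias

# Hypothetical 2-, 1-, and 1-dimensional modality features
fused = late_fusion([0.2, 0.7], [0.5], [1.0])
print(linear_score(fused, [1.0, 1.0, 1.0, 1.0]))
```

The design point is that each modality contributes complementary evidence (acetowhitening, iodine staining, patient history), so a fused representation can outperform any single-modality classifier, which is what the reported results show.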
Affiliation(s)
- Yinuo Fan
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Huizhan Ma
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Yuanbin Fu
- The College of Intelligence and Computing, Tianjin University, Tianjin 300072, People's Republic of China
- Xiaoyun Liang
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Hui Yu
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Yuzhen Liu
- The Department of Obstetrics and Gynecology, Affiliated Hospital of Weifang Medical University, Weifang 261042, People's Republic of China
9
Huang W, Sun S, Yu Z, Lu S, Feng H. Chronic Cervicitis and Cervical Cancer Detection Based on Deep Learning of Colposcopy Images Toward Translational Pharmacology. Front Pharmacol 2022; 13:911962. [PMID: 35712722 PMCID: PMC9196041 DOI: 10.3389/fphar.2022.911962]
Abstract
With the rapid development of deep learning, automatic image recognition is widely used in medical applications. In this study, a deep learning convolutional neural network model was developed to recognize and classify chronic cervicitis and cervical cancer. A total of 10,012 colposcopy images of 1,081 patients from Hunan Provincial People’s Hospital in China were recorded. Five different colposcopic image features of the cervix, including chronic cervicitis, intraepithelial lesions, cancer, polyps, and free hyperplastic squamous epithelial tissue, were extracted and applied in our convolutional neural network model. However, the result showed low accuracy (42.16%) because the model misrecognized chronic cervicitis, intraepithelial lesions, and free hyperplastic squamous epithelial tissue, which are highly similar. To optimize this model, we selected two significant feature classes, chronic cervicitis and cervical cancer, as input to the deep learning network. The result indicates high accuracy and robustness, with an accuracy of 95.19%, so the model can be applied to detect whether a patient has chronic cervicitis or cervical cancer based on the patient’s colposcopy images.
Affiliation(s)
- Wei Huang
- Department of Gynecology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Shasha Sun
- Department of Gynecology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Zhengyu Yu
- Faculty of Engineering and IT, University of Technology, Sydney, NSW, Australia
- Shanshan Lu
- Department of Gynecology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Hao Feng
- Department of Dermatology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Correspondence: Hao Feng
10
Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front Oncol 2022; 12:851367. [PMID: 35359358 PMCID: PMC8963491 DOI: 10.3389/fonc.2022.851367]
Abstract
Cervical cancer remains a leading cause of cancer death in women, seriously threatening their physical and mental health. It is an easily preventable cancer with early screening and diagnosis. Although technical advancements have significantly improved the early diagnosis of cervical cancer, accurate diagnosis remains difficult owing to various factors. In recent years, artificial intelligence (AI)-based medical diagnostic applications have been on the rise and have excellent applicability in the screening and diagnosis of cervical cancer. Their benefits include reduced time consumption, reduced need for professional and technical personnel, and no bias owing to subjective factors. We, thus, aimed to discuss how AI can be used in cervical cancer screening and diagnosis, particularly to improve the accuracy of early diagnosis. The application and challenges of using AI in the diagnosis and treatment of cervical cancer are also discussed.
Affiliation(s)
- Xin Hou
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Guangyang Shen
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Liqiang Zhou
- Cancer Centre and Center of Reproduction, Development and Aging, Faculty of Health Sciences, University of Macau, Macau, Macau SAR, China
- Yinuo Li
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Tian Wang
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Xiangyi Ma
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Correspondence: Xiangyi Ma
11
Ito Y, Miyoshi A, Ueda Y, Tanaka Y, Nakae R, Morimoto A, Shiomi M, Enomoto T, Sekine M, Sasagawa T, Yoshino K, Harada H, Nakamura T, Murata T, Hiramatsu K, Saito J, Yagi J, Tanaka Y, Kimura T. An artificial intelligence-assisted diagnostic system improves the accuracy of image diagnosis of uterine cervical lesions. Mol Clin Oncol 2022; 16:27. [PMID: 34987798 DOI: 10.3892/mco.2021.2460]
Abstract
The present study created an artificial intelligence (AI)-automated diagnostic system for uterine cervical lesions and assessed its performance in classifying pathological cervical lesions from colposcopic images. A total of 463 colposcopic images were analyzed, and traditional colposcopy diagnoses were compared with those obtained by AI image diagnosis. Next, 100 images were presented to a panel of 32 gynecologists, who independently examined each image in a blinded fashion and assigned it to one of four tumor categories. The 32 gynecologists then revisited their diagnosis for each image after being informed of the AI diagnosis. The present study assessed changes in physician diagnoses and the accuracy of AI-image-assisted diagnosis (AISD). The accuracy of AI was 57.8% for normal, 35.4% for cervical intraepithelial neoplasia (CIN)1, 40.5% for CIN2-3 and 44.2% for invasive cancer. Before knowing the AI image diagnosis, the gynecologists' accuracy on cervical pathological images was 54.4% for CIN2-3 and 38.9% for invasive cancer; after learning of the AISD, their accuracy improved to 58.0% for CIN2-3 and 48.5% for invasive cancer. AI-assisted image diagnosis thus significantly improved gynecologist accuracy for invasive cancer (P<0.01) and tended to improve it for CIN2-3 (P=0.14).
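The per-category accuracies above are within-category proportions of correct diagnoses. A minimal sketch of that calculation, using made-up labels rather than the study's data:

```python
from collections import defaultdict

def per_category_accuracy(truth, preds):
    """Fraction of correct diagnoses within each true category."""
    totals, correct = defaultdict(int), defaultdict(int)
    for t, p in zip(truth, preds):
        totals[t] += 1
        if t == p:
            correct[t] += 1
    return {cat: correct[cat] / totals[cat] for cat in totals}

# Toy example with the study's four category names (labels are invented)
truth = ["Normal", "CIN1", "CIN2-3", "Cancer", "CIN2-3", "Normal"]
preds = ["Normal", "CIN2-3", "CIN2-3", "Cancer", "CIN1", "CIN1"]
print(per_category_accuracy(truth, preds))
```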
Affiliation(s)
- Yu Ito: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Ai Miyoshi: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Yutaka Ueda: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Yusuke Tanaka: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Ruriko Nakae: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Akiko Morimoto: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Mayu Shiomi: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
- Takayuki Enomoto: Department of Obstetrics and Gynecology, Niigata University Graduate School of Medicine, Chuo-ku, Niigata 951-8520, Japan
- Masayuki Sekine: Department of Obstetrics and Gynecology, Niigata University Graduate School of Medicine, Chuo-ku, Niigata 951-8520, Japan
- Toshiyuki Sasagawa: Department of Obstetrics and Gynecology, Kanazawa Medical University, Uchinada, Ishikawa 920-0293, Japan
- Kiyoshi Yoshino: Department of Obstetrics and Gynecology, University of Occupational and Environmental Health, Kitakyushu, Fukuoka 807-8556, Japan
- Hiroshi Harada: Department of Obstetrics and Gynecology, University of Occupational and Environmental Health, Kitakyushu, Fukuoka 807-8556, Japan
- Takafumi Nakamura: Department of Obstetrics and Gynecology, Kawasaki Medical University, Kurashiki, Okayama 701-0192, Japan
- Takuya Murata: Department of Obstetrics and Gynecology, Kawasaki Medical University, Kurashiki, Okayama 701-0192, Japan
- Keizo Hiramatsu: Hiramatsu Obstetrics and Gynecology Clinic, Kishiwada-shi, Osaka 583-0024, Japan
- Junko Saito: Saito Women Clinic, Yodogawa-ku, Osaka 532-0003, Japan
- Junko Yagi: Ladies Clinic Yagi, Senboku-gun, Osaka 595-0805, Japan
- Tadashi Kimura: Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, Suita, Osaka 567-0871, Japan
12
Miyagi Y, Hata T, Bouno S, Koyanagi A, Miyake T. Recognition of facial expression of fetuses by artificial intelligence (AI). J Perinat Med 2021; 49:596-603. [PMID: 33548168 DOI: 10.1515/jpm-2020-0537]
Abstract
OBJECTIVES To develop an artificial intelligence (AI) classifier that recognizes fetal facial expressions, which are considered related to fetal brain development, in a retrospective, non-interventional pilot study. METHODS Sonographic images of fetal faces were obtained in routine practice from outpatient pregnant women with a singleton fetus at 19 to 38 weeks of gestation between January 1, 2020, and September 30, 2020, with completely de-identified data. The images were classified into seven categories: eye blinking, mouthing, face without any expression, scowling, smiling, tongue expulsion, and yawning. Categories containing fewer than 10 fetuses were excluded, and a deep learning AI classifier was then created with the remaining data. Statistical values such as accuracy on the test dataset and the AI confidence score profiles for each category per image were obtained for all data. RESULTS The numbers of fetuses/images in the rated categories were 14/147, 23/302, 33/320, 8/55, and 10/72 for eye blinking, mouthing, face without any expression, scowling, and yawning, respectively. The accuracy of the AI fetal facial expression classifier on the entire test dataset was 0.985. The accuracy/sensitivity/specificity values were 0.996/0.993/1.000, 0.992/0.986/1.000, 0.985/1.000/0.979, 0.996/0.888/1.000, and 1.000/1.000/1.000 for the eye blinking, mouthing, face without any expression, scowling, and yawning categories, respectively. CONCLUSIONS The AI classifier has the potential to objectively classify fetal facial expressions, and AI can advance fetal brain development research using ultrasound.
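The per-category accuracy/sensitivity/specificity triplets above follow the usual one-vs-rest confusion-matrix definitions. A minimal sketch with invented labels, not the study's data:

```python
def one_vs_rest_metrics(truth, preds, category):
    """Accuracy, sensitivity, specificity for one category vs. all others."""
    pairs = list(zip(truth, preds))
    tp = sum(t == category and p == category for t, p in pairs)
    tn = sum(t != category and p != category for t, p in pairs)
    fp = sum(t != category and p == category for t, p in pairs)
    fn = sum(t == category and p != category for t, p in pairs)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Toy example using two of the study's category names
truth = ["yawning", "mouthing", "yawning", "scowling"]
preds = ["yawning", "mouthing", "mouthing", "scowling"]
print(one_vs_rest_metrics(truth, preds, "yawning"))  # (0.75, 0.5, 1.0)
```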
Affiliation(s)
- Yasunari Miyagi: Department of Gynecology, Miyake Ofuku Clinic, Okayama, Japan; Medical Data Labo, Okayama, Japan; Department of Gynecologic Oncology, Saitama Medical University International Medical Center, Hidaka, Japan
- Toshiyuki Hata: Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan; Department of Perinatology and Gynecology, Kagawa University Graduate School of Medicine, Kagawa, Japan
- Saori Bouno: Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan
- Aya Koyanagi: Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan
- Takahito Miyake: Department of Gynecology, Miyake Ofuku Clinic, Okayama, Japan; Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan
13
Xue P, Tang C, Li Q, Li Y, Shen Y, Zhao Y, Chen J, Wu J, Li L, Wang W, Li Y, Cui X, Zhang S, Zhang W, Zhang X, Ma K, Zheng Y, Qian T, Ng MTA, Liu Z, Qiao Y, Jiang Y, Zhao F. Development and validation of an artificial intelligence system for grading colposcopic impressions and guiding biopsies. BMC Med 2020; 18:406. [PMID: 33349257 PMCID: PMC7754595 DOI: 10.1186/s12916-020-01860-y]
Abstract
BACKGROUND Colposcopy diagnosis and directed biopsy are key components of cervical cancer screening programs, but their performance is limited by the requirement for experienced colposcopists. This study aimed to develop and validate a Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) for grading colposcopic impressions and guiding biopsies. METHODS Anonymized digital records of 19,435 patients were obtained from six hospitals across China. These records included colposcopic images, clinical information, and pathological results (gold standard). The data were randomly assigned (7:1:2) to a training and a tuning set for developing CAIADS and to a validation set for evaluating performance. RESULTS The agreement between CAIADS-graded colposcopic impressions and pathology findings was higher than that of colposcopist interpretation (82.2% versus 65.9%, kappa 0.750 versus 0.516, p < 0.001). For detecting pathological high-grade squamous intraepithelial lesion or worse (HSIL+), CAIADS showed higher sensitivity than colposcopist interpretation at either biopsy threshold (low-grade or worse 90.5%, 95% CI 88.9-91.4% versus 83.5%, 81.5-85.3%; high-grade or worse 71.9%, 69.5-74.2% versus 60.4%, 57.9-62.9%; all p < 0.001), whereas the specificities were similar (low-grade or worse 51.8%, 49.8-53.8% versus 52.0%, 50.0-54.1%; high-grade or worse 93.9%, 92.9-94.9% versus 94.9%, 93.9-95.7%; all p > 0.05). The CAIADS also demonstrated a superior ability to predict biopsy sites, with a median mean-intersection-over-union (mIoU) of 0.758. CONCLUSIONS The CAIADS has potential to assist beginners and to improve the diagnostic quality of colposcopy and biopsy in the detection of cervical precancer/cancer.
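The median mIoU of 0.758 for biopsy-site prediction rests on the standard intersection-over-union measure. Assuming axis-aligned regions in (x1, y1, x2, y2) form (the paper's exact region format is not given here), IoU can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # intersection 1, union 7
```

The study's mIoU would then be the mean IoU between predicted and pathologist-confirmed biopsy sites per image, with the median taken across the validation set.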
Affiliation(s)
- Peng Xue: Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China; Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Chao Tang: School of Public Health, Dalian Medical University, Dalian, China
- Qing Li: Diagnosis and Treatment for Cervical Lesions Center, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Yu Shen: Zonsun Healthcare, Shenzhen, China
- Yuqian Zhao: Center for Cancer Prevention Research, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Longyu Li: Jiangxi Maternal and Child Health Hospital, Nanchang, China
- Wei Wang: Chengdu Women's and Children's Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Yucong Li: Chongqing University Cancer Hospital, Chongqing, China
- Xiaoli Cui: Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Shaokai Zhang: Affiliated Cancer Hospital of Zhengzhou University/Henan Cancer Hospital, Zhengzhou, China
- Wenhua Zhang: Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xun Zhang: Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kai Ma: Tencent Jarvis Lab, Shenzhen, China
- Zhihua Liu: Department of Gynecology, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Youlin Qiao: Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China; Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yu Jiang: Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Fanghui Zhao: Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
14
Miyagi Y, Habara T, Hirata R, Hayashi N. Predicting a live birth by artificial intelligence incorporating both the blastocyst image and conventional embryo evaluation parameters. Artif Intell Med Imaging 2020. [DOI: 10.35711/wjbc.v1.i3.87]
15
Miyagi Y, Habara T, Hirata R, Hayashi N. Predicting a live birth by artificial intelligence incorporating both the blastocyst image and conventional embryo evaluation parameters. Artif Intell Med Imaging 2020; 1:94-107. [DOI: 10.35711/aimi.v1.i3.94]
Abstract
BACKGROUND Achieving a live birth is the goal of assisted reproductive technology in reproductive medicine. When a selected blastocyst is transferred to the uterus, its quality is conventionally evaluated by microscopic inspection, yet the resulting implantation rate is only about 30%-40%, and no method of predicting a live birth from the blastocyst image has been established. Live births also correlate with several clinical conventional embryo evaluation (CEE) parameters, such as maternal age. It is therefore necessary to develop artificial intelligence (AI) that combines blastocyst images and CEE parameters to predict live births.
AIM To develop an AI classifier for blastocyst images and CEE parameters that predicts the probability of achieving a live birth.
METHODS A total of 5691 images of blastocysts on the fifth day after oocyte retrieval, obtained from consecutive patients between January 2009 and April 2017, were retrospectively enrolled with fully de-identified data; patients received explanations, and a website with additional information offered an opt-out option. We developed a system in which an original deep learning neural network architecture predicts the probability of live birth from a blastocyst image and CEE parameters.
RESULTS The live birth rate was 0.387 (1587/4104 cases). Ten independent clinical variables were used to predict live birth, which largely avoids multicollinearity. A single AI classifier composed of ten convolutional neural network layers, with an elementwise layer for the ten clinical factors, was developed and trained on 42792 data points with an L2 regularization value of 0.001. The accuracy, sensitivity, specificity, negative predictive value, positive predictive value, Youden J index, and area under the curve for predicting live birth were 0.743, 0.638, 0.789, 0.831, 0.573, 0.427, and 0.740, respectively. The optimal cut-off point of the receiver operating characteristic curve was 0.207.
CONCLUSION AI classifiers have the potential to predict live births in ways that humans cannot, and artificial intelligence may advance assisted reproductive technology.
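The reported Youden J index follows directly from the reported sensitivity and specificity (J = sensitivity + specificity - 1), and the optimal ROC cut-off is the threshold that maximizes J. A quick check against the abstract's numbers:

```python
def youden_j(sensitivity, specificity):
    """Youden J index: sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# The study's reported sensitivity (0.638) and specificity (0.789)
# reproduce its reported J index of 0.427
print(round(youden_j(0.638, 0.789), 3))  # 0.427
```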
Affiliation(s)
- Yasunari Miyagi: Department of Artificial Intelligence, Medical Data Labo, Okayama 703-8267, Japan; Department of Gynecologic Oncology, Saitama Medical University International Medical Center, Hidaka 350-1298, Saitama, Japan
- Toshihiro Habara: Department of Reproduction, Okayama Couples' Clinic, Okayama 701-1152, Japan
- Rei Hirata: Department of Reproduction, Okayama Couples' Clinic, Okayama 701-1152, Japan
- Nobuyoshi Hayashi: Department of Reproduction, Okayama Couples' Clinic, Okayama 701-1152, Japan
16
Liu L, Wang Y, Ma Q, Tan L, Wu Y, Xiao J. Artificial classification of cervical squamous lesions in ThinPrep cytologic tests using a deep convolutional neural network. Oncol Lett 2020; 20:113. [PMID: 32863926 PMCID: PMC7448561 DOI: 10.3892/ol.2020.11974]
Abstract
The diagnosis of squamous cell carcinoma requires accurate classification of cervical squamous lesions in the ThinPrep cytologic test (TCT), which primarily relies on a pathologist's interpretation under a microscope. Deep convolutional neural networks (DCNNs) have played an increasingly important role in digital pathology, but they have not been applied to diverse datasets and externally validated. In the present study, a DCNN model based on VGG16 and an ensemble training strategy (ETS) based on 5-fold cross-validation was employed to automatically classify normal and abnormal cervical squamous cells from a multi-center dataset. First, a dataset comprising 82 TCT samples from four hospitals was collected, and the model was fine-tuned twice on it, with and without the ETS. The models' classifications were then compared with those of two skilled pathologists in terms of classification accuracy and efficiency, and paired sample t-tests were used to validate the consistency between the proposed methods and the pathologists. The ETS slightly, though not significantly, improved classification accuracy (P0=0.387>0.05, DCNN without ETS vs. DCNN with ETS), and the model's accuracy did not differ significantly from that of the pathologists (P1=0.771>0.05, DCNN with ETS vs. pathologist 1; P2=0.489>0.05, DCNN with ETS vs. pathologist 2). The DCNN model was almost 6-fold faster than the pathologists. The automated scheme thus matched the pathologists' accuracy while identifying cervical squamous lesions far more efficiently, which could allow wider and more efficient screening and may eventually substitute for pathologists. Future research should address the viability of practical implementation of such DCNN models in the laboratory setting.
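The abstract does not specify how the five fold-wise models are combined at inference. A common choice, shown here as an assumption rather than the authors' documented method, is to average the fold models' class probabilities and take the argmax:

```python
import numpy as np

def ensemble_predict(fold_probs):
    """Average per-fold class probabilities and pick the top class per sample.

    fold_probs: list of (n_samples, n_classes) probability arrays, one per fold.
    """
    mean_probs = np.mean(np.stack(fold_probs), axis=0)
    return mean_probs.argmax(axis=1)

# Toy example: three folds, two samples, normal-vs-abnormal probabilities
folds = [np.array([[0.6, 0.4], [0.2, 0.8]]),
         np.array([[0.7, 0.3], [0.4, 0.6]]),
         np.array([[0.5, 0.5], [0.3, 0.7]])]
print(ensemble_predict(folds))  # [0 1]
```

Averaging tends to reduce the variance of any single fold's model, which is consistent with the modest accuracy gain the ETS produced here.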
Affiliation(s)
- Li Liu: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Third Military Medical University (Army Medical University), Chongqing 400038, P.R. China
- Yuanhua Wang: Department of Pathology, Third Affiliated Hospital, Third Military Medical University (Army Medical University), Chongqing 400042, P.R. China
- Qiang Ma: Department of Pathology, Third Affiliated Hospital, Third Military Medical University (Army Medical University), Chongqing 400042, P.R. China
- Liwen Tan: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Third Military Medical University (Army Medical University), Chongqing 400038, P.R. China
- Yi Wu: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Third Military Medical University (Army Medical University), Chongqing 400038, P.R. China
- Jingjing Xiao: Department of Medical Engineering, Second Affiliated Hospital, Third Military Medical University (Army Medical University), Chongqing 400037, P.R. China