1
Hao L, Jiang Y, Zhang C, Han P. Genome composition-based deep learning predicts oncogenic potential of HPVs. Front Cell Infect Microbiol 2024; 14:1430424. [PMID: 39104853 PMCID: PMC11298479 DOI: 10.3389/fcimb.2024.1430424] [Received: 05/09/2024] [Accepted: 06/27/2024]
Abstract
Human papillomaviruses (HPVs) account for more than 30% of cancer cases, and the oncogenic role of the viral E6 and E7 genes is well established. However, the identification of high-risk HPV genotypes has largely relied on slow biological exploration and clinical observation, leaving many HPVs untyped and of unknown oncogenicity. In the present study, we retrieved and curated high-quality HPV sequence records and analyzed their genomic compositional traits of dinucleotide (DNT) composition and DNT representation (DCR) to survey the distributional differences among HPV types. A deep learning model was then built to predict the oncogenic potential of all HPVs based on the E6 and E7 genes. Our results showed that the three main groups of Alpha, Beta, and Gamma HPVs were clearly separated by type in the DCR trait of either the E6 or E7 coding sequence (CDS), while clustering within the same group. Moreover, the DCR data of either gene were learnable with a convolutional neural network (CNN) model, and each CNN classifier accurately predicted the oncogenicity labels of high- and low-oncogenic HPVs. In summary, the compositional traits of the HPV oncogenicity-related genes E6 and E7 differed markedly between high- and low-oncogenic HPVs, and the DCR-based deep learning classifier accurately predicted the oncogenic phenotype of HPVs. The trained predictor in this study will facilitate the identification of HPV oncogenicity, particularly for HPVs without a clear genotype or phenotype.
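The abstract does not spell out how the DCR trait is computed, but a common definition of dinucleotide composition representation is the observed-to-expected ratio ρ(XY) = f(XY) / (f(X)·f(Y)). A minimal sketch under that assumption (the function name and example sequence are illustrative, not from the paper):

```python
from collections import Counter

def dinucleotide_relative_abundance(seq):
    """Relative abundance rho(XY) = f(XY) / (f(X) * f(Y)) for a DNA sequence.

    Values near 1 mean the dinucleotide occurs at the frequency expected from
    mononucleotide composition alone; deviations capture compositional bias.
    """
    seq = seq.upper()
    n = len(seq)
    mono = Counter(seq)                                  # single-base counts
    di = Counter(seq[i:i + 2] for i in range(n - 1))     # overlapping pairs
    f_mono = {b: c / n for b, c in mono.items()}
    f_di = {d: c / (n - 1) for d, c in di.items()}
    return {
        x + y: f_di.get(x + y, 0.0) / (f_mono[x] * f_mono[y])
        for x in "ACGT" for y in "ACGT"
        if x in f_mono and y in f_mono
    }

rho = dinucleotide_relative_abundance("ATGCGATCGATCGTAGCTAGCTAGCATCG")
print(round(rho["CG"], 3))
```

A 16-dimensional vector of such ratios per gene is the kind of fixed-length compositional feature a CNN classifier could consume.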
Affiliation(s)
- Lin Hao
- Department of Pharmacy, Linfen Central Hospital, Linfen, China
- Yu Jiang
- The 4 Medical Center, People's Liberation Army (PLA) General Hospital, Beijing, China
- Can Zhang
- The 4 Medical Center, People's Liberation Army (PLA) General Hospital, Beijing, China
- Pengfei Han
- The 4 Medical Center, People's Liberation Army (PLA) General Hospital, Beijing, China
2
Li J, Hu P, Gao H, Shen N, Hua K. Classification of cervical lesions based on multimodal features fusion. Comput Biol Med 2024; 177:108589. [PMID: 38781641 DOI: 10.1016/j.compbiomed.2024.108589] [Received: 11/27/2023] [Revised: 04/20/2024] [Accepted: 05/09/2024]
Abstract
Cervical cancer is a severe threat to women's health worldwide; its long cancerous cycle and clear etiology make early screening vital for prevention and treatment. Based on the dataset provided by the Obstetrics and Gynecology Hospital of Fudan University, a four-category classification model for cervical lesions is developed, covering Normal, low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), and cancer (Ca). Considering the dataset characteristics, and to fully utilize the research data while ensuring an adequate dataset size, the model inputs include original and acetic colposcopy images, lesion segmentation masks, human papillomavirus (HPV) status, ThinPrep cytologic test (TCT) results, and age, but exclude iodine images, which overlap significantly with lesions visible under acetic images. Firstly, the change information between original and acetic images is introduced by calculating the acetowhite opacity, mining the correlation between acetowhite thickness and lesion grade. Secondly, the lesion segmentation masks are used to introduce prior knowledge of lesion location and shape into the classification model. Lastly, a cross-modal feature fusion module based on the self-attention mechanism fuses image information with clinical text information, revealing correlations between the features. On the dataset used in this study, the proposed model is comprehensively compared with five strong models from the past three years, demonstrating superior classification performance and a better balance between performance and complexity. Module ablation experiments further show that each proposed module independently improves model performance.
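The cross-modal fusion module is described only at a high level. A single-head scaled dot-product cross-attention step, in which image-patch features attend to clinical-variable embeddings, can be sketched as follows (all names, shapes, and the residual connection are assumptions, not the published architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(img_feats, clin_feats, Wq, Wk, Wv):
    """Single-head cross-attention: image tokens (queries) attend to
    clinical tokens (keys/values); output keeps the image token count.

    img_feats:  (n_img, d)   image-patch features
    clin_feats: (n_clin, d)  clinical-variable embeddings
    """
    Q = img_feats @ Wq                                         # (n_img, d)
    K = clin_feats @ Wk                                        # (n_clin, d)
    V = clin_feats @ Wv                                        # (n_clin, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)    # (n_img, n_clin)
    return img_feats + attn @ V                                # residual add

rng = np.random.default_rng(0)
d = 8
img = rng.normal(size=(4, d))
clin = rng.normal(size=(3, d))  # e.g. embeddings of HPV status, TCT result, age
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
fused = cross_attention_fuse(img, clin, Wq, Wk, Wv)
print(fused.shape)  # → (4, 8)
```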
Affiliation(s)
- Jing Li
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Peng Hu
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Huayu Gao
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Nanyan Shen
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.
- Keqin Hua
- Obstetrics and Gynecology Hospital of Fudan University, Shanghai, 200011, China.
3
Mascarenhas M, Alencoão I, Carinhas MJ, Martins M, Cardoso P, Mendes F, Fernandes J, Ferreira J, Macedo G, Zulmira Macedo R. Artificial Intelligence and Colposcopy: Automatic Identification of Cervical Squamous Cell Carcinoma Precursors. J Clin Med 2024; 13:3003. [PMID: 38792544 PMCID: PMC11122610 DOI: 10.3390/jcm13103003] [Received: 03/29/2024] [Revised: 04/21/2024] [Accepted: 05/16/2024]
Abstract
Background/Objectives: Proficient colposcopy is crucial for the adequate management of cervical cancer precursor lesions; nonetheless, its limitations may impact its cost-effectiveness. The development of artificial intelligence models is growing exponentially, particularly in image-based specialties. The aim of this study is to develop and validate a convolutional neural network (CNN) for the automatic differentiation of high-grade (HSIL) from low-grade dysplasia (LSIL) in colposcopy. Methods: A unicentric retrospective study was conducted based on 70 colposcopy exams, comprising a total of 22,693 frames. Among these, 8729 were categorized as HSIL based on histopathology. The total dataset was divided into a training set (90%, n = 20,423) and a testing set (10%, n = 2270), the latter being used to evaluate the model's performance. The main outcome measures included sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and the area under the receiver operating characteristic curve (AUC-ROC). Results: The sensitivity was 99.7% and the specificity was 98.6%. The PPV and NPV were 97.8% and 99.8%, respectively. The overall accuracy was 99.0%. The AUC-ROC was 0.98. The CNN processed 112 frames per second. Conclusions: We developed a CNN capable of differentiating cervical cancer precursors in colposcopy frames. The high accuracy in differentiating HSIL from LSIL may improve the diagnostic yield of this exam.
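The outcome measures reported here follow the standard confusion-matrix definitions, which can be computed directly from true/false positive/negative counts (the counts below are illustrative, not the study's):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),            # recall on positives
        "specificity": tn / (tn + fp),            # recall on negatives
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a small test set, not the paper's data.
m = binary_metrics(tp=9, fp=1, tn=90, fn=0)
print(m["sensitivity"], round(m["specificity"], 3))  # → 1.0 0.989
```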
Affiliation(s)
- Miguel Mascarenhas
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.M.); (P.C.); (G.M.)
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Inês Alencoão
- Department of Gynecology, Centro Materno-Infantil do Norte Dr. Albino Aroso (CMIN), Santo António University Hospital, Largo da Maternidade Júlio Dinis, 4050-061 Porto, Portugal; (I.A.); (M.J.C.); (R.Z.M.)
- Maria João Carinhas
- Department of Gynecology, Centro Materno-Infantil do Norte Dr. Albino Aroso (CMIN), Santo António University Hospital, Largo da Maternidade Júlio Dinis, 4050-061 Porto, Portugal; (I.A.); (M.J.C.); (R.Z.M.)
- Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.M.); (P.C.); (G.M.)
- Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.M.); (P.C.); (G.M.)
- Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.M.); (P.C.); (G.M.)
- Joana Fernandes
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-065 Porto, Portugal; (J.F.); (J.F.)
- João Ferreira
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-065 Porto, Portugal; (J.F.); (J.F.)
- Guilherme Macedo
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.M.); (P.C.); (G.M.)
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Rosa Zulmira Macedo
- Department of Gynecology, Centro Materno-Infantil do Norte Dr. Albino Aroso (CMIN), Santo António University Hospital, Largo da Maternidade Júlio Dinis, 4050-061 Porto, Portugal; (I.A.); (M.J.C.); (R.Z.M.)
4
Wang C, Wang X, Gao Z, Ran C, Li C, Ding C. Multiple serous cavity effusion screening based on smear images using vision transformer. Sci Rep 2024; 14:7395. [PMID: 38548898 PMCID: PMC10978834 DOI: 10.1038/s41598-024-58151-2] [Received: 09/04/2023] [Accepted: 03/26/2024]
Abstract
Serous cavity effusion is a prevalent pathological condition encountered in clinical settings. Fluid samples obtained from these effusions are vital for diagnostic and therapeutic purposes. Traditionally, cytological examination of smears is a common method for diagnosing serous cavity effusion, renowned for its convenience. However, this technique presents limitations that can compromise its efficiency and diagnostic accuracy. This study aims to overcome these challenges and introduce an improved method for the precise detection of malignant cells in serous cavity effusions. We have developed a transformer-based classification framework, specifically employing the vision transformer (ViT) model, to fulfill this objective. Our research involved collecting smear images and corresponding cytological reports from 161 patients who underwent serous cavity drainage. We meticulously annotated 4836 patches from these images, identifying regions with and without malignant cells, thus creating a unique dataset for smear image classification. The findings of our study reveal that deep learning models, particularly the ViT model, exhibit remarkable accuracy in classifying patches as malignant or non-malignant. The ViT model achieved an impressive area under the receiver operating characteristic curve (AUROC) of 0.99, surpassing the performance of the convolutional neural network (CNN) model, which recorded an AUROC of 0.86. Additionally, we validated our models using an external cohort of 127 patients. The ViT model sustained its high-level screening performance, achieving an AUROC of 0.98 at the patient level, compared to the CNN model's AUROC of 0.84. The visualization of our ViT models confirmed their capability to precisely identify regions containing malignant cells in multiple serous cavity effusion smear images. 
In summary, our study demonstrates the potential of deep learning models, particularly the ViT model, in automating the screening process for serous cavity effusions. These models offer significant assistance to cytologists in enhancing diagnostic accuracy and efficiency. The ViT model stands out for its advanced self-attention mechanism, making it exceptionally suitable for tasks that necessitate detailed analysis of small, sparsely distributed targets like cellular clusters in serous cavity effusions.
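The study reports both patch-level and patient-level AUROC; the rule for aggregating patch predictions into a patient score is not specified above, but a simple sketch (mean aggregation, AUROC via the Mann-Whitney statistic; both choices are assumptions) looks like:

```python
def patient_score(patch_probs):
    """Aggregate patch-level malignancy probabilities into one patient score.

    Mean aggregation is one simple choice; the paper does not state its rule.
    """
    return sum(patch_probs) / len(patch_probs)

def auroc(scores, labels):
    """AUROC as the Mann-Whitney statistic: the probability that a random
    positive outscores a random negative (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical patch probabilities for four patients (labels 1 = malignant).
scores = [patient_score(pp) for pp in ([0.9, 0.8], [0.2, 0.4], [0.7, 0.9], [0.1, 0.3])]
print(auroc(scores, [1, 0, 1, 0]))  # → 1.0
```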
Affiliation(s)
- Chunbao Wang
- Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Xiangyu Wang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Zeyu Gao
- CRUK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Caihong Ran
- Department of Pathology, Ngari Prefecture People's Hospital, Ngari of Tibet, 859000, China
- Chen Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China.
- Caixia Ding
- Department of Pathology, Shaanxi Provincial Tumor Hospital, Xi'an, 710061, China.
5
Li X, Ning R, Xiao B, Meng S, Sun H, Fan X, Li S. A multi-variable predictive warning model for cervical cancer using clinical and SNPs data. Front Med (Lausanne) 2024; 11:1294230. [PMID: 38455474 PMCID: PMC10918689 DOI: 10.3389/fmed.2024.1294230] [Received: 09/14/2023] [Accepted: 01/23/2024]
Abstract
Introduction: Cervical cancer is the fourth most common cancer among females worldwide. Early detection and intervention are essential. This study aims to construct an early predictive warning model for cervical cancer and precancerous lesions utilizing clinical data and single nucleotide polymorphisms (SNPs). Methods: Clinical data and germline SNPs were collected from 472 participants. Univariate logistic regression, least absolute shrinkage and selection operator (LASSO) regression, and stepwise regression were performed to screen variables. Logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), extreme gradient boosting (XGBoost), and neural network (NN) models were applied. The receiver operating characteristic (ROC) curve was used to compare the models' performance, which was further validated using decision curve analysis (DCA). Results: The LR model, which included 6 SNPs and 2 clinical variables as independent risk factors for cervical carcinogenesis, was ultimately chosen as the optimal model. The DCA showed that the LR model had good clinical applicability. Discussion: The predictive model effectively estimates cervical cancer risk using clinical and SNP data, aiding in the planning of timely interventions, and provides a transparent tool for refining clinical decisions in cervical cancer management.
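Once an LR model's coefficients are fitted, the predicted risk for a new patient is simply the logistic function of the linear score. A hedged sketch with entirely hypothetical features and coefficients (the published model's 6 SNPs and 2 clinical variables are not reproduced here):

```python
import math

def logistic_risk(features, coefs, intercept):
    """Predicted probability from a fitted logistic regression:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    """
    z = intercept + sum(coefs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical variables: two clinical factors and one SNP genotype coded
# as the alternate-allele count (0/1/2). All values are placeholders.
coefs = {"hpv_positive": 1.8, "age_over_45": 0.6, "rs123456_alt_alleles": 0.4}
p = logistic_risk(
    {"hpv_positive": 1, "age_over_45": 0, "rs123456_alt_alleles": 2},
    coefs,
    intercept=-3.0,
)
print(round(p, 3))  # → 0.401
```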
Affiliation(s)
- Xiangqin Li
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ruoqi Ning
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Bing Xiao
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Silu Meng
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Haiying Sun
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xinran Fan
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shuang Li
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
6
Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. [PMID: 38398374 PMCID: PMC10889757 DOI: 10.3390/jcm13041061] [Received: 12/31/2023] [Revised: 02/04/2024] [Accepted: 02/05/2024]
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, namely those with a strong imaging component. Gynecology relies heavily on imaging, since it offers useful visual data on the female reproductive system and leads to a deeper understanding of pathophysiological concepts. So far, the applicability of artificial intelligence technologies has not been as noticeable in gynecologic imaging as in other medical fields. However, due to growing interest in this area, some studies have been performed with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to revolutionize the overall healthcare experience for women's reproductive health. In this review, we aim to establish the current status of AI in gynecology, outline upcoming developments in this area, and discuss the challenges facing its clinical implementation, namely the technological and ethical concerns surrounding technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Francisco Mendes
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
7
Hou H, Mitbander R, Tang Y, Azimuddin A, Carns J, Schwarz RA, Richards-Kortum RR. Optical imaging technologies for in vivo cancer detection in low-resource settings. Curr Opin Biomed Eng 2023; 28:100495. [PMID: 38406798 PMCID: PMC10883072 DOI: 10.1016/j.cobme.2023.100495]
Abstract
Cancer continues to affect underserved populations disproportionately. Novel optical imaging technologies, which can provide rapid, non-invasive, and accurate cancer detection at the point of care, have great potential to improve global cancer care. This article reviews recent technical innovations and the clinical translation of low-cost optical imaging technologies, highlighting advances in both hardware and software, especially the integration of artificial intelligence, to improve in vivo cancer detection in low-resource settings. Additionally, it provides an overview of existing challenges and future perspectives on adapting optical imaging technologies to clinical practice, which can potentially contribute to novel insights and programs that effectively improve cancer detection in low-resource settings.
Affiliation(s)
- Huayu Hou
- Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Ruchika Mitbander
- Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Yubo Tang
- Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Ahad Azimuddin
- School of Medicine, Texas A&M University, Houston, TX 77030, USA
- Jennifer Carns
- Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Richard A Schwarz
- Department of Bioengineering, Rice University, Houston, TX 77005, USA
8
Wang Q, Chen K, Dou W, Ma Y. Cross-Attention Based Multi-Resolution Feature Fusion Model for Self-Supervised Cervical OCT Image Classification. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2541-2554. [PMID: 37027657 DOI: 10.1109/tcbb.2023.3246979]
Abstract
Cervical cancer seriously endangers the health of the female reproductive system and can even endanger women's lives in severe cases. Optical coherence tomography (OCT) is a non-invasive, real-time, high-resolution imaging technology for cervical tissues. However, since the interpretation of cervical OCT images is a knowledge-intensive, time-consuming task, it is difficult to acquire a large number of high-quality labeled images quickly, which is a big challenge for supervised learning. In this study, we introduce the vision Transformer (ViT) architecture, which has recently achieved impressive results in natural image analysis, into the classification task of cervical OCT images. Our work aims to develop a computer-aided diagnosis (CADx) approach based on a self-supervised ViT-based model to classify cervical OCT images effectively. We leverage masked autoencoders (MAE) to perform self-supervised pre-training on cervical OCT images, so the proposed classification model has better transfer learning ability. In the fine-tuning process, the ViT-based classification model extracts multi-scale features from OCT images of different resolutions and fuses them with the cross-attention module. Ten-fold cross-validation results on an OCT image dataset from a multi-center clinical study of 733 patients in China indicate that our model achieved an AUC value of 0.9963 ± 0.0069, with 95.89 ± 3.30% sensitivity and 98.23 ± 1.36% specificity, outperforming state-of-the-art classification models based on Transformers and convolutional neural networks (CNNs) in the binary classification task of detecting high-risk cervical diseases, including high-grade squamous intraepithelial lesion (HSIL) and cervical cancer. Furthermore, our model with the cross-shaped voting strategy achieved a sensitivity of 92.06% and a specificity of 95.56% on an external validation dataset containing 288 three-dimensional (3D) OCT volumes from 118 Chinese patients at a different hospital. This result met or exceeded the average performance of four medical experts who had used OCT for over one year. In addition to promising classification performance, our model has a remarkable ability to detect and visualize local lesions using the attention map of the standard ViT model, providing good interpretability for gynecologists to locate and diagnose possible cervical diseases.
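The MAE pre-training stage described above relies on randomly masking a large fraction of patch tokens and training the model to reconstruct them. A minimal sketch of the masking step (the 75% ratio follows the original MAE recipe and is an assumption; this is not the authors' code):

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.75, seed=0):
    """MAE-style pre-training step: keep a random subset of patch tokens.

    patches: (n_patches, dim) array. Returns (visible_patches, mask), where
    mask[i] is True for masked-out patches the decoder must reconstruct.
    """
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    rng = np.random.default_rng(seed)
    keep_idx = np.sort(rng.permutation(n)[:n_keep])  # random, order-preserving
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False
    return patches[keep_idx], mask

patches = np.arange(16 * 4, dtype=float).reshape(16, 4)  # 16 tokens, dim 4
visible, mask = random_mask_patches(patches)
print(visible.shape, int(mask.sum()))  # → (4, 4) 12
```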
9
Kakotkin VV, Semina EV, Zadorkina TG, Agapov MA. Prevention Strategies and Early Diagnosis of Cervical Cancer: Current State and Prospects. Diagnostics (Basel) 2023; 13:610. [PMID: 36832098 PMCID: PMC9955852 DOI: 10.3390/diagnostics13040610] [Received: 12/14/2022] [Revised: 02/03/2023] [Accepted: 02/05/2023]
Abstract
Cervical cancer ranks third among all new cancer cases and causes of cancer deaths in females. The paper provides an overview of cervical cancer prevention strategies employed in different regions, with incidence and mortality rates ranging from high to low. It assesses the effectiveness of approaches proposed by national healthcare systems by analysing data published in the National Library of Medicine (Pubmed) since 2018 featuring the following keywords: "cervical cancer prevention", "cervical cancer screening", "barriers to cervical cancer prevention", "premalignant cervical lesions" and "current strategies". WHO's 90-70-90 global strategy for cervical cancer prevention and early screening has proven effective in different countries in both mathematical models and clinical practice. The data analysis carried out within this study identified promising approaches to cervical cancer screening and prevention, which can further enhance the effectiveness of the existing WHO strategy and national healthcare systems. One such approach is the application of AI technologies for detecting precancerous cervical lesions and choosing treatment strategies. As such studies show, the use of AI can not only increase detection accuracy but also ease the burden on primary care.
Affiliation(s)
- Viktor V. Kakotkin
- Scientific and Educational Cluster MEDBIO, Immanuel Kant Baltic Federal University, A. Nevskogo St., 14, 236041 Kaliningrad, Russia
- Ekaterina V. Semina
- Scientific and Educational Cluster MEDBIO, Immanuel Kant Baltic Federal University, A. Nevskogo St., 14, 236041 Kaliningrad, Russia
- Tatiana G. Zadorkina
- Kaliningrad Regional Centre for Specialised Medical Care, Barnaulskaia Street, 6, 236006 Kaliningrad, Russia
- Mikhail A. Agapov
- Scientific and Educational Cluster MEDBIO, Immanuel Kant Baltic Federal University, A. Nevskogo St., 14, 236041 Kaliningrad, Russia
- Correspondence: ; Tel.: +7-(4012)-59-55-95
10
Sheng B, Yao D, Du X, Chen D, Zhou L. Establishment and validation of a risk prediction model for high-grade cervical lesions. Eur J Obstet Gynecol Reprod Biol 2023; 281:1-6. [PMID: 36521399 DOI: 10.1016/j.ejogrb.2022.12.005] [Received: 06/15/2022] [Revised: 11/21/2022] [Accepted: 12/04/2022]
Abstract
OBJECTIVE: To establish and validate a risk prediction model for cervical high-grade squamous intraepithelial lesions (HSIL). METHODS: This retrospective study included patients who underwent cervical biopsies at the Cervical Disease Centre of the Maternal and Child Health Hospital of Hubei Province between January 2021 and December 2021. RESULTS: A total of 1630 patients were divided into an HSIL+ cervical lesion group (n = 186) and a ≤LSIL cervical lesion group (n = 1444). LSIL, ASC-H, HSIL, and SCC cytology; high-risk HPV, HPV16, HPV18/45, and multiple HPV strains; and acetowhite epithelium, atypical vessels, and mosaicity were independently associated with HSIL+ lesions. These factors were used to establish a risk prediction model with an area under the curve (AUC) of 0.851 and a C-index of 0.829. Calibration curve analysis showed that the model performed well, with a mean absolute error (MAE) of 0.005. The decision curve showed that the model combining the risk factors was more specific and sensitive than each individual predictive variable. CONCLUSION: The model for predicting HSIL demonstrated promising predictive capability and might help identify patients requiring biopsy and treatment.
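The calibration MAE reported above summarizes how far predicted risks deviate from observed event rates. One simple binned implementation (the equal-width binning scheme is an assumption; the study does not describe its exact procedure):

```python
def calibration_mae(pred_probs, labels, n_bins=10):
    """Mean absolute error between mean predicted risk and the observed
    event rate within equal-width probability bins: a simple calibration
    summary of the kind reported alongside calibration curves."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(pred_probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    errs = []
    for b in bins:
        if b:  # skip empty bins
            mean_p = sum(p for p, _ in b) / len(b)
            obs = sum(y for _, y in b) / len(b)
            errs.append(abs(mean_p - obs))
    return sum(errs) / len(errs)

# Toy, perfectly calibrated predictions (not the study's data).
print(calibration_mae([0.0, 0.0, 1.0, 1.0], [0, 0, 1, 1]))  # → 0.0
```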
Affiliation(s)
- Binyue Sheng
- Department of Gynaecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Hongshan, Wuhan, Hubei 430070, PR China
- Dongmei Yao
- Department of Gynaecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Hongshan, Wuhan, Hubei 430070, PR China.
- Xin Du
- Department of Gynaecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Hongshan, Wuhan, Hubei 430070, PR China
- Dejun Chen
- Department of Gynaecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Hongshan, Wuhan, Hubei 430070, PR China
- Limin Zhou
- Department of Gynaecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Hongshan, Wuhan, Hubei 430070, PR China
11
Mousser W, Ouadfel S, Taleb-Ahmed A, Kitouni I. IDT: An incremental deep tree framework for biological image classification. Artif Intell Med 2022; 134:102392. [PMID: 36462909 DOI: 10.1016/j.artmed.2022.102392] [Received: 01/13/2022] [Revised: 08/10/2022] [Accepted: 08/29/2022]
Abstract
Nowadays, breast and cervical cancers are respectively the first and fourth most common causes of cancer death in females. It is believed that automated systems based on artificial intelligence would allow early diagnosis, which significantly increases the chances of proper treatment and survival. Although Convolutional Neural Networks (CNNs) have achieved human-level performance in object classification tasks, the steady growth of medical data and the continuous increase in the number of classes make it difficult for them to learn new tasks without being re-trained from scratch. Moreover, fine-tuning and transfer learning in deep models lead to the well-known catastrophic forgetting problem. In this paper, an Incremental Deep Tree (IDT) framework for biological image classification is proposed to address the catastrophic forgetting of CNNs, allowing them to learn new classes while maintaining acceptable accuracies on the previously learnt ones. To evaluate the performance of our approach, the IDT framework is compared against three popular incremental methods, namely iCaRL, LwF and SupportNet. The experimental results on the MNIST dataset achieved 87% accuracy, and the obtained values on the BreakHis, LBC and SIPaKMeD datasets are promising, with 92%, 98% and 93%, respectively.
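One of the baselines named above, Learning without Forgetting (LwF), counters catastrophic forgetting with a distillation term: while training on new classes, the new model's outputs on old classes are kept close to the frozen old model's temperature-softened outputs. The sketch below illustrates only that loss idea in plain Python (it is not the IDT code, and the temperature and weighting are illustrative defaults).

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(old_logits, new_logits, temperature=2.0):
    """Cross-entropy between the frozen old model's softened distribution
    and the new model's distribution; minimal when the two match."""
    p_old = softmax(old_logits, temperature)
    p_new = softmax(new_logits, temperature)
    return -sum(po * math.log(pn) for po, pn in zip(p_old, p_new))

def total_loss(ce_new_task, old_logits, new_logits, lam=1.0):
    # LwF-style objective: new-task cross-entropy + lambda * distillation
    return ce_new_task + lam * distillation_loss(old_logits, new_logits)
```

By Gibbs' inequality the distillation term is smallest when the new model reproduces the old model's outputs, which is what preserves performance on previously learnt classes.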
Affiliation(s)
- Wafa Mousser
- Department of Computer Sciences and Applications, Laboratory of Complex Systems' Modeling and Implementation, Abdelhamid Mehri Constantine 2 University, National Biotechnology Research Center Constantine, Algeria.
- Salima Ouadfel
- Department of Computer Sciences and Applications, Abdelhamid Mehri Constantine 2 University, Algeria.
- Abdelmalik Taleb-Ahmed
- Institut d'Electronique de Microélectronique et de Nanotechnologie (IEMN), UMR 8520, Université Polytechnique Hauts de France, Université de Lille, CNRS, 59313 Valenciennes, France.
- Ilham Kitouni
- LISIA Laboratory "Laboratoire d'Informatique en Science de données et Intelligence Artificielle", Abdelhamid Mehri Constantine 2 University, Algeria.
12
Allahqoli L, Laganà AS, Mazidimoradi A, Salehiniya H, Günther V, Chiantera V, Karimi Goghari S, Ghiasvand MM, Rahmani A, Momenimovahed Z, Alkatout I. Diagnosis of Cervical Cancer and Pre-Cancerous Lesions by Artificial Intelligence: A Systematic Review. Diagnostics (Basel) 2022; 12:2771. [PMID: 36428831 PMCID: PMC9689914 DOI: 10.3390/diagnostics12112771] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Received: 10/11/2022] [Revised: 11/06/2022] [Accepted: 11/10/2022] [Indexed: 11/16/2022] Open
Abstract
OBJECTIVE The likelihood of timely treatment for cervical cancer increases with timely detection of abnormal cervical cells. Automated methods of detecting abnormal cervical cells were established because manual identification requires skilled pathologists and is time-consuming and prone to error. The purpose of this systematic review is to evaluate the diagnostic performance of artificial intelligence (AI) technologies for the prediction, screening, and diagnosis of cervical cancer and pre-cancerous lesions. MATERIALS AND METHODS Comprehensive searches were performed on three databases: Medline, Web of Science Core Collection (Indexes = SCI-EXPANDED, SSCI, A&HCI) and Scopus to find papers published until July 2022. Articles that applied any AI technique for the prediction, screening, and diagnosis of cervical cancer were included in the review. No time restriction was applied. Articles were searched, screened, incorporated, and analyzed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. RESULTS The primary search yielded 2538 articles. After screening and evaluation of eligibility, 117 studies were incorporated in the review. AI techniques were found to play a significant role in screening systems for pre-cancerous and cancerous cervical lesions. The accuracy of the algorithms in predicting cervical cancer varied from 70% to 100%. AI techniques distinguish between cancerous and normal Pap smears with 80-100% accuracy. AI is expected to serve as a practical tool for doctors in making accurate clinical diagnoses. The reported sensitivity and specificity of AI in colposcopy for the detection of CIN2+ were 71.9-98.22% and 51.8-96.2%, respectively. CONCLUSION The present review highlights the acceptable performance of AI systems in the prediction, screening, or detection of cervical cancer and pre-cancerous lesions, especially when faced with a paucity of specialized centers or medical resources. In combination with human evaluation, AI could serve as a helpful tool in the interpretation of cervical smears or images.
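The screening metrics this review pools (sensitivity, specificity, accuracy) all derive from a binary confusion matrix. A minimal sketch of the definitions, with made-up counts for demonstration:

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard binary screening metrics from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),          # detected among true positives (e.g. CIN2+)
        "specificity": tn / (tn + fp),          # correctly cleared among true negatives
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only, not data from any study in the review.
m = screening_metrics(tp=90, fp=20, tn=180, fn=10)
```

The wide sensitivity/specificity ranges reported (71.9-98.22% and 51.8-96.2% for CIN2+ in colposcopy) reflect how strongly these ratios depend on the case mix and decision threshold of each study.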
Affiliation(s)
- Leila Allahqoli
- Midwifery Department, Ministry of Health and Medical Education, Tehran 1467664961, Iran
- Antonio Simone Laganà
- Unit of Gynecologic Oncology, ARNAS “Civico-Di Cristina-Benfratelli”, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
- Afrooz Mazidimoradi
- Neyriz Public Health Clinic, Shiraz University of Medical Sciences, Shiraz 7134814336, Iran
- Hamid Salehiniya
- Social Determinants of Health Research Center, Birjand University of Medical Sciences, Birjand 9717853577, Iran
- Veronika Günther
- University Hospitals Schleswig-Holstein, Campus Kiel, Kiel School of Gynaecological Endoscopy, Arnold-Heller-Str. 3, Haus 24, 24105 Kiel, Germany
- Vito Chiantera
- Unit of Gynecologic Oncology, ARNAS “Civico-Di Cristina-Benfratelli”, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
- Shirin Karimi Goghari
- School of Industrial and Systems Engineering, Tarbiat Modares University (TMU), Tehran 1411713114, Iran
- Mohammad Matin Ghiasvand
- Department of Computer Engineering, Amirkabir University of Technology (AUT), Tehran 1591634311, Iran
- Azam Rahmani
- Nursing and Midwifery Care Research Centre, School of Nursing and Midwifery, Tehran University of Medical Sciences, Tehran 141973317, Iran
- Zohre Momenimovahed
- Reproductive Health Department, Qom University of Medical Sciences, Qom 3716993456, Iran
- Ibrahim Alkatout
- University Hospitals Schleswig-Holstein, Campus Kiel, Kiel School of Gynaecological Endoscopy, Arnold-Heller-Str. 3, Haus 24, 24105 Kiel, Germany
13
Huang W, Sun S, Yu Z, Lu S, Feng H. Chronic Cervicitis and Cervical Cancer Detection Based on Deep Learning of Colposcopy Images Toward Translational Pharmacology. Front Pharmacol 2022; 13:911962. [PMID: 35712722 PMCID: PMC9196041 DOI: 10.3389/fphar.2022.911962] [Received: 04/03/2022] [Accepted: 04/21/2022] [Indexed: 11/13/2022] Open
Abstract
With the rapid development of deep learning, automatic image recognition is widely used in medical applications. In this study, a deep learning convolutional neural network model was developed to recognize and classify chronic cervicitis and cervical cancer. A total of 10,012 colposcopy images of 1,081 patients from Hunan Provincial People’s Hospital in China were recorded. Five different colposcopy image features of the cervix, including chronic cervicitis, intraepithelial lesions, cancer, polyps, and free hyperplastic squamous epithelial tissue, were extracted and applied in the convolutional neural network model. However, the result showed low accuracy (42.16%) because the model confused chronic cervicitis, intraepithelial lesions, and free hyperplastic squamous epithelial tissue, which are highly similar. To optimize the model, two clearly distinguishable feature classes, chronic cervicitis and cervical cancer, were selected as inputs to the network. The result indicates high accuracy and robustness, with an accuracy of 95.19%, so the model can be applied to detect whether a patient has chronic cervicitis or cervical cancer based on the patient’s colposcopy images.
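The accuracy jump described above (42.16% over five classes to 95.19% over two) is the expected effect of removing confusable classes: off-diagonal mass in the confusion matrix drags overall accuracy down. The sketch below illustrates this with synthetic confusion matrices (the numbers are invented for demonstration, not the paper's data).

```python
def accuracy(confusion):
    """Overall accuracy from a square confusion matrix where
    confusion[i][j] = count of class-i samples predicted as class j."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Five classes, three of them visually similar: heavy off-diagonal confusion.
five_class = [
    [40, 20, 20, 10, 10],
    [20, 40, 20, 10, 10],
    [20, 20, 40, 10, 10],
    [5, 5, 5, 80, 5],
    [5, 5, 5, 5, 80],
]

# Two well-separated classes (e.g. cervicitis vs. cancer): little confusion.
two_class = [
    [95, 5],
    [5, 95],
]
```

Here the five-class matrix yields 280/500 = 0.56 accuracy while the two-class matrix yields 190/200 = 0.95, mirroring the qualitative pattern the authors report.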
Affiliation(s)
- Wei Huang
- Department of Gynecology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Shasha Sun
- Department of Gynecology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Zhengyu Yu
- Faculty of Engineering and IT, University of Technology, Sydney, NSW, Australia
- Shanshan Lu
- Department of Gynecology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- Hao Feng
- Department of Dermatology, Hunan Provincial People’s Hospital (The First-Affiliated Hospital of Hunan Normal University), Changsha, China
- *Correspondence: Hao Feng,