1
Yi J, Liu X, Cheng S, Chen L, Zeng S. Multi-scale window transformer for cervical cytopathology image recognition. Comput Struct Biotechnol J 2024;24:314-321. PMID: 38681132; PMCID: PMC11046249; DOI: 10.1016/j.csbj.2024.04.028.
Abstract
Cervical cancer is a major global health issue, particularly in developing countries where access to healthcare is limited. Early detection of pre-cancerous lesions is crucial for successful treatment and for reducing mortality. However, traditional screening and diagnosis require cytopathologists to manually interpret huge numbers of cells, a process that is time-consuming, costly, and dependent on individual experience. In this paper, we propose a Multi-scale Window Transformer (MWT) for cervical cytopathology image recognition. We design multi-scale window multi-head self-attention (MW-MSA) to simultaneously integrate cell features at different scales: small-window self-attention extracts local cell detail features, while large-window self-attention integrates features from the smaller-scale window attention to achieve window-to-window information interaction. Our design enables long-range feature integration while avoiding the whole-image self-attention (SA) of ViT and the twice-repeated local window SA of the Swin Transformer. We find that convolutional feed-forward networks (CFFN) are more efficient than the original MLP-based FFN for representing cytopathology images. The overall model adopts a pyramid architecture. We establish two multi-center cervical cell classification datasets: a two-category dataset of 192,123 images and a four-category dataset of 174,138 images. Extensive experiments demonstrate that MWT outperforms state-of-the-art general classification networks and specialized cytopathology classifiers on both internal and external test sets, and the results on these large-scale datasets demonstrate the effectiveness and generalization of the proposed model. Our work provides a reliable cytopathology image recognition method and helps establish computer-aided screening for cervical cancer. Our code is available at https://github.com/nmyz669/MWT, and our web service tool can be accessed at https://huggingface.co/spaces/nmyz/MWTdemo.
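The windowed attention idea described above can be illustrated with a toy sketch: attention is computed only within non-overlapping windows, and outputs from a small and a large window size are combined. This is a simplified stand-in (single head, no learned projections, element-wise sum across scales is an assumption), not the authors' MWT implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def window_self_attention(tokens, window):
    """Dot-product self-attention restricted to non-overlapping windows.
    `tokens` is a list of feature vectors (lists of floats); a single
    head with no learned projections, so this is only a toy sketch of
    windowed attention, not the paper's MW-MSA."""
    out = []
    for start in range(0, len(tokens), window):
        block = tokens[start:start + window]
        for q in block:
            scores = softmax([sum(a * b for a, b in zip(q, k)) for k in block])
            out.append([sum(w * v[i] for w, v in zip(scores, block))
                        for i in range(len(q))])
    return out

def multi_scale_attention(tokens, small=2, large=4):
    """Combine small- and large-window attention outputs (here by
    element-wise sum, an assumption made for illustration)."""
    s = window_self_attention(tokens, small)
    l = window_self_attention(tokens, large)
    return [[a + b for a, b in zip(sv, lv)] for sv, lv in zip(s, l)]
```

Note that the small window attends over local detail while the large window spans several small windows, which is the window-to-window interaction the abstract describes.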
Affiliation(s)
- Jiaxiang Yi: Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Xiuli Liu: Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Shenghua Cheng: School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Li Chen: Department of Clinical Laboratory, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng: Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
2
Fei M, Shen Z, Song Z, Wang X, Cao M, Yao L, Zhao X, Wang Q, Zhang L. Distillation of multi-class cervical lesion cell detection via synthesis-aided pre-training and patch-level feature alignment. Neural Netw 2024;178:106405. PMID: 38815471; DOI: 10.1016/j.neunet.2024.106405.
Abstract
Automated detection of cervical abnormal cells from ThinPrep cytologic test (TCT) images is crucial for efficient cervical abnormality screening using computer-aided diagnosis systems. However, construction of the detection model is hindered by the preparation of training images, which usually suffer from class imbalance and incomplete annotations. Additionally, existing methods often overlook the visual feature correlations among cells, which are crucial in cervical lesion cell detection, as pathologists commonly rely on surrounding cells for identification. In this paper, we propose a distillation framework that utilizes a patch-level pre-training network to guide the training of an image-level detection network, and which can be applied to various detectors without changing their architectures during inference. The main contribution is three-fold: (1) we propose the Balanced Pre-training Model (BPM) as the patch-level cervical cell classification model, which employs an image synthesis model to construct a class-balanced patch dataset for pre-training; (2) we design the Score Correction Loss (SCL) to enable the detection network to distill knowledge from the BPM, thereby mitigating the impact of incomplete annotations; and (3) we design the Patch Correlation Consistency (PCC) strategy to exploit the correlation information of extracted cells, consistent with the behavior of cytopathologists. Experiments on public and private datasets demonstrate the superior performance of the proposed distillation method, as well as its adaptability to various detection architectures.
Affiliation(s)
- Manman Fei: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Zhenrong Shen: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Zhiyun Song: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Xin Wang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Maosong Cao: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Linlin Yao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Xiangyu Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Qian Wang: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Lichi Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
3
Sun X, Zhang S, Ma S. Prediction consistency regularization for learning with noise labels based on contrastive clustering. Entropy (Basel) 2024;26:308. PMID: 38667864; PMCID: PMC11049179; DOI: 10.3390/e26040308.
Abstract
In classification tasks, label noise has a significant impact on model performance, primarily by disrupting prediction consistency and thereby reducing classification accuracy. This work introduces a novel prediction consistency regularization that mitigates the impact of label noise on neural networks by constraining the prediction consistency of similar samples. A primary challenge is determining which samples should be considered similar. We formalize similar-sample identification as a clustering problem and employ twin contrastive clustering (TCC) to address it. To ensure similarity between samples within each cluster, we enhance TCC by adjusting its clustering prior distribution using label information. Based on the adjusted TCC's clustering results, we first construct a prototype for each cluster and then formulate a prototype-based regularization term that enhances prediction consistency toward the prototype within each cluster and counteracts the adverse effects of label noise. We conducted comprehensive experiments on benchmark datasets to evaluate the method under various scenarios with different noise rates. The results explicitly demonstrate the enhancement in classification accuracy. Subsequent analytical experiments confirm that the proposed regularization term effectively mitigates noise and that the adjusted TCC enhances the quality of similar-sample recognition.
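The prototype-based regularization can be sketched as follows: within each cluster, the prototype is taken here as the mean predicted class distribution of the members, and the penalty is each member's squared distance to its cluster prototype. The exact functional form used in the paper may differ; this is an illustrative assumption:

```python
def prototype_consistency_loss(probs, cluster_ids):
    """Prototype-based consistency penalty: the prototype of a cluster
    is the mean predicted class distribution of its members, and each
    member pays its squared distance to that prototype.
    (Illustrative form only; the paper's exact loss may differ.)"""
    clusters = {}
    for i, c in enumerate(cluster_ids):
        clusters.setdefault(c, []).append(i)
    k = len(probs[0])
    loss = 0.0
    for members in clusters.values():
        proto = [sum(probs[i][j] for i in members) / len(members)
                 for j in range(k)]
        for i in members:
            loss += sum((probs[i][j] - proto[j]) ** 2 for j in range(k))
    return loss / len(probs)
```

The loss is zero when every sample in a cluster predicts the same distribution, which is the consistency the regularizer enforces.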
Affiliation(s)
- Xinkai Sun: School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100049, China
- Sanguo Zhang: School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100049, China
- Shuangge Ma: Department of Biostatistics, Yale School of Public Health, New Haven, CT 06510, USA
4
Ye LL, Yu F, Hu ZQ, Wang X, Tang YT. [Preliminary study on the identification of aerobic vaginitis by an artificial intelligence analysis system]. Sichuan Da Xue Xue Bao Yi Xue Ban 2024;55:461-468. Chinese. PMID: 38645857; PMCID: PMC11026878; DOI: 10.12182/20240360504.
Abstract
Objective: To develop an artificial intelligence (AI) vaginal secretion analysis system based on deep learning and to evaluate the accuracy of automated microscopy in the clinical diagnosis of aerobic vaginitis (AV). Methods: Vaginal secretion samples from 3769 patients treated at the Department of Obstetrics and Gynecology, West China Second Hospital, Sichuan University between January 2020 and December 2021 were selected. Using the results of manual microscopy as the control, we developed an automated AI analysis software based on a linear-kernel SVM implemented with the Python scikit-learn library. The software could identify leucocytes with a toxic appearance and parabasal epitheliocytes (PBC). The bacterial grading parameters were reset using standard Lactobacillus strains and common AV isolates. Receiver operating characteristic (ROC) curve analysis, with manual microscopy as the reference, was used to determine the cut-off values for the different scoring items; the parameters for automated AV identification were then determined and an automated AV scoring method was established. Results: A total of 3769 vaginal secretion samples were collected. The AI analysis system incorporated five parameters, each with three severity scoring levels. A diameter of 1.5 μm was selected as the cut-off between Lactobacillus and common AV bacterial isolates. The automated identification parameter for Lactobacillus was the ratio of bacteria ≥1.5 μm to those <1.5 μm, with cut-off scores of 2.5 and 0.5. For white blood cells (WBC), the cut-off value for the absolute WBC count was 10³ μL⁻¹ and the cut-off for the WBC-to-epithelial-cell ratio was 10. The automated identification parameter for toxic WBC was the ratio of toxic WBC to WBC, with cut-off values of 1% and 15%. The parameter for background flora was the count of bacteria <1.5 μm, with cut-off values of 5×10³ μL⁻¹ and 3×10⁴ μL⁻¹. The parameter for parabasal epitheliocytes was the ratio of PBC to epithelial cells, with cut-off values of 1% and 10%. The agreement rate between automated and manual microscopy was 92.5%: of 200 samples, the two methods produced consistent scores for 185, while the results for 15 were inconsistent. Conclusion: We developed AI recognition software for AV and established an automated vaginal secretion microscopy scoring system. Overall concordance between automated and manual microscopy was good. The AI identification software for AV can complete clinical laboratory examination with high objectivity, sensitivity, and efficiency, markedly reducing the workload of manual microscopy.
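The per-parameter scoring described in the Results can be sketched as a simple two-threshold rule. Only the cut-off values come from the abstract; the mapping of intervals to 0/1/2 severity scores, and the scoring direction for each parameter (e.g. a high Lactobacillus ratio indicating healthy flora), are assumptions made for illustration:

```python
# Cut-off values quoted in the abstract (units as reported there).
CUTOFFS = {
    "lactobacillus_ratio": (0.5, 2.5),  # bacteria >=1.5 um / bacteria <1.5 um
    "toxic_wbc_ratio": (0.01, 0.15),    # toxic WBC / WBC (1% and 15%)
    "background_flora": (5e3, 3e4),     # bacteria <1.5 um per uL
    "pbc_ratio": (0.01, 0.10),          # PBC / epithelial cells (1% and 10%)
}

def score_parameter(value, low_cut, high_cut):
    """Map a measured value onto one of three severity levels (0/1/2)
    using the two cut-offs. The interval-to-score mapping is an
    assumption; only the cut-off values come from the abstract."""
    if value < low_cut:
        return 0
    if value < high_cut:
        return 1
    return 2
```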
Affiliation(s)
- Ye Linling, Yu Fan, Hu Zhengqiang, Wang Xia, Tang Yuanting: Department of Laboratory Medicine, West China Second University Hospital, Sichuan University, Chengdu 610041, China; Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu 610041, China
5
Yu Z, Li X, Li J, Chen W, Tang Z, Geng D. HSA-Net with a novel CAD pipeline boosts both clinical brain tumor MR image classification and segmentation. Comput Biol Med 2024;170:108039. PMID: 38308874; DOI: 10.1016/j.compbiomed.2024.108039.
Abstract
Brain tumors are among the most prevalent neoplasms in current medical studies. Accurately distinguishing and classifying brain tumor types is crucial for patient treatment and survival in clinical practice. However, existing computer-aided diagnostic pipelines are inadequate for practical medical use due to tumor complexity. In this study, we curated a multi-centre brain tumor dataset that includes various clinical brain tumor data types, with both segmentation and classification annotations, surpassing previous efforts. To enhance brain tumor segmentation accuracy, we propose a new segmentation method, HSA-Net, which uses the Shared Weight Dilated Convolution module (SWDC) and the Hybrid Dense Dilated Convolution module (HDense) to capture multi-scale information while minimizing the parameter count. The Effective Multi-Dimensional Attention (EMA) and Important Feature Attention (IFA) modules effectively aggregate task-related information. We introduce a novel clinical brain tumor computer-aided diagnosis (CAD) pipeline that combines HSA-Net with pipeline modifications: this approach not only improves segmentation accuracy but also uses the segmentation mask as an additional channel feature to enhance brain tumor classification. Our experimental evaluation on 3327 real clinical cases demonstrates the effectiveness of the proposed method, achieving an average Dice coefficient of 86.85% for segmentation and a classification accuracy of 95.35%. We also validated the proposed method on the publicly available BraTS dataset.
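The Dice coefficient quoted above (86.85%) is the standard overlap metric for segmentation masks, 2|A∩B| / (|A| + |B|). A minimal reference implementation on flat binary masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two flat binary masks (0/1 lists):
    2*|A∩B| / (|A| + |B|); returns 1.0 for two empty masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```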
Affiliation(s)
- Zekuan Yu: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Xiang Li: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China; School of Safety Science and Engineering, Anhui University of Science and Technology, Huainan 232000, China
- Jiaxin Li: Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
- Weiqiang Chen: Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
- Zhiri Tang: School of Intelligent Systems Science and Engineering, Jinan University, Zhuhai, China
- Daoying Geng: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China; Huashan Hospital, Fudan University, Shanghai 200040, China
6
Kim D, Sundling KE, Virk R, Thrall MJ, Alperstein S, Bui MM, Chen-Yost H, Donnelly AD, Lin O, Liu X, Madrigal E, Michelow P, Schmitt FC, Vielh PR, Zakowski MF, Parwani AV, Jenkins E, Siddiqui MT, Pantanowitz L, Li Z. Digital cytology part 2: artificial intelligence in cytology: a concept paper with review and recommendations from the American Society of Cytopathology Digital Cytology Task Force. J Am Soc Cytopathol 2024;13:97-110. PMID: 38158317; DOI: 10.1016/j.jasc.2023.11.005.
Abstract
Digital cytology and artificial intelligence (AI) are gaining greater adoption in the cytology laboratory. However, peer-reviewed real-world data and literature regarding the current clinical landscape are lacking. The American Society of Cytopathology, in conjunction with the International Academy of Cytology and the Digital Pathology Association, established a special task force comprising 20 members with expertise and/or interest in digital cytology. The aim of the group was to investigate the feasibility of incorporating digital cytology, specifically cytology whole slide scanning and AI applications, into the workflow of the laboratory; the impact on cytopathologists, cytologists (cytotechnologists), and cytology departments was also assessed. The task force reviewed the existing literature on digital cytology, conducted a worldwide survey, and held a virtual roundtable discussion on digital cytology and AI with multiple industry corporate representatives. This white paper, presented in 2 parts, summarizes the current state of digital cytology and AI in global cytology practice. Part 1, presented as a separate paper, details a review and best practice recommendations for incorporating digital cytology into practice. Part 2, presented here, provides a comprehensive review of AI in cytology practice along with best practice recommendations and legal considerations. Additionally, the results of the global cytology survey, highlighting current AI practices and attitudes in various laboratories, are reported.
Affiliation(s)
- David Kim: Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Kaitlin E Sundling: The Wisconsin State Laboratory of Hygiene and Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison, Madison, Wisconsin
- Renu Virk: Department of Pathology and Cell Biology, Columbia University, New York, New York
- Michael J Thrall: Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, Texas
- Susan Alperstein: Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Marilyn M Bui: Department of Pathology, Moffitt Cancer Center & Research Institute, Tampa, Florida
- Amber D Donnelly: Diagnostic Cytology Education, University of Nebraska Medical Center, College of Allied Health Professions, Omaha, Nebraska
- Oscar Lin: Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Xiaoying Liu: Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
- Emilio Madrigal: Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
- Pamela Michelow: Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Department of Pathology, National Health Laboratory Services, Johannesburg, South Africa
- Fernando C Schmitt: Department of Pathology, Medical Faculty of Porto University, Porto, Portugal
- Philippe R Vielh: Department of Pathology, Medipath and American Hospital of Paris, Paris, France
- Anil V Parwani: Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Momin T Siddiqui: Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Liron Pantanowitz: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Zaibo Li: Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
7
Fang M, Fu M, Liao B, Lei X, Wu FX. Deep integrated fusion of local and global features for cervical cell classification. Comput Biol Med 2024;171:108153. PMID: 38364660; DOI: 10.1016/j.compbiomed.2024.108153.
Abstract
Cervical cytology image classification is of great significance to cervical cancer diagnosis and prognosis. Recently, convolutional neural networks (CNN) and vision transformers have been adopted as two branches that learn features for image classification, with local and global features simply added together. However, such simple addition may not integrate these features effectively. In this study, we explore the synergy of local and global features of cytology images for classification tasks. Specifically, we design a Deep Integrated Feature Fusion (DIFF) block to synergize local and global features from a CNN branch and a transformer branch. Our proposed method is evaluated on three cervical cell image datasets (SIPaKMeD, CRIC, Herlev) and the large blood cell dataset BCCD on several multi-class and binary classification tasks. Experimental results demonstrate the effectiveness of the proposed method in cervical cell classification, which could assist medical specialists in better diagnosing cervical cancer.
Affiliation(s)
- Ming Fang: Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
- Minghan Fu: Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
- Bo Liao: School of Mathematics and Statistics, Hainan Normal University, 99 Longkun South Road, Haikou 571158, Hainan, China
- Xiujuan Lei: School of Computer Science, Shaanxi Normal University, 620 West Chang'an Avenue, Xi'an 710119, Shaanxi, China
- Fang-Xiang Wu: Division of Biomedical Engineering, Department of Mechanical Engineering, and Department of Computer Science, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
8
Chen P, Liu F, Zhang J, Wang B. MFEM-CIN: a lightweight architecture combining CNN and Transformer for the classification of pre-cancerous lesions of the cervix. IEEE Open J Eng Med Biol 2024;5:216-225. PMID: 38606400; PMCID: PMC11008799; DOI: 10.1109/ojemb.2024.3367243.
Abstract
Goal: Cervical cancer is among the four most common cancers in women worldwide. Unfortunately, it is also the fourth leading cause of cancer-related deaths among women, particularly in developing countries, where incidence and mortality rates are higher than in developed nations. Colposcopy can aid the early detection of cervical lesions, but its effectiveness is limited in areas with scarce medical resources and a lack of specialized physicians; consequently, many cases are diagnosed at later stages, putting patients at significant risk. Methods: This paper proposes an automated colposcopic image analysis framework to address these challenges. The framework aims to reduce the labor costs of cervical precancer screening in underserved regions and to assist doctors in diagnosing patients. Its core is the MFEM-CIN hybrid model, which combines Convolutional Neural Networks (CNN) and a Transformer to aggregate the correlation between local and global features; this combined analysis of local and global information is useful in clinical diagnosis. In the model, MSFE and MSFF are utilized to extract and fuse multi-scale semantics, preserving important shallow feature information and allowing it to interact with the deep features, enriching the semantics. Conclusions: The experimental results demonstrate an accuracy of 89.2% in identifying cervical intraepithelial neoplasia while maintaining a lightweight model. This performance exceeds the average accuracy achieved by professional physicians, indicating promising potential for practical application. Utilizing automated colposcopic image analysis and the MFEM-CIN model, this research offers a practical solution to reduce the burden on healthcare providers and improve the efficiency and accuracy of cervical cancer diagnosis in resource-constrained areas.
Affiliation(s)
- Peng Chen: National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China; Fin China-Anhui University Joint Laboratory for Financial Big Data Research, Hefei Financial China Information and Technology Company, Ltd., Hefei 230022, China
- Fobao Liu: National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Jun Zhang: National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Bing Wang: School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu 233030, China
9
Ma Y, Zhang X, Yi Z, Ding L, Cai B, Jiang Z, Liu W, Zou H, Wang X, Fu G. A study of machine learning models for rapid intraoperative diagnosis of thyroid nodules for clinical practice in China. Cancer Med 2024;13:e6854. PMID: 38189547; PMCID: PMC10904961; DOI: 10.1002/cam4.6854.
Abstract
BACKGROUND: In China, rapid intraoperative diagnosis of frozen sections of thyroid nodules is used to guide surgery. However, the shortage of subspecialty pathologists and delayed diagnoses are challenges in clinical treatment. This study aimed to develop novel diagnostic approaches to increase diagnostic effectiveness. METHODS: Artificial intelligence and machine learning techniques were used to automatically diagnose histopathological slides. AI models were trained on annotated slides, and EfficientNetV2-B0 was selected from multiple experimental configurations. RESULTS: On 191 test slides, the proposed method predicted benign and malignant categories with a sensitivity of 72.65%, a specificity of 100.0%, and an AUC of 86.32%. For subtype diagnosis, the best AUC was 99.46%, for medullary thyroid cancer, with an average of 237.6 s per slide. CONCLUSIONS: Within our test dataset, the proposed method accurately diagnosed thyroid nodules during surgery.
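The sensitivity and specificity reported above are standard confusion-matrix quantities. A minimal sketch with 0 = benign and 1 = malignant:

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels and predictions are 0 (benign) or 1 (malignant)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Note how the reported profile (72.65% sensitivity, 100.0% specificity) corresponds to a classifier that misses some malignant slides but never flags a benign one.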
Affiliation(s)
- Yan Ma: Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Xiuming Zhang: Department of Pathology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Zhongliang Yi: Department of Pathology, Hangzhou Dian Medical Laboratory, Hangzhou, Zhejiang, China
- Liya Ding: Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Bojun Cai: Hangzhou PathoAI Technology Co., Ltd, Hangzhou, Zhejiang, China
- Zhinong Jiang: Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Wangwang Liu: Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Hong Zou: Department of Pathology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Xiaomei Wang: Hangzhou PathoAI Technology Co., Ltd, Hangzhou, Zhejiang, China
- Guoxiang Fu: Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
10
Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023;1878:189026. PMID: 37980945; DOI: 10.1016/j.bbcan.2023.189026.
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a major threat to world health, and early identification is crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to particular forms of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, AI and ML technologies have been shown to greatly increase the accuracy and efficacy of diagnosis in gynecologic tumors, reduce diagnostic delays, and possibly eliminate the need for needless invasive operations. In conclusion, the review focuses on the integrative role of AI- and ML-based tools and techniques in the early detection and exclusion of various cancer types; realizing their full potential in gynecologic cancer care will require collaborative coordination among research clinicians, data scientists, and regulatory authorities.
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
- Atish Mohanty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sravani Ramisetty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Prakash Kulkarni
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
- Ravi Salgia
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
11
Khan A, Han S, Ilyas N, Lee YM, Lee B. CervixFormer: A Multi-scale swin transformer-Based cervical pap-Smear WSI classification framework. Comput Methods Programs Biomed 2023; 240:107718. [PMID: 37451230 DOI: 10.1016/j.cmpb.2023.107718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 06/05/2023] [Accepted: 07/08/2023] [Indexed: 07/18/2023]
Abstract
BACKGROUND AND OBJECTIVES Cervical cancer affects around 0.5 million women per year, resulting in over 0.3 million fatalities. Therefore, repeated screening for cervical cancer is of utmost importance, and computer-assisted diagnosis is key to scaling it up. Current recognition algorithms, however, perform poorly on whole-slide image (WSI) analysis, fail to generalize across staining methods and uneven subtype distributions, and provide sub-optimal clinical-level interpretations. Herein, we developed CervixFormer, an end-to-end, multi-scale Swin transformer-based adversarial ensemble learning framework to assess pre-cancerous and cancer-specific cervical malignant lesions on WSIs. METHODS The proposed framework consists of (1) a self-attention generative adversarial network (SAGAN) for generating synthetic images during patch-level training to address the class imbalance problem; (2) a multi-scale transformer-based ensemble learning method for cell identification at various stages, including atypical squamous cells (ASC) and atypical squamous cells of undetermined significance (ASCUS), which have not been demonstrated in previous studies; and (3) a fusion model that concatenates the ensemble-based results and produces the final outcome. RESULTS The proposed method was first evaluated on a private dataset of 717 annotated samples from six classes, obtaining a high recall and precision of 0.940 and 0.934, respectively, in roughly 1.2 minutes. To further examine the generalizability of CervixFormer, we evaluated it on four independent, publicly available datasets: the CRIC cervix, Mendeley LBC, SIPaKMeD Pap Smear, and Cervix93 Extended Depth of Field image datasets. CervixFormer obtained consistently better performance on two-, three-, four-, and six-class classification of smear- and cell-level datasets. For clinical interpretation, we used GradCAM to visualize a coarse localization map highlighting important regions in the WSI. Notably, CervixFormer extracts features mostly from the cell nucleus and partially from the cytoplasm. CONCLUSIONS In comparison with existing state-of-the-art benchmark methods, CervixFormer outperforms them in terms of recall, accuracy, and computing time.
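The GradCAM localization described above reduces, at its core, to a ReLU over a gradient-weighted sum of convolutional feature maps. A minimal pure-Python sketch with toy 2x2 activations (the tensors and weights below are illustrative stand-ins, not CervixFormer's actual values):

```python
def grad_cam(feature_maps, grad_weights):
    """Coarse localization map: ReLU over the gradient-weighted
    sum of per-channel feature maps (toy list-of-lists tensors)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, alpha in zip(feature_maps, grad_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU keeps only regions that positively support the class
    return [[max(0.0, v) for v in row] for row in cam]

# Two 2x2 channels; the second channel's weight is negative,
# so its activations are suppressed by the ReLU.
maps = [[[1.0, 0.0], [0.0, 2.0]],
        [[0.0, 3.0], [1.0, 0.0]]]
cam = grad_cam(maps, grad_weights=[0.5, -1.0])
```

In a real pipeline the weights are the spatially averaged gradients of the class score with respect to each channel, and the resulting map is upsampled over the slide image.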
Affiliation(s)
- Anwar Khan
- Center for Cancer Biology, Vlaams Instituut voor Biotechnologie (VIB), Belgium; Department of Oncology, Katholieke Universiteit (KU) Leuven, Belgium; Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
- Seunghyeon Han
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
- Naveed Ilyas
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea; Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, UAE
- Yong-Moon Lee
- Department of Pathology, College of Medicine, Dankook University, South Korea
- Boreom Lee
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
12
Kaur M, Singh D, Kumar V, Lee HN. MLNet: Metaheuristics-Based Lightweight Deep Learning Network for Cervical Cancer Diagnosis. IEEE J Biomed Health Inform 2023; 27:5004-5014. [PMID: 36399582 DOI: 10.1109/jbhi.2022.3223127] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/06/2023]
Abstract
One of the leading causes of cancer-related deaths among women is cervical cancer, and early diagnosis and treatment can minimize its complications. Researchers have recently designed and implemented many deep learning-based automated cervical cancer diagnosis models. However, the majority of these models suffer from over-fitting, parameter-tuning, and gradient-vanishing problems. To overcome these problems, this paper proposes a metaheuristics-based lightweight deep learning network (MLNet). Initially, the hyper-parameter tuning problem of the convolutional neural network (CNN) is defined as a multi-objective problem. Particle swarm optimization (PSO) is used to optimally define the CNN architecture. Thereafter, dynamically hybrid niching differential evolution (DHDE) is utilized to optimize the hyper-parameters of the CNN layers. Each PSO particle and DHDE solution together represent a possible CNN configuration, and the F-score is used as the fitness function. The proposed MLNet is trained and validated on three benchmark cervical cancer datasets. On the Herlev dataset, MLNet outperforms the existing models in terms of accuracy, F-measure, sensitivity, specificity, and precision by 1.6254%, 1.5178%, 1.5780%, 1.7145%, and 1.4890%, respectively. On the SIPaKMeD dataset, the corresponding margins over existing models are 2.1250%, 2.2455%, 1.9074%, 1.9258%, and 1.8975%, and on the Mendeley LBC dataset, 1.4680%, 1.5845%, 1.3582%, 1.3926%, and 1.4125%, respectively.
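PSO, as used above for architecture search, maintains a swarm of candidate hyper-parameter vectors whose velocities are pulled toward each particle's personal best and the swarm's global best. A minimal sketch minimizing a toy stand-in for the fitness function (all constants and the objective here are illustrative, not MLNet's):

```python
import random

def pso(fitness, bounds, n_particles=8, iters=40, seed=0):
    """Minimal particle swarm minimization: each particle is a
    candidate hyper-parameter vector; velocities blend inertia
    with attraction to personal and global best positions."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy objective over two "hyper-parameters", minimized at (3, 5);
# in MLNet the objective would instead be derived from the F-score.
best, best_f = pso(lambda p: (p[0] - 3) ** 2 + (p[1] - 5) ** 2,
                   bounds=[(0, 10), (0, 10)])
```

In the paper, evaluating `fitness` means training a candidate CNN and measuring its F-score, which is why a lightweight network matters.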
13
Lee YM, Lee B, Cho NH, Park JH. Beyond the Microscope: A Technological Overture for Cervical Cancer Detection. Diagnostics (Basel) 2023; 13:3079. [PMID: 37835821 PMCID: PMC10572593 DOI: 10.3390/diagnostics13193079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 09/25/2023] [Accepted: 09/27/2023] [Indexed: 10/15/2023] Open
Abstract
Cervical cancer is a common and preventable disease that poses a significant threat to women's health and well-being. It is the fourth most prevalent cancer among women worldwide, with approximately 604,000 new cases and 342,000 deaths in 2020, according to the World Health Organization. Early detection and diagnosis of cervical cancer are crucial for reducing mortality and morbidity rates. The Papanicolaou smear test is a widely used screening method that involves the examination of cervical cells under a microscope to identify any abnormalities. However, this method is time-consuming, labor-intensive, subjective, and prone to human errors. Artificial intelligence techniques have emerged as a promising alternative to improve the accuracy and efficiency of Papanicolaou smear diagnosis. Artificial intelligence techniques can automatically analyze Papanicolaou smear images and classify them into normal or abnormal categories, as well as detect the severity and type of lesions. This paper provides a comprehensive review of the recent advances in artificial intelligence diagnostics of the Papanicolaou smear, focusing on the methods, datasets, performance metrics, and challenges. The paper also discusses the potential applications and future directions of artificial intelligence diagnostics of the Papanicolaou smear.
Affiliation(s)
- Yong-Moon Lee
- Department of Pathology, College of Medicine, Dankook University, Cheonan 31116, Republic of Korea
- Boreom Lee
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Republic of Korea
- Nam-Hoon Cho
- Department of Pathology, Severance Hospital, College of Medicine, Yonsei University, Seoul 03722, Republic of Korea
- Jae Hyun Park
- Department of Surgery, Wonju Severance Christian Hospital, Wonju College of Medicine, Yonsei University, Wonju 26492, Republic of Korea
14
Alsalatie M, Alquran H, Mustafa WA, Zyout A, Alqudah AM, Kaifi R, Qudsieh S. A New Weighted Deep Learning Feature Using Particle Swarm and Ant Lion Optimization for Cervical Cancer Diagnosis on Pap Smear Images. Diagnostics (Basel) 2023; 13:2762. [PMID: 37685299 PMCID: PMC10487265 DOI: 10.3390/diagnostics13172762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 08/17/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
One of the most widespread health issues affecting women is cervical cancer, and early detection through improved screening strategies will reduce cervical cancer-related morbidity and mortality worldwide. Pap smear images are widely used to detect cervical cancer, and previous studies have analyzed whole Pap smear images or extracted nuclei for this purpose. In this paper, we compared three scenarios, classifying the entire cell, the cytoplasm region only, or the nucleus region only into seven classes of cervical cancer. After applying image augmentation to address class imbalance, automated features are extracted using three pre-trained convolutional neural networks: AlexNet, DarkNet 19, and NasNet. These scenario combinations yield twenty-one feature sets, which principal component analysis reduces to the ten most informative features. This study employs feature weighting to create an efficient computer-aided cervical cancer diagnosis system, with the weights optimized by two evolutionary algorithms, ant lion optimization (ALO) and particle swarm optimization (PSO). Finally, two machine learning algorithms, a support vector machine (SVM) classifier and a random forest (RF) classifier, perform the classification. With a 99.5% accuracy rate for seven classes using the PSO algorithm, the SVM classifier outperformed the RF, which reached a 98.9% accuracy rate on the same task. Our outcome is superior to other seven-class studies because it considers the tissues rather than just the nucleus, which will aid physicians in diagnosing precancerous and early-stage cervical cancer. The results could be further enhanced with a larger amount of data.
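The PCA step above projects high-dimensional deep features onto their directions of maximum variance. A minimal pure-Python sketch of extracting the leading principal component via power iteration on the sample covariance matrix (toy 2-D data, not the paper's 21-feature pipeline):

```python
def first_principal_component(data, iters=100):
    """Leading PCA direction via power iteration on the sample
    covariance matrix (pure-Python toy; data = list of rows)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d  # arbitrary start vector
    for _ in range(iters):
        # Multiply by the covariance matrix, then renormalize:
        # the iterate converges to the dominant eigenvector.
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread mainly along the x-axis: the leading direction
# should align (up to sign) with (1, 0).
pc = first_principal_component([[0, 0], [2, 0.1], [4, -0.1], [6, 0]])
```

Reducing 21 feature sets to 10 components, as in the paper, would repeat this with deflation or use a full eigendecomposition.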
Affiliation(s)
- Mohammed Alsalatie
- King Hussein Medical Center, Royal Jordanian Medical Service, The Institute of Biomedical Technology, Amman 11855, Jordan
- Hiam Alquran
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Wan Azani Mustafa
- Faculty of Electrical Engineering & Technology, Campus Pauh Putra, Universiti Malaysia Perlis, Arau 02600, Malaysia; Advanced Computing (AdvCOMP), Centre of Excellence (CoE), Universiti Malaysia Perlis, Arau 02600, Malaysia
- Ala’a Zyout
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Ali Mohammad Alqudah
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Reham Kaifi
- College of Applied Medical Sciences, King Saud Bin Abdulaziz University for Health Sciences, Jeddah 21423, Saudi Arabia; King Abdullah International Medical Research Center, Jeddah 22384, Saudi Arabia
- Suhair Qudsieh
- Department of Obstetrics and Gynecology, Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
15
Fan Z, Wu X, Li C, Chen H, Liu W, Zheng Y, Chen J, Li X, Sun H, Jiang T, Grzegorzek M, Li C. CAM-VT: A Weakly supervised cervical cancer nest image identification approach using conjugated attention mechanism and visual transformer. Comput Biol Med 2023; 162:107070. [PMID: 37295389 DOI: 10.1016/j.compbiomed.2023.107070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 04/27/2023] [Accepted: 05/27/2023] [Indexed: 06/12/2023]
Abstract
Cervical cancer is the fourth most common cancer among women, and cytopathological images are often used to screen for it. However, manual examination is laborious and the misdiagnosis rate is high. In addition, cervical cancer nest cells are dense and complex, with high overlap and opacity, which increases the difficulty of identification. Computer-aided automatic diagnosis systems address this problem. In this paper, we propose CAM-VT, a weakly supervised cervical cancer nest image identification approach using a conjugated attention mechanism and a visual transformer, which can analyze Pap slides quickly and accurately. CAM-VT uses conjugated attention mechanism and visual transformer modules for local and global feature extraction, respectively, followed by an ensemble learning module designed to further improve identification capability. Comparative experiments are conducted on our datasets to determine a reasonable configuration. The average accuracy on the validation set over three repeated experiments using the CAM-VT framework is 88.92%, higher than the best result of 22 well-known deep learning models. Moreover, we conduct ablation experiments and extended experiments on hematoxylin and eosin stained gastric histopathological image datasets to verify the capability and generalization of the framework. Finally, the top-5 and top-10 positive probability values for cervical nests are 97.36% and 96.84%, which is of clinical and practical significance. The experimental results show that the proposed CAM-VT framework performs well on potential cervical cancer nest image identification tasks in practical clinical work.
Affiliation(s)
- Zizhen Fan
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiangchen Wu
- Suzhou Ruiqian Technology Company Ltd., Suzhou, China
- Changzhong Li
- Suzhou Ruiqian Technology Company Ltd., Suzhou, China
- Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wanli Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yuchao Zheng
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jing Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
16
Liang Y, Feng S, Liu Q, Kuang H, Liu J, Liao L, Du Y, Wang J. Exploring Contextual Relationships for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2023; 27:4086-4097. [PMID: 37192032 DOI: 10.1109/jbhi.2023.3276919] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Cervical abnormal cell detection is a challenging task because the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists take surrounding cells as references to identify its abnormality. To mimic this behavior, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both cell-to-cell and cell-to-global-image contextual relationships are exploited to enhance the features of each region-of-interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish a strong baseline using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments conducted on a large cervical cell detection dataset reveal that introducing either RRAM or GRAM achieves better average precision (AP) than the baseline methods, and cascading RRAM and GRAM outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.
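Attention among RoI proposals, as in RRAM, boils down to each RoI feature vector re-weighting and mixing the features of all RoIs. A minimal scaled dot-product attention sketch over toy RoI vectors (illustrative only; the paper's modules include learned projections and a global-image branch not shown here):

```python
import math

def roi_attention(rois):
    """Each RoI feature attends to all RoIs (itself included):
    softmax(q.k / sqrt(d)) weights over the same vectors as values."""
    d = len(rois[0])
    out = []
    for q in rois:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in rois]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Enhanced feature: convex combination of all RoI features
        out.append([sum(w * v[j] for w, v in zip(weights, rois))
                    for j in range(d)])
    return out

# Three toy RoI feature vectors of dimension 2.
enhanced = roi_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Each output is a convex combination of the input RoI features, which is how an ambiguous cell's representation gets pulled toward its context.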
17
Cervical cell classification with deep-learning algorithms. Med Biol Eng Comput 2023; 61:821-833. [PMID: 36626113 DOI: 10.1007/s11517-022-02745-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Accepted: 12/18/2022] [Indexed: 01/11/2023]
Abstract
Cervical cancer is a serious threat to the lives and health of women, and accurate analysis of cervical cell smear images is an important diagnostic basis for cancer identification. However, pathological data are often complex and difficult to analyze accurately because pathology images contain a wide variety of cells. To improve the recognition accuracy of cervical cell smear images, we propose a novel deep-learning model based on an improved Faster R-CNN, shallow feature enhancement networks, and generative adversarial networks. First, we used a global average pooling layer to enhance the robustness of the data feature transformation. Second, we designed a shallow feature enhancement network to improve the localization and recognition of weak cells. Finally, we established a data augmentation network to improve the detection capability of the model. In our pipeline, training samples of cervical cells are fed into the proposed deep-learning model, eight types of cervical cells are trained, and the trained classifier is then used to classify untrained samples. The experimental results demonstrate that our proposed methods are superior to the CenterNet, YOLOv5, and Faster R-CNN algorithms in several aspects, such as shorter time consumption, higher recognition precision, and stronger adaptive ability. The maximum accuracy is 99.81%, and the overall mean average precision is 89.4% on the SIPaKMeD and Herlev datasets. Our method provides a useful reference for cervical cell smear image analysis. However, the missed-diagnosis and false-diagnosis rates remain relatively high for images of different pathologies and stages, so our algorithms need further improvement to achieve a better balance. In future work, we will use a hyperspectral microscope to obtain more spectral data of cervical cells and input them into deep-learning models for processing and classification research.
Fig. 1: Deep-learning cervical cell classification framework.
18
Monabbati S, Leo P, Bera K, Michael CW, Nezami BG, Harbhajanka A, Madabhushi A. Automated analysis of computerized morphological features of cell clusters associated with malignancy on bile duct brushing whole slide images. Cancer Med 2023; 12:6365-6378. [PMID: 36281473 PMCID: PMC10028025 DOI: 10.1002/cam4.5365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/01/2022] [Accepted: 08/07/2022] [Indexed: 11/09/2022] Open
Abstract
BACKGROUND Bile duct brush specimens are difficult to interpret: they often present inflammatory and reactive backgrounds due to the local effects of stricture, atypical reactive changes, or previously installed stents, and often have low to intermediate cellularity. As a result, diagnosis of biliary adenocarcinomas is challenging, with large interobserver variability and low sensitivity. OBJECTIVE In this work, we used computational image analysis to evaluate the role of nuclear morphological and texture features of epithelial cell clusters in predicting the presence of pancreatic and biliary tract adenocarcinoma on digitized brush cytology specimens. METHODS Whole slide images from 124 patients, diagnosed as benign or malignant based on clinicopathological correlation, were collected and randomly split into training (ST, N = 58) and testing (Sv, N = 66) sets, except that cases diagnosed as atypical on cytology were all assigned to Sv. Nuclear boundaries on cell clusters extracted from each image were segmented via a watershed algorithm. A total of 536 quantitative morphometric features pertaining to nuclear shape, size, and aggregate cluster texture were extracted from within the cell clusters. The most predictive features on ST were selected via rank-sum, t-test, and minimum redundancy maximum relevance (mRMR) schemes and used to train three machine-learning classifiers. RESULTS Malignant clusters tended to exhibit lower textural homogeneity within the nucleus, greater textural entropy around the nuclear membrane, and longer minor axis lengths. The sensitivity of cytology alone was 74% (without atypicals) and 46% (with atypicals). With machine diagnosis, the sensitivity improved from 46% to 68% when atypicals were included and treated as nonmalignant false negatives, and the specificity of our model within the atypical category was 100%. CONCLUSION We achieved an area under the receiver operating characteristic curve (AUC) of 0.79 on Sv, which included atypical cytological diagnoses.
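One of the predictive morphometric features above, nuclear minor-axis length, is commonly estimated from a segmented binary mask via the eigenvalues of the pixel-coordinate covariance matrix, as for a fitted ellipse. A toy sketch of that computation (illustrative; not the paper's exact feature extractor):

```python
def axis_lengths(pixels):
    """Major/minor axis lengths of the ellipse with the same
    second moments as the pixel set: 4*sqrt(eigenvalue) of the
    2x2 coordinate covariance matrix."""
    n = len(pixels)
    cy = sum(p[0] for p in pixels) / n
    cx = sum(p[1] for p in pixels) / n
    syy = sum((p[0] - cy) ** 2 for p in pixels) / n
    sxx = sum((p[1] - cx) ** 2 for p in pixels) / n
    sxy = sum((p[0] - cy) * (p[1] - cx) for p in pixels) / n
    # Closed-form eigenvalues of [[sxx, sxy], [sxy, syy]].
    t, det = sxx + syy, sxx * syy - sxy ** 2
    root = ((t / 2) ** 2 - det) ** 0.5
    l1, l2 = t / 2 + root, t / 2 - root
    return 4 * l1 ** 0.5, 4 * l2 ** 0.5

# 5x3 rectangular "nucleus" mask: the major axis lies along x.
mask = [(y, x) for y in range(3) for x in range(5)]
major, minor = axis_lengths(mask)
```

On real segmentations this is applied to each watershed-delimited nucleus, with texture features computed separately.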
Affiliation(s)
- Shayan Monabbati
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Patrick Leo
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Claire W. Michael
- Department of Pathology, Case Western Reserve University School of Medicine, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Behtash G. Nezami
- Department of Pathology, Case Western Reserve University School of Medicine, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Aparna Harbhajanka
- Department of Pathology, Case Western Reserve University School of Medicine, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA
19
Kavitha R, Jothi DK, Saravanan K, Swain MP, Gonzáles JLA, Bhardwaj RJ, Adomako E. Ant Colony Optimization-Enabled CNN Deep Learning Technique for Accurate Detection of Cervical Cancer. Biomed Res Int 2023; 2023:1742891. [PMID: 36865486 PMCID: PMC9974247 DOI: 10.1155/2023/1742891] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Revised: 10/03/2022] [Accepted: 02/07/2023] [Indexed: 02/23/2023]
Abstract
Cancer is characterized by abnormal cell growth and proliferation, which are both diagnostic indicators of the disease. When cancerous cells enter one organ, they risk spreading to adjacent tissues and eventually to other organs. Cervical cancer initially manifests in the uterine cervix, located at the lower end of the uterus, and is characterized by both the growth and death of cervical cells. False-negative results pose a significant ethical problem, since a missed cancer diagnosis can result in a woman's premature death from the disease. False-positive results raise fewer ethical concerns, but they subject a patient to an expensive and time-consuming treatment process and cause unwarranted stress and anxiety. The Pap test is the screening procedure most often performed to detect cervical cancer in its earliest stages. This article describes a pipeline in which images are first enhanced using brightness-preserving dynamic fuzzy histogram equalization and then segmented with the fuzzy c-means method to isolate the region of interest. Ant colony optimization (ACO) serves as the feature selection algorithm, and classification is finally carried out using CNN, MLP, and ANN models.
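The fuzzy c-means segmentation step above alternates two updates: soft memberships of each sample to every cluster, then membership-weighted centroid recomputation. A minimal 1-D sketch over toy intensity values (illustrative; real use runs over image pixels, and the seeding here is a simplification):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: alternate soft-membership and
    centroid updates (m is the fuzziness exponent, m > 1)."""
    n = len(points)
    # Seed the centers spread across the sorted value range.
    srt = sorted(points)
    centers = [srt[k * (n - 1) // (c - 1)] for k in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - ck) + 1e-12 for ck in centers]  # avoid /0
            # u_ik = 1 / sum_j (d_i / d_j)^(2/(m-1))
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Centers: membership^m - weighted means of the points.
        centers = [sum(u[k][i] ** m * points[k] for k in range(n))
                   / sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return centers, u

# Two well-separated intensity groups.
centers, memberships = fuzzy_c_means([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

Thresholding the memberships then yields the region of interest passed on to feature selection.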
Affiliation(s)
- R. Kavitha
- Sri Ram Nallamani Yadava Arts and Science College, Manonmaniam Sundaranar University, Tirunelveli, India
- D. Kiruba Jothi
- Department of Information Technology, Sri Ram Nallamani Yadava College of Arts and Science, Manonmaniam Sundaranar University, Tirunelveli, India
- K. Saravanan
- Department of Information Technology, R.M.D. Engineering College, Chennai, India
- Mahendra Pratap Swain
- Department of Pharmaceutical Sciences and Technology, Birla Institute of Technology, Mesra, Ranchi, India
- Rakhi Joshi Bhardwaj
- Department of Computer Engineering, Vishwakarma Institute of Technology, Savitribai Phule Pune University, Pune, India
20
Developing a Tuned Three-Layer Perceptron Fed with Trained Deep Convolutional Neural Networks for Cervical Cancer Diagnosis. Diagnostics (Basel) 2023; 13:686. [PMID: 36832174 PMCID: PMC9955324 DOI: 10.3390/diagnostics13040686] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Revised: 01/14/2023] [Accepted: 02/07/2023] [Indexed: 02/15/2023] Open
Abstract
Cervical cancer is one of the most common cancers among women and has a higher death rate than many other cancer types. The most common way to diagnose it is to analyze images of cervical cells obtained with the Pap smear test; early and accurate diagnosis can save many patients' lives and increase the chance of treatment success. Various methods have been proposed to diagnose cervical cancer from Pap smear images, and most fall into two groups: those based on deep learning techniques and those based on machine learning algorithms. In this study, a combined method is presented whose overall structure follows a machine learning strategy, with the feature extraction stage completely separate from the classification stage; deep networks are used for feature extraction. Specifically, a multi-layer perceptron (MLP) neural network fed with deep features is presented, with the number of hidden-layer neurons tuned based on four innovative ideas. ResNet-34, ResNet-50, and VGG-19 deep networks are used to feed the MLP: the classification layers are removed from these CNNs, and their outputs feed the MLP after passing through a flatten layer. To improve performance, the CNNs are first trained on related images using the Adam optimizer. The proposed method has been evaluated on the Herlev benchmark database and achieves 99.23 percent accuracy for the two-class case and 97.65 percent accuracy for the seven-class case, higher than the baseline networks and many existing methods.
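The classifier stage described above, an MLP fed with flattened deep features, has a simple forward pass: a ReLU hidden layer followed by a softmax over class scores. A toy sketch with hand-set weights (the feature vector, dimensions, and weights are illustrative, not the paper's tuned network):

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """One ReLU hidden layer, then softmax over class scores:
    the shape of an MLP classifier fed with flattened features."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    z = [sum(wi * hi for wi, hi in zip(row, h)) + b
         for row, b in zip(w2, b2)]
    m = max(z)  # stabilize the exponentials
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy 2-D "deep feature" vector, two hidden units, two classes.
probs = mlp_forward([1.0, 2.0],
                    w1=[[1.0, 0.0], [0.0, 1.0]], b1=[0.0, 0.0],
                    w2=[[2.0, 0.0], [0.0, 1.0]], b2=[0.0, 0.0])
```

In the paper, `x` would be the flatten-layer output of a trained ResNet or VGG backbone, and the weights would be learned rather than hand-set.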
21
Chowdary GJ, G S, M P, Yogarajah P. Nucleus segmentation and classification using residual SE-UNet and feature concatenation approach in cervical cytopathology cell images. Technol Cancer Res Treat 2023; 22:15330338221134833. [PMID: 36744768 PMCID: PMC9905035 DOI: 10.1177/15330338221134833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Introduction: The Pap smear is considered the primary examination for the diagnosis of cervical cancer, but analyzing Pap smear slides is a time-consuming, tedious task that requires manual intervention. Diagnostic efficiency depends on the medical expertise of the pathologist, and human error often hinders diagnosis. Automated segmentation and classification of cervical nuclei will help diagnose cervical cancer at earlier stages. Materials and Methods: The proposed methodology includes three models: a residual squeeze-and-excitation-module-based segmentation model, a fusion-based feature extraction model, and a multi-layer perceptron classification model. In the fusion-based feature extraction model, three sets of deep features are extracted from the segmented nuclei using pre-trained and fine-tuned VGG19, VGG-F, and CaffeNet models, and two hand-crafted descriptors, Bag-of-Features and Linear-Binary-Patterns, are extracted for each image. The Herlev, SIPaKMeD, and ISBI2014 datasets are used for evaluation: Herlev for both the segmentation and classification models, SIPaKMeD for the classification model, and ISBI2014 for the segmentation model. Results: The segmentation network enhanced precision and ZSI by 2.04% and 2.00% on the Herlev dataset, and precision and recall by 0.68% and 2.59% on the ISBI2014 dataset. The classification approach enhanced accuracy, recall, and specificity by 0.59%, 0.47%, and 1.15% on the Herlev dataset, and by 0.02%, 0.15%, and 0.22% on the SIPaKMeD dataset. Conclusion: The experiments demonstrate that the proposed work achieves promising performance for segmentation and classification of cervical cytopathology cell images.
Affiliation(s)
- Suganya G
- Vellore Institute of Technology, Chennai, India
- Pratheepan Yogarajah
- Ulster University, Northern Ireland, UK
22
Maurya S, Tiwari S, Mothukuri MC, Tangeda CM, Nandigam RNS, Addagiri DC. A review on recent developments in cancer detection using Machine Learning and Deep Learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
23
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229 DOI: 10.1016/j.clinimag.2022.11.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 10/17/2022] [Accepted: 11/01/2022] [Indexed: 11/13/2022]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field, to provide an overview of current solutions used in medical image analysis in parallel with the rapid developments in transfer learning (TL). Unlike previous studies, this survey grouped recent studies from the period between January 2017 and February 2021 according to different anatomical regions and detailed the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers select the most effective and efficient methods, and to access widely used, publicly available medical datasets as well as the research gaps and limitations of the available literature.
Affiliation(s)
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
- Nuh Azginoglu
- Computer Engineering Department, Kayseri University, Kayseri, Turkey
- Ramazan Terzi
- Computer Engineering Department, Amasya University, Amasya, Turkey
24
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images with computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, leading to a surge of publications on cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nucleus or cell detection and segmentation. Finally, we discuss current challenges and potential research directions in computational cytology.
25
Kalbhor M, Shinde S, Joshi H, Wajire P. Pap smear-based cervical cancer detection using hybrid deep learning and performance evaluation. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2022.2163704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Affiliation(s)
- Madhura Kalbhor
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
- Swati Shinde
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
- Hrushikesh Joshi
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
- Pankaj Wajire
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
26
Depto DS, Rizvee MM, Rahman A, Zunair H, Rahman MS, Mahdy MRC. Quantifying imbalanced classification methods for leukemia detection. Comput Biol Med 2023; 152:106372. [PMID: 36516574 DOI: 10.1016/j.compbiomed.2022.106372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 11/01/2022] [Accepted: 11/27/2022] [Indexed: 12/03/2022]
Abstract
Uncontrolled proliferation of B-lymphoblast cells is a common characteristic of Acute Lymphoblastic Leukemia (ALL). B-lymphoblasts are found in large numbers in peripheral blood in malignant cases. Early detection of these cells in bone marrow is essential, as the disease progresses rapidly if left untreated. However, automated classification of the cells is challenging, owing to their fine-grained variability with B-lymphoid precursor cells and imbalanced data points. Deep learning algorithms demonstrate potential for such fine-grained classification but also suffer from the class imbalance problem. In this paper, we explore different deep learning-based State-Of-The-Art (SOTA) approaches to tackle imbalanced classification problems. Our experiments include input-based, GAN-based (Generative Adversarial Networks), and loss-based methods to mitigate class imbalance on the challenging C-NMC and ALLIDB-2 datasets for leukemia detection. We show empirical evidence that loss-based methods outperform GAN-based and input-based methods in imbalanced classification scenarios.
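The loss-based route to class imbalance typically reweights or reshapes the cross-entropy so that rare-class errors dominate training. As an illustration (the focal loss shown here is a common loss-based baseline; the abstract does not name the specific losses benchmarked), a minimal NumPy sketch:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples so that
    errors on the rare class dominate the gradient.

    probs  : predicted probability of the positive class, shape (N,)
    labels : 0/1 ground truth, shape (N,)
    """
    p_t = np.where(labels == 1, probs, 1.0 - probs)      # prob of the true class
    alpha_t = np.where(labels == 1, alpha, 1.0 - alpha)  # class weighting
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# a confident correct prediction contributes far less than a wrong one
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.05]), np.array([1]))
```

The `(1 - p_t) ** gamma` factor is what distinguishes this from plain weighted cross-entropy: well-classified samples are suppressed rather than merely rescaled.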
Affiliation(s)
- Deponker Sarker Depto
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh
- Md Mashfiq Rizvee
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh; Texas Tech University, Lubbock, TX, United States of America
- Aimon Rahman
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh
- M Sohel Rahman
- Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, ECE Building, West Palasi, Dhaka 1205, Bangladesh
- M R C Mahdy
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh
27
Gao W, Xu C, Li G, Zhang Y, Bai N, Li M. Cervical Cell Image Classification-Based Knowledge Distillation. Biomimetics (Basel) 2022; 7:biomimetics7040195. [PMID: 36412723 PMCID: PMC9680356 DOI: 10.3390/biomimetics7040195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 11/03/2022] [Accepted: 11/05/2022] [Indexed: 11/12/2022] Open
Abstract
Current deep-learning-based cervical cell classification methods suffer from parameter redundancy and poor model generalization, which creates challenges for the intelligent classification of cervical cytology smear images. In this paper, we establish a classification method that combines transfer learning and knowledge distillation. This method not only transfers common features between different source-domain data, but also realizes model-to-model knowledge transfer, using the unnormalized probability outputs between models as knowledge. A multi-exit classification network is then introduced as the student network, with a global context module embedded in each exit branch. A self-distillation method is proposed to fuse contextual information: deep classifiers in the student network guide shallow classifiers to learn, and multiple classifier outputs are fused using an average integration strategy to form a classifier with strong generalization performance. The experimental results show that the developed method achieves good results on the SIPaKMeD dataset: the accuracy, sensitivity, specificity, and F-measure for the five classes are 98.52%, 98.53%, 98.68%, and 98.59%, respectively. The effectiveness of the method is further verified on a natural image dataset.
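Response-based distillation of the kind described here treats the teacher's temperature-softened logits as soft targets for the student. A minimal NumPy sketch of that distillation loss (the temperature and logit values are illustrative, not the paper's settings):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the 'unnormalized probability output as knowledge'
    idea behind response-based knowledge distillation.
    """
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * np.log(p / q)) * T * T)  # scale by T^2

# identical logits give zero loss; diverging logits give a positive loss
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distillation_loss([0.1, 0.5, 2.0], [2.0, 0.5, -1.0])
```

In self-distillation the "teacher" logits come from a deeper exit of the same network, but the loss takes the same form.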
Affiliation(s)
- Wenjian Gao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Correspondence: (C.X.); (G.L.)
- Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (C.X.); (G.L.)
- Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
28
Huang H, You Z, Cai H, Xu J, Lin D. Fast detection method for prostate cancer cells based on an integrated ResNet50 and YoloV5 framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107184. [PMID: 36288685 DOI: 10.1016/j.cmpb.2022.107184] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 10/10/2022] [Accepted: 10/15/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE To propose a fast deep-learning-based detection method for abnormal prostate cancer cells. The aim is to quickly and accurately locate and identify abnormal cells, thereby improving the efficiency of prostate precancer screening and promoting the application and popularization of assisted screening technology for prostate cancer cells. METHOD The method includes two stages: preliminary screening of abnormal cell images and accurate identification of abnormal cells. In the preliminary screening stage, a ResNet50 model is used as the image classification network to judge whether a local area contains cell clusters. In the second stage, a YoloV5 model is used as the target detection network to locate and recognize abnormal cells in images containing cell clusters. RESULTS The detection method targets pathological cell images obtained by the membrane method. The two-stage model proposed in this paper is compared with a single-stage method using only the target detection model. The results show that an image classification network based on deep learning can first judge whether a local area contains abnormal cells; if so, a candidate-box-based target detection method is then applied for analysis. This reduces inference time by 50% and improves the efficiency of abnormal cell detection, at the cost of a small loss of accuracy and a slight increase in model complexity. CONCLUSION This study proposes a fast deep-learning-based detection method for abnormal prostate cancer cells that greatly shortens inference time and improves detection speed, improving the efficiency of prostate precancer screening.
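The two-stage gating logic can be sketched independently of the concrete networks. In this toy Python version, `classify_has_clusters` and `detect_abnormal_cells` are hypothetical stand-ins for the ResNet50 classifier and YoloV5 detector described in the abstract:

```python
def screen_images(images, classify_has_clusters, detect_abnormal_cells):
    """Two-stage screening: a cheap image-level classifier filters out
    fields with no cell clusters, and only the remaining images are
    passed to the (more expensive) detector.
    """
    detections = {}
    for name, img in images.items():
        if not classify_has_clusters(img):      # stage 1: skip empty fields
            continue
        detections[name] = detect_abnormal_cells(img)  # stage 2: locate cells
    return detections

# toy stand-ins: "images" are strings; clusters flagged by a marker token
images = {"a": "clusters:2", "b": "empty", "c": "clusters:1"}
result = screen_images(
    images,
    classify_has_clusters=lambda img: img.startswith("clusters"),
    detect_abnormal_cells=lambda img: int(img.split(":")[1]),
)
```

The inference-time saving comes from the `continue`: the detector never runs on fields the classifier rejects, which is where the reported 50% reduction originates.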
Affiliation(s)
- Hongyuan Huang
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
- Zhijiao You
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
- Huayu Cai
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
- Jianfeng Xu
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
- Dongxu Lin
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
29
Yin H, Bai L, Jia H, Lin G. Noninvasive assessment of breast cancer molecular subtypes on multiparametric MRI using convolutional neural network with transfer learning. Thorac Cancer 2022; 13:3183-3191. [PMID: 36203226 PMCID: PMC9663668 DOI: 10.1111/1759-7714.14673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 09/12/2022] [Accepted: 09/13/2022] [Indexed: 01/07/2023] Open
Abstract
BACKGROUND To evaluate the performance of multiparametric MRI-based convolutional neural networks (CNNs) for the preoperative assessment of breast cancer molecular subtypes. METHODS A total of 136 patients with 136 pathologically confirmed invasive breast cancers were randomly divided into training, validation, and testing sets in this retrospective study. The CNN models were established based on contrast-enhanced T1-weighted imaging (T1C), apparent diffusion coefficient (ADC), and T2-weighted imaging (T2W) using the training and validation sets. The performance of the CNN models was evaluated on the testing set. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were calculated to assess performance. RESULTS For the separation of each subtype from the other subtypes on the testing set, the T1C-based models yielded AUCs from 0.762 to 0.920; the ADC-based models yielded AUCs from 0.686 to 0.851; and the T2W-based models achieved AUCs from 0.639 to 0.697. CONCLUSION T1C-based models performed better than ADC-based and T2W-based models in assessing breast cancer molecular subtypes. The discriminating performance of our CNN models for the triple-negative and human epidermal growth factor receptor 2-enriched subtypes was better than for the luminal A and luminal B subtypes.
Affiliation(s)
- Haolin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Lutian Bai
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Huihui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Guangwu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
30
Song J, Im S, Lee SH, Jang HJ. Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images. Diagnostics (Basel) 2022; 12:2623. [PMID: 36359467 PMCID: PMC9689570 DOI: 10.3390/diagnostics12112623] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Revised: 10/26/2022] [Accepted: 10/26/2022] [Indexed: 08/11/2023] Open
Abstract
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes; therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, an endometrial versus endocervical origin of an adenocarcinoma should also be distinguished. Although various immunohistochemical markers can aid this discrimination, there is no definitive marker. We therefore tested the feasibility of deep learning (DL)-based classification of the subtypes of cervical and endometrial cancers, and of the site of origin of adenocarcinomas, from whole slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification, and the average of the patch classification results was used for the final classification. The areas under the receiver operating characteristic curves (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrate the feasibility of DL-based classifiers for discriminating cancers of the cervix and uterus. We expect that the performance of the classifiers will be much enhanced as WSI data accumulate. The information from the classifiers can then be integrated with other data for more precise discrimination of cervical and endometrial cancers.
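The patch-splitting and probability-averaging aggregation described above can be sketched as follows (NumPy, with a toy stand-in for the patch classifier; the 360-pixel patch size matches the abstract, everything else is illustrative):

```python
import numpy as np

def classify_wsi(wsi, patch_size, patch_classifier):
    """Slide-level prediction: split the slide into non-overlapping
    patches, classify each one, and average the per-patch class
    probabilities for the final slide-level result.
    """
    h, w = wsi.shape[:2]
    probs = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = wsi[y:y + patch_size, x:x + patch_size]
            probs.append(patch_classifier(patch))
    return np.mean(probs, axis=0)

# toy example: a 2x2 grid of 360-pixel patches, with one bright patch;
# the stand-in "classifier" maps mean intensity to class probabilities
wsi = np.zeros((720, 720))
wsi[:360, :360] = 1.0
slide_prob = classify_wsi(wsi, 360, lambda p: np.array([p.mean(), 1 - p.mean()]))
```

Averaging patch probabilities is the simplest aggregation rule; in practice a real pipeline would also filter out background patches before averaging.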
Affiliation(s)
- JaeYen Song
- Department of Obstetrics and Gynecology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Soyoung Im
- Department of Hospital Pathology, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 16247, Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
31
Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683 PMCID: PMC9654172 DOI: 10.3390/cancers14215264] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/10/2022] [Accepted: 10/24/2022] [Indexed: 10/06/2023] Open
Abstract
The revolution in artificial intelligence and its impact on our daily lives has generated tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathology, such as tumor detection, classification, grading across stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize the workload and time consumed, which affect the accuracy of decisions. Regrettably, there are certain obstacles connected to artificial intelligence deployment to overcome, such as the applicability and validation of algorithms and computational technologies, in addition to the ability to train pathologists and doctors to use these tools and their willingness to accept the results. This review paper surveys how machine learning and deep learning methods could be implemented into health care providers' routine tasks, as well as the obstacles and opportunities for artificial intelligence applications in tumor morphology.
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
- Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
32
Xu C, Li M, Li G, Zhang Y, Sun C, Bai N. Cervical Cell/Clumps Detection in Cytology Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:diagnostics12102477. [PMID: 36292166 PMCID: PMC9600700 DOI: 10.3390/diagnostics12102477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Revised: 10/07/2022] [Accepted: 10/10/2022] [Indexed: 12/04/2022] Open
Abstract
Cervical cancer is one of the most common and deadliest cancers among women and poses a serious health risk. Automated screening and diagnosis of cervical cancer will help improve the accuracy of cervical cell screening. In recent years, many studies have applied deep learning methods to automatic cervical cancer screening and diagnosis. Deep-learning-based Convolutional Neural Network (CNN) models require large amounts of data for training, but large cervical cell datasets with annotations are difficult to obtain. Some studies have used transfer learning to handle this problem; however, they applied the same transfer learning method, namely initializing the backbone network with an ImageNet pre-trained model, to two different types of task: detection and classification of cervical cells/clumps. Considering the differences between detection and classification tasks, this study proposes using COCO pre-trained models for cervical cell/clump detection tasks to better handle the limited-data problem at training time. To further improve detection performance, we conducted multi-scale training on top of transfer learning, according to the characteristics of the dataset. Considering the effect of the bounding box loss on detection precision, we analyzed the effects of different bounding box losses on the model's detection performance and demonstrated that using a loss function consistent with the type of pre-trained model helps improve performance. We also analyzed the effect of the mean and standard deviation (std) of different datasets on model performance, and found that detection performance was optimal when using the mean and std of the cervical cell dataset used in this study. Ultimately, with a ResNet50 backbone, the network model achieves a mean Average Precision (mAP) of 61.6% and an Average Recall (AR) of 87.7%. Compared to the previous values of 48.8% and 64.0% on the same dataset, detection performance is significantly improved, by 12.8% and 23.7%, respectively.
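Computing normalization statistics from the target dataset, as the abstract recommends, amounts to a per-channel mean and std over all pixels of the collection. A minimal NumPy sketch (array shapes and values are illustrative):

```python
import numpy as np

def dataset_mean_std(images):
    """Per-channel mean and std over a whole image collection -- the
    statistics to compute from the target cervical-cell dataset rather
    than reusing ImageNet/COCO defaults.

    images: iterable of HxWx3 arrays with values in [0, 1]
    """
    # flatten every image to (num_pixels, 3) and pool all pixels together
    pixels = np.concatenate([img.reshape(-1, 3) for img in images], axis=0)
    return pixels.mean(axis=0), pixels.std(axis=0)

# toy check: two constant images -> mean is their average, std their spread
imgs = [np.full((4, 4, 3), 0.2), np.full((4, 4, 3), 0.6)]
mean, std = dataset_mean_std(imgs)
```

For a real dataset too large to pool in memory, the same statistics can be accumulated incrementally with running sums of pixel values and squared values.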
Affiliation(s)
- Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (M.L.); (G.L.)
- Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (M.L.); (G.L.)
- Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Chengjie Sun
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
33
Auxiliary classification of cervical cells based on multi-domain hybrid deep learning framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
34
Thakur N, Alam MR, Abdul-Ghafar J, Chong Y. Recent Application of Artificial Intelligence in Non-Gynecological Cancer Cytopathology: A Systematic Review. Cancers (Basel) 2022; 14:cancers14143529. [PMID: 35884593 PMCID: PMC9316753 DOI: 10.3390/cancers14143529] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 07/12/2022] [Accepted: 07/15/2022] [Indexed: 11/27/2022] Open
Abstract
Simple Summary Artificial intelligence (AI) has attracted significant interest in the healthcare sector due to its promising results. Cytological examination is a critical step in the initial diagnosis of cancer. Here, we conducted a systematic review with quantitative analysis to understand the current status of AI applications in non-gynecological (non-GYN) cancer cytology. In our analysis, we found that most of the studies focused on classification and segmentation tasks. Overall, AI showed promising results for non-GYN cancer cytopathology analysis. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation across all studies. Abstract State-of-the-art artificial intelligence (AI) has recently gained considerable interest in the healthcare sector and has provided solutions to problems through automated diagnosis. Cytological examination is a crucial step in the initial diagnosis of cancer, although it has limited diagnostic efficacy. Recently, AI applications in the processing of cytopathological images have shown promising results despite the elementary level of the technology. Here, we performed a systematic review with a quantitative analysis of recent AI applications in non-gynecological (non-GYN) cancer cytology to understand the current technical status. We searched the major online databases, including MEDLINE, Cochrane Library, and EMBASE, for relevant English articles published from January 2010 to January 2021. The search query terms were: “artificial intelligence”, “image processing”, “deep learning”, “cytopathology”, and “fine-needle aspiration cytology”. Out of 17,000 studies, only 26 studies (26 models) were included in the full-text review, and 13 studies were included in the quantitative analysis. The AI models were grouped into eight classes according to target organ: thyroid (n = 11, 39%), urinary bladder (n = 6, 21%), lung (n = 4, 14%), breast (n = 2, 7%), pleural effusion (n = 2, 7%), ovary (n = 1, 4%), pancreas (n = 1, 4%), and prostate (n = 1, 4%). Most of the studies focused on classification and segmentation tasks. Although most of the studies showed impressive results, the sizes of the training and validation datasets were limited. Overall, AI is also promising for non-GYN cancer cytopathology analysis, as it is for histopathology or gynecological cytology. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation found across all studies. Future studies with larger, high-quality annotated datasets and external validation are required.
35
Chen H, Liu J, Hua C, Feng J, Pang B, Cao D, Li C. Accurate classification of white blood cells by coupling pre-trained ResNet and DenseNet with SCAM mechanism. BMC Bioinformatics 2022; 23:282. [PMID: 35840897 PMCID: PMC9287918 DOI: 10.1186/s12859-022-04824-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 07/07/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Counting the different kinds of white blood cells (WBCs) provides a good quantitative description of a person's health status and is therefore critical for the early treatment of several diseases, so correct classification of WBCs is crucial. Unfortunately, manual microscopic evaluation is complicated, time-consuming, and subjective, so its statistical reliability is limited. Hence, automatic and accurate identification of WBCs is of great benefit. However, the similarity between WBC samples and the imbalance and insufficiency of samples in the field of medical computer vision make intelligent and accurate classification of WBCs challenging. To tackle these challenges, this study proposes a deep learning framework that couples pre-trained ResNet and DenseNet with SCAM (spatial and channel attention module) for accurately classifying WBCs. RESULTS In the proposed network, ResNet and DenseNet enable information reuse and new information exploration, respectively, both of which are important and compatible for learning good representations. Meanwhile, the SCAM module sequentially infers attention maps along the two separate dimensions of space and channel to emphasize important information or suppress unnecessary information, further enhancing the representation power of the model and mitigating the limitation of sample similarity. Moreover, data augmentation and transfer learning techniques are used to handle the imbalanced and insufficient data. In addition, the mixup approach is adopted to model the vicinity relation across training samples of different categories and thereby increase the generalizability of the model. Compared with five representative networks on our developed LDWBC dataset and the publicly available LISC, BCCD, and Raabin WBC datasets, our model achieves the best overall performance. We also implement occlusion testing with the gradient-weighted class activation mapping (Grad-CAM) algorithm to improve the interpretability of our model. CONCLUSION The proposed method has great potential for application in the intelligent and accurate classification of WBCs.
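The mixup step mentioned above blends pairs of samples and their labels with a Beta-distributed coefficient. A minimal NumPy sketch (the alpha value and toy samples are illustrative, not the paper's settings):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup augmentation: form a convex combination of two samples and
    their one-hot labels, modeling the vicinity relation across
    training samples of different categories.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    x_mix = lam * x1 + (1 - lam) * x2   # blended input
    y_mix = lam * y1 + (1 - lam) * y2   # blended (soft) label
    return x_mix, y_mix

# toy example: blend a class-0 sample (all ones) with a class-1 sample
x_mix, y_mix = mixup(
    np.ones((8, 8)), np.array([1.0, 0.0]),
    np.zeros((8, 8)), np.array([0.0, 1.0]),
)
```

With a small alpha the Beta distribution concentrates near 0 and 1, so most mixed samples stay close to one of the originals while still smoothing the decision boundary.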
Affiliation(s)
- Hua Chen
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Juan Liu
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Chunbing Hua
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Jing Feng
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Baochuan Pang
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan, 430072, China
- Dehua Cao
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan, 430072, China
- Cheng Li
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan, 430072, China
36
Zak J, Grzeszczyk MK, Pater A, Roszkowiak L, Siemion K, Korzynska A. Cell image augmentation for classification task using GANs on Pap smear dataset. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
37
Kupas D, Harangi B. Classification of Pap-smear cell images using deep convolutional neural network accelerated by hand-crafted features. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:1452-1455. [PMID: 36083935 DOI: 10.1109/embc48229.2022.9871171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The classification of cells extracted from Pap smears is in most cases done using neural network architectures. Nevertheless, the importance of features extracted with digital image processing is also discussed in many related articles. Decision support systems and automated analysis tools for Pap smears often use these kinds of manually extracted, global features based on clinical expert opinion. In this paper, a solution is introduced in which 29 different contextual features are combined with local features learned by a neural network to increase classification performance. The weight distribution between the features is also investigated, leading to the conclusion that the numerical features indeed form an important part of the learning process. Furthermore, extensive testing of the presented methods is done using a dataset annotated by clinical experts. An increase of 3.2% in F1-score can be observed when using the combination of contextual and local features. Clinical Relevance - Analysis of images extracted from digital Pap tests using modern machine learning tools is discussed in many scientific papers. Manual classification of the cells can be time-consuming and expensive, requiring a high amount of manual labor. Furthermore, the result of manual classification can also be uncertain due to interobserver variability. Considering these factors, any result that can lead to a more reliable, highly accurate classification method is considered valuable in the field of cervical cancer screening.
38
Multi-class nucleus detection and classification using deep convolutional neural network with enhanced high dimensional dissimilarity translation model on cervical cells. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
39
van der Kamp A, Waterlander TJ, de Bel T, van der Laak J, van den Heuvel-Eibrink MM, Mavinkurve-Groothuis AMC, de Krijger RR. Artificial Intelligence in Pediatric Pathology: The Extinction of a Medical Profession or the Key to a Bright Future? Pediatr Dev Pathol 2022; 25:380-387. [PMID: 35238696 DOI: 10.1177/10935266211059809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Artificial Intelligence (AI) has become of increasing interest over the past decade. While digital image analysis (DIA) is already being used in radiology, it is still in its infancy in pathology. One of the reasons is that large-scale digitization of glass slides has only recently become available. With the advent of digital slide scanners, which digitize glass slides into whole slide images, many labs are now in a transition phase towards digital pathology. However, only a few departments worldwide are currently fully digital. Digital pathology provides the ability to annotate large datasets and train computers to develop and validate robust algorithms, similar to radiology. In this opinionated overview, we give a brief introduction to AI in pathology, discuss the potential positive and negative implications, and speculate about the future role of AI in the field of pediatric pathology.
Affiliation(s)
- Ananda van der Kamp
- Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands
- Tomas J Waterlander
- Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands
- Thomas de Bel
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Ronald R de Krijger
- Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands; Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
40
Yin HL, Jiang Y, Xu Z, Jia HH, Lin GW. Combined diagnosis of multiparametric MRI-based deep learning models facilitates differentiating triple-negative breast cancer from fibroadenoma magnetic resonance BI-RADS 4 lesions. J Cancer Res Clin Oncol 2022; 149:2575-2584. [PMID: 35771263 DOI: 10.1007/s00432-022-04142-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Accepted: 06/13/2022] [Indexed: 02/05/2023]
Abstract
PURPOSE To investigate the value of the combined diagnosis of multiparametric MRI-based deep learning models to differentiate triple-negative breast cancer (TNBC) from fibroadenoma magnetic resonance Breast Imaging-Reporting and Data System category 4 (BI-RADS 4) lesions and to evaluate whether the combined diagnosis of these models could improve the diagnostic performance of radiologists. METHODS A total of 319 female patients with 319 pathologically confirmed BI-RADS 4 lesions were randomly divided into training, validation, and testing sets in this retrospective study. The three models were established based on contrast-enhanced T1-weighted imaging, diffusion-weighted imaging, and T2-weighted imaging using the training and validation sets. The artificial intelligence (AI) combination score was calculated according to the results of three models. The diagnostic performances of four radiologists with and without AI assistance were compared with the AI combination score on the testing set. The area under the curve (AUC), sensitivity, specificity, accuracy, and weighted kappa value were calculated to assess the performance. RESULTS The AI combination score yielded an excellent performance (AUC = 0.944) on the testing set. With AI assistance, the AUC for the diagnosis of junior radiologist 1 (JR1) increased from 0.833 to 0.885, and that for JR2 increased from 0.823 to 0.876. The AUCs of senior radiologist 1 (SR1) and SR2 slightly increased from 0.901 and 0.950 to 0.925 and 0.975 after AI assistance, respectively. CONCLUSION Combined diagnosis of multiparametric MRI-based deep learning models to differentiate TNBC from fibroadenoma magnetic resonance BI-RADS 4 lesions can achieve comparable performance to that of SRs and improve the diagnostic performance of JRs.
Affiliation(s)
- Hao-Lin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Yu Jiang
- Department of Radiology, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Zihan Xu
- Lung Cancer Center, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Hui-Hui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Guang-Wu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
41
Pramanik R, Biswas M, Sen S, Souza Júnior LAD, Papa JP, Sarkar R. A fuzzy distance-based ensemble of deep models for cervical cancer detection. Comput Methods Programs Biomed 2022; 219:106776. [PMID: 35398621 DOI: 10.1016/j.cmpb.2022.106776] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 03/22/2022] [Accepted: 03/23/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Cervical cancer is one of the leading causes of women's death. Like any other disease, early detection of cervical cancer and treatment with the best possible medical advice are the paramount steps that should be taken to minimize the after-effects of contracting this disease. Pap smear images are one of the most effective ways to detect the presence of this type of cancer. This article proposes a fuzzy distance-based ensemble approach composed of deep learning models for cervical cancer detection in Pap smear images. METHODS We employ three transfer learning models for this task: Inception V3, MobileNet V2, and Inception ResNet V2, with additional layers to learn data-specific features. To aggregate the outcomes of these models, we propose a novel ensemble method based on the minimization of error values between the observed and the ground truth. For samples with multiple predictions, we first take three distance measures, i.e., Euclidean, Manhattan (City-Block), and Cosine, for each class from their corresponding best possible solution. We then defuzzify these distance measures using the product rule to calculate the final predictions. RESULTS In the current experiments, we achieved 95.30%, 93.92%, and 96.44% accuracy when Inception V3, MobileNet V2, and Inception ResNet V2 run individually. After applying the proposed ensemble technique, the performance reaches 96.96%, which is higher than that of the individual models. CONCLUSION Experimental outcomes on three publicly available datasets show that the proposed model presents competitive results compared to state-of-the-art methods. The proposed approach provides an end-to-end classification technique to detect cervical cancer from Pap smear images, which may help medical professionals better treat cervical cancer and increase the overall efficiency of the whole testing process.
The source code of the proposed work can be found at github.com/rishavpramanik/CervicalFuzzyDistanceEnsemble.
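The fusion rule this abstract describes can be sketched roughly as follows: each model's class-probability vector is scored by its Euclidean, Manhattan, and cosine distances from the ideal one-hot prediction for each class, the three distances are defuzzified with the product rule, and the class with the smallest fused score across models wins. This is a hypothetical illustration only; the function names and toy probability vectors below are invented, and the authors' actual implementation (in the linked repository) may differ in detail.

```python
import math

def euclidean(p, q):
    # Euclidean (L2) distance between two probability vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # Manhattan / City-Block (L1) distance
    return sum(abs(a - b) for a, b in zip(p, q))

def cosine_dist(p, q):
    # Cosine distance = 1 - cosine similarity
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return 1.0 - dot / norm if norm else 1.0

def fused_score(probs, cls, n_classes):
    # Product-rule defuzzification of the three distances from the
    # ideal one-hot vector ("best possible solution") for class `cls`.
    ideal = [1.0 if i == cls else 0.0 for i in range(n_classes)]
    return euclidean(probs, ideal) * manhattan(probs, ideal) * cosine_dist(probs, ideal)

def ensemble_predict(model_outputs, n_classes):
    # Sum each class's fused score over all models; the class whose
    # predictions sit closest to the ideal (smallest score) is chosen.
    scores = [sum(fused_score(p, cls, n_classes) for p in model_outputs)
              for cls in range(n_classes)]
    return min(range(n_classes), key=scores.__getitem__)

# Toy softmax outputs from three hypothetical models for one sample (2 classes):
outputs = [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]]
print(ensemble_predict(outputs, 2))  # → 0 (all three models lean towards class 0)
```

Because every distance is zero exactly when a model outputs the ideal one-hot vector, the product rule rewards classes that any model predicts with high confidence while remaining sensitive to disagreement among the models.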
Affiliation(s)
- Rishav Pramanik
- Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India
- Momojit Biswas
- Department of Metallurgical and Material Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India
- Shibaprasad Sen
- Department of Computer Science and Technology, University of Engineering and Management, Kolkata, 700160, West Bengal, India
- Luis Antonio de Souza Júnior
- Department of Computing, São Carlos Federal University-UFScar, São Carlos, São Paulo, Brazil; Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany
- João Paulo Papa
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany; Department of Computing, São Paulo State University, Av. Eng. Luiz Edmundo Carrijo Coube, 14-01, Bauru, São Paulo, Brazil
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India
42
Bai T, Xu J, Zhang Z, Guo S, Luo X. Context-aware learning for cancer cell nucleus recognition in pathology images. Bioinformatics 2022; 38:2892-2898. [PMID: 35561198 DOI: 10.1093/bioinformatics/btac167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION Nucleus identification supports many quantitative analysis studies that rely on nuclei positions or categories. Contextual information in pathology images refers to information near the to-be-recognized cell, which can be very helpful for nucleus subtyping. Current CNN-based methods do not explicitly encode contextual information within the input images and point annotations. RESULTS In this article, we propose a novel context-aware framework to locate and classify nuclei in microscopy image data. Specifically, we first use state-of-the-art network architectures to extract multi-scale feature representations from multi-field-of-view, multi-resolution input images and then conduct feature aggregation on the fly with stacked convolutional operations. Two auxiliary tasks are then added to the model to effectively utilize the contextual information: one predicts the frequencies of nuclei, and the other extracts the regional distribution information of the same kind of nuclei. The entire framework is trained in an end-to-end, pixel-to-pixel fashion. We evaluate our method on two histopathological image datasets with different tissue and stain preparations, and experimental results demonstrate that our method outperforms other recent state-of-the-art models in nucleus identification. AVAILABILITY AND IMPLEMENTATION The source code of our method is freely available at https://github.com/qjxjy123/DonRabbit. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Jiayu Xu
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Zhenting Zhang
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Shuyu Guo
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, 130033 Changchun, China
43
Chen W, Shen W, Gao L, Li X. Hybrid Loss-Constrained Lightweight Convolutional Neural Networks for Cervical Cell Classification. Sensors 2022; 22:3272. [PMID: 35590961 PMCID: PMC9101629 DOI: 10.3390/s22093272] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 04/11/2022] [Accepted: 04/21/2022] [Indexed: 02/04/2023]
Abstract
Artificial intelligence (AI) technologies have resulted in remarkable achievements and conferred massive benefits to computer-aided systems in medical imaging. However, the worldwide usage of AI-based automation-assisted cervical cancer screening systems is hindered by computational cost and resource limitations. Thus, a highly economical and efficient model with enhanced classification ability is much more desirable. This paper proposes a hybrid loss function with label smoothing to improve the distinguishing power of lightweight convolutional neural networks (CNNs) for cervical cell classification. The results strengthen our confidence in hybrid loss-constrained lightweight CNNs, which can achieve satisfactory accuracy at much lower computational cost on the SIPaKMeD dataset. In particular, ShuffleNetV2 obtained a comparable classification result (96.18% accuracy, 96.30% precision, 96.23% recall, and 99.08% specificity) with only one-seventh of the memory usage, one-sixth of the parameters, and one-fiftieth of the total FLOPs of DenseNet-121 (96.79% accuracy). GhostNet achieved an improved classification result (96.39% accuracy, 96.42% precision, 96.39% recall, and 99.09% specificity) with one-half of the memory usage, one-quarter of the parameters, and one-fiftieth of the total FLOPs of DenseNet-121 (96.79% accuracy). The proposed lightweight CNNs are likely to lead to an easily applicable and cost-efficient automation-assisted system for cervical cancer diagnosis and prevention.
44
Shinde S, Kalbhor M, Wajire P. DeepCyto: a hybrid framework for cervical cancer classification by using deep feature fusion of cytology images. Math Biosci Eng 2022; 19:6415-6434. [PMID: 35730264 DOI: 10.3934/mbe.2022301] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Cervical cancer is the second most common cancer in women. It affects the cervix, the lower portion of the uterus. The most preferred diagnostic test for screening cervical cancer is the Pap smear test. The Pap smear is a time-consuming test, as it requires detailed analysis by expert cytologists; cytologists can screen around 100 to 1000 slides depending upon the availability of advanced equipment. For this reason, an artificial intelligence (AI)-based computer-aided diagnosis system for the classification of Pap smear images is needed. Some AI-based solutions have been proposed in the literature, but an effective and accurate system is still under research. In this paper, a deep learning-based hybrid methodology, DeepCyto, is proposed for the classification of Pap smear cytology images. DeepCyto extracts feature fusion vectors from pre-trained models and passes them to two workflows. Workflow-1 applies principal component analysis and a machine learning ensemble to classify the Pap smear images. Workflow-2 takes the feature fusion vectors as input and applies an artificial neural network for classification. The experiments are performed on three benchmark datasets, namely Herlev, SIPaKMeD, and LBCs. The performance measures of accuracy, precision, recall, and F1-score are used to evaluate the effectiveness of DeepCyto. The experimental results show that Workflow-2 gives the best performance on all three datasets, even with a smaller number of epochs. Also, the performance of DeepCyto Workflow-2 on the multi-cell images of LBCs is better than on the single-cell images of the other datasets. Thus, DeepCyto is an efficient method for accurate feature extraction as well as Pap smear image classification.
Affiliation(s)
- Swati Shinde
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India
- Madhura Kalbhor
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India
- Pankaj Wajire
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India
45
Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111 PMCID: PMC9068546 DOI: 10.21037/gs-22-11] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/04/2022] [Indexed: 01/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting the prognosis. However, traditional microscopic examination of tissue sections is time consuming and labor intensive, with unavoidable subjective variations. Deep learning (DL) can evaluate and extract the most important information from images with less need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of and challenges to the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by title, abstract, and even full text to determine their true relevance. References from the searched articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has obtained impressive achievements in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still in the experimental stage; improving their economic efficiency and clinical adaptability remains the focus of further research.
CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine work, but further studies are needed to realize the digitization and automation of clinical pathology.
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
46
Wang X, Kittaka M, He Y, Zhang Y, Ueki Y, Kihara D. OC_Finder: Osteoclast Segmentation, Counting, and Classification Using Watershed and Deep Learning. Front Bioinform 2022; 2:819570. [PMID: 35474753 PMCID: PMC9038109 DOI: 10.3389/fbinf.2022.819570] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Osteoclasts are multinucleated cells that exclusively resorb bone matrix proteins and minerals on the bone surface. They differentiate from monocyte/macrophage lineage cells in the presence of osteoclastogenic cytokines such as the receptor activator of nuclear factor-κB ligand (RANKL) and stain positive for tartrate-resistant acid phosphatase (TRAP). In vitro osteoclast formation assays are commonly used to assess the capacity of osteoclast precursor cells to differentiate into osteoclasts, wherein the number of TRAP-positive multinucleated cells is counted as osteoclasts. Osteoclasts are manually identified on cell culture dishes by human eyes, which is a labor-intensive process. Moreover, the manual procedure is not objective and results in a lack of reproducibility. To accelerate the process and reduce the workload of counting osteoclasts, we developed OC_Finder, a fully automated system for identifying osteoclasts in microscopic images. OC_Finder consists of cell image segmentation with a watershed algorithm and cell classification using deep learning. OC_Finder detected osteoclasts differentiated from wild-type and Sh3bp2KI/+ precursor cells with 99.4% accuracy for segmentation and 98.1% accuracy for classification. The number of osteoclasts classified by OC_Finder was at the same accuracy level as manual counting by a human expert. OC_Finder also showed consistent performance on additional datasets collected with different microscopes, with different settings, by different operators. Together, the successful development of OC_Finder suggests that deep learning is a useful tool for prompt, accurate, and unbiased classification and detection of specific cell types in microscopic images.
Affiliation(s)
- Xiao Wang
- Department of Computer Science, Purdue University, West Lafayette, IN, United States
- Mizuho Kittaka
- Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, IN, United States
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, United States
- Yilin He
- School of Software Engineering, Shandong University, Jinan, China
- Yiwei Zhang
- Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Yasuyoshi Ueki
- Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, IN, United States
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, United States
- Daisuke Kihara
- Department of Computer Science, Purdue University, West Lafayette, IN, United States
- Department of Biological Sciences, Purdue University, West Lafayette, IN, United States
- Purdue Cancer Research Institute, Purdue University, West Lafayette, IN, United States
- *Correspondence: Daisuke Kihara,
47
A Fast Hybrid Classification Algorithm with Feature Reduction for Medical Images. Appl Bionics Biomech 2022; 2022:1367366. [PMID: 35360292 PMCID: PMC8964210 DOI: 10.1155/2022/1367366] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 03/05/2022] [Indexed: 11/18/2022] Open
Abstract
In this paper, we introduce a fast hybrid fuzzy classification algorithm with feature reduction for medical images. We incorporate the quantum-based grasshopper computing algorithm (QGH) with feature extraction using the fuzzy clustering technique (C-means). QGH integrates quantum computing into machine learning and intelligence applications. The objective of our technique is to apply the QGH method specifically to cervical cancer detection based on image processing. Many features found in the cells imaged in the Pap smear lab test, such as color, geometry, and texture, are crucial in cancer diagnosis. Our proposed technique is based on the extraction of the best features from more than 2600 public Pap smear images and further applies a feature reduction technique to reduce the feature space. Performance evaluation of our approach measures the influence of the extracted features on classification precision in two experimental setups. The first setup uses all the extracted features, which leads to classification without feature bias. The second setup is a fusion technique that utilizes QGH with the fuzzy C-means algorithm to choose the best features. In these setups, we assess accuracy based on the selection of the best features and on the different categories of the cancer. In the last setup, we utilize the fusion technique together with statistical techniques to establish qualitative agreement with the feature selection across the experimental setups.
48
Tao X, Chu X, Guo B, Pan Q, Ji S, Lou W, Lv C, Xie G, Hua K. Scrutinizing high-risk patients from ASC-US cytology via a deep learning model. Cancer Cytopathol 2022; 130:407-414. [PMID: 35290728 DOI: 10.1002/cncy.22560] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 11/07/2021] [Accepted: 11/09/2021] [Indexed: 12/14/2022]
Abstract
BACKGROUND Atypical squamous cells of undetermined significance (ASC-US) is the most frequent but ambiguous abnormal Papanicolaou (Pap) interpretation and is generally triaged by high-risk human papillomavirus (hrHPV) testing before colposcopy. This study aimed to evaluate the performance of an artificial intelligence (AI)-based triage system for predicting cervical intraepithelial neoplasia 2+ lesions (CIN2+) from ASC-US cytology. METHODS More than 60,000 images were used to train the proposed deep learning-based ASC-US triage system, in which both cell-level and slide-level information were extracted. In total, 1967 consecutive ASC-US Paps from 2017 to 2019 were included in this study. Histological follow-ups were retrieved to compare the triage performance between the AI system and hrHPV in 622 patients with simultaneous hrHPV testing. RESULTS In the triage of women with ASC-US cytology for CIN2+, our system attained sensitivity equivalent to (92.9%; 95% confidence interval [CI], 75.0%-98.8%) and specificity higher than (49.7%; 95% CI, 45.6%-53.8%) hrHPV testing (sensitivity: 89.3%; 95% CI, 70.6%-97.2%; specificity: 34.3%; 95% CI, 30.6%-38.3%), without requiring additional patient examination or testing. Additionally, the independence of this system from hrHPV testing (κ = 0.138) indicates that the two methods could serve as alternative ways to triage ASC-US. CONCLUSION This de novo deep learning-based system can triage ASC-US cytology for CIN2+ with performance superior to hrHPV testing and without incurring additional expenses.
Affiliation(s)
- Xiang Tao
- Department of Pathology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Xiao Chu
- Ping An Healthcare Technology, Shanghai, China
- Bingxue Guo
- Ping An Healthcare Technology, Shanghai, China
- Qiuzhi Pan
- Department of Pathology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Shuting Ji
- Department of Pathology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Wenjie Lou
- Ping An Healthcare Technology, Shanghai, China
- Guotong Xie
- Ping An Healthcare Technology, Shanghai, China; Ping An Healthcare and Technology Company Limited, Shanghai, China; Ping An International Smart City Technology Company, Shanghai, China
- Keqin Hua
- Department of Obstetrics and Gynecology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
49
Pantanowitz L. Improving the Pap test with artificial intelligence. Cancer Cytopathol 2022; 130:402-404. [PMID: 35291050 DOI: 10.1002/cncy.22561] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 01/31/2022] [Indexed: 11/07/2022]
Affiliation(s)
- Liron Pantanowitz
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
50
Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front Oncol 2022; 12:851367. [PMID: 35359358 PMCID: PMC8963491 DOI: 10.3389/fonc.2022.851367] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Accepted: 02/10/2022] [Indexed: 12/11/2022] Open
Abstract
Cervical cancer remains a leading cause of cancer death in women, seriously threatening their physical and mental health. With early screening and diagnosis, it is an easily preventable cancer. Although technical advancements have significantly improved the early diagnosis of cervical cancer, accurate diagnosis remains difficult owing to various factors. In recent years, artificial intelligence (AI)-based medical diagnostic applications have been on the rise and have excellent applicability in the screening and diagnosis of cervical cancer. Their benefits include reduced time consumption, reduced need for professional and technical personnel, and freedom from bias owing to subjective factors. We, thus, aim to discuss how AI can be used in cervical cancer screening and diagnosis, particularly to improve the accuracy of early diagnosis. The application and challenges of using AI in the diagnosis and treatment of cervical cancer are also discussed.
Affiliation(s)
- Xin Hou
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Guangyang Shen
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Liqiang Zhou
- Cancer Centre and Center of Reproduction, Development and Aging, Faculty of Health Sciences, University of Macau, Macau, Macau SAR, China
- Yinuo Li
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Tian Wang
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Xiangyi Ma
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- *Correspondence: Xiangyi Ma,