1. Song Y, Zou J, Choi KS, Lei B, Qin J. Cell classification with worse-case boosting for intelligent cervical cancer screening. Med Image Anal 2024; 91:103014. [PMID: 37913578] [DOI: 10.1016/j.media.2023.103014]
Abstract
Cell classification underpins intelligent cervical cancer screening, a cytology examination that effectively decreases both the morbidity and mortality of cervical cancer. This task, however, is rather challenging, mainly due to the difficulty of collecting a training dataset sufficiently representative of the unseen test data, as cells' appearance and shape vary widely across cancerous statuses. As a result, a classifier, though properly trained, often misclassifies cells that are underrepresented in the training dataset, eventually leading to a wrong screening result. To address this, we propose a new learning algorithm, called worse-case boosting, that enables classifiers to learn effectively from under-representative datasets in cervical cell classification. The key idea is to learn more from worse-case data, i.e., data for which the classifier has a larger gradient norm than for other training data and which are therefore more likely to be underrepresented, by dynamically assigning them more training iterations and larger loss weights to boost the classifier's generalizability on underrepresented data. We realize this idea by sampling worse-case data according to the gradient norm information and then enhancing their loss values to update the classifier. We demonstrate the effectiveness of this learning algorithm on two publicly available cervical cell classification datasets (to the best of our knowledge, the two largest ones), observing positive results (a 4% accuracy improvement) in extensive experiments. The source codes are available at: https://github.com/YouyiSong/Worse-Case-Boosting.
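The abstract describes the core loop only at a high level: sample worse-case data by gradient norm, then enhance their loss. The NumPy sketch below illustrates that sampling-and-reweighting step; it is not the authors' released code, and the softmax conversion of norms to probabilities, the median threshold, and the `boost` factor of 2 are all assumptions made for illustration.

```python
import numpy as np

def worse_case_probs(grad_norms, temperature=1.0):
    """Convert per-sample gradient norms into sampling probabilities:
    samples with larger gradient norms (more likely under-represented)
    are drawn more often (softmax over the norms)."""
    z = np.asarray(grad_norms, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def sample_worse_case_batch(rng, grad_norms, batch_size, boost=2.0):
    """Draw a batch biased toward worse-case data; return (indices,
    per-sample loss weights). Sampled examples whose gradient norm
    exceeds the dataset median get the larger loss weight `boost`."""
    grad_norms = np.asarray(grad_norms, dtype=float)
    p = worse_case_probs(grad_norms)
    idx = rng.choice(len(grad_norms), size=batch_size, replace=False, p=p)
    weights = np.where(grad_norms[idx] > np.median(grad_norms), boost, 1.0)
    return idx, weights

rng = np.random.default_rng(0)
norms = [0.1, 0.2, 5.0, 4.0, 0.3, 6.0]  # toy per-sample gradient norms
idx, w = sample_worse_case_batch(rng, norms, batch_size=3)
```

In an actual training loop the weights `w` would multiply the per-sample losses before backpropagation, so the under-represented samples both appear more often and contribute more to each update.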
Affiliation(s)
- Youyi Song
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Zou
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Kup-Sze Choi
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Baiying Lei
- Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
2. Liang Y, Feng S, Liu Q, Kuang H, Liu J, Liao L, Du Y, Wang J. Exploring Contextual Relationships for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2023; 27:4086-4097. [PMID: 37192032] [DOI: 10.1109/jbhi.2023.3276919]
Abstract
Cervical abnormal cell detection is a challenging task, as the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists routinely take surrounding cells as references to identify its abnormality. To mimic this behavior, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both the contextual relationships between cells and those between cells and the global image are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish a strong baseline using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate our RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments conducted on a large cervical cell detection dataset reveal that introducing either RRAM or GRAM achieves better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.
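RRAM and GRAM are attention mechanisms over RoI features. The sketch below shows only the generic scaled dot-product attention operation that such modules build on, in plain NumPy with assumed shapes; it is not the paper's implementation of either module.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query (an RoI feature) is
    refined into a similarity-weighted sum of the value vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # each row sums to 1
    return w @ V, w

# 5 RoI feature vectors attending to each other (an RRAM-like setting);
# in a GRAM-like setting, K and V would come from global image features.
rois = np.random.default_rng(0).normal(size=(5, 16))
refined, attn = attention(rois, rois, rois)
```

Each refined RoI feature is thus a mixture of the features of related proposals, which is the sense in which "surrounding cells serve as references" for classifying a given cell.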
3. Mustafa WA, Ismail S, Mokhtar FS, Alquran H, Al-Issa Y. Cervical Cancer Detection Techniques: A Chronological Review. Diagnostics (Basel) 2023; 13:1763. [PMID: 37238248] [DOI: 10.3390/diagnostics13101763]
Abstract
Cervical cancer is known as a major health problem globally, with high mortality and incidence rates. Over the years, there have been significant advancements in cervical cancer detection techniques, leading to improved accuracy, sensitivity, and specificity. This article provides a chronological review of cervical cancer detection techniques, from the traditional Pap smear test to the latest computer-aided detection (CAD) systems. The traditional method for cervical cancer screening is the Pap smear test, which consists of examining cervical cells under a microscope for abnormalities. However, this method is subjective and may miss precancerous lesions, leading to false negatives and delayed diagnosis. Therefore, there has been growing interest in developing CAD methods to enhance cervical cancer screening, although the effectiveness and reliability of CAD systems are still being evaluated. A systematic review of the literature was performed using the Scopus database to identify relevant studies on cervical cancer detection techniques published between 1996 and 2022. The search terms used included "(cervix OR cervical) AND (cancer OR tumor) AND (detect* OR diagnosis)". Studies were included if they reported on the development or evaluation of cervical cancer detection techniques, including traditional methods and CAD systems. The results of the review show that CAD technology for cervical cancer detection has come a long way since it was introduced in the 1990s. Early CAD systems utilized image processing and pattern recognition techniques to analyze digital images of cervical cells, with limited success due to low sensitivity and specificity. In the early 2000s, machine learning (ML) algorithms were introduced to the CAD field for cervical cancer detection, allowing more accurate and automated analysis of digital images of cervical cells. ML-based CAD systems have shown promise in several studies, with improved sensitivity and specificity reported compared to traditional screening methods. In summary, this chronological review highlights the significant advancements made in this field over the past few decades. ML-based CAD systems have shown promise for improving the accuracy and sensitivity of cervical cancer detection. The Hybrid Intelligent System for Cervical Cancer Diagnosis (HISCCD) and the Automated Cervical Screening System (ACSS) are two of the most promising CAD systems, but deeper validation and research are required before they can be broadly accepted. Continued innovation and collaboration in this field may help enhance cervical cancer detection and ultimately reduce the disease's burden on women worldwide.
Affiliation(s)
- Wan Azani Mustafa
- Faculty of Electrical Engineering Technology, Campus Pauh Putra, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Advanced Computing (AdvComp), Centre of Excellence (CoE), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Shahrina Ismail
- Faculty of Science and Technology, Universiti Sains Islam Malaysia (USIM), Bandar Baru Nilai 71800, Negeri Sembilan, Malaysia
- Fahirah Syaliza Mokhtar
- Faculty of Business, Economy and Social Development, Universiti Malaysia Terengganu, Kuala Nerus 21300, Terengganu, Malaysia
- Hiam Alquran
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, 556, Irbid 21163, Jordan
- Yazan Al-Issa
- Department of Computer Engineering, Yarmouk University, Irbid 22110, Jordan
4. Kakotkin VV, Semina EV, Zadorkina TG, Agapov MA. Prevention Strategies and Early Diagnosis of Cervical Cancer: Current State and Prospects. Diagnostics (Basel) 2023; 13:610. [PMID: 36832098] [PMCID: PMC9955852] [DOI: 10.3390/diagnostics13040610]
Abstract
Cervical cancer ranks third among all new cancer cases and causes of cancer deaths in females. The paper provides an overview of cervical cancer prevention strategies employed in different regions, with incidence and mortality rates ranging from high to low. It assesses the effectiveness of approaches proposed by national healthcare systems by analysing data published in the National Library of Medicine (PubMed) since 2018 featuring the following keywords: "cervical cancer prevention", "cervical cancer screening", "barriers to cervical cancer prevention", "premalignant cervical lesions" and "current strategies". The WHO's 90-70-90 global strategy for cervical cancer prevention and early screening has proven effective in different countries, in both mathematical models and clinical practice. The data analysis carried out within this study identified promising approaches to cervical cancer screening and prevention that can further enhance the effectiveness of the existing WHO strategy and national healthcare systems. One such approach is the application of AI technologies for detecting precancerous cervical lesions and choosing treatment strategies. As such studies show, the use of AI can not only increase detection accuracy but also ease the burden on primary care.
Affiliation(s)
- Viktor V. Kakotkin
- Scientific and Educational Cluster MEDBIO, Immanuel Kant Baltic Federal University, A. Nevskogo St., 14, 236041 Kaliningrad, Russia
- Ekaterina V. Semina
- Scientific and Educational Cluster MEDBIO, Immanuel Kant Baltic Federal University, A. Nevskogo St., 14, 236041 Kaliningrad, Russia
- Tatiana G. Zadorkina
- Kaliningrad Regional Centre for Specialised Medical Care, Barnaulskaia Street, 6, 236006 Kaliningrad, Russia
- Mikhail A. Agapov
- Scientific and Educational Cluster MEDBIO, Immanuel Kant Baltic Federal University, A. Nevskogo St., 14, 236041 Kaliningrad, Russia
- Correspondence: ; Tel.: +7-(4012)-59-55-95
5. Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333] [DOI: 10.1016/j.media.2022.102691]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images with computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications in cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
6. Basu S, Gupta M, Rana P, Gupta P, Arora C. RadFormer: Transformers with global-local attention for interpretable and accurate Gallbladder Cancer detection. Med Image Anal 2023; 83:102676. [PMID: 36455424] [DOI: 10.1016/j.media.2022.102676]
Abstract
We propose a novel deep neural network architecture to learn interpretable representations for medical image analysis. Our architecture generates global attention for the region of interest and then learns bag-of-words style deep feature embeddings with local attention. The global and local feature maps are combined using a contemporary transformer architecture for highly accurate Gallbladder Cancer (GBC) detection from Ultrasound (USG) images. Our experiments indicate that the detection accuracy of our model beats even human radiologists, advocating its use as a second reader for GBC diagnosis. The bag-of-words embeddings allow our model to be probed for interpretable explanations of GBC detection that are consistent with those reported in the medical literature. We show that the proposed model not only helps explain the decisions of neural network models but also aids in the discovery of new visual features relevant to the diagnosis of GBC. Source code is available at https://github.com/sbasu276/RadFormer.
Affiliation(s)
- Soumen Basu
- Department of Computer Science, Indian Institute of Technology Delhi, New Delhi, India
- Mayank Gupta
- Department of Computer Science, Indian Institute of Technology Delhi, New Delhi, India
- Pratyaksha Rana
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education & Research, Chandigarh, India
- Pankaj Gupta
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education & Research, Chandigarh, India
- Chetan Arora
- Department of Computer Science, Indian Institute of Technology Delhi, New Delhi, India
7. Xu C, Li M, Li G, Zhang Y, Sun C, Bai N. Cervical Cell/Clumps Detection in Cytology Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:2477. [PMID: 36292166] [PMCID: PMC9600700] [DOI: 10.3390/diagnostics12102477]
Abstract
Cervical cancer is one of the most common and deadliest cancers among women and poses a serious health risk. Automated screening and diagnosis of cervical cancer will help improve the accuracy of cervical cell screening. In recent years, many studies have applied deep learning methods to automatic cervical cancer screening and diagnosis. Deep-learning-based Convolutional Neural Network (CNN) models require large amounts of data for training, but large annotated cervical cell datasets are difficult to obtain. Some studies have used transfer learning to handle this problem. However, those studies applied the same transfer-learning method, namely initializing the backbone network with an ImageNet pre-trained model, to two different types of tasks: the detection and the classification of cervical cells/clumps. Considering the differences between detection and classification tasks, this study proposes using COCO pre-trained models for cervical cell/clump detection tasks to better handle the limited-data problem at training time. To further improve detection performance, we conducted multi-scale training on top of transfer learning, adapted to the actual characteristics of the dataset. Considering the effect of the bounding-box loss on detection precision, we analyzed how different bounding-box losses affect the model's detection performance and demonstrated that using a loss function consistent with the type of pre-trained model helps improve performance. We also analyzed the effect of different datasets' mean and std on model performance, and showed that detection performance was optimal when using the mean and std of the cervical cell dataset used in the current study. Ultimately, with a ResNet50 backbone, the network model achieves a mean Average Precision (mAP) of 61.6% and an Average Recall (AR) of 87.7%. Compared to the previous values of 48.8% and 64.0% on the same dataset, detection performance is significantly improved, by 12.8 and 23.7 percentage points, respectively.
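The dataset-statistics experiment described above amounts to computing the target dataset's own per-channel mean and std and normalizing with them rather than with ImageNet's. A minimal sketch, assuming H x W x C images already scaled to [0, 1] (the function names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def channel_mean_std(images):
    """Per-channel mean and std over a dataset of H x W x C images
    (pixel values assumed scaled to [0, 1])."""
    stacked = np.stack(images).astype(np.float64)  # shape (N, H, W, C)
    return stacked.mean(axis=(0, 1, 2)), stacked.std(axis=(0, 1, 2))

def normalize(image, mean, std):
    """Standardize one image with the dataset's own statistics."""
    return (image - mean) / std

# toy "dataset" of three uniform RGB images
imgs = [np.full((4, 4, 3), v) for v in (0.2, 0.4, 0.6)]
mean, std = channel_mean_std(imgs)
out = normalize(imgs[0], mean, std)
```

The same two functions applied to a cervical cell dataset would produce the dataset-specific statistics whose use the study found optimal.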
Affiliation(s)
- Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Chengjie Sun
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
8. Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:bbac367. [PMID: 36124675] [PMCID: PMC9677480] [DOI: 10.1093/bib/bbac367]
Abstract
In common medical procedures, the time-consuming and expensive process of obtaining test results plagues doctors and patients alike. Digital pathology research allows computational technologies to be used to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more timely and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review examines the most popular image data, hematoxylin-eosin stained tissue slide images, as a strategic avenue for addressing the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology plays in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao (corresponding author)
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao (corresponding author)
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail:
9. Gutiérrez-Enríquez SO, Guerrero-Zacarías MC, Oros-Ovalle C, Terán-Figueroa Y, Acuña-Aradillas JM. Computer System for the Capture and Preparation of Cytopathological Reports for Cervical Cancer Detection and His Utility in Training for Health Personnel. Eur J Investig Health Psychol Educ 2022; 12:1323-1333. [PMID: 36135230] [PMCID: PMC9498205] [DOI: 10.3390/ejihpe12090092]
Abstract
Health information systems and training are tools that support process management. The current study describes the results of implementing a technological innovation in the process of capturing and preparing cytopathological reports. The electronic system was structured based on national standards for cervical cancer control; PHP was used to build the software and MySQL for the database. All health personnel assigned to the cytology department participated, along with a pathologist, who created the records of the patients who came for cervical cytology to a university health center in San Luis Potosi, Mexico. The system was evaluated based on indicators of structure, process, and results. Structure: compliance with the official Mexican regulations for the registration of cervical cancer and electronic health information systems. Process: all records were legible and accurate, with varying percentages of completeness in the patient identification sections (46%) and alternate contact data (80%). Result: percentages above 80% were obtained for the satisfaction of the professionals who used the system. The system was effective, as it yielded legible and accurate data that made the process of capturing information and delivering cervical screening results more efficient and faster.
Affiliation(s)
- Cuauhtémoc Oros-Ovalle
- Department of Pathological Anatomy, Central Hospital “Dr. Ignacio Morones Prieto”, San Luis Potosi 78290, Mexico
- Yolanda Terán-Figueroa
- Faculty of Nursing and Nutrition, Autonomous University of San Luis Potosi, San Luis Potosi 78290, Mexico
10. Zhang X, Lee VC, Rong J, Lee JC, Liu F. Deep convolutional neural networks in thyroid disease detection: A multi-classification comparison by ultrasonography and computed tomography. Comput Methods Programs Biomed 2022; 220:106823. [PMID: 35489145] [DOI: 10.1016/j.cmpb.2022.106823]
Abstract
BACKGROUND AND OBJECTIVE: As one of the largest endocrine organs in the human body, the thyroid gland regulates daily metabolism, and early detection of thyroid disease leads to reduced mortality rates. The diagnosis of thyroid disease is usually made by radiologists and pathologists and heavily relies on their experience and expertise. To mitigate human false-positive diagnostic rates, this paper shows that deep-learning-driven techniques yield promising performance for the automatic detection of thyroid diseases, offering clinicians assistance in diagnostic decision-making. METHOD: This research study is the first of its kind to adopt two pre-operative medical image modalities for multi-classifying thyroid disease types (i.e., normal, thyroiditis, cystic, multi-nodular goiter, adenoma, and cancer). Using a current state-of-the-art deep convolutional neural network (CNN) architecture, this study builds a thyroid disease diagnostic model for distinguishing among the disease types. RESULTS: The model obtains unprecedented performance on both medical image sets, reaching an accuracy of 0.972 for ultrasound images and 0.942 for computed tomography (CT) scans. CONCLUSION: The experimental results illustrate that the selected CNN can be adapted to both image modalities, indicating the feasibility of the deep learning model and motivating its further application in clinics.
Affiliation(s)
- Xinyu Zhang
- Department of Data Science and AI, Faculty of IT, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Vincent CS Lee
- Department of Data Science and AI, Faculty of IT, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Jia Rong
- Department of Data Science and AI, Faculty of IT, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- James C Lee
- Monash University Endocrine Surgery Unit, Alfred Hospital, Melbourne, VIC 3004, Australia; Department of Surgery, Monash University, Melbourne, VIC 3168, Australia
- Feng Liu
- West China Hospital of Sichuan University, Chengdu City, Sichuan Province 332001, China
11. Alias NA, Mustafa WA, Jamlos MA, Alkhayyat A, Ab Rahman KS, Malik RQ. Improvement method for cervical cancer detection: A comparative analysis. Oncol Res 2021. [DOI: 10.32604/or.2022.025897]