1
Ledwaba L, Saidu R, Malila B, Kuhn L, Mutsvangwa TE. Automated analysis of digital medical images in cervical cancer screening: A systematic review. medRxiv [Preprint] 2024:2024.09.27.24314466. PMID: 39399017; PMCID: PMC11469345; DOI: 10.1101/2024.09.27.24314466.
Abstract
Background: Cervical cancer screening programs are poorly implemented in low- and middle-income countries (LMICs) owing to a shortage of specialists and expensive diagnostic infrastructure. To address these implementation barriers, researchers have been developing low-cost portable devices and automating image analysis for decision support. However, because the knowledge base is growing rapidly, the implementation status of novel imaging devices and algorithms in cervical cancer screening has become unclear. The aim of this project was to provide a systematic review summarizing the full range of automated technology systems used in cervical cancer screening. Method: Academic databases were searched and the results were screened by two independent reviewers. Study selection was based on inclusion and exclusion criteria outlined using a Population, Intervention, Comparator and Outcome framework. Results: Seventeen studies reported algorithms developed with source images from mobile devices, viz. the Pocket Colposcope, MobileODT EVA Colpo, smartphone cameras, a smartphone-based endoscope system, the Smartscope, mHRME, and PiHRME, while 56 studies reported algorithms using source images from conventional/commercial acquisition devices. Most interventions were at the feasibility stage of development, undergoing initial clinical validation. Conclusion: Researchers have demonstrated superior prediction performance of computer-aided diagnosis (CAD) in colposcopy (accuracies >80%) versus manual analysis (accuracies <70%). Furthermore, this review summarizes evidence on algorithms being developed around portable devices to circumvent the constraints, such as expensive diagnostic infrastructure, that prohibit wider implementation in LMICs. However, clinical validation of novel devices with CAD has not yet been adequately carried out in LMICs.
Affiliation(s)
- Leshego Ledwaba
- Division of Biomedical Engineering, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town, Western Cape, South Africa
- Rakiya Saidu
- Obstetrics and Gynaecology, Groote Schuur Hospital/University of Cape Town, Cape Town, Western Cape, South Africa
- Bessie Malila
- Division of Biomedical Engineering, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town, Western Cape, South Africa
- Louise Kuhn
- Gertrude H. Sergievsky Center, Vagelos College of Physicians and Surgeons; and Department of Epidemiology, Mailman School of Public Health, Columbia University Irving Medical Center, New York, New York
- Tinashe E.M. Mutsvangwa
- Division of Biomedical Engineering, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town, Western Cape, South Africa
2
Vaickus LJ, Kerr DA, Velez Torres JM, Levy J. Artificial Intelligence Applications in Cytopathology: Current State of the Art. Surg Pathol Clin 2024;17:521-531. PMID: 39129146; DOI: 10.1016/j.path.2024.04.011.
Abstract
The practice of cytopathology has been significantly refined in recent years, largely through the creation of consensus rule sets for the diagnosis of particular specimens (Bethesda, Milan, Paris, and so forth). In general, these diagnostic systems have focused on reducing intraobserver variance, removing nebulous/redundant categories, reducing the use of "atypical" diagnoses, and promoting the use of quantitative scoring systems while providing a uniform language to communicate these results. Computational pathology is a natural offshoot of this process in that it promises 100% reproducible diagnoses rendered by quantitative processes that are free from many of the biases of human practitioners.
Affiliation(s)
- Louis J Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA.
- Darcy A Kerr
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA. https://twitter.com/darcykerrMD
- Jaylou M Velez Torres
- Department of Pathology and Laboratory Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Joshua Levy
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Cedars-Sinai Medical Center, 8700 Beverly Boulevard, Los Angeles, CA 90048, USA
3
Wang R, Li Q, Shi G, Li Q, Zhong D. A deep learning framework for predicting endometrial cancer from cytopathologic images with different staining styles. PLoS One 2024;19:e0306549. PMID: 39083516; PMCID: PMC11290691; DOI: 10.1371/journal.pone.0306549.
Abstract
Endometrial cancer screening is crucial for clinical treatment. Analysis of cytopathology images by cytopathologists is currently a popular screening method, but manual diagnosis is time-consuming and laborious. Deep learning can provide objective and efficient diagnostic guidance; however, endometrial cytopathology images often come from different medical centers with different staining styles, which decreases the generalization ability of deep learning models and leads to poor performance. This study presents a robust automated screening framework for endometrial cancer that can be applied to cytopathology images with different staining styles and provides an objective diagnostic reference for cytopathologists, thus contributing to clinical treatment. We collected and built the XJTU-EC dataset, the first cytopathology dataset that includes both segmentation and classification labels, and we propose an efficient two-stage framework for adapting to different staining styles and screening endometrial cancer at the cellular level. In the first stage, a novel CM-UNet with a channel attention (CA) module and a multi-level semantic supervision (MSS) module is used to segment cell clumps; it can ignore staining variance and focus on extracting semantic information for segmentation. In the second stage, we propose ECRNet, a robust and effective classification algorithm based on contrastive learning; through momentum-based updating and labeled memory banks, it reduces most false-negative results. On the XJTU-EC dataset, CM-UNet achieves excellent segmentation performance, and ECRNet obtains an accuracy of 98.50%, a precision of 99.32% and a sensitivity of 97.67% on the test set, outperforming other competitive classical models. Our method robustly predicts endometrial cancer in cytopathology images with different staining styles, which will further advance research in endometrial cancer screening and provide early diagnosis for patients. The code will be available on GitHub.
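The abstract names a channel attention (CA) module inside CM-UNet but does not reproduce its design. As a rough illustration only, the sketch below shows a generic squeeze-and-excitation-style channel attention block in PyTorch of the kind such a CA module typically builds on; the class name, reduction ratio, and tensor sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: re-weights feature
    channels with a learned gate so the network can emphasize stain-invariant
    semantic channels. Illustrative sketch, not the CM-UNet module."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average per channel
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP producing gates
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gate = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gate                          # channel-wise re-weighting

# Example: attach to a 64-channel encoder feature map.
features = torch.randn(2, 64, 128, 128)
attended = ChannelAttention(64)(features)
print(attended.shape)  # torch.Size([2, 64, 128, 128])
```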
Affiliation(s)
- Ruijie Wang
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Qing Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Guizhi Shi
- Laboratory Animal Center, Institute of Biophysics, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences, Beijing, China
- Qiling Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Dexing Zhong
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Pazhou Laboratory, Guangzhou, P.R. China
- Research Institute of Xi’an Jiaotong University, Zhejiang, Hangzhou, P.R. China
4
Ke J, Liu K, Sun Y, Xue Y, Huang J, Lu Y, Dai J, Chen Y, Han X, Shen Y, Shen D. Artifact Detection and Restoration in Histology Images With Stain-Style and Structural Preservation. IEEE Trans Med Imaging 2023;42:3487-3500. PMID: 37352087; DOI: 10.1109/tmi.2023.3288940.
Abstract
Artifacts in histology images may encumber the accurate interpretation of medical information and cause misdiagnosis; moreover, prepending a manual quality-control step for artifacts considerably decreases the degree of automation. To close this gap, we propose a methodical pre-processing framework that detects and restores artifacts, minimizing their impact on downstream AI diagnostic tasks. First, the artifact recognition network AR-Classifier differentiates common artifacts (e.g., tissue folds, marking dye, tattoo pigment, spots, and out-of-focus regions) from normal tissue and catalogs artifact patches by their restorability. Then, the succeeding artifact restoration network AR-CycleGAN performs de-artifact processing in which stain styles and tissue structures are maximally retained. We construct a benchmark for performance evaluation, curated from both clinically collected WSIs and public datasets of colorectal and breast cancer. The proposed components are compared with state-of-the-art methods and comprehensively evaluated with multiple metrics across multiple tasks, including artifact classification, artifact restoration, and the downstream diagnostic tasks of tumor classification and nuclei segmentation. The proposed system allows fully automated, deep learning-based histology image analysis without human intervention. Moreover, its structure-independent characteristic enables processing of various artifact subtypes. The source code and data are available at https://github.com/yunboer/AR-classifier-and-AR-CycleGAN.
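As a rough sketch of the detect-then-restore workflow described above, the snippet below routes image patches through an artifact classifier and sends only the patches flagged as restorable through a restoration generator. The `artifact_classifier` and `restoration_gan` callables and the label scheme are placeholders standing in for AR-Classifier and AR-CycleGAN, whose actual interfaces are defined in the linked repository.

```python
from typing import Callable, List, Tuple
import torch

# Placeholder label scheme: the classifier assigns each patch a class index;
# the exact artifact categories follow the paper, not this sketch.
NORMAL, RESTORABLE_ARTIFACT, IRRECOVERABLE_ARTIFACT = 0, 1, 2

def preprocess_patches(
    patches: torch.Tensor,                                         # (N, 3, H, W) histology patches
    artifact_classifier: Callable[[torch.Tensor], torch.Tensor],   # stand-in for AR-Classifier
    restoration_gan: Callable[[torch.Tensor], torch.Tensor],       # stand-in for AR-CycleGAN generator
) -> Tuple[torch.Tensor, List[int]]:
    """Return patches ready for downstream diagnosis plus indices of patches
    dropped as irrecoverable. Assumes at least one patch survives filtering."""
    labels = artifact_classifier(patches).argmax(dim=1)  # per-patch artifact decision
    cleaned, dropped = [], []
    for i, (patch, label) in enumerate(zip(patches, labels)):
        if label == NORMAL:
            cleaned.append(patch)                                   # pass through untouched
        elif label == RESTORABLE_ARTIFACT:
            cleaned.append(restoration_gan(patch.unsqueeze(0)).squeeze(0))  # de-artifact
        else:
            dropped.append(i)                                       # exclude from diagnosis
    return torch.stack(cleaned), dropped
```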
5
Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evolving Systems 2023;15:1-46. PMID: 38625364; PMCID: PMC9987406; DOI: 10.1007/s12530-023-09491-3.
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting nuclei is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it difficult to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting many researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
6
Deep learning for computational cytology: A survey. Med Image Anal 2023;84:102691. PMID: 36455333; DOI: 10.1016/j.media.2022.102691.
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images using computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications on cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and their comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nucleus or cell detection and segmentation. Finally, we discuss current challenges and potential research directions in computational cytology.
7
Using deep learning to predict survival outcome in non-surgical cervical cancer patients based on pathological images. J Cancer Res Clin Oncol 2023. PMID: 36653539; PMCID: PMC10356676; DOI: 10.1007/s00432-022-04446-8.
Abstract
PURPOSE We analyzed clinical features and representative HE-stained pathology images to predict 5-year overall survival (OS) in cervical cancer patients via a deep-learning approach, in order to assist oncologists in designing optimal treatment strategies. METHODS We retrospectively collected 238 non-surgical cervical cancer patients treated with radiochemotherapy from 2014 to 2017 and randomly divided them into a training set (n = 165) and a test set (n = 73). Deep features were extracted after segmenting each HE-stained image into patches of size 224 × 224. A Lasso-Cox model was constructed with the clinical data to predict 5-year OS, and model performance was evaluated with the C-index (with 95% CI), calibration curves, and ROC analysis. RESULTS Based on multivariate analysis, nomograms built from 2 of 11 clinical characteristics (C-index 0.68), 2 of 2048 pathomic features (C-index 0.74), and the combined clinical-pathomic model (C-index 0.83) predicted 5-year survival in the training set. In the test set, the clinical-pathomic model had an AUC of 0.750 (95% CI 0.540-0.959), compared with 0.729 (95% CI 0.551-0.909) for the clinical model and 0.703 (95% CI 0.487-0.919) for the pathomic model used alone. Based on nomogram scores, patients were divided into high-risk and low-risk groups, and the Kaplan-Meier survival curves of the two groups differed significantly. CONCLUSION We built a clinical-pathomic model to predict 5-year OS in non-surgical cervical cancer patients, which may be a promising method to improve the precision of personalized therapy.
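The nomogram above rests on a Lasso-penalized Cox regression over pooled clinical and deep (pathomic) features. A minimal sketch with the lifelines library is shown below; the toy data, column names, and penalty strength are illustrative assumptions, not values from the study.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy table: two clinical variables, two pathomic (deep) features,
# follow-up time in months, and an event indicator (1 = death).
df = pd.DataFrame({
    "stage":      [2, 3, 2, 4, 3, 2, 4, 3],
    "scc_ag":     [1.2, 5.4, 0.9, 7.1, 3.3, 1.0, 6.2, 2.8],
    "pathomic_1": [0.11, 0.62, 0.08, 0.71, 0.40, 0.15, 0.66, 0.35],
    "pathomic_2": [0.52, 0.13, 0.61, 0.09, 0.30, 0.55, 0.12, 0.28],
    "os_months":  [60, 18, 60, 9, 34, 57, 14, 41],
    "event":      [0, 1, 0, 1, 1, 0, 1, 0],
})

# L1-penalized (Lasso) Cox regression; penalizer and l1_ratio are arbitrary
# here and would normally be tuned by cross-validation.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="os_months", event_col="event")

print(cph.concordance_index_)               # Harrell's C-index on the fitted data
print(cph.summary[["coef", "exp(coef)"]])   # retained covariates and hazard ratios
```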
8
Sukegawa S, Tanaka F, Nakano K, Hara T, Yoshii K, Yamashita K, Ono S, Takabatake K, Kawai H, Nagatsuka H, Furuki Y. Effective deep learning for oral exfoliative cytology classification. Sci Rep 2022;12:13281. PMID: 35918498; PMCID: PMC9346110; DOI: 10.1038/s41598-022-17602-4.
Abstract
The use of sharpness-aware minimization (SAM) as an optimizer that achieves high performance for convolutional neural networks (CNNs) is attracting attention in various fields of deep learning. We used deep learning to perform classification diagnosis in oral exfoliative cytology and analyzed its performance, using SAM as the optimization algorithm to improve classification accuracy. Whole-slide images of oral exfoliative cytology specimens were cut into tiles and labeled by an oral pathologist. The CNN was VGG16, and stochastic gradient descent (SGD) and SAM were used as optimizers, each analyzed with and without a learning rate scheduler over 300 epochs. The performance metrics were accuracy, precision, recall, specificity, F1 score, AUC, and statistical and effect sizes. All optimizers performed better with the learning rate scheduler; in particular, SAM showed large effect sizes for accuracy (11.2) and AUC (11.0). SAM with a learning rate scheduler achieved the best performance of all models (AUC = 0.9328) and tended to suppress overfitting compared with SGD. In oral exfoliative cytology classification, CNNs trained with SAM and a learning rate scheduler showed the highest classification performance. These results suggest that SAM can play an important role in primary screening in the oral cytological diagnostic environment.
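Because the reported gains hinge on sharpness-aware minimization, a compact SAM wrapper in PyTorch is sketched below to show the two-step ascent-then-descent update it performs around a base optimizer such as SGD. This follows the commonly used open-source formulation rather than the authors' training code; rho is an illustrative default.

```python
import torch

class SAM(torch.optim.Optimizer):
    """Sharpness-Aware Minimization: perturb weights toward the locally
    worst-case direction (first step), then update with the base optimizer
    using gradients taken at the perturbed point (second step)."""

    def __init__(self, params, base_optimizer_cls, rho=0.05, **kwargs):
        defaults = dict(rho=rho, **kwargs)
        super().__init__(params, defaults)
        self.base_optimizer = base_optimizer_cls(self.param_groups, **kwargs)

    @torch.no_grad()
    def first_step(self):
        grad_norm = torch.norm(torch.stack([
            p.grad.norm(p=2)
            for group in self.param_groups for p in group["params"]
            if p.grad is not None
        ]), p=2)
        for group in self.param_groups:
            scale = group["rho"] / (grad_norm + 1e-12)
            for p in group["params"]:
                if p.grad is None:
                    continue
                e_w = p.grad * scale          # ascent step epsilon
                p.add_(e_w)                   # climb to the "sharp" neighborhood point
                self.state[p]["e_w"] = e_w

    @torch.no_grad()
    def second_step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                p.sub_(self.state[p]["e_w"])  # restore the original weights
        self.base_optimizer.step()            # descend with gradients from the perturbed point

# One training iteration (model, criterion, x, y assumed to exist elsewhere):
#   optimizer = SAM(model.parameters(), torch.optim.SGD, rho=0.05, lr=1e-3, momentum=0.9)
#   criterion(model(x), y).backward()
#   optimizer.first_step(); optimizer.zero_grad()
#   criterion(model(x), y).backward()
#   optimizer.second_step(); optimizer.zero_grad()
```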
Affiliation(s)
- Shintaro Sukegawa
- Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, 1-2-1, Asahi-machi, Takamatsu, Kagawa, 760-8557, Japan
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Futa Tanaka
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Keisuke Nakano
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Takeshi Hara
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Center for Healthcare Information Technology, Tokai National Higher Education and Research System, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Kazumasa Yoshii
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Sawako Ono
- Department of Pathology, Kagawa Prefectural Central Hospital, 1-2-1, Asahi-machi, Takamatsu, Kagawa, 760-8557, Japan
- Kiyofumi Takabatake
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Hotaka Kawai
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Hitoshi Nagatsuka
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Yoshihiko Furuki
- Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, 1-2-1, Asahi-machi, Takamatsu, Kagawa, 760-8557, Japan
9
Wang CW, Lee YC, Chang CC, Lin YJ, Liou YA, Hsu PC, Chang CC, Sai AKO, Wang CH, Chao TK. A Weakly Supervised Deep Learning Method for Guiding Ovarian Cancer Treatment and Identifying an Effective Biomarker. Cancers (Basel) 2022;14:1651. PMID: 35406422; PMCID: PMC8996991; DOI: 10.3390/cancers14071651.
Abstract
Ovarian cancer is a common malignant gynecological disease. Molecular targeted therapy, i.e., antiangiogenesis with bevacizumab, has been found to be effective in some patients with epithelial ovarian cancer (EOC). Although careful patient selection is essential, there are currently no biomarkers available for routine therapeutic use. To the authors’ best knowledge, this is the first automated precision-oncology framework that effectively identifies and selects EOC and peritoneal serous papillary carcinoma (PSPC) patients with a positive therapeutic effect. In a hospital-based retrospective study, we built a database covering March 2013 to January 2021 that contains four kinds of immunohistochemical tissue samples (AIM2, C3, C5 and NLRP3) from patients diagnosed with EOC or PSPC and treated with bevacizumab. We developed a hybrid deep learning framework and weakly supervised deep learning models for each potential biomarker. The experimental results show that the proposed model combined with AIM2 achieves accuracy 0.92, recall 0.97, F-measure 0.93 and AUC 0.97 in the first experiment (66% training and 34% testing) and accuracy 0.86 ± 0.07, precision 0.9 ± 0.07, recall 0.85 ± 0.06, F-measure 0.87 ± 0.06 and AUC 0.91 ± 0.05 in the second experiment using five-fold cross-validation. Both Kaplan-Meier PFS analysis and Cox proportional hazards analysis further confirmed that the proposed AIM2-DL model can distinguish patients gaining positive therapeutic effects with low cancer recurrence from patients with disease progression after treatment (p < 0.005).
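The survival comparison reported above (Kaplan-Meier PFS curves between model-predicted groups, checked with a log-rank test) can be reproduced in outline with the lifelines library. The sketch below uses toy data; the field names and values are illustrative assumptions rather than the study's actual data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy progression-free-survival data: 'responder' is the model prediction
# (1 = predicted positive therapeutic effect), 'pfs_months' the follow-up,
# 'progressed' the event indicator.
df = pd.DataFrame({
    "responder":  [1, 1, 1, 1, 0, 0, 0, 0],
    "pfs_months": [30, 26, 34, 22, 8, 11, 6, 14],
    "progressed": [0, 1, 0, 1, 1, 1, 1, 1],
})

grp1, grp0 = df[df.responder == 1], df[df.responder == 0]

# Kaplan-Meier estimate for the predicted-responder group.
km = KaplanMeierFitter()
km.fit(grp1.pfs_months, grp1.progressed, label="predicted responder")
print(km.median_survival_time_)

# Log-rank test between the two predicted groups.
result = logrank_test(grp1.pfs_months, grp0.pfs_months,
                      event_observed_A=grp1.progressed,
                      event_observed_B=grp0.progressed)
print(result.p_value)  # small p-value would indicate separated PFS curves
```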
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Cheng-Chang Chang
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei 11490, Taiwan
- Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei 11490, Taiwan
- Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
- Yi-An Liou
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Po-Chao Hsu
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei 11490, Taiwan
- Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei 11490, Taiwan
- Chun-Chieh Chang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Aung-Kyaw-Oo Sai
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Chih-Hung Wang
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, Taipei 11490, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, National Defense Medical Center, Taipei 11490, Taiwan
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
10
Gynecology Meets Big Data in the Disruptive Innovation Medical Era: State-of-Art and Future Prospects. Int J Environ Res Public Health 2021;18:5058. PMID: 34064710; PMCID: PMC8151939; DOI: 10.3390/ijerph18105058.
Abstract
Tremendous scientific and technological achievements have been revolutionizing the current medical era, changing the way in which physicians practice their profession and deliver healthcare provisions. This is due to the convergence of various advancements related to digitalization and the use of information and communication technologies (ICTs), ranging from the internet of things (IoT) and the internet of medical things (IoMT) to the fields of robotics, virtual and augmented reality, and massively parallel and cloud computing. Further progress has been made in the fields of additive manufacturing and three-dimensional (3D) printing, sophisticated statistical tools such as big data visualization and analytics (BDVA) and artificial intelligence (AI), the use of mobile and smartphone applications (apps), remote monitoring and wearable sensors, and e-learning, among others. Within this new conceptual framework, big data represents a massive set of data characterized by different properties and features. These can be categorized from both a quantitative and a qualitative standpoint, and include data generated from wet labs and microarrays (molecular big data), databases and registries (clinical/computational big data), imaging techniques (such as radiomics; imaging big data) and web searches (so-called infodemiology; digital big data). The present review aims to show how big and smart data can revolutionize gynecology by shedding light on female reproductive health, in terms of both physiology and pathophysiology. More specifically, they appear to have potential uses in the field of gynecology to increase its accuracy and precision, stratify patients, provide opportunities for personalized treatment options rather than delivering a package of “one-size-fits-all” healthcare management provisions, and enhance its effectiveness at each stage (health promotion, prevention, diagnosis, prognosis, and therapeutics).