1
Rani ND, Babu M. Improved rank-based recursive feature elimination method based ovarian cancer detection model via customized deep architecture. Comput Methods Programs Biomed 2024; 256:108358. [PMID: 39191100] [DOI: 10.1016/j.cmpb.2024.108358]
Abstract
BACKGROUND Ovarian cancer is often considered the most lethal gynecological cancer because it tends to be diagnosed at an advanced stage, leaving limited treatment options and poorer outcomes. Several factors complicate its management, including rapid metastasis, genetic factors, and reproductive history. Prompt and precise diagnosis is therefore essential for planning effective treatment and providing patients affected by ovarian cancer (OC) with the care and support they need. METHODS The proposed CCLSTM model comprises four essential stages: preprocessing, feature extraction, feature selection, and detection. First, the input data are preprocessed using Improved Two-step Data Normalization. Next, statistical features, modified entropy, raw features, and mutual information are extracted from the normalized data. The extracted features then pass through the Improved Rank-based Recursive Feature Elimination (IR-RFE) method, which selects the most suitable subset. Finally, the CCLSTM model takes the selected features as input and produces the final detection outcome. RESULTS The performance of the proposed CCLSTM technique was examined through a thorough assessment using diverse analyses. The CCLSTM scheme achieves a sensitivity of 0.948, whereas ALO-LSTM + ALOCNN, Bi-GRU, LSTM, RNN, KNN, CNN, and DCNN score 0.808, 0.893, 0.829, 0.851, 0.765, 0.872, and 0.893, respectively. CONCLUSION The development of a customized CNN combined with an LSTM yields an ovarian cancer detection technique that is more accurate and consistent than existing strategies.
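The paper's Improved Rank-based RFE (IR-RFE) is not specified in the abstract, so it cannot be reproduced here; as a rough, hypothetical sketch of the underlying idea — repeatedly ranking features and eliminating the worst-ranked one — plain recursive elimination with a simple correlation-based ranking score might look like:

```python
import numpy as np

def rank_based_rfe(X, y, n_keep):
    """Plain recursive feature elimination: rank features, drop the worst, repeat.

    The ranking score (absolute Pearson correlation with the target) is a
    stand-in chosen to keep this sketch dependency-free; the paper's
    'improved' ranking is not described in the abstract, and a full RFE
    would typically re-fit a model after each elimination.
    """
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep]
        keep.pop(int(np.argmin(scores)))  # eliminate the lowest-ranked feature
    return sorted(keep)

# Synthetic demo: only features 0 and 1 actually drive the target.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(200)
print(rank_based_rfe(X, y, n_keep=2))
```

On this synthetic data the two informative features survive the elimination rounds while the four noise features are discarded.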
Affiliation(s)
- Namani Deepika Rani
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad, 500075, Telangana, India
- Mahesh Babu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad, 500075, Telangana, India
2
Giourga M, Petropoulos I, Stavros S, Potiris A, Gerede A, Sapantzoglou I, Fanaki M, Papamattheou E, Karasmani C, Karampitsakos T, Topis S, Zikopoulos A, Daskalakis G, Domali E. Enhancing Ovarian Tumor Diagnosis: Performance of Convolutional Neural Networks in Classifying Ovarian Masses Using Ultrasound Images. J Clin Med 2024; 13:4123. [PMID: 39064163] [PMCID: PMC11277638] [DOI: 10.3390/jcm13144123]
Abstract
Background/Objectives: This study aims to build a robust binary classifier and evaluate the performance of pre-trained convolutional neural networks (CNNs) in distinguishing benign from malignant ovarian tumors on still ultrasound images. Methods: The dataset consisted of 3510 ultrasound images from 585 women with ovarian tumors (390 benign, 195 malignant) that were classified by experts and verified by histopathology. An 80%/20% training/validation split was applied within a k-fold cross-validation framework, ensuring comprehensive utilization of the dataset. The final classifier was an aggregate of three pre-trained CNNs (VGG16, ResNet50, and InceptionNet), with experimentation focusing on the aggregation weights and the decision threshold probability for classifying each mass. Results: The aggregate model outperformed all individual models, achieving an average sensitivity of 96.5% and specificity of 88.1%, compared to the subjective assessment's (SA) 95.9% sensitivity and 93.9% specificity. All of the above results were calculated at a decision threshold probability of 0.2. Notably, the model's misclassifications were similar to those made by SA. Conclusions: CNNs and AI-assisted image analysis can enhance diagnosis and support less experienced ultrasonographers by minimizing errors. Further research is needed to fine-tune CNNs and validate their performance in diverse clinical settings, potentially yielding even higher sensitivity and overall accuracy.
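The aggregation rule described here (a weighted combination of the three CNNs' malignancy probabilities, thresholded at 0.2) can be sketched generically; the probabilities and equal weights below are placeholders, since the tuned values are not reported in the abstract:

```python
import numpy as np

def aggregate_predictions(probs, weights, threshold=0.2):
    """Fuse per-model malignancy probabilities with a weighted average.

    probs: (n_models, n_samples) predicted malignancy probabilities.
    weights: one non-negative weight per model (normalized internally).
    threshold: decision threshold probability (0.2 in the study).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = w @ np.asarray(probs, dtype=float)   # (n_samples,) fused probabilities
    return (fused >= threshold).astype(int), fused

# Hypothetical outputs from three CNNs for three masses:
probs = [[0.05, 0.60, 0.25],
         [0.10, 0.70, 0.30],
         [0.15, 0.80, 0.35]]
labels, fused = aggregate_predictions(probs, weights=[1, 1, 1])
print(labels, fused)
```

Note how the third mass (fused probability 0.3) is called malignant at the study's 0.2 threshold but would be called benign at the conventional 0.5, illustrating why a low threshold trades specificity for sensitivity.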
Affiliation(s)
- Maria Giourga
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Ioannis Petropoulos
- School of Electrical & Computer Engineering, National Technical University of Athens, 15772 Athens, Greece
- Sofoklis Stavros
- Third Department of Obstetrics and Gynecology, University Hospital “ATTIKON”, Medical School of the National and Kapodistrian University of Athens, 12462 Athens, Greece
- Anastasios Potiris
- Third Department of Obstetrics and Gynecology, University Hospital “ATTIKON”, Medical School of the National and Kapodistrian University of Athens, 12462 Athens, Greece
- Angeliki Gerede
- Department of Obstetrics and Gynecology, University of Thrace, 68100 Alexandroupolis, Greece
- Ioakeim Sapantzoglou
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Maria Fanaki
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Eleni Papamattheou
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Christina Karasmani
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Theodoros Karampitsakos
- Third Department of Obstetrics and Gynecology, University Hospital “ATTIKON”, Medical School of the National and Kapodistrian University of Athens, 12462 Athens, Greece
- Spyridon Topis
- Third Department of Obstetrics and Gynecology, University Hospital “ATTIKON”, Medical School of the National and Kapodistrian University of Athens, 12462 Athens, Greece
- Athanasios Zikopoulos
- Third Department of Obstetrics and Gynecology, University Hospital “ATTIKON”, Medical School of the National and Kapodistrian University of Athens, 12462 Athens, Greece
- Georgios Daskalakis
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Ekaterini Domali
- 1st Department of Obstetrics and Gynecology, National and Kapodistrian University of Athens, 11528 Athens, Greece
3
Duque VG, Marquardt A, Velikova Y, Lacourpaille L, Nordez A, Crouzier M, Lee HJ, Mateus D, Navab N. Ultrasound segmentation analysis via distinct and completed anatomical borders. Int J Comput Assist Radiol Surg 2024; 19:1419-1427. [PMID: 38789884] [DOI: 10.1007/s11548-024-03170-7]
Abstract
PURPOSE Segmenting ultrasound images is important for precise area and/or volume calculations, ensuring reliable diagnosis and effective treatment evaluation. Many recently proposed segmentation methods show impressive performance, but there is still no deeper understanding of how networks segment target regions or how they define boundaries. In this paper, we present a new approach that analyzes ultrasound segmentation networks in terms of learned borders, because border delimitation is particularly challenging in ultrasound. METHODS We propose splitting the boundaries in ultrasound images into distinct and completed. By exploiting the Grad-CAM of the split borders, we analyze the areas each network attends to. We further calculate the ratio of correct predictions for distinct and completed borders. Experiments were conducted on an in-house leg ultrasound dataset (LEG-3D-US), two public datasets (thyroid and nerve), and one private prostate dataset. RESULTS Quantitatively, the networks handle completed borders around 10% better than distinct borders. Like clinicians, the networks struggle to define borders in less visible areas. Additionally, the Seg-Grad-CAM analysis underscores how completion draws on distinct borders and landmarks, while distinct-border prediction focuses mainly on the bright structures; we also observe variations depending on the attention mechanism of each architecture. CONCLUSION This work highlights the importance of studying ultrasound borders differently from other modalities such as MRI or CT. We split borders into distinct and completed, mirroring clinical practice, and assess the quality of the network-learned information for both border types. We also open-source a 3D leg ultrasound dataset at https://github.com/Al3xand1a/segmentation-border-analysis.
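The "ratio of correct predictions" per border type reduces to a per-class recall over ground-truth border pixels; a minimal sketch (the label encoding 1 = distinct, 2 = completed is an assumption for illustration):

```python
import numpy as np

def per_border_recall(pred_border, gt_border_labels):
    """Fraction of ground-truth border pixels of each type found by the prediction.

    pred_border: boolean mask of pixels the network marks as border.
    gt_border_labels: int mask, 0 = background, 1 = distinct, 2 = completed.
    """
    out = {}
    for name, label in (("distinct", 1), ("completed", 2)):
        gt = gt_border_labels == label
        out[name] = float((pred_border & gt).sum()) / float(gt.sum())
    return out

# Toy 2x4 image: top row mostly border, with both border types present.
gt = np.array([[1, 1, 2, 2],
               [0, 1, 2, 0]])
pred = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0]], dtype=bool)
print(per_border_recall(pred, gt))
```

In this toy example the network recovers all completed-border pixels but only two of the three distinct-border pixels, mirroring the paper's observation that completed borders are handled better.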
Affiliation(s)
- Vanessa Gonzalez Duque
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
- LS2N Laboratory, Ecole Centrale Nantes, Nantes, France
- MIP Laboratory, EA 4334, 44000, Nantes, France
- Alexandra Marquardt
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
- Yordanka Velikova
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
- Hong Joo Lee
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Diana Mateus
- LS2N Laboratory, Ecole Centrale Nantes, Nantes, France
- Nassir Navab
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
4
Du Y, Wang T, Qu L, Li H, Guo Q, Wang H, Liu X, Wu X, Song Z. Preoperative Molecular Subtype Classification Prediction of Ovarian Cancer Based on Multi-Parametric Magnetic Resonance Imaging Multi-Sequence Feature Fusion Network. Bioengineering (Basel) 2024; 11:472. [PMID: 38790338] [PMCID: PMC11117786] [DOI: 10.3390/bioengineering11050472]
Abstract
In deep learning classification of medical images, models are applied to analyze images with the aim of assisting diagnosis and preoperative assessment. Most current research classifies normal versus cancerous tissue by feeding single-parameter images into trained models. For ovarian cancer (OC), however, identifying its subtypes is crucial for predicting disease prognosis; in particular, the need to distinguish high-grade serous carcinoma from clear cell carcinoma preoperatively through non-invasive means has not been fully addressed. This study proposes a deep learning (DL) method based on the fusion of multi-parametric magnetic resonance imaging (mpMRI) data, aimed at improving the accuracy of preoperative ovarian cancer subtype classification. By constructing a new network architecture that integrates features from multiple sequences, the method achieves high-precision discrimination of high-grade serous carcinoma from clear cell carcinoma, with an AUC of 91.62% and an AP of 95.13%.
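The abstract does not specify the fusion operator; a common baseline for multi-sequence fusion is late concatenation of per-sequence feature vectors, optionally scaled per sequence. A hypothetical sketch (the sequence names and weights are illustrative, not from the paper):

```python
import numpy as np

def late_fusion(sequence_features, weights=None):
    """Concatenate per-sequence feature matrices into one fused representation.

    sequence_features: list of (n_patients, d_i) arrays, one per MRI sequence
                       (e.g. T1, T2, DWI -- assumed names for illustration).
    weights: optional per-sequence scale factors applied before concatenation.
    """
    if weights is None:
        weights = [1.0] * len(sequence_features)
    parts = [w * np.asarray(f, dtype=float)
             for f, w in zip(sequence_features, weights)]
    return np.concatenate(parts, axis=1)  # one row per patient

t1 = np.ones((4, 3))       # toy per-sequence embeddings for 4 patients
dwi = 2 * np.ones((4, 5))
fused = late_fusion([t1, dwi], weights=[0.5, 1.0])
print(fused.shape)
```

The fused matrix (here 4 x 8) would then feed a downstream classifier head; the paper's actual architecture learns the fusion end-to-end rather than using fixed weights.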
Affiliation(s)
- Yijiang Du
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai 200032, China
- Tingting Wang
- Department of Nuclear Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai 200032, China
- Haiming Li
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Fudan University, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Qinhao Guo
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Fudan University, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Haoran Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai 200032, China
- Xinyuan Liu
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Fudan University, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Xiaohua Wu
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Fudan University, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai 200032, China
5
Wang Y, Lin W, Zhuang X, Wang X, He Y, Li L, Lyu G. Advances in artificial intelligence for the diagnosis and treatment of ovarian cancer (Review). Oncol Rep 2024; 51:46. [PMID: 38240090] [PMCID: PMC10828921] [DOI: 10.3892/or.2024.8705]
Abstract
Artificial intelligence (AI) has emerged as a crucial technique for extracting high‑throughput information from various sources, including medical images, pathological images, and genomics, transcriptomics, proteomics and metabolomics data. AI has been widely used in the field of diagnosis, for the differentiation of benign and malignant ovarian cancer (OC), and for prognostic assessment, with favorable results. Notably, AI‑based radiomics has proven to be a non‑invasive, convenient and economical approach, making it an essential asset in a gynecological setting. The present study reviews the application of AI in the diagnosis, differentiation and prognostic assessment of OC. It is suggested that AI‑based multi‑omics studies have the potential to improve the diagnostic and prognostic predictive ability in patients with OC, thereby facilitating the realization of precision medicine.
Affiliation(s)
- Yanli Wang
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Weihong Lin
- Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Xiaoling Zhuang
- Department of Pathology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Xiali Wang
- Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, Fujian 362000, P.R. China
- Yifang He
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Luhong Li
- Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Guorong Lyu
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, Fujian 362000, P.R. China
6
Fuentes AM, Milligan K, Wiebe M, Narayan A, Lum JJ, Brolo AG, Andrews JL, Jirasek A. Stratification of tumour cell radiation response and metabolic signatures visualization with Raman spectroscopy and explainable convolutional neural network. Analyst 2024; 149:1645-1657. [PMID: 38312026] [DOI: 10.1039/d3an01797d]
Abstract
Reprogramming of cellular metabolism is a driving factor of tumour progression and radiation therapy resistance. Identifying biochemical signatures associated with tumour radioresistance may assist the development of targeted treatment strategies to improve clinical outcomes. Raman spectroscopy (RS) can monitor post-irradiation biomolecular changes and signatures of radiation response in tumour cells in a label-free manner. Convolutional neural networks (CNNs) perform feature extraction directly from data in an end-to-end learning manner, with high classification performance, and recently developed CNN explainability techniques help visualize the critical discriminative features captured by the model. In this work, a CNN was developed to characterize tumour response to radiotherapy based on its degree of radioresistance. The model was trained to classify Raman spectra of three human tumour cell lines as radiosensitive (LNCaP) or radioresistant (MCF7, H460) over a range of treatment doses and data collection time points. Additionally, a method based on Gradient-Weighted Class Activation Mapping (Grad-CAM) was used to determine response-specific salient Raman peaks influencing the CNN predictions. The CNN effectively classified the cell spectra, with accuracy, sensitivity, specificity, and F1 score all exceeding 99.8%. Grad-CAM heatmaps of the radioresistant H460 and MCF7 spectra exhibited high contributions from Raman bands tentatively assigned to glycogen, amino acids, and nucleic acids, whereas heatmaps of the radiosensitive LNCaP cells revealed activations at lipid and phospholipid bands. Grad-CAM variable importance scores derived for glycogen, asparagine, and phosphatidylcholine followed trends over cell line, dose, and acquisition time that agreed with previously established models. Thus, the CNN can accurately detect biomolecular differences in the Raman spectra of tumour cells of varying radiosensitivity without requiring manual feature extraction, and Grad-CAM may help identify metabolic signatures associated with the observed categories, offering the potential for automated clinical characterization of tumour radiation response.
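For a network ending in global average pooling (GAP) followed by a linear classifier — an assumed head for illustration, since the paper's exact architecture is not given in the abstract — Grad-CAM has a closed form: the gradient of the class score with respect to each activation is the class's linear weight divided by the spectral length. A 1-D sketch:

```python
import numpy as np

def grad_cam_gap_linear(activations, class_weights):
    """Grad-CAM for a 1-D conv net with a GAP + linear head.

    activations: (C, L) feature maps from the last conv layer for one spectrum.
    class_weights: (C,) linear-layer weights of the target class.
    With a GAP head, d(score)/d(activations[c, l]) = class_weights[c] / L,
    so the Grad-CAM channel weights are simply the class weights scaled by 1/L.
    """
    C, L = activations.shape
    alphas = class_weights / L
    cam = np.maximum(0.0, (alphas[:, None] * activations).sum(axis=0))  # ReLU
    return cam / (cam.max() + 1e-12)  # normalize to [0, 1] over positions

A = np.array([[0.0, 1.0, 4.0],
              [2.0, 0.0, 1.0]])   # two channels, three spectral positions (toy)
w = np.array([1.0, -1.0])         # toy class weights
print(grad_cam_gap_linear(A, w))
```

High values in the returned map mark spectral positions (Raman bands) that push the network toward the target class, which is how band-level importances like those for glycogen or phosphatidylcholine can be read off.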
Affiliation(s)
- Alejandra M Fuentes
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Kirsty Milligan
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Mitchell Wiebe
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Apurva Narayan
- Department of Computer Science, Western University, London, Canada
- Department of Computer Science, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Julian J Lum
- Department of Biochemistry and Microbiology, The University of Victoria, Victoria, Canada
- Trev and Joyce Deeley Research Centre, BC Cancer, Victoria, Canada
- Alexandre G Brolo
- Department of Chemistry, The University of Victoria, Victoria, Canada
- Jeffrey L Andrews
- Department of Statistics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Andrew Jirasek
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
7
Sadeghi MH, Sina S, Omidi H, Farshchitabrizi AH, Alavi M. Deep learning in ovarian cancer diagnosis: a comprehensive review of various imaging modalities. Pol J Radiol 2024; 89:e30-e48. [PMID: 38371888] [PMCID: PMC10867948] [DOI: 10.5114/pjr.2024.134817]
Abstract
Ovarian cancer poses a major worldwide health issue, marked by high death rates and a deficiency in reliable diagnostic methods. The precise and prompt detection of ovarian cancer holds great importance in advancing patient outcomes and determining suitable treatment plans. Medical imaging techniques are vital in diagnosing ovarian cancer, but achieving accurate diagnoses remains challenging. Deep learning (DL), particularly convolutional neural networks (CNNs), has emerged as a promising solution to improve the accuracy of ovarian cancer detection. This systematic review explores the role of DL in improving the diagnostic accuracy for ovarian cancer. The methodology involved the establishment of research questions, inclusion and exclusion criteria, and a comprehensive search strategy across relevant databases. The selected studies focused on DL techniques applied to ovarian cancer diagnosis using medical imaging modalities, as well as tumour differentiation and radiomics. Data extraction, analysis, and synthesis were performed to summarize the characteristics and findings of the selected studies. The review emphasizes the potential of DL in enhancing the diagnosis of ovarian cancer by accelerating the diagnostic process and offering more precise and efficient solutions. DL models have demonstrated their effectiveness in categorizing ovarian tissues and achieving comparable diagnostic performance to that of experienced radiologists. The integration of DL into ovarian cancer diagnosis holds the promise of improving patient outcomes, refining treatment approaches, and supporting well-informed decision-making. Nevertheless, additional research and validation are necessary to ensure the dependability and applicability of DL models in everyday clinical settings.
Affiliation(s)
- Sedigheh Sina
- Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran
8
Mitchell S, Nikolopoulos M, El-Zarka A, Al-Karawi D, Al-Zaidi S, Ghai A, Gaughran JE, Sayasneh A. Artificial Intelligence in Ultrasound Diagnoses of Ovarian Cancer: A Systematic Review and Meta-Analysis. Cancers (Basel) 2024; 16:422. [PMID: 38275863] [PMCID: PMC10813993] [DOI: 10.3390/cancers16020422]
Abstract
Ovarian cancer is the sixth most common malignancy, with a 35% survival rate across all stages at 10 years. Ultrasound is widely used for ovarian tumour diagnosis, and accurate pre-operative diagnosis is essential for appropriate patient management. Artificial intelligence is an emerging field within gynaecology and has been shown to aid the ultrasound diagnosis of ovarian cancers. For this study, the Embase and MEDLINE databases were searched, and all original clinical studies that used artificial intelligence in ultrasound examinations for the diagnosis of ovarian malignancies were screened. Studies using histopathological findings as the reference standard were included. The diagnostic performance of each study was analysed, and all performances were pooled and assessed. The initial search identified 3726 papers, of which 63 were suitable for abstract screening. Fourteen studies that used artificial intelligence in ultrasound diagnosis of ovarian malignancies with histopathological findings as the standard were included in the final analysis; each had a different sample size and method, and together they examined 15,358 ultrasound images. The overall sensitivity was 81% (95% CI, 0.80-0.82) and the specificity was 92% (95% CI, 0.92-0.93), indicating that artificial intelligence performs well in the ultrasound diagnosis of ovarian cancer. Further prospective work is required to validate AI for use in clinical practice.
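Naive pooling of sensitivity and specificity simply sums confusion-matrix counts across studies (diagnostic meta-analyses typically use bivariate random-effects models instead, which also yield the confidence intervals reported above); the counts below are made up for illustration:

```python
def pooled_sensitivity_specificity(studies):
    """Pool 2x2 diagnostic counts across studies.

    studies: iterable of (tp, fn, tn, fp) tuples, one per study.
    Returns (pooled sensitivity, pooled specificity).
    """
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for three studies:
studies = [(80, 20, 90, 10), (90, 10, 85, 15), (70, 30, 95, 5)]
sens, spec = pooled_sensitivity_specificity(studies)
print(round(sens, 3), round(spec, 3))
```

With these toy counts the pooled sensitivity is 0.8 and the pooled specificity 0.9; a bivariate model would additionally account for between-study heterogeneity and the sensitivity-specificity trade-off.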
Affiliation(s)
- Sian Mitchell
- Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Manolis Nikolopoulos
- Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Alaa El-Zarka
- Department of Gynaecology, Alexandria Faculty of Medicine, Alexandria 21433, Egypt
- Avi Ghai
- School of Life Course Sciences, Faculty of Life Sciences and Medicine, King’s College London, Strand, London WC2R 2LS, UK
- Jonathan E. Gaughran
- Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Ahmad Sayasneh
- Department of Gynaecological Oncology, Surgical Oncology Directorate, Cancer Centre, Guy’s Hospital, Great Maze Pond, London SE1 9RT, UK
- School of Life Course Sciences, Faculty of Life Sciences and Medicine, St Thomas Hospital, Westminster Bridge Road, London SE1 7EH, UK
9
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible, but it is time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study provides an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme, and articles with full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023: 148 on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. While most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
10
Kodipalli A, Fernandes SL, Gururaj V, Varada Rameshbabu S, Dasar S. Performance Analysis of Segmentation and Classification of CT-Scanned Ovarian Tumours Using U-Net and Deep Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:2282. [PMID: 37443676] [DOI: 10.3390/diagnostics13132282]
Abstract
Despite advances in ovarian cancer treatment and research, difficulty in detecting tumours at an early stage remains the major cause of patient mortality. Deep learning algorithms were applied as a diagnostic tool to CT scan images of the ovarian region. The images went through a series of pre-processing steps, and the tumour was then segmented using the U-Net model. The instances were classified into two categories, benign and malignant tumours. Classification was performed using deep learning models such as CNN, ResNet, DenseNet, Inception-ResNet, VGG16, and Xception, along with machine learning models such as Random Forest, Gradient Boosting, AdaBoost, and XGBoost. After optimization of the machine learning models, DenseNet 121 emerged as the best model on this dataset, with an accuracy of 95.7%. The work compares multiple CNN architectures with common machine learning algorithms, with and without optimization techniques applied.
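The abstract reports classification accuracy but no segmentation metric; the standard check for a U-Net's output against a ground-truth mask is an overlap score such as the Dice coefficient — a minimal sketch, independent of the paper's actual evaluation protocol:

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice overlap between a predicted and a ground-truth binary mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * (pred & gt).sum() / denom

# Toy masks: 2 of 4 foreground pixels overlap.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1]])
print(dice_coefficient(pred, gt))
```

Dice ranges from 0 (no overlap) to 1 (identical masks); here the toy prediction scores 2/3, since two of the three pixels in each mask coincide.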
Affiliation(s)
- Ashwini Kodipalli
- Department of Artificial Intelligence & Data Science, Global Academy of Technology, Bangalore 560098, India
- Steven L Fernandes
- Department of Computer Science, Design, Journalism, Creighton University, Omaha, NE 68178, USA
- Vaishnavi Gururaj
- Department of Computer Science, George Mason University, Fairfax, VA 22030, USA
- Shriya Varada Rameshbabu
- Department of Computer Science & Engineering, Global Academy of Technology, Bangalore 560098, India
- Santosh Dasar
- Department of Radiology, SDM College of Medical Sciences and Hospital, Dharwad 580009, India