1
Behera SK, Das A, Sethy PK. Deep fine-KNN classification of ovarian cancer subtypes using efficientNet-B0 extracted features: a comprehensive analysis. J Cancer Res Clin Oncol 2024; 150:361. [PMID: 39052091 PMCID: PMC11272718 DOI: 10.1007/s00432-024-05879-z] [Received: 04/08/2024] [Accepted: 07/03/2024]
Abstract
This study presents a robust approach for the classification of ovarian cancer subtypes through the integration of deep learning and k-nearest neighbor (KNN) methods. The proposed model leverages the powerful feature extraction capabilities of EfficientNet-B0, using its deep features for subsequent fine-grained classification with the fine-KNN approach. The UBC-OCEAN dataset, encompassing histopathological images of five distinct ovarian cancer subtypes, namely high-grade serous carcinoma (HGSC), clear-cell ovarian carcinoma (CC), endometrioid carcinoma (EC), low-grade serous carcinoma (LGSC), and mucinous carcinoma (MC), served as the foundation for the investigation. With a dataset comprising 725 images, divided into 80% for training and 20% for testing, the model exhibits exceptional performance: both the validation and testing phases achieved 100% accuracy. In addition, the area under the curve (AUC), a key metric of the model's discriminative ability, was high across subtypes, with AUC values of 0.94, 0.78, 0.69, 0.92, and 0.94 for CC, EC, HGSC, LGSC, and MC, respectively. Furthermore, the positive likelihood ratios (LR+) were indicative of the model's diagnostic utility, with notable values for each subtype: CC (27.294), EC (9.441), HGSC (12.588), LGSC (17.942), and MC (17.942). These findings demonstrate the effectiveness of the model in distinguishing between ovarian cancer subtypes and position it as a promising diagnostic tool, contributing to the advancement of precision medicine in ovarian cancer research.
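The positive likelihood ratios reported above follow directly from per-class sensitivity and specificity (LR+ = sensitivity / (1 - specificity)). A minimal sketch of that computation, using hypothetical confusion-matrix counts rather than the paper's data:

```python
def likelihood_ratio_positive(tp, fn, tn, fp):
    """LR+ = sensitivity / (1 - specificity), computed from raw counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1 - specificity)

# Hypothetical counts for one subtype (not the paper's data):
# 90 true positives, 10 false negatives, 485 true negatives, 15 false positives.
lr_plus = likelihood_ratio_positive(tp=90, fn=10, tn=485, fp=15)
print(round(lr_plus, 2))  # sensitivity 0.90, specificity 0.97 -> LR+ = 30.0
```

A high LR+ (as the subtype values above) means a positive model call strongly raises the post-test odds of that subtype.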
Affiliation(s)
- Santi Kumari Behera
- Department of Computer Science and Engineering, VSSUT, Burla, Odisha, 768018, India
- Ashis Das
- Department of Computer Science and Engineering, SUIIT, Sambalpur University, Burla, Odisha, 768019, India
- Prabira Kumar Sethy
- Department of Electronics and Communication Engineering, Guru Ghasidas Vishwavidyalaya, Bilaspur, Chhattisgarh, 495009, India
- Department of Electronics, Sambalpur University, Jyoti Vihar, Burla, Odisha, 768019, India
2
Kang B, Chen S, Wang G, Huang Y, Wu H, He J, Li X, Xi G, Wu G, Zhuo S. Ovarian cancer identification technology based on deep learning and second harmonic generation imaging. J Biophotonics 2024:e202400200. [PMID: 38955356 DOI: 10.1002/jbio.202400200] [Received: 05/07/2024] [Revised: 06/10/2024] [Accepted: 06/12/2024]
Abstract
Ovarian cancer is among the most common gynecological cancers and the eighth leading cause of cancer-related deaths among women worldwide. Surgery is among the most important options for cancer treatment. During surgery, a biopsy is generally required to screen for lesions; however, traditional pathological examination is time-consuming and laborious and requires extensive experience and knowledge from pathologists. Therefore, this study proposes a simple, fast, and label-free ovarian cancer diagnosis method that combines second harmonic generation (SHG) imaging and deep learning. Unstained fresh human ovarian tissues were subjected to SHG imaging and accurately characterized using the Pyramid Vision Transformer V2 (PVTv2) model. The results showed that SHG-imaged collagen fibers could be used to quantify ovarian cancer. In addition, the PVTv2 model could accurately classify the 3240 SHG images in our collection as benign, normal, or malignant, with a final accuracy of 98.4%. These results demonstrate the great potential of SHG imaging techniques combined with deep learning models for diagnosing diseased ovarian tissue.
Affiliation(s)
- Bingzi Kang
- School of Science, Jimei University, Xiamen, China
- Siyu Chen
- College of Computer Engineering, Jimei University, Xiamen, China
- Yuhang Huang
- School of Science, Jimei University, Xiamen, China
- Han Wu
- School of Science, Jimei University, Xiamen, China
- Jiajia He
- School of Science, Jimei University, Xiamen, China
- Xiaolu Li
- School of Science, Jimei University, Xiamen, China
- Gangqin Xi
- School of Science, Jimei University, Xiamen, China
- Guizhu Wu
- Department of Gynecology, Obstetrics and Gynecology Hospital, School of Medicine, Tongji University, Shanghai, China
3
Li L, He L, Guo W, Ma J, Sun G, Ma H. PMFFNet: A hybrid network based on feature pyramid for ovarian tumor segmentation. PLoS One 2024; 19:e0299360. [PMID: 38557660 PMCID: PMC10984528 DOI: 10.1371/journal.pone.0299360] [Received: 10/20/2023] [Accepted: 02/09/2024]
Abstract
Ovarian cancer is a highly lethal malignancy in the field of oncology. Segmentation of ovarian medical images is a necessary prerequisite for diagnosis and treatment planning, so accurately segmenting ovarian tumors is of utmost importance. In this work, we propose a hybrid network called PMFFNet to improve the segmentation accuracy of ovarian tumors. PMFFNet uses an encoder-decoder architecture. Specifically, the encoder incorporates the ViTAEv2 model to extract inter-layer multi-scale features from the feature pyramid. To address the limitation of a fixed window size, which hinders sufficient interaction of information, we introduce Varied-Size Window Attention (VSA) into the ViTAEv2 model to capture rich contextual information. Additionally, recognizing the significance of multi-scale features, we introduce the Multi-scale Feature Fusion Block (MFB) module, which enhances the network's capacity to learn intricate features by capturing both local and multi-scale information, thereby enabling more precise segmentation of ovarian tumors. Finally, in conjunction with our designed decoder, the model achieves outstanding performance on the MMOTU dataset, scoring 97.24%, 91.15%, and 87.25% in the mACC, mIoU, and mDice metrics, respectively. Compared with several Unet-based and advanced models, our approach demonstrates the best segmentation performance.
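Multi-scale feature fusion of the kind the MFB performs combines coarse, low-resolution feature maps with finer ones. A toy sketch (hypothetical, far simpler than PMFFNet's actual block) that upsamples a coarse map with nearest-neighbor interpolation and adds it to a fine map:

```python
def upsample2x(fm):
    """Nearest-neighbor 2x upsampling of a 2-D feature map."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

def fuse(coarse, fine):
    """Fuse a coarse map with a fine map: upsample, then add element-wise."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(ur, fr)] for ur, fr in zip(up, fine)]

coarse = [[1, 2],
          [3, 4]]
fine = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 0, 0],
        [1, 1, 0, 0]]
print(fuse(coarse, fine))
```

Real networks do this with learned convolutions and channel dimensions; the sketch only illustrates the resolution-matching step of pyramid fusion.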
Affiliation(s)
- Lang Li
- School of Software, Xinjiang University, Urumqi, Xinjiang, China
- Liang He
- Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Wenjia Guo
- Cancer Institute, Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Jing Ma
- School of Computer Science and Technology, Xinjiang University, Urumqi, Xinjiang, China
- Gang Sun
- Department of Breast and Thyroid Surgery, The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Xinjiang Cancer Center, Key Laboratory of Oncology of Xinjiang Uyghur Autonomous Region, Urumqi, Xinjiang, China
- Hongbing Ma
- Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
4
Mitchell S, Nikolopoulos M, El-Zarka A, Al-Karawi D, Al-Zaidi S, Ghai A, Gaughran JE, Sayasneh A. Artificial Intelligence in Ultrasound Diagnoses of Ovarian Cancer: A Systematic Review and Meta-Analysis. Cancers (Basel) 2024; 16:422. [PMID: 38275863 PMCID: PMC10813993 DOI: 10.3390/cancers16020422] [Received: 12/21/2023] [Revised: 01/11/2024] [Accepted: 01/16/2024]
Abstract
Ovarian cancer is the sixth most common malignancy, with a 35% survival rate across all stages at 10 years. Ultrasound is widely used for ovarian tumour diagnosis, and accurate pre-operative diagnosis is essential for appropriate patient management. Artificial intelligence is an emerging field within gynaecology and has been shown to aid in the ultrasound diagnosis of ovarian cancers. For this study, Embase and MEDLINE databases were searched, and all original clinical studies that used artificial intelligence in ultrasound examinations for the diagnosis of ovarian malignancies were screened. Studies using histopathological findings as the standard were included. The diagnostic performance of each study was analysed, and all the diagnostic performances were pooled and assessed. The initial search identified 3726 papers, of which 63 were suitable for abstract screening. Fourteen studies that used artificial intelligence in ultrasound diagnoses of ovarian malignancies and had histopathological findings as a standard were included in the final analysis, each of which had different sample sizes and used different methods; these studies examined a combined total of 15,358 ultrasound images. The overall sensitivity was 81% (95% CI, 0.80-0.82), and specificity was 92% (95% CI, 0.92-0.93), indicating that artificial intelligence demonstrates good performance in ultrasound diagnoses of ovarian cancer. Further prospective work is required to further validate AI for its use in clinical practice.
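The pooled sensitivity and specificity quoted above come from combining per-study results; real meta-analyses use bivariate random-effects models, but a naive fixed-pool sketch over hypothetical study counts (not data from this review) illustrates the basic idea:

```python
def pooled_rates(studies):
    """Naively pool sensitivity and specificity by summing raw counts.

    Each study is a dict with tp, fn, tn, fp. This simple pooling ignores
    between-study heterogeneity; it is only an illustration.
    """
    tp = sum(s["tp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    tn = sum(s["tn"] for s in studies)
    fp = sum(s["fp"] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for three studies:
studies = [
    {"tp": 80, "fn": 20, "tn": 180, "fp": 20},
    {"tp": 45, "fn": 5,  "tn": 90,  "fp": 10},
    {"tp": 75, "fn": 25, "tn": 190, "fp": 10},
]
sens, spec = pooled_rates(studies)
print(round(sens, 3), round(spec, 3))
```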
Affiliation(s)
- Sian Mitchell
- Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Manolis Nikolopoulos
- Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Alaa El-Zarka
- Department of Gynaecology, Alexandria Faculty of Medicine, Alexandria 21433, Egypt
- Avi Ghai
- School of Life Course Sciences, Faculty of Life Sciences and Medicine, King’s College London, Strand, London WC2R 2LS, UK
- Jonathan E. Gaughran
- Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Ahmad Sayasneh
- Department of Gynaecological Oncology, Surgical Oncology Directorate, Cancer Centre, Guy’s Hospital, Great Maze Pond, London SE1 9RT, UK
- School of Life Course Sciences, Faculty of Life Sciences and Medicine, St Thomas Hospital, Westminster Bridge Road, London SE1 7EH, UK
5
Miao K, Zhao N, Lv Q, He X, Xu M, Dong X, Li D, Shao X. Prediction of benign and malignant ovarian tumors using Resnet34 on ultrasound images. J Obstet Gynaecol Res 2023; 49:2910-2917. [PMID: 37696522 DOI: 10.1111/jog.15788] [Received: 05/30/2023] [Accepted: 08/24/2023]
Abstract
OBJECTIVE To develop deep learning (DL) prediction models using transvaginal ultrasound (TVS), transabdominal ultrasound (TAS), and color Doppler flow imaging of TVS (CDFI_TVS) to automatically predict benign or malignant ovarian tumors. METHODS This retrospective study included women with ovarian tumors who underwent ultrasound between August 2018 and October 2022. Histopathological analysis was used as the reference standard. The dataset was preprocessed by clipping, flipping, and rotating images to generate a larger, more complicated, and diverse dataset to improve accuracy and generalizability, and was then divided into training (80%) and test (20%) sets. Models modified from the residual network (ResNet) were developed for the TVS, TAS, and CDFI_TVS images (hereafter referred to as DLTVS, DLTAS, and DLCDFI_TVS, respectively). Area under the receiver operating characteristic curve (AUC) analysis in the test set was used to compare the models' predictive value for malignancy. RESULTS A total of 2340 images from 1350 women with adnexal masses were included. DLTVS had an AUC of 0.95 (95% CI: 0.93-0.97) for classifying malignant and benign ovarian tumors, comparable with that of DLTAS (AUC, 0.95; 95% CI: 0.91-0.98; p = 0.96) and higher than that of DLCDFI_TVS (AUC, 0.88; 95% CI: 0.84-0.93; p = 0.02). Decision curve analysis indicated that DLTVS performed better than DLTAS and DLCDFI_TVS. CONCLUSION We developed DL models based on TVS, TAS, and CDFI_TVS ultrasound images that predict benign and malignant ovarian tumors with high diagnostic performance. The DLTVS model had the best predictive performance.
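Augmenting a dataset by flipping and rotating images, as described in the methods above, can be sketched on a toy 2-D array (hypothetical code, not the authors' pipeline, which would operate on real image tensors):

```python
def hflip(img):
    """Horizontal flip: reverse each row of a 2-D image."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise: transpose, then reverse each row."""
    return [list(row)[::-1] for row in zip(*img)]

def augment(img):
    """Generate the original, its horizontal flip, and three rotations."""
    variants = [img, hflip(img)]
    r = img
    for _ in range(3):
        r = rot90(r)
        variants.append(r)
    return variants

toy = [[1, 2],
       [3, 4]]
print(len(augment(toy)))  # 5 variants from one source image
```

Such augmentation enlarges and diversifies the training set; the train/test split would then be applied to the augmented pool.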
Affiliation(s)
- Kuo Miao
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Ning Zhao
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Qian Lv
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Xin He
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Mingda Xu
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaoqiu Dong
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Dandan Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xiaohui Shao
- Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
6
Poalelungi DG, Musat CL, Fulga A, Neagu M, Neagu AI, Piraianu AI, Fulga I. Advancing Patient Care: How Artificial Intelligence Is Transforming Healthcare. J Pers Med 2023; 13:1214. [PMID: 37623465 PMCID: PMC10455458 DOI: 10.3390/jpm13081214] [Received: 07/17/2023] [Revised: 07/27/2023] [Accepted: 07/28/2023]
Abstract
Artificial Intelligence (AI) has emerged as a transformative technology with immense potential in the field of medicine. By leveraging machine learning and deep learning, AI can assist in diagnosis, treatment selection, and patient monitoring, enabling more accurate and efficient healthcare delivery. The widespread implementation of AI in healthcare has the potential to revolutionize patient outcomes and transform the way healthcare is practiced, leading to improved accessibility, affordability, and quality of care. This article explores the diverse applications of AI and reviews the current state of its adoption in healthcare. It concludes by emphasizing the need for collaboration between physicians and technology experts to harness the full potential of AI.
Affiliation(s)
- Diana Gina Poalelungi
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei st., 800578 Galati, Romania
- Carmina Liana Musat
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei st., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza st., 800010 Galati, Romania
- Ana Fulga
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei st., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza st., 800010 Galati, Romania
- Marius Neagu
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei st., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza st., 800010 Galati, Romania
- Anca Iulia Neagu
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza st., 800010 Galati, Romania
- ‘Saint John’ Clinical Emergency Hospital for Children, 800487 Galati, Romania
- Alin Ionut Piraianu
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei st., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza st., 800010 Galati, Romania
- Iuliu Fulga
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei st., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza st., 800010 Galati, Romania
7
Wu M, Cui G, Lv S, Chen L, Tian Z, Yang M, Bai W. Deep convolutional neural networks for multiple histologic types of ovarian tumors classification in ultrasound images. Front Oncol 2023; 13:1154200. [PMID: 37427129 PMCID: PMC10326903 DOI: 10.3389/fonc.2023.1154200] [Received: 01/30/2023] [Accepted: 06/12/2023]
Abstract
Objective This study aimed to evaluate and validate the performance of deep convolutional neural networks (DCNNs) in discriminating different histologic types of ovarian tumor in ultrasound (US) images. Material and methods Our retrospective study collected 1142 US images from 328 patients between January 2019 and June 2021. Two tasks were proposed based on the US images. Task 1 was to classify benign tumors and high-grade serous carcinoma in original ovarian tumor US images, in which benign ovarian tumors were divided into six classes: mature cystic teratoma, endometriotic cyst, serous cystadenoma, granulosa-theca cell tumor, mucinous cystadenoma and simple cyst. The US images in task 2 were segmented. DCNNs were applied to classify the different types of ovarian tumors in detail. We used transfer learning on six pre-trained DCNNs: VGG16, GoogleNet, ResNet34, ResNext50, DenseNet121 and DenseNet201. Several metrics were adopted to assess model performance: accuracy, sensitivity, specificity, F1-score and the area under the receiver operating characteristic curve (AUC). Results The DCNNs performed better on labeled US images than on original US images. The best predictive performance came from the ResNext50 model, which had an overall accuracy of 0.952 for directly classifying the seven histologic types of ovarian tumors. It achieved a sensitivity of 90% and a specificity of 99.2% for high-grade serous carcinoma, and a sensitivity of over 90% and a specificity of over 95% in most benign pathological categories. Conclusion DCNNs are a promising technique for classifying different histologic types of ovarian tumors in US images and can provide valuable computer-aided information.
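Per-class sensitivity and specificity for a multiclass classifier, as reported above, are computed one-vs-rest from the confusion matrix. A small sketch with a hypothetical 3-class matrix (made-up counts, not the study's data):

```python
def per_class_rates(cm):
    """One-vs-rest sensitivity and specificity per class.

    cm[i][j] = number of samples of true class i predicted as class j.
    Returns a list of (sensitivity, specificity) pairs, one per class.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    rates = []
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # class k missed
        fp = sum(cm[i][k] for i in range(n)) - tp  # others called class k
        tn = total - tp - fn - fp
        rates.append((tp / (tp + fn), tn / (tn + fp)))
    return rates

# Hypothetical 3-class confusion matrix (rows = true, cols = predicted):
cm = [[18, 1, 1],
      [2, 27, 1],
      [0, 2, 48]]
for sens, spec in per_class_rates(cm):
    print(round(sens, 3), round(spec, 3))
```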
Affiliation(s)
- Meijing Wu
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Guangxia Cui
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Shuchang Lv
- The Department of Electronics and Information Engineering, Beihang University, Beijing, China
- Lijiang Chen
- The Department of Electronics and Information Engineering, Beihang University, Beijing, China
- Zongmei Tian
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Min Yang
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Wenpei Bai
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
8
Yoldemir T. Evaluation and management of endometriosis. Climacteric 2023; 26:248-255. [PMID: 37051875 DOI: 10.1080/13697137.2023.2190882]
Abstract
The initial diagnostic investigations for endometriosis are physical examination and pelvic ultrasound. The pelvic examination should include a speculum examination and vaginal palpation. Mobility, fixation and/or tenderness of the uterus and site-specific tenderness in the pelvis should be evaluated. Transvaginal ultrasound and pelvic magnetic resonance imaging are recommended to evaluate the extent of the endometriosis and to determine whether any urinary tract or bowel procedures might also be required during surgical resection. Quality of life should be assessed using the Endometriosis Health Profile-30, its short version EHP-5, or the generic quality of life questionnaire SF-36. Management of endometriosis is recommended when it has a functional impact (pain, infertility) or causes organ dysfunction. Many gynecological societies have published different guidelines for the evaluation and management of endometriosis. However, the complexity of this disease, together with the different available treatments, leads to significant discrepancies among the recommendations. Postmenopausal endometriosis should be considered when a patient has a history of symptoms before menopause, including dysmenorrhea, dyspareunia, dyschezia, infertility and chronic pelvic pain. Malignant transformation of endometriosis is estimated to occur in about 0.7-1.6% of women affected by endometriosis. Endometriosis is associated with an increased risk of ovarian cancer, specifically the clear cell, endometrioid and low-grade serous types.
Affiliation(s)
- T Yoldemir
- Department of Obstetrics and Gynaecology, Marmara University School of Medicine, Istanbul, Turkey
9
Pang J, Xiu W, Ma X. Application of Artificial Intelligence in the Diagnosis, Treatment, and Prognostic Evaluation of Mediastinal Malignant Tumors. J Clin Med 2023; 12:jcm12082818. [PMID: 37109155 PMCID: PMC10144939 DOI: 10.3390/jcm12082818] [Received: 12/21/2022] [Revised: 03/01/2023] [Accepted: 04/06/2023]
Abstract
Artificial intelligence (AI), also known as machine intelligence, is widely utilized in the medical field and is promoting medical advances. Malignant tumors are a critical focus of medical research and of efforts to improve clinical diagnosis and treatment. Mediastinal malignancy attracts increasing attention today due to the difficulties of its treatment. Combined with artificial intelligence, challenges from drug discovery to survival improvement are steadily being overcome. This article reviews the progress of AI in the diagnosis, treatment, and prognostic evaluation of mediastinal malignant tumors based on current literature findings.
Affiliation(s)
- Jiyun Pang
- Division of Thoracic Tumor Multimodality Treatment, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- State Key Laboratory of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- West China School of Medicine, Sichuan University, Chengdu 610041, China
- Weigang Xiu
- Division of Thoracic Tumor Multimodality Treatment, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- State Key Laboratory of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- West China School of Medicine, Sichuan University, Chengdu 610041, China
- Xuelei Ma
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
10
Koch AH, Jeelof LS, Muntinga CLP, Gootzen TA, van de Kruis NMA, Nederend J, Boers T, van der Sommen F, Piek JMJ. Analysis of computer-aided diagnostics in the preoperative diagnosis of ovarian cancer: a systematic review. Insights Imaging 2023; 14:34. [PMID: 36790570 PMCID: PMC9931983 DOI: 10.1186/s13244-022-01345-x] [Received: 05/17/2022] [Accepted: 12/05/2022]
Abstract
OBJECTIVES Different noninvasive imaging methods to predict the chance of malignancy of ovarian tumors are available. However, their predictive value is limited by reviewer subjectivity, so more objective prediction models are needed. Computer-aided diagnostics (CAD) could be such a model, since it lacks the bias that comes with currently used models. In this study, we evaluated the available data on CAD in predicting the chance of malignancy of ovarian tumors. METHODS We searched for all published studies investigating the diagnostic accuracy of CAD based on ultrasound, CT and MRI in pre-surgical patients with an ovarian tumor, compared to reference standards. RESULTS In the thirty-one included studies, features extracted from three different imaging techniques were used in different mathematical models. All studies assessed machine-learning-based CAD on ultrasound, CT or MRI images. Per imaging method (ultrasound, CT and MRI, respectively), sensitivities ranged from 40.3-100%, 84.6-100% and 66.7-100%, and specificities ranged from 76.3-100%, 69-100% and 77.8-100%. Results could not be pooled due to broad heterogeneity. Although the majority of studies report high performance, they are at considerable risk of overfitting due to the absence of an independent test set. CONCLUSION Based on this literature review, CAD for ultrasound, CT and MRI seems promising to aid physicians in assessing ovarian tumors through its objective and potentially cost-effective character. However, performance should be evaluated per imaging technique. Prospective and larger datasets with external validation are desired to make the results generalizable.
Affiliation(s)
- Anna H. Koch
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
- Lara S. Jeelof
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
- Caroline L. P. Muntinga
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
- T. A. Gootzen
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
- Nienke M. A. van de Kruis
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
- Joost Nederend
- Department of Radiology, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
- Tim Boers
- Department of Electrical Engineering, VCA Group, University of Technology Eindhoven, 5600 MB Eindhoven, Noord-Brabant, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, VCA Group, University of Technology Eindhoven, 5600 MB Eindhoven, Noord-Brabant, The Netherlands
- Jurgen M. J. Piek
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
11
Ma L, Huang L, Chen Y, Zhang L, Nie D, He W, Qi X. AI diagnostic performance based on multiple imaging modalities for ovarian tumor: A systematic review and meta-analysis. Front Oncol 2023; 13:1133491. [PMID: 37152032 PMCID: PMC10160474 DOI: 10.3389/fonc.2023.1133491] [Received: 12/29/2022] [Accepted: 01/30/2023]
Abstract
Background In recent years, AI has been applied to disease diagnosis in much medical and engineering research. We aimed to explore the diagnostic performance of models based on different imaging modalities for ovarian cancer. Methods PubMed, EMBASE, Web of Science, and the Wanfang Database were searched. The search scope was all published Chinese- and English-language literature on AI diagnosis of benign and malignant ovarian tumors. The literature was screened and data extracted according to inclusion and exclusion criteria. QUADAS-2 was used to evaluate the quality of the included literature, STATA 17.0 was used for statistical analysis, and forest plots and funnel plots were drawn to visualize the study results. Results A total of 11 studies were included: 3 modeled on ultrasound, 6 on MRI, and 2 on CT. The pooled AUROCs of studies based on ultrasound, MRI and CT were 0.94 (95% CI 0.88-1.00), 0.82 (95% CI 0.71-0.93) and 0.82 (95% CI 0.78-0.86), respectively, with I2 values of 99.92%, 99.91% and 92.64%. Funnel plots suggested no publication bias. Conclusion Models based on ultrasound had the best performance in the diagnosis of ovarian cancer.
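The AUROCs pooled above are computed per study from each model's predicted scores; a single study's AUROC can be obtained rank-based (the Mann-Whitney formulation: the probability that a random positive scores above a random negative, ties counted as one half). A minimal sketch on made-up scores:

```python
def auroc(scores_pos, scores_neg):
    """Rank-based AUROC: fraction of positive/negative pairs where the
    positive scores higher, counting ties as 0.5 (Mann-Whitney U form)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Made-up model scores for malignant (positive) and benign (negative) cases:
malignant = [0.9, 0.8, 0.7, 0.6]
benign = [0.5, 0.4, 0.7, 0.2]
print(auroc(malignant, benign))  # 14.5 winning half-pairs out of 16
```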
Affiliation(s)
- Lin Ma
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
- Liqiong Huang
- Department of Ultrasound, Chengdu First People's Hospital, Chengdu, China
- Yan Chen
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
- Lei Zhang
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
- Dunli Nie
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
- Wenjing He
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu, China
- Xiaoxue Qi
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
- Correspondence: Xiaoxue Qi
12
Wang S, Xu X, Du H, Chen Y, Mei W. Attention feature fusion methodology with additional constraint for ovarian lesion diagnosis on magnetic resonance images. Med Phys 2023; 50:297-310. [PMID: 35975618 DOI: 10.1002/mp.15937] [Received: 10/21/2021] [Revised: 06/25/2022] [Accepted: 07/24/2022]
Abstract
PURPOSE It is challenging for radiologists and gynecologists to identify the type of ovarian lesions by reading magnetic resonance (MR) images. Recently developed convolutional neural networks (CNNs) have made great progress in computer vision, but their architectures still need modification if they are used to process medical images. This study aims to improve the feature extraction capability of CNNs, thus promoting diagnostic performance in discriminating between benign and malignant ovarian lesions. METHODS We introduce a feature fusion architecture and insert attention models into the neural network. The features extracted from different middle layers are integrated with reoptimized spatial and channel weights. We add a loss function to constrain the additional probability vector generated from the integrated features, thus guiding the middle layers to emphasize useful information. We analyzed 159 lesions imaged by dynamic contrast-enhanced MR imaging (DCE-MRI), including 73 benign lesions and 86 malignant lesions. Senior radiologists selected and labeled the tumor regions based on the pathology reports. The tumor regions were then cropped into 7494 nonoverlapping image patches for training and testing. The type of a single tumor was determined by the average probability scores of the image patches belonging to it. RESULTS We implemented fivefold cross-validation to characterize our proposed method, and the distribution of performance metrics was reported. For all the test image patches, the average accuracy of our method is 70.5% with an average area under the curve (AUC) of 0.785, versus 69.4% and 0.773 for the baseline; for the diagnosis of single tumors, our model achieved an average accuracy of 82.4% and an average AUC of 0.916, again better than the baseline (81.8% and 0.899). Moreover, we evaluated the performance of our proposed method using different CNN backbones and different attention mechanisms.
CONCLUSIONS The texture features extracted from different middle layers are crucial for ovarian lesion diagnosis. Our proposed method can enhance the feature extraction capabilities of different layers of the network, thereby improving diagnostic performance.
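The channel- and spatial-weighting idea described in this abstract can be sketched as follows. This is a minimal NumPy illustration of attention-weighted fusion of two middle-layer feature maps, not the authors' implementation; the array shapes, the softmax channel weighting, the min-max spatial normalization, and the final concatenation are all assumptions made for the sketch.

```python
import numpy as np

def channel_weights(feat):
    # Squeeze: global average pooling over spatial dims -> one value per channel.
    pooled = feat.mean(axis=(1, 2))
    # Normalize pooled responses into channel weights that sum to 1.
    e = np.exp(pooled - pooled.max())
    return e / e.sum()

def spatial_weights(feat):
    # Average over channels, then min-max normalize to a spatial map in [0, 1].
    m = feat.mean(axis=0)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

def fuse(feat_a, feat_b):
    """Fuse two middle-layer feature maps with reweighted channels and pixels."""
    fused = []
    for f in (feat_a, feat_b):
        cw = channel_weights(f)[:, None, None]   # (C, 1, 1)
        sw = spatial_weights(f)[None, :, :]      # (1, H, W)
        fused.append(f * cw * sw)
    return np.concatenate(fused, axis=0)         # (2C, H, W)

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))
b = rng.standard_normal((8, 4, 4))
out = fuse(a, b)
print(out.shape)  # (16, 4, 4)
```

The auxiliary loss mentioned in the abstract would be applied to a probability vector predicted from `out`, alongside the main classification loss.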
Affiliation(s)
- Shuai Wang: School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Xiaojuan Xu: Department of Diagnostic Imaging, National Cancer Center, National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huiqian Du: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Yan Chen: Department of Diagnostic Imaging, National Cancer Center, National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wenbo Mei: School of Information and Electronics, Beijing Institute of Technology, Beijing, China
13
Wu M, Zhao Y, Dong X, Jin Y, Cheng S, Zhang N, Xu S, Gu S, Wu Y, Yang J, Yao L, Wang Y. Artificial intelligence-based preoperative prediction system for diagnosis and prognosis in epithelial ovarian cancer: A multicenter study. Front Oncol 2022; 12:975703. [PMID: 36212430 PMCID: PMC9532858 DOI: 10.3389/fonc.2022.975703] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 08/11/2022] [Indexed: 11/13/2022] Open
Abstract
Background Ovarian cancer (OC) is the most lethal gynecological malignancy, with limited early screening methods and poor prognosis. Artificial intelligence technology has made great breakthroughs in cancer diagnosis. Purpose We aim to develop an interpretable machine learning (ML) prediction model for the diagnosis and prognosis of epithelial ovarian cancer (EOC) based on a variety of biomarkers. Methods A total of 521 patients with EOC and 144 patients with benign gynecological diseases were enrolled, comprising derivation datasets and an external validation cohort. Predictions were generated by 9 supervised ML methods using 34 parameters. The prediction rationale of the best-performing ML model was interpreted using the SHapley Additive exPlanations (SHAP) algorithm. In addition, the prognosis of EOC was analyzed by unsupervised clustering and Kaplan–Meier (KM) survival analysis. Results ML technology was superior to conventional logistic regression in predicting EOC diagnosis, and XGBoost performed best on the external validation datasets. The AUC values for distinguishing EOC from benign disease and for determining pathological type, grade, and clinical stage were 0.958 (0.926-0.989), 0.792 (0.701-0.883), 0.819 (0.687-0.950), and 0.68 (0.573-0.788), respectively. For CA-125-negative EOC patients, the AUC of the XGBoost model was 0.835 (0.763-0.907). Unsupervised cluster analysis identified EOC subgroups with significantly poorer overall survival (p < 0.0001) and recurrence-free survival (p < 0.0001). Conclusions Based on preoperative characteristics, we showed that ML algorithms can provide an acceptable diagnosis and prognosis prediction model for EOC patients. Meanwhile, SHAP analysis can improve the interpretability of ML models and contribute to precision medicine.
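The Kaplan–Meier analysis used for the prognosis subgroups can be sketched with a minimal pure-Python estimator; the follow-up data below are hypothetical and for illustration only.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : follow-up time per patient
    events : 1 if the event (death/recurrence) occurred, 0 if censored
    Returns (event_times, survival_probabilities).
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # Group all patients sharing this follow-up time.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            # Survival drops by the fraction of at-risk patients who had the event.
            surv *= 1.0 - deaths / at_risk
            out_t.append(t)
            out_s.append(surv)
        at_risk -= n_at_t  # censored and deceased patients both leave the risk set
    return out_t, out_s

# Hypothetical follow-up times in months; 0 marks a censored patient.
t, s = kaplan_meier([6, 6, 7, 10, 13, 16], [1, 0, 1, 1, 0, 1])
```

Comparing two such curves between the unsupervised clusters (e.g., with a log-rank test) is what yields the reported p-values.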
Affiliation(s)
- Meixuan Wu: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai, China; Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Yaqian Zhao: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Xuhui Dong: Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Yue Jin: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Shanshan Cheng: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Nan Zhang: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Shilin Xu: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Sijia Gu: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Yongsong Wu: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Jiani Yang: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai, China
- Liangqing Yao: Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Yu Wang: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai, China
- *Correspondence: Yu Wang; Liangqing Yao; Jiani Yang
14
Advances in the Preoperative Identification of Uterine Sarcoma. Cancers (Basel) 2022; 14:cancers14143517. [PMID: 35884577 PMCID: PMC9318633 DOI: 10.3390/cancers14143517] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 07/02/2022] [Accepted: 07/06/2022] [Indexed: 12/04/2022] Open
Abstract
Simple Summary Uterine sarcomas are lethal malignant tumors that lack specific diagnostic criteria because they present similarly to uterine fibroids; clinicians are therefore prone to misdiagnosis or incorrect treatment, which leads to rapid tumor progression and increased metastatic propensity. In recent years, with improvements in medical care and awareness of uterine sarcoma, a growing number of studies have proposed new methods for the preoperative differentiation of uterine sarcoma from uterine fibroids. This review outlines up-to-date knowledge about the preoperative differentiation of uterine sarcoma and uterine fibroids, including laboratory tests, imaging examinations, radiomics and machine learning-related methods, preoperative biopsy, integrated models and other relevant emerging technologies, and provides recommendations for future research. Abstract Uterine sarcomas are rare malignant tumors of the uterus with a high degree of malignancy. Their clinical manifestations, imaging examination findings, and laboratory test results overlap with those of uterine fibroids. No reliable diagnostic criteria can distinguish uterine sarcomas from other uterine tumors, and the final diagnosis is usually only made after surgery based on histopathological evaluation. Conservative or minimally invasive treatment of patients with uterine sarcomas misdiagnosed preoperatively as uterine fibroids shortens patient survival. Herein, we summarize recent advances in the preoperative diagnosis of uterine sarcomas, including epidemiology and clinical manifestations, laboratory tests, imaging examinations, radiomics and machine learning-related methods, preoperative biopsy, integrated models and other relevant emerging technologies.
15
Chen H, Yang BW, Qian L, Meng YS, Bai XH, Hong XW, He X, Jiang MJ, Yuan F, Du QW, Feng WW. Deep Learning Prediction of Ovarian Malignancy at US Compared with O-RADS and Expert Assessment. Radiology 2022; 304:106-113. [PMID: 35412367 DOI: 10.1148/radiol.211367] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Background Deep learning (DL) algorithms could improve the classification of ovarian tumors assessed with multimodal US. Purpose To develop DL algorithms for the automated classification of benign versus malignant ovarian tumors assessed with US and to compare algorithm performance to Ovarian-Adnexal Reporting and Data System (O-RADS) and subjective expert assessment for malignancy. Materials and Methods This retrospective study included consecutive women with ovarian tumors undergoing gray scale and color Doppler US from January 2019 to November 2019. Histopathologic analysis was the reference standard. The data set was divided into training (70%), validation (10%), and test (20%) sets. Algorithms modified from residual network (ResNet) with two fusion strategies (feature fusion [hereafter, DLfeature] or decision fusion [hereafter, DLdecision]) were developed. DL prediction of malignancy was compared with O-RADS risk categorization and expert assessment by area under the receiver operating characteristic curve (AUC) analysis in the test set. Results A total of 422 women (mean age, 46.4 years ± 14.8 [SD]) with 304 benign and 118 malignant tumors were included; there were 337 women in the training and validation data set and 85 women in the test data set. DLfeature had an AUC of 0.93 (95% CI: 0.85, 0.97) for classifying malignant from benign ovarian tumors, comparable with O-RADS (AUC, 0.92; 95% CI: 0.85, 0.97; P = .88) and expert assessment (AUC, 0.97; 95% CI: 0.91, 0.99; P = .07), and similar to DLdecision (AUC, 0.90; 95% CI: 0.82, 0.96; P = .29). DLdecision, DLfeature, O-RADS, and expert assessment achieved sensitivities of 92%, 92%, 92%, and 96%, respectively, and specificities of 80%, 85%, 89%, and 87%, respectively, for malignancy. 
Conclusion Deep learning algorithms developed by using multimodal US images may distinguish malignant from benign ovarian tumors with diagnostic performance comparable to subjective expert assessment and Ovarian-Adnexal Reporting and Data System assessment. © RSNA, 2022. Online supplemental material is available for this article.
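The two fusion strategies compared in this study can be illustrated with a toy example; the logits and head weights below are hypothetical and do not reproduce the actual DLfeature/DLdecision architectures, only the contrast between averaging class probabilities (decision fusion) and classifying concatenated features (feature fusion).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from two network branches (e.g., gray-scale and Doppler
# views) for 3 tumors x 2 classes (benign, malignant).
logits_a = np.array([[2.0, 0.5], [0.2, 1.5], [1.0, 1.1]])
logits_b = np.array([[1.5, 0.1], [0.0, 2.0], [0.9, 1.4]])

# Decision fusion: average the per-branch class probabilities.
p_decision = (softmax(logits_a) + softmax(logits_b)) / 2

# Feature fusion (sketch): concatenate branch features, apply one shared head.
feat = np.concatenate([logits_a, logits_b], axis=1)            # (3, 4)
w = np.array([[0.5, 0.0], [0.0, 0.5], [0.5, 0.0], [0.0, 0.5]])  # toy head
p_feature = softmax(feat @ w)

pred = p_decision.argmax(axis=1)  # 0 = benign, 1 = malignant
```

Feature fusion lets the head see both branches jointly before committing to a class, whereas decision fusion only combines each branch's final opinion.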
Affiliation(s)
- Hui Chen, Bo-Wen Yang, Le Qian, Yi-Shuang Meng, Xiang-Hui Bai, Xiao-Wei Hong, Xin He, Mei-Jiao Jiang, Fei Yuan, Qin-Wen Du, Wei-Wei Feng
- From the Department of Obstetrics and Gynecology (H.C., B.W.Y., L.Q., X.H., M.J.J., Q.W.D., W.W.F.) and Department of Pathology (F.Y.), Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin 2nd Road, Huangpu District, Shanghai 200025, China; and Philips Research Asia Shanghai, Shanghai, China (Y.S.M., X.H.B., X.W.H.)
16
SC-Dynamic R-CNN: A Self-Calibrated Dynamic R-CNN Model for Lung Cancer Lesion Detection. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:9452157. [PMID: 35387227 PMCID: PMC8979747 DOI: 10.1155/2022/9452157] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 02/24/2022] [Accepted: 03/06/2022] [Indexed: 12/21/2022]
Abstract
Lung cancer has complex biological characteristics and a high degree of malignancy. It has long been the number one “killer” among cancers, threatening human life and health. The diagnosis and early treatment of lung cancer still require improvement and further development. Given its high morbidity and mortality, there is an urgent need for an accurate diagnostic method. However, existing computer-aided detection systems have complicated workflows and low detection accuracy. To solve this problem, this paper proposes a two-stage detection method based on the dynamic region-based convolutional neural network (Dynamic R-CNN). We divide lung cancer into squamous cell carcinoma, adenocarcinoma, and small cell carcinoma. By adding a self-calibrated convolution module to the feature network, we extract richer lung cancer features and propose a new regression loss function to further improve detection performance. After experimental verification, the mAP (mean average precision) of the model reaches 88.1% on the lung cancer dataset, and it performs particularly well at high IoU (intersection over union) thresholds. This method performs well in the detection of lung cancer, can improve the efficiency of doctors' diagnoses, and can avoid false and missed detections to a certain extent.
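The IoU threshold underlying detection metrics like mAP can be shown with a minimal sketch; the boxes below are hypothetical. A predicted lesion box only counts as a true positive if its overlap with a ground-truth box clears the threshold, so performing well "at high IoU thresholds" means the predicted boxes are tightly localized.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical ground-truth lesion box and a slightly offset prediction.
gt   = (10, 10, 50, 50)
pred = (20, 20, 60, 60)
score = iou(gt, pred)
hit_at_50 = score >= 0.5    # strict threshold: counted as a miss
hit_at_25 = score >= 0.25   # loose threshold: counted as a hit
```

mAP averages precision over recall levels (and, for metrics like mAP@[.5:.95], over a range of such IoU thresholds).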
17
Yu M, Han M, Li X, Wei X, Jiang H, Chen H, Yu R. Adaptive soft erasure with edge self-attention for weakly supervised semantic segmentation: Thyroid ultrasound image case study. Comput Biol Med 2022; 144:105347. [PMID: 35276549 DOI: 10.1016/j.compbiomed.2022.105347] [Citation(s) in RCA: 53] [Impact Index Per Article: 26.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 02/17/2022] [Accepted: 02/22/2022] [Indexed: 12/01/2022]
Abstract
Weakly supervised segmentation for medical images eases the reliance of models on pixel-level annotation while advancing the field of computer-aided diagnosis. However, the differences in nodule size in thyroid ultrasound images and the limitations of class activation maps in weakly supervised segmentation methods typically lead to under- and/or over-segmentation in real predictions. To alleviate this problem, we propose a weakly supervised segmentation neural network. The new method is based on a dual-branch soft erase module that expands the foreground response region while constraining erroneous expansion of the foreground region through the enhancement of background features. The sensitivity of the network to nodule scale is further enhanced by a scale feature adaptation module, which in turn generates integral, high-quality segmentation masks. In addition, although the nodule area can be significantly expanded by the soft erase and scale feature adaptation modules, the activation effect at nodule edges remains unsatisfactory, so we further add an edge-based attention mechanism to strengthen edge segmentation. Experiments on a thyroid ultrasound image dataset showed that our approach significantly outperformed existing weakly supervised semantic segmentation methods, e.g., it was 5.9% and 6.3% more accurate than the second-best results in terms of Jaccard and Dice coefficients, respectively.
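The soft-erasure idea can be sketched on a class activation map (CAM): instead of zeroing the most discriminative responses, they are only scaled down, pushing the network toward less discriminative nodule regions without destroying the evidence it already found. The threshold and scaling factor below are illustrative, not the paper's values.

```python
import numpy as np

def soft_erase(cam, thresh=0.7, keep=0.3):
    """Softly suppress the most discriminative CAM responses.

    Hard erasure would set activations above `thresh` to zero; soft erasure
    multiplies them by `keep` instead, so some foreground evidence survives.
    """
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    mask = cam >= thresh
    out = cam.copy()
    out[mask] *= keep
    return out

# Toy 2x2 activation map: two strong responses, two weak ones.
cam = np.array([[0.1, 0.9],
                [0.8, 0.2]])
erased = soft_erase(cam)
```

Training on the erased map encourages activations to spread into the previously weak regions, which is how the foreground response region gets expanded.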
Affiliation(s)
- Mei Yu: College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin, China
- Ming Han: College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin, China
- Xuewei Li: College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin, China
- Xi Wei: Tianjin Medical University Cancer Hospital, Tianjin, China
- Han Jiang: The OpenBayes (Tianjin) IT Co., Ltd., Tianjin, China
- Huiling Chen: College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, Zhejiang, China
- Ruiguo Yu: College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin, China
18
Deep learning-enabled pelvic ultrasound images for accurate diagnosis of ovarian cancer in China: a retrospective, multicentre, diagnostic study. THE LANCET DIGITAL HEALTH 2022; 4:e179-e187. [DOI: 10.1016/s2589-7500(21)00278-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 10/15/2021] [Accepted: 12/02/2021] [Indexed: 11/19/2022]
19
Wang H, Liu C, Zhao Z, Zhang C, Wang X, Li H, Wu H, Liu X, Li C, Qi L, Ma W. Application of Deep Convolutional Neural Networks for Discriminating Benign, Borderline, and Malignant Serous Ovarian Tumors From Ultrasound Images. Front Oncol 2021; 11:770683. [PMID: 34988015 PMCID: PMC8720926 DOI: 10.3389/fonc.2021.770683] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Accepted: 11/29/2021] [Indexed: 11/18/2022] Open
Abstract
Objective This study aimed to evaluate the performance of deep convolutional neural networks (DCNNs) in discriminating between benign, borderline, and malignant serous ovarian tumors (SOTs) on ultrasound (US) images. Material and Methods This retrospective study included 279 pathology-confirmed SOT US images from 265 patients from March 2013 to December 2016. Two- and three-class classification tasks based on US images were proposed to classify benign, borderline, and malignant SOTs using a DCNN. The two-class classification task was divided into two subtasks: benign vs. borderline and malignant (task A), and borderline vs. malignant (task B). Five DCNN architectures, namely VGG16, GoogLeNet, ResNet34, MobileNet, and DenseNet, were trained, and model performance before and after transfer learning was tested. Model performance was analyzed using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Results The best overall performance was achieved by the ResNet34 model, which also performed best after transfer learning. When classifying benign and non-benign tumors, the AUC was 0.96, the sensitivity was 0.91, and the specificity was 0.91. When discriminating malignant from borderline tumors, the AUC was 0.91, the sensitivity was 0.98, and the specificity was 0.74. The model had an overall accuracy of 0.75 in directly classifying the three categories of benign, malignant, and borderline SOTs, and a sensitivity of 0.89 for malignant tumors, which was better than the senior ultrasonographer's overall diagnostic accuracy of 0.67 and sensitivity of 0.75 for malignant tumors. Conclusion DCNN analysis of US images can provide complementary clinical diagnostic information and is thus a promising technique for effective differentiation of benign, borderline, and malignant SOTs.
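The sensitivity and specificity figures reported for tasks A and B follow directly from the confusion matrix. A minimal sketch, using hypothetical task-A labels (1 = borderline/malignant, 0 = benign) rather than the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on the positive class), and specificity."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # fraction of positives caught
        "specificity": tn / (tn + fp),   # fraction of negatives cleared
    }

# Hypothetical labels and predictions for six tumors.
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

The trade-off visible in the abstract (task B: sensitivity 0.98 vs. specificity 0.74) reflects a decision threshold tuned to rarely miss a malignancy at the cost of more false positives.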
Affiliation(s)
- Huiquan Wang: School of Electrical and Electronic Engineering, TianGong University, Tianjin, China
- Chunli Liu: School of Electrical and Electronic Engineering, TianGong University, Tianjin, China
- Zhe Zhao: School of Electrical and Electronic Engineering, TianGong University, Tianjin, China
- Chao Zhang: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin’s Clinical Research Center for Cancer, Tianjin, China; The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Xin Wang: Department of Epidemiology and Biostatistics, West China School of Public Health, Sichuan University, Chengdu, China
- Huiyang Li: The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Haixiao Wu: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin’s Clinical Research Center for Cancer, Tianjin, China; The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Xiaofeng Liu: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Chunxiang Li: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Lisha Qi: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Wenjuan Ma: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin’s Clinical Research Center for Cancer, Tianjin, China; The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- *Correspondence: Lisha Qi; Wenjuan Ma
20
Wang X, Li H, Wang L, Yu Y, Zhou H, Wang L, Song T. An improved YOLOv3 model for detecting location information of ovarian cancer from CT images. INTELL DATA ANAL 2021. [DOI: 10.3233/ida-205542] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Ovarian cancer is a malignant tumor that poses a serious threat to women’s lives. Computer-aided diagnosis (CAD) systems can classify the type of ovarian tumors, but few can provide exact location information for ovarian cancer cells. Recently, deep learning has become a popular technology for the automatic detection of cancer cells, particularly for detecting their locations. In this work, we propose a novel end-to-end network, the YOLO-OC (Ovarian Cancer) model, which extracts the characteristics of ovarian cancer more efficiently. In our method, deformable convolution is used to enhance the model’s ability to learn geometric deformation in space, and a Squeeze-and-Excitation (SE) module is added to automatically learn the importance of different channel features. Experiments are conducted on datasets collected from The Affiliated Hospital of Qingdao University Medical College, China. Experimental results show that our YOLO-OC model achieves 91.83%, 85.66%, and 73.82% mean average precision at mAP@.5, mAP@.75, and mAP@[.5,.95], respectively, outperforming Faster R-CNN, SSD, and RetinaNet in both accuracy and efficiency.
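The SE module mentioned here learns a per-channel importance gate from globally pooled features. A minimal NumPy sketch follows; the projection weights are random for illustration, whereas the actual model learns them end to end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation: reweight channels by learned importance.

    feat : (C, H, W) feature map
    w1   : (C, C//r) squeeze projection, w2 : (C//r, C) excite projection
    """
    squeezed = feat.mean(axis=(1, 2))                      # (C,) global average pool
    gate = sigmoid(np.maximum(squeezed @ w1, 0.0) @ w2)    # (C,) values in (0, 1)
    return feat * gate[:, None, None]                      # scale each channel

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((8, 2))
w2 = rng.standard_normal((2, 8))
out = se_block(feat, w1, w2)
```

Because the gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.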
Affiliation(s)
- Xun Wang: College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Hanlin Li: College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Lisheng Wang: College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Yongzhi Yu: College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Hao Zhou: Department of Gynaecology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Lei Wang: Department of Gynaecology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Tao Song: College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China; Department of Artificial Intelligence, Faculty of Computer Science, Polytechnical University of Madrid, Campus de Montegancedo, Madrid, Spain
21
Akazawa M, Hashimoto K. Artificial intelligence in gynecologic cancers: Current status and future challenges - A systematic review. Artif Intell Med 2021; 120:102164. [PMID: 34629152 DOI: 10.1016/j.artmed.2021.102164] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 05/28/2021] [Accepted: 08/31/2021] [Indexed: 11/30/2022]
Abstract
OBJECTIVE Over the past years, the application of artificial intelligence (AI) in medicine has increased rapidly, especially in diagnostics, and in the near future the role of AI in medicine will become progressively more important. In this study, we elucidated the state of AI research on gynecologic cancers. METHODS A search was conducted in three databases (PubMed, Web of Science, and Scopus) for research papers dated between January 2010 and December 2020. As keywords, we used "artificial intelligence," "deep learning," "machine learning," and "neural network," combined with "cervical cancer," "endometrial cancer," "uterine cancer," and "ovarian cancer." We excluded genomic and molecular research, as well as automated pap-smear diagnoses and digital colposcopy. RESULTS Of 1632 articles, 71 were eligible, including 34 on cervical cancer, 13 on endometrial cancer, three on uterine sarcoma, and 21 on ovarian cancer. A total of 35 studies (49%) used imaging data and 36 studies (51%) used value-based data as input. Magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, cytology, and hysteroscopy data were used as imaging data, while patients' backgrounds, blood examinations, tumor markers, and pathological indices were used as value-based data. The prediction targets were definitive diagnosis and prognostic outcome, including overall survival and lymph node metastasis. Dataset sizes were relatively small: 64 studies (90%) included fewer than 1000 cases, and the median size was 214 cases. Models were evaluated by accuracy, area under the receiver operating characteristic curve (AUC), and sensitivity/specificity. Owing to this heterogeneity, a quantitative synthesis was not appropriate in this review. CONCLUSIONS In gynecologic oncology, more studies have been conducted on cervical cancer than on ovarian and endometrial cancers.
Prognosis prediction was the main target in cervical cancer studies, whereas diagnosis was the primary target in ovarian cancer studies. The maturity of study designs for endometrial cancer and uterine sarcoma was unclear because of the small number of studies. The small dataset sizes and the lack of datasets for external validation were identified as challenges for these studies.
Affiliation(s)
- Munetoshi Akazawa: Department of Obstetrics and Gynecology, Tokyo Women's Medical University Medical Center East, Tokyo, Japan
- Kazunori Hashimoto: Department of Obstetrics and Gynecology, Tokyo Women's Medical University Medical Center East, Tokyo, Japan
22
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/13/2021] [Accepted: 06/18/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and by the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed as steadily as for other medical imaging modalities. Issues characteristic of US imaging, such as its manual operation and acoustic shadows, make image quality control difficult. In this review, we introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process for medical AI devices, and future perspectives on the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
23
Sone K, Toyohara Y, Taguchi A, Miyamoto Y, Tanikawa M, Uchino-Mori M, Iriyama T, Tsuruga T, Osuga Y. Application of artificial intelligence in gynecologic malignancies: A review. J Obstet Gynaecol Res 2021; 47:2577-2585. [PMID: 33973305; DOI: 10.1111/jog.14818]
Abstract
With the development of machine learning and deep learning models, artificial intelligence is now being applied to the field of medicine. In oncology, the use of artificial intelligence for the diagnostic evaluation of medical images such as radiographic images, omics analysis using genome data, and clinical information has been increasing in recent years. There have been increasing numbers of reports on the use of artificial intelligence in the field of gynecologic malignancies, and we introduce and review these studies. For cervical and endometrial cancers, the evaluation of medical images, such as colposcopy, hysteroscopy, and magnetic resonance images, using artificial intelligence is frequently reported. In ovarian cancer, many reports combine the assessment of medical images with the multi-omics analysis of clinical and genomic data using artificial intelligence. However, few study results can be implemented in clinical practice, and further research is needed in the future.
Affiliation(s)
- Kenbun Sone
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Yusuke Toyohara
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Ayumi Taguchi
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Yuichiro Miyamoto
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Michihiro Tanikawa
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Mayuyo Uchino-Mori
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Takayuki Iriyama
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tetsushi Tsuruga
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Yutaka Osuga
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
24
Guerriero S, Pascual M, Ajossa S, Neri M, Musa E, Graupera B, Rodriguez I, Alcazar JL. Artificial intelligence (AI) in the detection of rectosigmoid deep endometriosis. Eur J Obstet Gynecol Reprod Biol 2021; 261:29-33. [PMID: 33873085; DOI: 10.1016/j.ejogrb.2021.04.012]
Abstract
OBJECTIVES The aim of this study was to compare the accuracy of seven classical machine learning (ML) models trained with ultrasound (US) soft markers to raise suspicion of endometriotic bowel involvement. MATERIALS AND METHODS Input data for the models were retrieved from the database of a previously published study on bowel endometriosis performed on 333 patients. The following models were tested: k-nearest neighbors (k-NN), Naive Bayes, Neural Network (NNET-neuralnet), Support Vector Machine (SVM), Decision Tree, Random Forest, and Logistic Regression. The complete dataset was randomly split into a training dataset and a test dataset containing 67% and 33% of the original cases, respectively. All models were trained on the training dataset, their predictions were evaluated on the test dataset, and the best model was chosen based on its accuracy on the test dataset. The inputs used in all models were: age; presence of US signs of uterine adenomyosis; presence of an endometrioma; adhesions of the ovary to the uterus; presence of "kissing ovaries"; and absence of the sliding sign. All models were trained using the CARET package in R with ten repeated 10-fold cross-validations. Accuracy, sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values were calculated using a 50% threshold: intestinal involvement was considered present for any test case with an estimated probability greater than 0.5. RESULTS In the previous study from which the inputs were retrieved, 106 women had a final expert US diagnosis of rectosigmoid endometriosis. In terms of diagnostic accuracy, the best model was the Neural Network (accuracy, 0.73; sensitivity, 0.72; specificity, 0.73; PPV, 0.52; NPV, 0.86), but without significant differences from the others.
CONCLUSIONS The accuracy of ultrasound soft markers in raising suspicion of rectosigmoid endometriosis using artificial intelligence (AI) models was similar to that of the logistic model.
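The seven-model comparison described in this abstract can be sketched in Python with scikit-learn (the authors used R's CARET package; the model lineup, 67%/33% split, and 0.5 probability threshold follow the abstract, but the data below are synthetic stand-ins for the six soft markers, so the resulting numbers will not match the paper's):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Six soft-marker features, binary outcome (bowel involvement yes/no),
# simulated in place of the study's 333-patient database.
X, y = make_classification(n_samples=333, n_features=6, n_informative=4,
                           random_state=0)

# 67% training / 33% test random split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                          stratify=y, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Positive call = estimated probability > 0.5, per the abstract.
    pred = (model.predict_proba(X_te)[:, 1] > 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    results[name] = {
        "accuracy": (tp + tn) / len(y_te),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Best model selected by test-set accuracy, as in the study.
best = max(results, key=lambda m: results[m]["accuracy"])
```

The sketch omits the study's ten repeated 10-fold cross-validation during training (CARET's `trainControl(method = "repeatedcv")`), which tunes hyperparameters before the final test-set evaluation.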
Affiliation(s)
- Stefano Guerriero
- Centro Integrato di Procreazione Medicalmente Assistita (PMA) e Diagnostica Ostetrico-Ginecologica, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy; University of Cagliari, Cagliari, Italy.
- MariaAngela Pascual
- Department of Obstetrics, Gynecology, and Reproduction, Hospital Universitari Dexeus, Spain
- Silvia Ajossa
- Department of Obstetrics and Gynecology, University of Cagliari, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy
- Manuela Neri
- Department of Obstetrics and Gynecology, University of Cagliari, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy
- Eleonora Musa
- Department of Obstetrics and Gynecology, University of Cagliari, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy
- Betlem Graupera
- Department of Obstetrics, Gynecology, and Reproduction, Hospital Universitari Dexeus, Spain
- Ignacio Rodriguez
- Unidad Epidemiología y Estadística, Departamento de Obstetricia, Ginecología y Reproducción, Hospital Universitario Quirón Dexeus, Barcelona, Spain
- Juan Luis Alcazar
- Department of Obstetrics and Gynecology, Clínica Universidad de Navarra, School of Medicine, University of Navarra, Pamplona, Spain
25
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110; DOI: 10.1016/j.ejrad.2021.109717]
Abstract
Ultrasound (US), a flexible "green" imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, driven by the continual emergence of advanced ultrasonic technologies and well-established US-based digital health systems. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification, and monitoring of diseases, and diagnostic performance is inevitably reduced by the high operator-dependence intrinsic to US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessments of imaging data, showing high potential to assist physicians in obtaining more accurate and reproducible results. In this article, we first provide a general overview of AI, machine learning (ML), and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL, in US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, the musculoskeletal system, and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
26
Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020; 40:7-22. [PMID: 33152846; PMCID: PMC7758107; DOI: 10.14366/usg.20102]
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging applications of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts towards workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.
Affiliation(s)
- Jonghyon Yi
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Ho Kyung Kang
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Jae-Hyun Kwon
- DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Kang-Sik Kim
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Moon Ho Park
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Yeong Kyeong Seong
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Dong Woo Kim
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Byungeun Ahn
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Kilsu Ha
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Jinyong Lee
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Zaegyoo Hah
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Won-Chul Bang
- Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea
- Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea
27
Hassan M, Ali S, Alquhayz H, Safdar K. Developing intelligent medical image modality classification system using deep transfer learning and LDA. Sci Rep 2020; 10:12868. [PMID: 32732962; PMCID: PMC7393510; DOI: 10.1038/s41598-020-69813-2]
Abstract
Rapid advancement in imaging technology generates an enormous amount of heterogeneous medical data for disease diagnosis and rehabilitation. Radiologists may require related clinical cases from medical archives for analysis and disease diagnosis, but retrieving associated clinical cases automatically, efficiently, and accurately from a substantial medical image archive is challenging due to the diversity of diseases and imaging modalities. We propose an efficient and accurate approach to medical image modality classification that can be used for the retrieval of clinical cases from large medical repositories. The proposed approach uses transfer learning with a pre-trained ResNet50 deep learning model for optimized feature extraction, followed by linear discriminant analysis classification (TLRN-LDA). Extensive experiments were performed on the challenging standard benchmark ImageCLEF-2012 dataset of 31 classes. The developed approach yields an improved average classification accuracy of 87.91%, up to 10% higher than state-of-the-art approaches on the same dataset. Moreover, hand-crafted features were extracted for comparison, and the performance of the TLRN-LDA system demonstrates its effectiveness over state-of-the-art systems. The developed approach may be deployed in diagnostic centers to assist practitioners with accurate and efficient clinical case retrieval and disease diagnosis.
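The TLRN-LDA idea (frozen deep features followed by a linear discriminant analysis head) can be sketched as below; this is a toy stand-in, not the paper's pipeline: random Gaussian clusters substitute for the ResNet50 embeddings (a real system would take pooled features from a pre-trained CNN, e.g. torchvision's resnet50 with its final layer removed), and only the LDA classification stage is real. The class count mirrors the abstract's ImageCLEF-2012 setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes = 31   # ImageCLEF-2012 modality classes
per_class = 40   # illustrative sample count per class
feat_dim = 512   # stand-in feature size (2048 for real pooled ResNet50 features)

# Synthetic stand-ins for CNN embeddings: one Gaussian cluster per class.
centers = rng.normal(size=(n_classes, feat_dim))
X = np.vstack([c + 0.5 * rng.normal(size=(per_class, feat_dim)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# LDA acts as the classification head on the (frozen) deep features.
lda = LinearDiscriminantAnalysis()
lda.fit(X_tr, y_tr)
accuracy = lda.score(X_te, y_te)
```

On these well-separated synthetic clusters LDA scores near-perfectly; the paper's 87.91% is for the real ImageCLEF-2012 images, where the classes overlap far more.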
Affiliation(s)
- Mehdi Hassan
- Department of Computer Science, Air University, PAF Complex Sector E-9, Islamabad, Pakistan.
- Safdar Ali
- Directorate General National Repository, Islamabad, Pakistan
- Hani Alquhayz
- Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah, 11952, Saudi Arabia
- Khushbakht Safdar
- Al Nafees Medical College and Teaching Hospital, ISRA University, Lehtrar Road, Islamabad, Pakistan
28