1
Sun C, Fan E, Huang L, Zhang Z. Performance of radiomics in preoperative determination of malignant potential and Ki-67 expression levels in gastrointestinal stromal tumors: a systematic review and meta-analysis. Acta Radiol 2024; 65:1307-1318. [PMID: 39411915] [DOI: 10.1177/02841851241285958]
Abstract
Empirical evidence for radiomics predicting the malignant potential and Ki-67 expression in gastrointestinal stromal tumors (GISTs) is lacking. The aim of this review article was to explore the preoperative discriminative performance of radiomics in assessing the malignant potential, mitotic index, and Ki-67 expression levels of GISTs. We systematically searched PubMed, EMBASE, Web of Science, and the Cochrane Library. The search was conducted up to 30 September 2023. Quality assessment was performed using the Radiomics Quality Score (RQS). A total of 35 original studies were included in the analysis. Among them, 26 studies focused on determining malignant potential, three studies on mitotic index discrimination, and six studies on Ki-67 discrimination. In the validation set, the sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of radiomics in the determination of high malignant potential were 0.74 (95% CI=0.69-0.78), 0.90 (95% CI=0.83-0.94), and 0.81 (95% CI=0.14-0.99), respectively. For moderately to highly malignant potential, the sensitivity, specificity, and AUC were 0.86 (95% CI=0.83-0.88), 0.73 (95% CI=0.67-0.78), and 0.88 (95% CI=0.27-0.99), respectively. Regarding the determination of high mitotic index, the sensitivity, specificity, and AUC of radiomics were 0.86 (95% CI=0.83-0.88), 0.73 (95% CI=0.67-0.78), and 0.88 (95% CI=0.27-0.99), respectively. When determining high Ki-67 expression, the combined sensitivity, specificity, and AUC were 0.74 (95% CI=0.65-0.81), 0.81 (95% CI=0.74-0.86), and 0.84 (95% CI=0.61-0.95), respectively. Radiomics demonstrates promising discriminative performance in the preoperative assessment of malignant potential, mitotic index, and Ki-67 expression levels in GISTs.
Affiliation(s)
- Chengyu Sun: Department of Colorectal Surgery, Xuzhou Clinical School of Xuzhou Medical University, Xuzhou, Jiangsu, PR China
- Enguo Fan: State Key Laboratory of Medical Molecular Biology, Department of Microbiology and Parasitology, Institute of Basic Medical Sciences Chinese Academy of Medical Sciences, School of Basic Medicine Peking Union Medical College, Beijing, PR China
- Luqiao Huang: Department of Colorectal Surgery, Xuzhou Central Hospital, Xuzhou, Jiangsu, PR China
- Zhengguo Zhang: Department of Colorectal Surgery, Xuzhou Central Hospital, Xuzhou, Jiangsu, PR China
2
He H, Liu Y, Zhou X, Zhan J, Wang C, Shen Y, Chen H, Chen L, Zhang Q. Can incorporating image resolution into neural networks improve kidney tumor classification performance in ultrasound images? Med Biol Eng Comput 2024:10.1007/s11517-024-03188-8. [PMID: 39215783] [DOI: 10.1007/s11517-024-03188-8]
Abstract
Deep learning has been widely used in ultrasound image analysis, and it also benefits kidney ultrasound interpretation and diagnosis. However, the importance of ultrasound image resolution is often overlooked in deep learning methodologies. In this study, we integrate the ultrasound image resolution into a convolutional neural network and explore the effect of the resolution on the diagnosis of kidney tumors. In the process of integrating the image resolution information, we propose two different approaches to narrow the semantic gap between the features extracted by the neural network and the resolution features. In the first approach, the resolution is directly concatenated with the features extracted by the neural network. In the second approach, the features extracted by the neural network are first dimensionally reduced and then combined with the resolution features to form new composite features. We compare these two resolution-aware approaches with the method that does not use the resolution on a kidney tumor dataset of 926 images, consisting of 211 images of benign kidney tumors and 715 images of malignant kidney tumors. The area under the receiver operating characteristic curve (AUC) of the method without the resolution is 0.8665, and the AUCs of the two approaches incorporating the resolution are 0.8926 (P < 0.0001) and 0.9135 (P < 0.0001), respectively. This study has established end-to-end kidney tumor classification systems and has demonstrated the benefits of integrating image resolution, showing that incorporating image resolution into neural networks can more accurately distinguish between malignant and benign kidney tumors in ultrasound images.
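A minimal sketch (PyTorch) of the two fusion strategies described in this abstract, assuming a generic backbone; the network choice, feature dimensions, and layer names are illustrative assumptions rather than the authors' implementation:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class ResolutionFusionNet(nn.Module):
        """Binary kidney-tumor classifier that appends the image resolution to CNN features."""
        def __init__(self, reduce_features: bool = False, reduced_dim: int = 32):
            super().__init__()
            backbone = models.resnet18(weights=None)                       # assumed backbone
            self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # outputs 512-d features
            self.reduce_features = reduce_features
            if reduce_features:
                # Approach 2: reduce the image features before fusing with the resolution
                self.reducer = nn.Linear(512, reduced_dim)
                fused_dim = reduced_dim + 1
            else:
                # Approach 1: concatenate the resolution directly with the raw features
                fused_dim = 512 + 1
            self.classifier = nn.Linear(fused_dim, 2)                      # benign vs. malignant

        def forward(self, image: torch.Tensor, resolution: torch.Tensor):
            feats = self.encoder(image).flatten(1)                         # (B, 512)
            if self.reduce_features:
                feats = torch.relu(self.reducer(feats))                    # (B, reduced_dim)
            fused = torch.cat([feats, resolution], dim=1)                  # resolution given as (B, 1)
            return self.classifier(fused)

    # Example: logits = ResolutionFusionNet(reduce_features=True)(images, resolution.unsqueeze(1))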
Affiliation(s)
- Haihao He
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Yuhan Liu
- Department of Ultrasound, Huadong Hospital, Fudan University, 211 West Yan'an Road, Shanghai, 200040, China
| | - Xin Zhou
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Jia Zhan
- Department of Ultrasound, Huadong Hospital, Fudan University, 211 West Yan'an Road, Shanghai, 200040, China
| | - Changyan Wang
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Yiwen Shen
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Haobo Chen
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Lin Chen
- Department of Ultrasound, Huadong Hospital, Fudan University, 211 West Yan'an Road, Shanghai, 200040, China.
| | - Qi Zhang
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China.
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China.
| |
3
Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: a systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. [PMID: 38677080] [DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search following PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but adoption of machine and deep learning has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
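As an illustration of two of the harmonization steps named above (grayscale normalization and resampling to a common voxel spacing), a small sketch follows; the libraries and parameter values are assumptions for demonstration, not taken from any reviewed study:

    import numpy as np
    import SimpleITK as sitk

    def normalize_grayscale(img: np.ndarray, eps: float = 1e-8) -> np.ndarray:
        """Z-score intensity normalization so images from different scanners share a common scale."""
        return (img - img.mean()) / (img.std() + eps)

    def resample_isotropic(image: sitk.Image, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
        """Resample a volume to a common voxel spacing before feature extraction."""
        orig_spacing = image.GetSpacing()
        orig_size = image.GetSize()
        new_size = [int(round(sz * osp / nsp))
                    for sz, osp, nsp in zip(orig_size, orig_spacing, spacing)]
        return sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                             image.GetOrigin(), spacing, image.GetDirection(),
                             0.0, image.GetPixelID())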
Affiliation(s)
- Silvia Seoni: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
4
Zhuo M, Tang Y, Guo J, Qian Q, Xue E, Chen Z. Reply to comment on predicting the risk stratification of gastrointestinal stromal tumors using machine learning-based ultrasound radiomics. J Med Ultrason (2001) 2024; 51:377-378. [PMID: 38466516] [DOI: 10.1007/s10396-024-01425-z]
Affiliation(s)
- Minling Zhuo: Department of Ultrasound, Fujian Medical University Affiliated Union Hospital, No. 29 Xinquan Road, Fuzhou, 350001, Fujian, China
- Yi Tang: Department of Ultrasound, Fujian Medical University Affiliated Union Hospital, No. 29 Xinquan Road, Fuzhou, 350001, Fujian, China
- Jingjing Guo: Department of Ultrasound, Fujian Medical University Affiliated Union Hospital, No. 29 Xinquan Road, Fuzhou, 350001, Fujian, China
- Qingfu Qian: Department of Ultrasound, Fujian Medical University Affiliated Union Hospital, No. 29 Xinquan Road, Fuzhou, 350001, Fujian, China
- Ensheng Xue: Department of Ultrasound, Fujian Medical University Affiliated Union Hospital, No. 29 Xinquan Road, Fuzhou, 350001, Fujian, China
- Zhikui Chen: Department of Ultrasound, Fujian Medical University Affiliated Union Hospital, No. 29 Xinquan Road, Fuzhou, 350001, Fujian, China
5
Huang SC, Pareek A, Jensen M, Lungren MP, Yeung S, Chaudhari AS. Self-supervised learning for medical image classification: a systematic review and implementation guidelines. NPJ Digit Med 2023; 6:74. [PMID: 37100953] [PMCID: PMC10131505] [DOI: 10.1038/s41746-023-00811-0]
Abstract
Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which is both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning has the potential to make significant contributions to the development of robust medical imaging models through its ability to learn useful insights from copious medical datasets without labels. In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and ArXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to their development of medical imaging classification models.
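For illustration, a compact sketch of one widely used self-supervised strategy covered by reviews of this kind (SimCLR-style contrastive pretraining); the loss form and temperature value are generic assumptions, not details taken from the cited review:

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
        """Contrastive (NT-Xent) loss between embeddings of two augmented views of the same images."""
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)             # (2N, D), unit norm
        sim = z @ z.t() / temperature                                  # pairwise cosine similarities
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float('-inf'))                     # exclude self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
        return F.cross_entropy(sim, targets)                           # positive pair = the other view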
Affiliation(s)
- Shih-Cheng Huang: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Anuj Pareek: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Malte Jensen: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Matthew P Lungren: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA; Department of Radiology, Stanford University, Stanford, CA, USA
- Serena Yeung: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA; Department of Computer Science, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Clinical Excellence Research Center, Stanford University School of Medicine, Stanford, CA, USA
- Akshay S Chaudhari: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA; Department of Radiology, Stanford University, Stanford, CA, USA; Stanford Cardiovascular Institute, Stanford University, Stanford, CA, USA
6
Artificial intelligence-assisted endoscopic ultrasound in the diagnosis of gastrointestinal stromal tumors: a meta-analysis. Surg Endosc 2023; 37:1649-1657. [PMID: 36100781] [DOI: 10.1007/s00464-022-09597-w]
Abstract
BACKGROUND AND AIMS Endoscopic ultrasonography (EUS) is useful for the diagnosis of gastrointestinal stromal tumors (GISTs), but is limited by subjective interpretation. Studies on artificial intelligence (AI)-assisted diagnosis are under development. Here, we used a meta-analysis to evaluate the diagnostic performance of AI in the diagnosis of GISTs using EUS images. METHODS PubMed, Ovid Medline, Embase, Web of Science, and the Cochrane Library databases were searched for studies on EUS using AI for the diagnosis of GISTs, and a meta-analysis was performed to examine the accuracy. RESULTS Overall, 7 studies were included in our meta-analysis. Data from a total of 2431 patients, comprising more than 36,186 images, were used as the overall dataset, of which 480 patients were used for the final testing. The pooled sensitivity, specificity, and positive and negative likelihood ratios (LR) of AI-assisted EUS for differentiating GISTs from other submucosal tumors (SMTs) were 0.92 (95% confidence interval [CI] 0.89-0.95), 0.82 (95% CI 0.75-0.87), 4.55 (95% CI 2.64-7.84), and 0.12 (95% CI 0.07-0.20), respectively. The summary diagnostic odds ratio (DOR) and the area under the curve were 64.70 (95% CI 23.83-175.69) and 0.950 (Q* = 0.891), respectively. CONCLUSIONS AI-assisted EUS showed high accuracy for the automatic endoscopic diagnosis of GISTs and could be used as a valuable complementary method for the differentiation of SMTs in the future.
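For reference, the summary metrics reported above are related by the standard formulas in the sketch below; the inputs are placeholders, and because pooled meta-analytic estimates are fitted across studies they will not reproduce these relations exactly:

    def diagnostic_metrics(sensitivity: float, specificity: float):
        """Derive likelihood ratios and the diagnostic odds ratio from sensitivity and specificity."""
        lr_pos = sensitivity / (1 - specificity)      # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity      # negative likelihood ratio
        dor = lr_pos / lr_neg                         # diagnostic odds ratio
        return lr_pos, lr_neg, dor

    # Example with illustrative values (not a re-analysis of the pooled estimates):
    print(diagnostic_metrics(0.92, 0.82))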
7
Wang Y, Wang Y, Ren J, Jia L, Ma L, Yin X, Yang F, Gao BL. Malignancy risk of gastrointestinal stromal tumors evaluated with noninvasive radiomics: a multi-center study. Front Oncol 2022; 12:966743. [PMID: 36052224] [PMCID: PMC9425090] [DOI: 10.3389/fonc.2022.966743]
Abstract
Purpose This study aimed to investigate the diagnostic efficacy of radiomics models based on enhanced CT images in differentiating the malignant risk of gastrointestinal stromal tumors (GISTs), in comparison with a clinical indicators model and traditional CT diagnostic criteria. Materials and methods A total of 342 patients with histopathologically confirmed GISTs were enrolled from five medical centers. Data of patients from two centers comprised the training group (n=196), and data from the remaining three centers constituted the validation group (n=146). After CT image segmentation and feature extraction and selection, an arterial phase model and a venous phase model were established. The maximum diameter of the tumor and internal necrosis were used to establish a clinical indicators model. The traditional CT diagnostic criteria were applied for classification of the malignant potential of the tumors. The performance of the four models was assessed using the receiver operating characteristic curve. Results In the training group, the areas under the curve (AUCs) of the arterial phase model, venous phase model, clinical indicators model, and traditional CT diagnostic criteria were 0.930 (95% confidence interval [CI] 0.895-0.965), 0.933 (95% CI 0.898-0.967), 0.917 (95% CI 0.872-0.961), and 0.782 (95% CI 0.717-0.848), respectively. In the validation group, the AUCs of the models were 0.960 (95% CI 0.930-0.990), 0.961 (95% CI 0.930-0.992), 0.922 (95% CI 0.884-0.960), and 0.768 (95% CI 0.692-0.844), respectively. No significant difference was detected in the AUC between the arterial phase model, venous phase model, and clinical indicators model by the DeLong test, whereas a significant difference was observed between the traditional CT diagnostic criteria and the other three models. Conclusion The radiomics model using the morphological features of GISTs plays a significant role in tumor risk stratification and can provide a reference for clinical diagnosis and treatment planning.
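A small sketch of the kind of AUC comparison such studies report; the DeLong test itself is not reproduced here, and a bootstrap comparison is used in its place as an assumed stand-in, with hypothetical variable names:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc_difference(y_true, prob_model_a, prob_model_b, n_boot=2000, seed=0):
        """Bootstrap the AUC difference of two models scored on the same cases."""
        rng = np.random.default_rng(seed)
        y = np.asarray(y_true)
        p_a, p_b = np.asarray(prob_model_a), np.asarray(prob_model_b)
        diffs = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), len(y))
            if len(np.unique(y[idx])) < 2:            # resample must contain both classes
                continue
            diffs.append(roc_auc_score(y[idx], p_a[idx]) - roc_auc_score(y[idx], p_b[idx]))
        low, high = np.percentile(diffs, [2.5, 97.5])
        return float(np.mean(diffs)), (float(low), float(high))  # a 95% CI excluding 0 suggests a real difference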
Affiliation(s)
- Yun Wang: Affiliated Hospital of Hebei University/Hebei University (Clinical Medical College), Baoding, China
- Yurui Wang: Tangshan Gongren Hospital, Tangshan, China
- Jialiang Ren: General Electric Pharmaceutical Co., Ltd, Shanghai, China
- Linyi Jia: Xingtai People's Hospital, Xingtai, China
- Luyao Ma: Affiliated Hospital of Hebei University/Hebei University (Clinical Medical College), Baoding, China
- Xiaoping Yin (corresponding author): Affiliated Hospital of Hebei University/Hebei University (Clinical Medical College), Baoding, China
- Fei Yang (corresponding author): Medical Imaging Department, The First Affiliated Hospital of Hebei North University, Zhangjiakou, China
- Bu-Lang Gao: Affiliated Hospital of Hebei University/Hebei University (Clinical Medical College), Baoding, China
8
Zhuang H, Bao A, Tan Y, Wang H, Xie Q, Qiu M, Xiong W, Liao F. Application and prospect of artificial intelligence in digestive endoscopy. Expert Rev Gastroenterol Hepatol 2022; 16:21-31. [PMID: 34937459] [DOI: 10.1080/17474124.2022.2020646]
Abstract
INTRODUCTION With the progress of science and technology, artificial intelligence, represented by deep learning, has gradually begun to be applied in the medical field. Artificial intelligence has been applied to benign gastrointestinal lesions, tumors, early cancer, inflammatory bowel disease, gallbladder and pancreatic diseases, and other conditions. This review summarizes the latest research results on artificial intelligence in digestive endoscopy and discusses its prospects for digestive system diseases. AREAS COVERED We retrieved relevant documents on artificial intelligence in digestive tract diseases from PubMed and Medline. This review elaborates on the knowledge of computer-aided diagnosis in digestive endoscopy. EXPERT OPINION Artificial intelligence significantly improves diagnostic accuracy, reduces physicians' workload, and provides evidence to support clinical diagnosis and treatment. In the near future, artificial intelligence will have high application value in the field of medicine.
Affiliation(s)
- Huangming Zhuang: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Anyu Bao: Clinical Laboratory, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Yulin Tan: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Hanyu Wang: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Qingfang Xie: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Meiqi Qiu: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Wanli Xiong: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Fei Liao: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China