1
Khalaf K, Terrin M, Jovani M, Rizkala T, Spadaccini M, Pawlak KM, Colombo M, Andreozzi M, Fugazza A, Facciorusso A, Grizzi F, Hassan C, Repici A, Carrara S. A Comprehensive Guide to Artificial Intelligence in Endoscopic Ultrasound. J Clin Med 2023; 12:3757. [PMID: 37297953] [DOI: 10.3390/jcm12113757]
Abstract
BACKGROUND Endoscopic Ultrasound (EUS) is widely used for the diagnosis of bilio-pancreatic and gastrointestinal (GI) tract diseases, for the evaluation of subepithelial lesions, and for sampling of lymph nodes and solid masses located next to the GI tract. The role of Artificial Intelligence (AI) in healthcare is growing. This review aimed to provide an overview of the current state of AI in EUS, from imaging to pathological diagnosis and training. METHODS AI algorithms can assist in lesion detection and characterization in EUS by analyzing EUS images and identifying suspicious areas that may require further clinical evaluation or biopsy sampling. Deep learning techniques, such as convolutional neural networks (CNNs), have shown great potential for tumor identification and subepithelial lesion (SEL) evaluation by extracting important features from EUS images and using them to classify or segment the images. RESULTS AI models with new features can increase the accuracy of diagnoses, provide faster diagnoses, identify subtle differences in disease presentation that may be missed by human eyes, and provide more information and insight into disease pathology. CONCLUSIONS The integration of AI into EUS imaging and biopsies has the potential to improve diagnostic accuracy, leading to better patient outcomes and to a reduction in repeated procedures in case of non-diagnostic biopsies.
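The core CNN operation the abstract refers to, extracting spatial features by convolving an image with a kernel, can be illustrated with a minimal NumPy sketch. This is a generic illustration, not the authors' model: the synthetic "lesion" image and the hand-set edge kernel are assumptions, whereas a real CNN learns its kernels from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "image": dark background with a bright square "lesion".
img = np.zeros((16, 16))
img[5:11, 5:11] = 1.0

# Hand-set vertical-edge kernel; a trained CNN would learn such kernels.
edge = np.array([[1.0, 0.0, -1.0]] * 3)
fmap = conv2d(img, edge)
print(fmap.shape)  # (14, 14) feature map; extremes sit at the lesion borders
```

A classifier or segmenter then stacks many such learned feature maps and feeds them to further layers.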
Affiliation(s)
- Kareem Khalaf: Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, ON M5S 1A1, Canada
- Maria Terrin: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Manol Jovani: Division of Gastroenterology, Maimonides Medical Center, SUNY Downstate University, Brooklyn, NY 11219, USA
- Tommy Rizkala: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20089 Milan, Italy
- Marco Spadaccini: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Katarzyna M Pawlak: Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, ON M5S 1A1, Canada
- Matteo Colombo: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Marta Andreozzi: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Alessandro Fugazza: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Antonio Facciorusso: Section of Gastroenterology, Department of Medical and Surgical Sciences, University of Foggia, 71122 Foggia, Italy
- Fabio Grizzi: Department of Immunology and Inflammation, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Cesare Hassan: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20089 Milan, Italy
- Alessandro Repici: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20089 Milan, Italy
- Silvia Carrara: Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
2
Levman J, Ewenson B, Apaloo J, Berger D, Tyrrell PN. Error Consistency for Machine Learning Evaluation and Validation with Application to Biomedical Diagnostics. Diagnostics (Basel) 2023; 13:1315. [PMID: 37046533] [PMCID: PMC10093437] [DOI: 10.3390/diagnostics13071315]
Abstract
Supervised machine learning classification is the most common example of artificial intelligence (AI) in industry and in academic research. These technologies predict whether a series of measurements belongs to one of multiple groups of examples on which the machine was previously trained. Prior to real-world deployment, all implementations need to be carefully evaluated with hold-out validation, in which the algorithm is tested on samples different from those it was provided for training, in order to ensure the generalizability and reliability of AI models. However, established methods for performing hold-out validation do not assess the consistency of the mistakes that the AI model makes during hold-out validation. Here, we show that an enhanced technique for performing hold-out validation, one that also assesses the consistency of the sample-wise mistakes made by the learning algorithm, can complement standard methods in the evaluation and design of reliable and predictable AI models. The technique can be applied to the validation of any supervised learning classification application, and we demonstrate its use on a variety of example biomedical diagnostic applications, which illustrate the importance of producing reliable AI models. The validation software created is made publicly available, assisting anyone developing AI models for any supervised classification application in creating more reliable and predictable technologies.
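The "consistency of the sample-wise mistakes" can be formalized in several ways; a minimal sketch is a Jaccard-style overlap between the error sets of two hold-out repetitions. This is an illustration of the idea, not necessarily the authors' exact metric, and the toy labels below are invented:

```python
def error_set(y_true, y_pred):
    """Indices of hold-out samples the classifier got wrong."""
    return {i for i, (t, p) in enumerate(zip(y_true, y_pred)) if t != p}

def error_consistency(errors_a, errors_b):
    """Jaccard overlap of two error sets: 1.0 = identical mistakes,
    0.0 = disjoint mistakes (two empty sets count as fully consistent)."""
    union = errors_a | errors_b
    if not union:
        return 1.0
    return len(errors_a & errors_b) / len(union)

y_true = [0, 1, 0, 1, 1, 0, 1, 0]
run1 = [0, 1, 1, 1, 0, 0, 1, 0]   # errors at indices 2 and 4
run2 = [0, 1, 1, 1, 1, 0, 0, 0]   # errors at indices 2 and 6

e1, e2 = error_set(y_true, run1), error_set(y_true, run2)
print(error_consistency(e1, e2))  # 1/3: one shared mistake out of three total
```

Two runs with equal accuracy but low consistency make different mistakes, which is exactly the behavior standard hold-out metrics cannot see.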
Affiliation(s)
- Jacob Levman: Department of Computer Science, St. Francis Xavier University, Antigonish, NS B2G 2W5, Canada; Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02129, USA; Nova Scotia Health Authority, Halifax, NS B3H 1V7, Canada
- Bryan Ewenson: Department of Computer Science, St. Francis Xavier University, Antigonish, NS B2G 2W5, Canada
- Joe Apaloo: Department of Mathematics and Statistics, St. Francis Xavier University, Antigonish, NS B2G 2W5, Canada
- Derek Berger: Department of Computer Science, St. Francis Xavier University, Antigonish, NS B2G 2W5, Canada
- Pascal N. Tyrrell: Department of Medical Imaging, Institute of Medical Science, University of Toronto, Toronto, ON M5T 1W7, Canada; Department of Statistical Sciences, University of Toronto, Toronto, ON M5T 1W7, Canada
3
Keczer B, Benke M, Marjai T, Horváth M, Miheller P, Szücs Á, Harsányi L, Szijártó A, Hritz I. Quantitative Software Analysis of Endoscopic Ultrasound Images of Pancreatic Cystic Lesions. Diagnostics (Basel) 2022; 12:2105. [PMID: 36140506] [PMCID: PMC9498186] [DOI: 10.3390/diagnostics12092105]
Abstract
Endoscopic ultrasonography (EUS) is the most accurate imaging modality for the evaluation of different types of pancreatic cystic lesions. Our aim was to analyze EUS images of pancreatic cystic lesions using image processing software. We quantified the echogenicity of the lesions by measuring the gray value of pixels inside the selected areas. The images were divided into groups (serous cystic neoplasm (SCN); intraductal papillary mucinous neoplasms and mucinous cystic neoplasms (Non-SCN); and pseudocyst) according to the pathology results of the lesions. Overall, 170 images were processed by the software: 81 in the Non-SCN, 30 in the SCN, and 59 in the Pseudocyst group. The mean gray value of the entire lesion in the Non-SCN group was significantly higher than in the SCN group (27.8 vs. 18.8; p < 0.0005). The area ratio in the SCN, Non-SCN, and Pseudocyst groups was 57%, 39%, and 61%, respectively: significantly lower in the Non-SCN group than in the SCN or Pseudocyst groups (p < 0.0005 for both). Lesion density was also significantly higher in the Non-SCN group than in the SCN or Pseudocyst groups (4186.6/mm2 vs. 2833.8/mm2 vs. 2981.6/mm2; p < 0.0005 for both). EUS image analysis may have the potential to be a diagnostic tool for the evaluation and differentiation of pancreatic cystic lesions.
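The basic measurement described, the mean gray value of pixels inside a selected area, is straightforward to sketch with NumPy. The synthetic image, the rectangular mask, and the gray-value cutoff below are illustrative assumptions, not the study's actual software or parameters:

```python
import numpy as np

# Synthetic 8-bit grayscale "EUS image": a dark cystic lesion on brighter tissue.
rng = np.random.default_rng(0)
image = rng.integers(60, 120, size=(64, 64)).astype(float)  # tissue echoes
image[20:44, 20:44] = rng.integers(5, 30, size=(24, 24))    # anechoic lesion

# Boolean mask standing in for the operator-selected lesion area.
mask = np.zeros_like(image, dtype=bool)
mask[20:44, 20:44] = True

mean_gray = image[mask].mean()   # lesion echogenicity
area_px = int(mask.sum())        # lesion area in pixels

# An "area ratio" in the spirit of the study: fraction of lesion pixels
# above an assumed cutoff gray value.
cutoff = 25.0
area_ratio = float((image[mask] > cutoff).mean())

print(round(mean_gray, 1), area_px, round(area_ratio, 2))
```

Real pipelines would add pixel-to-mm calibration and operator-drawn, non-rectangular regions, but the group comparisons in the abstract reduce to statistics of exactly this kind.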
4
Pane K, Zanfardino M, Grimaldi AM, Baldassarre G, Salvatore M, Incoronato M, Franzese M. Discovering Common miRNA Signatures Underlying Female-Specific Cancers via a Machine Learning Approach Driven by the Cancer Hallmark ERBB. Biomedicines 2022; 10:1306. [PMID: 35740327] [PMCID: PMC9219956] [DOI: 10.3390/biomedicines10061306]
Abstract
Big data processing, using omics data integration and machine learning (ML) methods, drives efforts to discover diagnostic and prognostic biomarkers for clinical decision making. Previously, we used the TCGA database for gene expression profiling of breast, ovarian, and endometrial cancers, and identified a top-scoring network centered on the ERBB2 gene, which plays a crucial role in carcinogenesis in the three estrogen-dependent tumors. Here, we focused on microRNA expression signature similarity, asking whether these signatures could target the ERBB family. We applied an ML approach to integrated TCGA miRNA profiling of breast, endometrial, and ovarian cancer to identify common miRNA signatures differentiating tumor and normal conditions. The ML-based algorithm yielded 205 features, and the miRTarBase database 158 miRNAs targeting ERBB isoforms. By merging the results of both approaches and ranking each feature according to the weighted Support Vector Machine model, we prioritized 42 features, with accuracy 0.98, AUC 0.93 (95% CI 0.917-0.94), sensitivity 0.85, and specificity 0.99, indicating their diagnostic capability to discriminate between the two conditions. In vitro validation by qRT-PCR experiments, using model and parental cell lines for each tumor type, showed that five miRNAs (hsa-mir-323a-3p, hsa-mir-323b-3p, hsa-mir-331-3p, hsa-mir-381-3p, and hsa-mir-1301-3p) had concordant expression trends across breast, ovarian, and endometrial cancer cell lines compared with normal lines, confirming our in silico predictions. This shows that an integrated computational approach combined with biological knowledge could identify expression signatures as potential diagnostic biomarkers common to multiple tumors.
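The ranking step, weighting each candidate feature with a linear model and sorting by absolute weight, can be sketched as follows. This uses a ridge-regularized least-squares fit as a simple stand-in for the paper's weighted SVM, and the "miRNA" expression matrix is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 10                      # samples x candidate "miRNA" features
X = rng.normal(size=(n, p))

# Only features 0 and 3 actually separate tumor (1) from normal (0).
logits = 2.0 * X[:, 0] - 1.5 * X[:, 3]
y = (logits + 0.5 * rng.normal(size=n) > 0).astype(float)

# Ridge-regularized least squares against +/-1 labels; in a linear SVM the
# analogous quantity is the weight vector w, and |w_j| ranks feature j.
lam = 1e-2
t = 2 * y - 1
w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ t)

ranking = np.argsort(-np.abs(w))    # most informative features first
print(ranking[:4])                  # features 0 and 3 should lead
```

The top-ranked features would then be cross-checked against an external target database (miRTarBase in the paper) before any validation experiment.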
Affiliation(s)
- Katia Pane: IRCCS Synlab SDN, 80143 Naples, Italy
- Mario Zanfardino: IRCCS Synlab SDN, 80143 Naples, Italy (corresponding author)
- Anna Maria Grimaldi: IRCCS Synlab SDN, 80143 Naples, Italy
- Gustavo Baldassarre: Molecular Oncology Unit, Centro di Riferimento Oncologico di Aviano (CRO), IRCCS, National Cancer Institute, 33081 Aviano, Italy
- Marco Salvatore: IRCCS Synlab SDN, 80143 Naples, Italy
- Monica Franzese: IRCCS Synlab SDN, 80143 Naples, Italy
5
Beck J, Ren L, Huang S, Berger E, Bardales K, Mannheimer J, Mazcko C, LeBlanc A. Canine and murine models of osteosarcoma. Vet Pathol 2022; 59:399-414. [PMID: 35341404] [PMCID: PMC9290378] [DOI: 10.1177/03009858221083038]
Abstract
Osteosarcoma (OS) is the most common malignant bone tumor in children. Despite efforts to develop and implement new therapies, patient outcomes have not measurably improved since the 1980s. Metastasis continues to be the main source of patient mortality, with 30% of cases developing metastatic disease within 5 years of diagnosis. Research models are critical in the advancement of cancer research and include a variety of species. For example, xenograft and patient-derived xenograft (PDX) mouse models provide opportunities to study human tumor cells in vivo while transgenic models have offered significant insight into the molecular mechanisms underlying OS development. A growing recognition of naturally occurring cancers in companion species has led to new insights into how veterinary patients can contribute to studies of cancer biology and drug development. The study of canine cases, including the use of diagnostic tissue archives and clinical trials, offers a potential mechanism to further canine and human cancer research. Advancement in the field of OS research requires continued development and appropriate use of animal models. In this review, animal models of OS are described with a focus on the mouse and tumor-bearing pet dog as parallel and complementary models of human OS.
Affiliation(s)
- Ling Ren: National Cancer Institute, Bethesda, MD
- Kathleen Bardales: National Cancer Institute, Bethesda, MD; University of Pennsylvania, Philadelphia, PA
6
Krauze AV, Zhuge Y, Zhao R, Tasci E, Camphausen K. AI-Driven Image Analysis in Central Nervous System Tumors: Traditional Machine Learning, Deep Learning and Hybrid Models. Journal of Biotechnology and Biomedicine 2022; 5:1-19. [PMID: 35106480] [PMCID: PMC8802234] [DOI: 10.26502/jbb.2642-91280046]
Abstract
The interpretation of imaging in medicine in general, and in oncology specifically, remains problematic due to several limitations: the need to incorporate detailed clinical history, patient- and disease-specific history, clinical exam features, and previous and ongoing treatment, and the dependency on reproducible human interpretation of multiple factors with incomplete data linkage. To standardize reporting, minimize bias, expedite management, and improve outcomes, the use of Artificial Intelligence (AI) has gained significant prominence in imaging analysis. In oncology, AI methods have as a result been explored in most cancer types, with ongoing progress in employing AI in imaging for oncology treatment, assessing treatment response, and understanding and communicating prognosis. Challenges remain with limited available data sets and variability in imaging changes over time, compounded by growing heterogeneity in analysis approaches. We review the imaging analysis workflow and examine how hand-crafted features, also referred to as traditional Machine Learning (ML), Deep Learning (DL) approaches, and hybrid analyses are being employed in AI-driven imaging analysis in central nervous system tumors. ML, DL, and hybrid approaches coexist, and their combination may produce superior results, although evidence in this space is still early and conclusions and pitfalls have yet to be fully explored. We note the growing technical complexities that may become increasingly separated from the clinic, and underscore the acute need for clinician engagement to guide progress and to ensure that conclusions derived from AI-driven imaging analysis receive the same level of scrutiny lent to other avenues of clinical research.
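"Hand-crafted features" in traditional ML pipelines are typically simple statistics computed from a segmented region of interest, in contrast to features learned end-to-end by a deep network. A minimal sketch, where the particular feature set and the synthetic image patch are illustrative assumptions rather than any specific radiomics standard:

```python
import numpy as np

def first_order_features(roi):
    """First-order (histogram-based) features of a region of interest."""
    x = roi.astype(float).ravel()
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return {
        "mean": float(x.mean()),
        "variance": float(x.var()),
        "skewness": float(((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(1)
tumor_roi = rng.normal(100, 25, size=(32, 32))   # synthetic image patch
feats = first_order_features(tumor_roi)
print(sorted(feats))
```

A traditional ML classifier would consume a vector of such features; a DL model would instead operate on the raw voxels, and hybrid approaches combine both inputs.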
Affiliation(s)
- A V Krauze: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- Y Zhuge: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- R Zhao: University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, Canada
- E Tasci: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- K Camphausen: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
7
Dagli MM, Rajesh A, Asaad M, Butler CE. The Use of Artificial Intelligence and Machine Learning in Surgery: A Comprehensive Literature Review. Am Surg 2021. [PMID: 34958252] [DOI: 10.1177/00031348211065101]
Abstract
Interest in the use of artificial intelligence (AI) and machine learning (ML) in medicine has grown exponentially over the last few years. With its ability to enhance speed, precision, and efficiency, AI has immense potential, especially in the field of surgery. This article aims to provide a comprehensive literature review of artificial intelligence as it applies to surgery and discuss practical examples, current applications, and challenges to the adoption of this technology. Furthermore, we elaborate on the utility of natural language processing and computer vision in improving surgical outcomes, research, and patient care.
Affiliation(s)
- Aashish Rajesh: Department of Surgery, University of Texas Health Science Center, San Antonio, TX, USA
- Malke Asaad: Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Charles E Butler: Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
8
Adeoye J, Tan JY, Choi SW, Thomson P. Prediction models applying machine learning to oral cavity cancer outcomes: A systematic review. Int J Med Inform 2021; 154:104557. [PMID: 34455119] [DOI: 10.1016/j.ijmedinf.2021.104557]
Abstract
OBJECTIVES Machine learning platforms are now being introduced into modern oncological practice for classification and prediction of patient outcomes. To determine the current status of these learning models as adjunctive decision-making tools in oral cavity cancer management, this systematic review summarizes the accuracy of machine-learning-based models for disease outcomes. METHODS Electronic databases including PubMed, Scopus, EMBASE, Cochrane Library, LILACS, SciELO, PsychINFO, and Web of Science were searched up to December 21, 2020. Pertinent articles detailing the development and accuracy of machine learning prediction models for oral cavity cancer outcomes were selected in a two-stage process. Quality assessment was conducted using the Quality in Prognosis Studies (QUIPS) tool, and the results of the base studies were qualitatively synthesized by all authors. Outcomes of interest were malignant transformation of precancerous lesions, cervical lymph node metastasis, treatment response, and prognosis of oral cavity cancer. RESULTS Twenty-seven articles out of 950 citations identified from electronic and manual searching were included in this study. Five studies had low bias concerns on the QUIPS tool. Prediction of malignant transformation, cervical lymph node metastasis, treatment response, and prognosis was reported in three, six, eight, and eleven articles, respectively. Accuracy of these learning models on the internal or external validation sets ranged from 0.85 to 0.97 for malignant transformation prediction, 0.78 to 0.91 for cervical lymph node metastasis prediction, 0.64 to 1.00 for treatment response prediction, and 0.71 to 0.99 for prognosis prediction. In general, most trained algorithms predicting these outcomes performed better than alternate methods of prediction. We also found that models including molecular markers in the training data had better accuracy estimates for malignant transformation, treatment response, and prognosis prediction. CONCLUSION Machine learning algorithms have satisfactory to excellent accuracy for predicting three of four oral cavity cancer outcomes (malignant transformation, nodal metastasis, and prognosis). However, considering the training approach of many available classifiers, these models may not yet be streamlined enough for clinical application.
Affiliation(s)
- John Adeoye: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region
- Jia Yan Tan: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region
- Siu-Wai Choi: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region
- Peter Thomson: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region
9
Roy S, Whitehead TD, Li S, Ademuyiwa FO, Wahl RL, Dehdashti F, Shoghi KI. Co-clinical FDG-PET radiomic signature in predicting response to neoadjuvant chemotherapy in triple-negative breast cancer. Eur J Nucl Med Mol Imaging 2021; 49:550-562. [PMID: 34328530] [PMCID: PMC8800941] [DOI: 10.1007/s00259-021-05489-8]
Abstract
Purpose We sought to exploit the heterogeneity afforded by patient-derived tumor xenografts (PDX) to, first, optimize and identify robust radiomic features to predict response to therapy in subtype-matched triple-negative breast cancer (TNBC) PDX, and second, implement PDX-optimized image features in a TNBC co-clinical study to predict response to therapy using machine learning (ML) algorithms. Methods TNBC patients and subtype-matched PDX were recruited into a co-clinical FDG-PET imaging trial to predict response to therapy. One hundred thirty-one imaging features were extracted from PDX and human-segmented tumors. Robust image features were identified based on reproducibility, cross-correlation, and volume independence. Rank importance of predictors using ReliefF was used to identify predictive radiomic features in the preclinical PDX trial in conjunction with ML algorithms: classification and regression tree (CART), Naïve Bayes (NB), and support vector machines (SVM). The top four PDX-optimized image features from each task, defined as radiomic signatures (RadSig), were then used to predict or assess response to therapy. Performance of RadSig in predicting and assessing response was compared to SUVmean, SUVmax, and lean body mass-normalized SULpeak measures. Results Sixty-four of 131 preclinical imaging features were identified as robust. NB-RadSig performed highest in predicting and assessing response to therapy in the preclinical PDX trial. In the clinical study, the performance of SVM-RadSig and NB-RadSig in predicting and assessing response was practically identical and superior to SUVmean, SUVmax, and SULpeak measures. Conclusions We optimized robust FDG-PET radiomic signatures (RadSig) to predict and assess response to therapy in the context of a co-clinical imaging trial.
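Of the three robustness criteria named (reproducibility, cross-correlation, volume independence), the cross-correlation filter is the easiest to sketch: greedily keep a feature only if it is not highly correlated with any feature already kept. The 0.9 threshold and the synthetic feature matrix are assumptions for illustration, not the study's settings:

```python
import numpy as np

def drop_correlated(features, threshold=0.9):
    """Return indices of feature columns whose pairwise |r| stays below
    threshold. Greedy scan: keep column j only if it is not too correlated
    with any already-kept column."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    kept = []
    for j in range(features.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(7)
base = rng.normal(size=(50, 3))                # three independent "radiomic" features
redundant = base[:, [0]] + 0.01 * rng.normal(size=(50, 1))  # near-copy of feature 0
X = np.hstack([base, redundant])               # 4 columns; last duplicates the first

print(drop_correlated(X))   # keeps the three independent columns: [0, 1, 2]
```

Reproducibility and volume-independence checks work the same way in spirit: compute a per-feature statistic (e.g., test-retest agreement, correlation with tumor volume) and threshold it.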
Affiliation(s)
- Sudipta Roy: Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Timothy D Whitehead: Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Shunqiang Li: Department of Medicine, Division of Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Foluso O Ademuyiwa: Department of Medicine, Division of Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Richard L Wahl: Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Farrokh Dehdashti: Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Kooresh I Shoghi: Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
10
Malpani R, Petty CW, Bhatt N, Staib LH, Chapiro J. Use of Artificial Intelligence in Non-Oncologic Interventional Radiology: Current State and Future Directions. Digestive Disease Interventions 2021; 5:331-337. [PMID: 35005333] [PMCID: PMC8740955] [DOI: 10.1055/s-0041-1726300]
Abstract
The future of radiology is disproportionately linked to the applications of artificial intelligence (AI). Recent exponential advancements in AI are already beginning to augment the clinical practice of radiology. Driven by a paucity of review articles in the area, this article discusses applications of AI in non-oncologic interventional radiology (IR) across procedural planning, execution, and follow-up, along with a discussion of future directions for the field. Applications in vascular imaging, radiomics, touchless software interactions, robotics, natural language processing, post-procedural outcome prediction, device navigation, and image acquisition are included. Familiarity with AI study analysis will help open the current 'black box' of AI research and help bridge the gap between the research laboratory and clinical practice.
Affiliation(s)
- Rohil Malpani: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Christopher W. Petty: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Neha Bhatt: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Lawrence H. Staib: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Julius Chapiro: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA