1
Fass O, Rogers BD, Gyawali CP. Artificial Intelligence Tools for Improving Manometric Diagnosis of Esophageal Dysmotility. Curr Gastroenterol Rep 2024;26:115-123. PMID: 38324172; PMCID: PMC10960670; DOI: 10.1007/s11894-024-00921-z.
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is a broad term that pertains to a computer's ability to mimic and sometimes surpass human intelligence in the interpretation of large datasets. The adoption of AI in gastrointestinal motility has been slower than in other areas such as polyp detection and interpretation of histopathology. RECENT FINDINGS Within esophageal physiologic testing, AI can automate interpretation of image-based tests, especially high-resolution manometry (HRM) and functional luminal imaging probe (FLIP) studies. Basic tasks such as identification of landmarks, determination of the adequacy of the HRM study, and differentiation of achalasia from non-achalasia patterns are achieved with good accuracy. However, existing AI systems compare AI interpretation to expert analysis rather than to the clinical outcome of management based on the AI diagnosis. The use of AI methods is much less advanced within the field of ambulatory reflux monitoring, where challenges exist in assimilating data from multiple impedance and pH channels. There remains potential to replicate the AI successes of esophageal physiologic testing in anorectal HRM, and in innovative and novel methods of evaluating gastric electrical activity and motor function. The use of AI has tremendous potential to improve detection of dysmotility within the esophagus using esophageal physiologic testing, as well as in other regions of the gastrointestinal tract. Eventually, integration of patient presentation, demographics, and alternate test results with individual motility test interpretation will improve diagnostic precision and prognostication using AI tools.
Affiliation(s)
- Ofer Fass
- Division of Gastroenterology and Hepatology, Stanford University, Stanford, CA, USA
- Benjamin D Rogers
- Division of Gastroenterology, Hepatology and Nutrition, University of Louisville School of Medicine, Louisville, KY, USA
- Division of Gastroenterology, Washington University School of Medicine, 660 South Euclid Ave., Campus Box 8124, Saint Louis, MO, 63110, USA
- C Prakash Gyawali
- Division of Gastroenterology, Washington University School of Medicine, 660 South Euclid Ave., Campus Box 8124, Saint Louis, MO, 63110, USA.
2
Saraiva MM, Pouca MV, Ribeiro T, Afonso J, Cardoso H, Sousa P, Ferreira J, Macedo G, Junior IF. Artificial Intelligence and Anorectal Manometry: Automatic Detection and Differentiation of Anorectal Motility Patterns-A Proof-of-Concept Study. Clin Transl Gastroenterol 2023;14:e00555. PMID: 36520781; PMCID: PMC10584284; DOI: 10.14309/ctg.0000000000000555.
Abstract
INTRODUCTION Anorectal manometry (ARM) is the gold standard for the evaluation of anorectal functional disorders, which are prevalent in the population. Nevertheless, access to this examination is limited, and the complexity of data analysis and reporting is a significant drawback. This pilot study aimed to develop and validate an artificial intelligence model to automatically differentiate motility patterns of fecal incontinence (FI) from obstructed defecation (OD) using ARM data. METHODS We developed and tested multiple machine learning algorithms for the automatic interpretation of ARM data. Four models were tested: k-nearest neighbors, support vector machines, random forests, and gradient boosting (xGB). These models were trained using a stratified 5-fold strategy. Their performance was assessed after fine-tuning of each model's hyperparameters, using 90% of the data for training and 10% for testing. RESULTS A total of 827 ARM examinations were used in this study. After fine-tuning, the xGB model presented an overall accuracy of 84.6% ± 2.9%, similar to that of random forests (82.7% ± 4.8%) and support vector machines (81.0% ± 8.0%), and higher than that of k-nearest neighbors (74.4% ± 3.8%). The xGB model showed the highest discriminating performance between OD and FI, with an area under the curve of 0.939. DISCUSSION The tested machine learning algorithms, particularly the xGB model, accurately differentiated between FI and OD manometric patterns. Subsequent development of these tools may improve access to ARM studies, which may have a significant impact on the management of patients with anorectal functional diseases.
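The four-model comparison described in the METHODS above (stratified 5-fold cross-validation, accuracy reported as mean ± SD) can be sketched as follows. This is a minimal illustration on synthetic data, not ARM recordings, and scikit-learn's GradientBoostingClassifier stands in for the paper's xGB implementation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic binary problem standing in for FI vs. OD manometric patterns;
# real ARM features (resting pressure, squeeze profiles, etc.) are not modeled.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

models = {
    "k-nearest neighbors": KNeighborsClassifier(),
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Stratified 5-fold cross-validation, as in the study; hyperparameter
# fine-tuning (e.g., via GridSearchCV) is omitted for brevity.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
results = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    results[name] = (scores.mean(), scores.std())  # accuracy as mean ± SD

for name, (mean, std) in results.items():
    print(f"{name}: {mean:.3f} ± {std:.3f}")
```

Stratification keeps the FI/OD class proportions equal across folds, which matters when one pattern is rarer than the other.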
Affiliation(s)
- Miguel Mascarenhas Saraiva
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
- Maria Vila Pouca
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
- INEGI—Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Tiago Ribeiro
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- João Afonso
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Hélder Cardoso
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
- Pedro Sousa
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
- INEGI—Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
- INEGI—Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Guilherme Macedo
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
- Ilario Froehner Junior
- Department of Gastrointestinal Motility, Nossa Senhora das Graças Hospital, Curitiba, Paraná, Brazil
- Department of Coloproctology, Pelvia—Gastrointestinal Motility and Continence, Curitiba, Paraná, Brazil
3
Kou W, Soni P, Klug MW, Etemadi M, Kahrilas PJ, Pandolfino JE, Carlson DA. An artificial intelligence platform provides an accurate interpretation of esophageal motility from Functional Lumen Imaging Probe Panometry studies. Neurogastroenterol Motil 2023:e14549. PMID: 36808777; DOI: 10.1111/nmo.14549.
Abstract
BACKGROUND Functional lumen imaging probe (FLIP) Panometry is performed at the time of sedated endoscopy and evaluates esophageal motility in response to distension. This study aimed to develop and test an automated artificial intelligence (AI) platform that could interpret FLIP Panometry studies. METHODS The study cohort included 678 consecutive patients and 35 asymptomatic controls who completed FLIP Panometry during endoscopy and high-resolution manometry (HRM). "True" study labels for model training and testing were assigned by experienced esophagologists per a hierarchical classification scheme. The supervised deep learning AI model generated FLIP Panometry heatmaps from raw FLIP data and, using convolutional neural networks, assigned esophageal motility labels via a two-stage prediction model. Model performance was tested on a 15% held-out test set (n = 103); the remainder of the studies were used for model training (n = 610). KEY RESULTS "True" FLIP labels across the entire cohort included 190 (27%) "normal," 265 (37%) "not normal/not achalasia," and 258 (36%) "achalasia." On the test set, both the normal/not normal and the achalasia/not achalasia models achieved an accuracy of 89% (with 89%/88% recall and 90%/89% precision, respectively). Of 28 patients with achalasia (per HRM) in the test set, none were predicted as "normal" and 93% were predicted as "achalasia" by the AI model. CONCLUSIONS An AI platform provided accurate interpretation of FLIP Panometry esophageal motility studies from a single center compared with the impression of experienced FLIP Panometry interpreters. This platform may provide useful clinical decision support for esophageal motility diagnosis from FLIP Panometry studies performed at the time of endoscopy.
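The two-stage hierarchical prediction scheme described above can be sketched as a cascade of two binary classifiers: the first separates "normal" from "not normal," and the second, applied only to "not normal" studies, separates "achalasia" from "not achalasia." In this sketch the paper's convolutional networks operating on FLIP heatmaps are replaced by logistic regressions on synthetic feature vectors, purely to illustrate the control flow:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-study feature vectors; labels:
# 0 = normal, 1 = not normal/not achalasia, 2 = achalasia.
X = rng.normal(size=(300, 16))
y = rng.integers(0, 3, size=300)

# Stage 1: normal vs. not normal, trained on all studies.
stage1 = LogisticRegression(max_iter=1000).fit(X, (y != 0).astype(int))
# Stage 2: achalasia vs. not achalasia, trained only on "not normal" studies.
mask = y != 0
stage2 = LogisticRegression(max_iter=1000).fit(X[mask], (y[mask] == 2).astype(int))

def predict(x: np.ndarray) -> str:
    """Hierarchical label: run stage 2 only when stage 1 says 'not normal'."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return "normal"
    return "achalasia" if stage2.predict(x)[0] == 1 else "not normal/not achalasia"

labels = [predict(X[i]) for i in range(5)]
```

The cascade mirrors the hierarchical labeling scheme: a study can only be called "achalasia" after first being judged "not normal," which matches how the expert labels were assigned.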
Affiliation(s)
- Wenjun Kou
- Division of Gastroenterology and Hepatology, Department of Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Priyanka Soni
- Department of Anesthesiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Matthew W Klug
- Department of Information Services, Northwestern Medicine, Chicago, Illinois, USA
- Mozziyar Etemadi
- Department of Anesthesiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Department of Information Services, Northwestern Medicine, Chicago, Illinois, USA
- Peter J Kahrilas
- Division of Gastroenterology and Hepatology, Department of Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- John E Pandolfino
- Division of Gastroenterology and Hepatology, Department of Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Dustin A Carlson
- Division of Gastroenterology and Hepatology, Department of Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
4
Surdea-Blaga T, Sebestyen G, Czako Z, Hangan A, Dumitrascu DL, Ismaiel A, David L, Zsigmond I, Chiarioni G, Savarino E, Leucuta DC, Popa SL. Automated Chicago Classification for Esophageal Motility Disorder Diagnosis Using Machine Learning. Sensors 2022;22:5227. PMID: 35890906; PMCID: PMC9323128; DOI: 10.3390/s22145227.
Abstract
The goal of this paper is to provide a machine learning-based solution that can be utilized to automate the Chicago Classification algorithm, the state-of-the-art scheme for esophageal motility disorder identification. First, the images were preprocessed by locating the area of interest: the precise instant of swallowing. After resizing and rescaling, the images were used as input for the deep learning models. The InceptionV3 deep learning model was used to identify the precise class of the integrated relaxation pressure (IRP). We used the DenseNet201 CNN architecture to classify the images into five different classes of swallowing disorders. Finally, we combined the results of the two trained ML models to automate the Chicago Classification algorithm. With this solution we obtained a top-1 accuracy and F1-score of 86% with no human intervention, automating the whole flow, from image preprocessing to Chicago Classification and diagnosis.
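The final combination step, merging the IRP model's output with the swallow-pattern model's output into one diagnosis, might look like the sketch below. The decision rules here are illustrative simplifications of the Chicago Classification, not the paper's exact logic, and the pattern names are hypothetical labels for this example:

```python
def chicago_diagnosis(irp_elevated: bool, swallow_pattern: str) -> str:
    """Combine the two model outputs into a single diagnostic label.

    irp_elevated: output of the IRP classifier (InceptionV3 in the paper).
    swallow_pattern: output of the swallow classifier (DenseNet201 in the
    paper), one of "normal", "failed", "premature", "weak" in this sketch.
    """
    if irp_elevated:
        # Elevated IRP with failed peristalsis suggests achalasia; otherwise
        # an esophagogastric junction (EGJ) outflow obstruction.
        return "achalasia" if swallow_pattern == "failed" else "EGJ outflow obstruction"
    if swallow_pattern == "failed":
        return "absent contractility"
    if swallow_pattern == "premature":
        return "distal esophageal spasm"
    if swallow_pattern == "weak":
        return "ineffective esophageal motility"
    return "normal motility"

print(chicago_diagnosis(True, "failed"))  # achalasia
```

Keeping the combination as explicit rules, rather than a third learned model, keeps the final diagnosis auditable against the published classification scheme.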
Affiliation(s)
- Teodora Surdea-Blaga
- Second Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania; (T.S.-B.); (D.L.D.); (A.I.); (L.D.); (S.L.P.)
- Gheorghe Sebestyen
- Computer Science Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania; (Z.C.); (A.H.)
- Correspondence:
- Zoltan Czako
- Computer Science Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania; (Z.C.); (A.H.)
- Anca Hangan
- Computer Science Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania; (Z.C.); (A.H.)
- Dan Lucian Dumitrascu
- Second Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania; (T.S.-B.); (D.L.D.); (A.I.); (L.D.); (S.L.P.)
- Abdulrahman Ismaiel
- Second Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania; (T.S.-B.); (D.L.D.); (A.I.); (L.D.); (S.L.P.)
- Liliana David
- Second Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania; (T.S.-B.); (D.L.D.); (A.I.); (L.D.); (S.L.P.)
- Imre Zsigmond
- Faculty of Mathematics and Computer Science, Babes-Bolyai University, 400347 Cluj-Napoca, Romania;
- Giuseppe Chiarioni
- Division of Gastroenterology, AOUI Verona, University of Verona, 37134 Verona, Italy;
- Edoardo Savarino
- Gastroenterology Unit, Department of Surgery, Oncology and Gastroenterology, University of Padua, 35122 Padova, Italy;
- Daniel Corneliu Leucuta
- Department of Medical Informatics and Biostatistics, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400349 Cluj-Napoca, Romania;
- Stefan Lucian Popa
- Second Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania; (T.S.-B.); (D.L.D.); (A.I.); (L.D.); (S.L.P.)