1
Bhati D, Neha F, Amiruzzaman M. A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging. J Imaging 2024; 10:239. [PMID: 39452402 PMCID: PMC11508748 DOI: 10.3390/jimaging10100239]
Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.
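As a concrete illustration of the visualization techniques surveyed here, the sketch below computes a Grad-CAM heat map from an ImageNet-pretrained ResNet-50; it is a minimal, generic example rather than code from the survey, and the choice of ResNet-50 and of `layer4` as the target layer are assumptions.

```python
# Minimal Grad-CAM sketch (assumes torch + torchvision; ResNet-50 and layer4 are illustrative choices).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2").eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output            # feature maps of the last conv block

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]         # gradients w.r.t. those feature maps

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed medical image
scores = model(x)
scores[0, scores.argmax()].backward()        # gradient of the top-scoring class

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1] for overlay
```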
Affiliation(s)
- Deepshikha Bhati
- Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Fnu Neha
- Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Md Amiruzzaman
- Department of Computer Science, West Chester University, West Chester, PA 19383, USA
2
Spada C, Piccirelli S, Hassan C, Ferrari C, Toth E, González-Suárez B, Keuchel M, McAlindon M, Finta Á, Rosztóczy A, Dray X, Salvi D, Riccioni ME, Benamouzig R, Chattree A, Humphries A, Saurin JC, Despott EJ, Murino A, Johansson GW, Giordano A, Baltes P, Sidhu R, Szalai M, Helle K, Nemeth A, Nowak T, Lin R, Costamagna G. AI-assisted capsule endoscopy reading in suspected small bowel bleeding: a multicentre prospective study. Lancet Digit Health 2024; 6:e345-e353. [PMID: 38670743 DOI: 10.1016/s2589-7500(24)00048-7]
Abstract
BACKGROUND Capsule endoscopy reading is time consuming, and readers are required to maintain attention so as not to miss significant findings. Deep convolutional neural networks can recognise relevant findings, possibly exceeding human performance and reducing the reading time of capsule endoscopy. Our primary aim was to assess the non-inferiority of artificial intelligence (AI)-assisted reading versus standard reading for potentially small bowel bleeding lesions (high P2, moderate P1; Saurin classification) at per-patient analysis. The mean reading time in both reading modalities was evaluated among the secondary endpoints. METHODS Patients aged 18 years or older with suspected small bowel bleeding (with anaemia, with or without melena or haematochezia, and negative bidirectional endoscopy) were prospectively enrolled at 14 European centres. Patients underwent small bowel capsule endoscopy with the NaviCam SB system (Ankon, China), which is provided with a deep neural network-based AI system (ProScan) for automatic detection of lesions. Initial reading was performed in standard reading mode. A second, blinded reading was performed with AI assistance (the AI performed a first automated reading, and only AI-selected images were assessed by human readers). The primary endpoint was to assess the non-inferiority of AI-assisted reading versus standard reading in the detection (diagnostic yield) of potentially small bowel bleeding P1 and P2 lesions in a per-patient analysis. This study is registered with ClinicalTrials.gov, NCT04821349. FINDINGS From Feb 17, 2021 to Dec 29, 2021, 137 patients were prospectively enrolled. 133 patients were included in the final analysis (73 [55%] female, mean age 66·5 years [SD 14·4]; 112 [84%] completed capsule endoscopy). At per-patient analysis, the diagnostic yield of P1 and P2 lesions in AI-assisted reading (98 [73·7%] of 133 patients) was non-inferior (p<0·0001) and superior (p=0·0213) to standard reading (82 [62·4%] of 133; 95% CI 3·6-19·0). Mean small bowel reading time was 33·7 min (SD 22·9) in standard reading and 3·8 min (3·3) in AI-assisted reading (p<0·0001). INTERPRETATION AI-assisted reading might provide more accurate and faster detection of clinically relevant small bowel bleeding lesions than standard reading. FUNDING ANKON Technologies, China and AnX Robotica, USA provided the NaviCam SB system.
Affiliation(s)
- Cristiano Spada
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Stefania Piccirelli
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Cesare Hassan
- IRCCS Humanitas Research Hospital, Department of Biomedical Sciences, Rozzano, Milan, Italy
- Clarissa Ferrari
- Unit of Research and Clinical Trials, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy
- Ervin Toth
- Skåne University Hospital, Lund University, Department of Gastroenterology, Malmö, Sweden
- Begoña González-Suárez
- Hospital Clínic of Barcelona, Endoscopy Unit, Gastroenterology Department, Barcelona, Spain
- Martin Keuchel
- Agaplesion Bethesda Krankenhaus Bergedorf, Academic Teaching Hospital of the University of Hamburg, Clinic for Internal Medicine, Hamburg, Germany
- Marc McAlindon
- Sheffield Teaching Hospitals NHS Trust, Academic Department of Gastroenterology and Hepatology, Sheffield, UK; Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Ádám Finta
- Endo-Kapszula Health Centre and Endoscopy Unit, Department of Gastroenterology, Székesfehérvár, Hungary
- András Rosztóczy
- University of Szeged, Department of Internal Medicine, Szeged, Hungary
- Xavier Dray
- Sorbonne University, Saint Antoine Hospital, APHP, Centre for Digestive Endoscopy, Paris, France
- Daniele Salvi
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Maria Elena Riccioni
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Digestive Endoscopy Unit, Rome, Italy
- Robert Benamouzig
- Hôpital Avicenne, Université Paris 13, Service de Gastroenterologie, Bobigny, France
- Amit Chattree
- South Tyneside and Sunderland NHS Foundation Trust, Gastroenterology, Stockton-on-Tees, UK
- Adam Humphries
- St Mark's Hospital and Academic Institute, Department of Gastroenterology, Middlesex, UK
- Jean-Christophe Saurin
- Hospices Civils de Lyon-Centre Hospitalier Universitaire, Gastroenterology Department, Lyon, France
- Edward J Despott
- The Royal Free Hospital and University College London (UCL) Institute for Liver and Digestive Health, Royal Free Unit for Endoscopy, London, UK
- Alberto Murino
- The Royal Free Hospital and University College London (UCL) Institute for Liver and Digestive Health, Royal Free Unit for Endoscopy, London, UK
- Antonio Giordano
- Hospital Clínic of Barcelona, Endoscopy Unit, Gastroenterology Department, Barcelona, Spain
- Peter Baltes
- Agaplesion Bethesda Krankenhaus Bergedorf, Academic Teaching Hospital of the University of Hamburg, Clinic for Internal Medicine, Hamburg, Germany
- Reena Sidhu
- Sheffield Teaching Hospitals NHS Trust, Academic Department of Gastroenterology and Hepatology, Sheffield, UK; Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Milan Szalai
- Endo-Kapszula Health Centre and Endoscopy Unit, Department of Gastroenterology, Székesfehérvár, Hungary
- Krisztina Helle
- University of Szeged, Department of Internal Medicine, Szeged, Hungary
- Artur Nemeth
- Skåne University Hospital, Lund University, Department of Gastroenterology, Malmö, Sweden
- Rong Lin
- Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Department of Gastroenterology, Wuhan, China
- Guido Costamagna
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
3
Zhao SQ, Liu WT. Progress in artificial intelligence assisted digestive endoscopy diagnosis of digestive system diseases. WORLD CHINESE JOURNAL OF DIGESTOLOGY 2024; 32:171-181. [DOI: 10.11569/wcjd.v32.i3.171]
4
Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023; 22:124. [PMID: 38098015 PMCID: PMC10722702 DOI: 10.1186/s12938-023-01186-9]
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly and non-invasive technology that scans the whole of the gastrointestinal tract, including difficult-to-access regions like the small bowel. A major drawback of this technology is that visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and prone to error. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have focused only on spatial information within individual WCE frames, missing the crucial temporal information across consecutive frames. METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which utilizes spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed with a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames as independent inputs. Moreover, the proposed 3D deep model is compared with several pre-trained networks. The proposed models are trained and evaluated with WCE videos from 29 subjects (14,691 frames before augmentation). The performance advantages of the 3D-CNN over the 2D-CNN and pre-trained networks are verified in terms of sensitivity, specificity, and accuracy. RESULTS The 3D-CNN outperforms the 2D technique in all evaluation metrics (sensitivity: 98.92% vs. 98.05%, specificity: 99.50% vs. 86.94%, accuracy: 99.20% vs. 92.60%). CONCLUSION The results indicate the superior performance of the 3D-CNN over the 2D-CNN and several well-known pre-trained classifier networks. The proposed 3D-CNN model uses the rich temporal information in adjacent frames as well as spatial data to develop an accurate and efficient model.
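To make the spatiotemporal idea concrete, the sketch below defines a small 3D convolutional classifier that takes a short clip of consecutive WCE frames as input; the layer sizes, 8-frame clip length, and 3-class output are illustrative assumptions, not the architecture from the paper.

```python
# Minimal 3D-CNN sketch for clips of consecutive WCE frames (assumed shapes, not the paper's model).
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                      # pool only spatially at first
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                              # pool temporally and spatially
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                                 # x: (batch, channels, frames, H, W)
        return self.classifier(self.features(x).flatten(1))

clip = torch.randn(2, 3, 8, 112, 112)                     # 2 clips of 8 consecutive frames
logits = Tiny3DCNN()(clip)                                # -> (2, 3) class scores
```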
Affiliation(s)
- Mehrdokht Bordbar
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Habibollah Danyali
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Fardad Ejtehadi
- Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
5
Deinsberger J, Moschitz I, Marquart E, Manz-Varga AK, Gschwandtner ME, Brugger J, Rinner C, Böhler K, Tschandl P, Weber B. Entwicklung eines Lokalisations-basierten Algorithmus zur Vorhersage der Ätiologie von Ulcera cruris. J Dtsch Dermatol Ges 2023; 21:1339-1350. [PMID: 37946636 DOI: 10.1111/ddg.15192_g]
Abstract
BACKGROUND Diagnostic work-up of leg ulcers is time- and cost-intensive. The aim of this study was to evaluate ulcer location as a diagnostic criterion and to provide a diagnostic algorithm to support the diagnostic work-up. PATIENTS AND METHODS The study included 277 patients with leg ulcers. The following five groups were defined: venous leg ulcers, arterial ulcers, mixed ulcers, arteriolosclerosis, and vasculitis. Using computer-assisted surface rendering, the predilection sites of the different ulcer types were evaluated. The results were integrated into a multinomial logistic regression model to calculate the probability of a specific diagnosis as a function of location, age, bilateral involvement, and number of ulcers. Additionally, a neural network image analysis was performed. RESULTS The majority of venous ulcers were found in the medial malleolar region. Arterial ulcers were most frequently located on the dorsal aspect of the forefoot. Arteriolosclerotic ulcers were mostly localized in the middle third of the lateral lower leg. Vasculitic ulcers appeared to be randomly distributed and were markedly smaller, more frequently multilocular, and bilateral. The multinomial logistic regression model showed an overall satisfactory performance with an estimated accuracy of 0.68 on unseen data. CONCLUSIONS The presented algorithm based on ulcer location may serve as a supportive tool to narrow down potential differential diagnoses and to guide the initiation of diagnostic measures.
Affiliation(s)
- Julia Deinsberger
- Universitätsklinik für Dermatologie, Medizinische Universität Wien, Wien, Österreich
- Irina Moschitz
- Universitätsklinik für Dermatologie, Medizinische Universität Wien, Wien, Österreich
- Elias Marquart
- Universitätsklinik für Dermatologie, Medizinische Universität Wien, Wien, Österreich
- Michael E Gschwandtner
- Klinische Abteilung für Angiologie, Universitätsklinik für Innere Medizin II, Medizinische Universität Wien, Wien, Österreich
- Jonas Brugger
- Zentrum für Medical Data Science, Medizinische Universität Wien, Wien, Österreich
- Christoph Rinner
- Zentrum für Medical Data Science, Medizinische Universität Wien, Wien, Österreich
- Kornelia Böhler
- Universitätsklinik für Dermatologie, Medizinische Universität Wien, Wien, Österreich
- Philipp Tschandl
- Universitätsklinik für Dermatologie, Medizinische Universität Wien, Wien, Österreich
- Benedikt Weber
- Universitätsklinik für Dermatologie, Medizinische Universität Wien, Wien, Österreich
6
Deinsberger J, Moschitz I, Marquart E, Manz-Varga AK, Gschwandtner ME, Brugger J, Rinner C, Böhler K, Tschandl P, Weber B. Development of a localization-based algorithm for the prediction of leg ulcer etiology. J Dtsch Dermatol Ges 2023; 21:1339-1349. [PMID: 37658661 DOI: 10.1111/ddg.15192]
Abstract
BACKGROUND Diagnostic work-up of leg ulcers is time- and cost-intensive. This study aimed at evaluating ulcer location as a diagnostic criterion and providing a diagnostic algorithm to facilitate differential diagnosis. PATIENTS AND METHODS The study included 277 patients with lower leg ulcers. The following five groups were defined: venous leg ulcers, arterial ulcers, mixed ulcers, arteriolosclerosis, and vasculitis. Using computational surface rendering, predilection sites of the different ulcer types were evaluated. The results were integrated into a multinomial logistic regression model to calculate the likelihood of a specific diagnosis depending on location, age, bilateral involvement, and ulcer count. Additionally, neural network image analysis was performed. RESULTS The majority of venous ulcers extended to the medial malleolar region. Arterial ulcers were most frequently located on the dorsal aspect of the forefoot. Arteriolosclerotic ulcers were distinctly localized at the middle third of the lower leg. Vasculitic ulcers appeared to be randomly distributed and were markedly smaller, more often multilocular, and bilateral. The multinomial logistic regression model showed an overall satisfactory performance with an estimated accuracy of 0.68 on unseen data. CONCLUSIONS The presented algorithm based on ulcer location may serve as a basic tool to narrow down potential diagnoses and guide further diagnostic work-up.
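A minimal sketch of the kind of multinomial model described above, using scikit-learn on synthetic data; the feature encoding (location code, age, bilaterality, ulcer count) and the class labels are assumptions for illustration, not the authors' implementation.

```python
# Multinomial logistic regression sketch for ulcer etiology (synthetic data, assumed feature encoding).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 277
X = np.column_stack([
    rng.integers(0, 10, n),          # coarse location code on the lower leg (assumed encoding)
    rng.normal(70, 12, n),           # age in years
    rng.integers(0, 2, n),           # bilateral involvement (0/1)
    rng.integers(1, 6, n),           # number of ulcers
])
y = rng.integers(0, 5, n)            # 0=venous, 1=arterial, 2=mixed, 3=arteriolosclerosis, 4=vasculitis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # multinomial softmax with the lbfgs solver
print("accuracy on held-out data:", clf.score(X_te, y_te))
print("class probabilities for one patient:", clf.predict_proba(X_te[:1]).round(3))
```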
Affiliation(s)
- Julia Deinsberger
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Irina Moschitz
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Elias Marquart
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Michael E Gschwandtner
- Division of Angiology, 2nd Department of Medicine, Medical University of Vienna, Vienna, Austria
- Jonas Brugger
- Center for Medical Data Science, Medical University of Vienna, Vienna, Austria
- Christoph Rinner
- Center for Medical Data Science, Medical University of Vienna, Vienna, Austria
- Kornelia Böhler
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Philipp Tschandl
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Benedikt Weber
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
7
Chu Y, Huang F, Gao M, Zou DW, Zhong J, Wu W, Wang Q, Shen XN, Gong TT, Li YY, Wang LF. Convolutional neural network-based segmentation network applied to image recognition of angiodysplasias lesion under capsule endoscopy. World J Gastroenterol 2023; 29:879-889. [PMID: 36816625 PMCID: PMC9932427 DOI: 10.3748/wjg.v29.i5.879]
Abstract
BACKGROUND Small intestinal vascular malformations (angiodysplasias) are common causes of small intestinal bleeding. While capsule endoscopy has become the primary diagnostic method for angiodysplasia, manual reading of the entire gastrointestinal tract is time-consuming and requires a heavy workload, which affects the accuracy of diagnosis.
AIM To evaluate whether artificial intelligence can assist the diagnosis and increase the detection rate of angiodysplasias in the small intestine, achieve automatic disease detection, and shorten the capsule endoscopy (CE) reading time.
METHODS A convolutional neural network semantic segmentation model with a feature fusion method was proposed, which automatically recognizes the category of vascular dysplasia under CE and draws the lesion contour, thus improving the efficiency and accuracy of identifying small intestinal vascular malformation lesions. ResNet-50 was used as the backbone network to design the fusion mechanism, fusing shallow and deep features and classifying the images at the pixel level to achieve the segmentation and recognition of vascular dysplasia. The training set and test set were constructed, and the model was compared with PSPNet, DeepLabv3+, and UPerNet.
RESULTS On the test set constructed in the study, the model achieved satisfactory results: pixel accuracy was 99%, mean intersection over union was 0.69, negative predictive value was 98.74%, and positive predictive value was 94.27%. The model had 46.38 M parameters and 467.2 GFLOPs, and segmenting and recognizing one image took 0.6 s.
CONCLUSION Constructing a segmentation network based on deep learning to segment and recognize angiodysplasia lesions is an effective and feasible method for diagnosing these lesions.
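The sketch below shows how the pixel-accuracy and mean intersection-over-union figures quoted above are typically computed from a confusion matrix over predicted and ground-truth label maps; it is a generic evaluation routine with toy data, not the authors' code.

```python
# Generic pixel-accuracy and mean-IoU computation for semantic segmentation masks (illustrative).
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, num_classes: int):
    """pred, target: integer label maps of identical shape."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        conf[t, p] += 1                                   # rows = ground truth, cols = prediction
    pixel_acc = np.diag(conf).sum() / conf.sum()
    ious = []
    for c in range(num_classes):
        tp = conf[c, c]
        union = conf[c, :].sum() + conf[:, c].sum() - tp
        if union > 0:
            ious.append(tp / union)
    return pixel_acc, float(np.mean(ious))                # (pixel accuracy, mean IoU over present classes)

# toy 2-class example: background = 0, angiodysplasia lesion = 1
target = np.array([[0, 0, 1], [0, 1, 1]])
pred   = np.array([[0, 1, 1], [0, 1, 1]])
print(segmentation_metrics(pred, target, num_classes=2))
```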
Affiliation(s)
- Ye Chu
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Fang Huang
- Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Min Gao
- Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Duo-Wu Zou
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Jie Zhong
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Wei Wu
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Qi Wang
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Xiao-Nan Shen
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Ting-Ting Gong
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Yuan-Yi Li
- Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Li-Fu Wang
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
8
Parkash O, Siddiqui ATS, Jiwani U, Rind F, Padhani ZA, Rizvi A, Hoodbhoy Z, Das JK. Diagnostic accuracy of artificial intelligence for detecting gastrointestinal luminal pathologies: A systematic review and meta-analysis. Front Med (Lausanne) 2022; 9:1018937. [PMID: 36405592 PMCID: PMC9672666 DOI: 10.3389/fmed.2022.1018937]
Abstract
Background Artificial Intelligence (AI) holds considerable promise for diagnostics in the field of gastroenterology. This systematic review and meta-analysis aims to assess the diagnostic accuracy of AI models compared with the gold standard of experts and histopathology for the diagnosis of various gastrointestinal (GI) luminal pathologies including polyps, neoplasms, and inflammatory bowel disease. Methods We searched PubMed, CINAHL, Wiley Cochrane Library, and Web of Science electronic databases to identify studies assessing the diagnostic performance of AI models for GI luminal pathologies. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. We performed a meta-analysis and constructed hierarchical summary receiver operating characteristic (HSROC) curves. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Subgroup analyses were conducted based on the type of GI luminal disease, AI model, reference standard, and type of data used for analysis. This study is registered with PROSPERO (CRD42021288360). Findings We included 73 studies, of which 31 were externally validated and provided sufficient information for inclusion in the meta-analysis. The overall sensitivity of AI for detecting GI luminal pathologies was 91.9% (95% CI: 89.0–94.1) and specificity was 91.7% (95% CI: 87.4–94.7). Deep learning models (sensitivity: 89.8%, specificity: 91.9%) and ensemble methods (sensitivity: 95.4%, specificity: 90.9%) were the most commonly used models in the included studies. The majority of studies (n = 56, 76.7%) had a high risk of selection bias, while 74% (n = 54) were at low risk on the reference standard domain and 67% (n = 49) were at low risk for flow and timing bias. Interpretation The review suggests high sensitivity and specificity of AI models for the detection of GI luminal pathologies. There is a need for large, multi-center trials in both high-income countries and low- and middle-income countries to assess the performance of these AI models in real clinical settings and their impact on diagnosis and prognosis. Systematic review registration [https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=288360], identifier [CRD42021288360].
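For readers unfamiliar with how the per-study inputs to such a meta-analysis are derived, the sketch below reconstructs sensitivity and specificity from a 2×2 contingency table; the counts are invented for illustration and do not come from any included study.

```python
# Per-study sensitivity and specificity from a 2x2 contingency table (invented counts).
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)       # proportion of diseased cases the AI model flags
    specificity = tn / (tn + fp)       # proportion of disease-free cases the AI model clears
    return sensitivity, specificity

sens, spec = diagnostic_accuracy(tp=92, fp=8, fn=9, tn=91)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```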
Affiliation(s)
- Om Parkash
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Uswa Jiwani
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Fahad Rind
- Head and Neck Oncology, The Ohio State University, Columbus, OH, United States
- Zahra Ali Padhani
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Arjumand Rizvi
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jai K. Das
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
9
Yin TK, Huang KL, Chiu SR, Yang YQ, Chang BR. Endoscopy Artefact Detection by Deep Transfer Learning of Baseline Models. J Digit Imaging 2022; 35:1101-1110. [PMID: 35478060 PMCID: PMC9582060 DOI: 10.1007/s10278-022-00627-6]
Abstract
In endoscopy, a long, thin tube with a light source and camera at the tip is inserted into the body to obtain video frames from inside organs and visualise tumours on a screen. However, multiple artefacts exist in these video frames that cause difficulty during the diagnosis of cancers. In this research, deep learning was applied to detect eight kinds of artefacts: specularity, bubbles, saturation, contrast, blood, instrument, blur, and imaging artefacts. Based on transfer learning with pre-trained parameters and fine-tuning, two state-of-the-art methods were applied for detection: faster region-based convolutional neural networks (Faster R-CNN) and EfficientDet. Experiments were implemented on the grand challenge dataset, Endoscopy Artefact Detection and Segmentation (EAD2020). To validate our approach in this study, we used phase I of 2,200 frames and phase II of 331 frames in the original training dataset with ground-truth annotations as the training and testing datasets, respectively. Among the tested methods, EfficientDet-D2 achieves a score of 0.2008 (0.6 × mAPd + 0.4 × mIoUd) on the dataset, which is better than three other baselines (Faster R-CNN, YOLOv3, and RetinaNet) and competitive with the best non-baseline result of 0.25123 on the leaderboard, although our testing was on phase II of 331 frames instead of the original 200 testing frames. Without extra improvement techniques beyond basic neural networks, such as test-time augmentation, we showed that a simple baseline could achieve state-of-the-art performance in detecting artefacts in endoscopy. In conclusion, we proposed the combination of EfficientDet-D2 with suitable data augmentation and pre-trained parameters during fine-tuning training to detect the artefacts in endoscopy.
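As a sketch of the transfer-learning setup described above (pretrained detector, new head for the eight artefact classes), the snippet below adapts torchvision's Faster R-CNN; the class count of nine (eight artefacts plus background) follows the abstract, while everything else is a generic fine-tuning recipe rather than the authors' configuration.

```python
# Fine-tuning sketch: torchvision Faster R-CNN with a new head for 8 artefact classes + background.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 9  # 8 artefact classes (specularity, bubbles, saturation, ...) + background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)  # replace the pretrained head

# one dummy training step to show the expected input format
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 160.0]]),   # xyxy, absolute pixels
            "labels": torch.tensor([2])}]                          # class index 2 is an assumed artefact label
losses = model(images, targets)                                    # dict of classification/regression losses
sum(losses.values()).backward()
```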
Affiliation(s)
- Tang-Kai Yin
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Kai-Lun Huang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Si-Rong Chiu
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Yu-Qi Yang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Bao-Rong Chang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
10
A Robust Deep Model for Classification of Peptic Ulcer and Other Digestive Tract Disorders Using Endoscopic Images. Biomedicines 2022; 10:biomedicines10092195. [PMID: 36140296 PMCID: PMC9496137 DOI: 10.3390/biomedicines10092195]
Abstract
Accurate patient disease classification and detection through deep-learning (DL) models are increasingly contributing to the area of biomedical imaging. The most frequent gastrointestinal (GI) tract ailments are peptic ulcers and stomach cancer. Conventional endoscopy is a painful and hectic procedure for the patient, while Wireless Capsule Endoscopy (WCE) is a useful technology for diagnosing GI problems and performing painless gut imaging. However, it remains a challenge to investigate the thousands of images captured during the WCE procedure accurately and efficiently, because existing deep models do not achieve sufficiently high accuracy on WCE image analysis. So, to prevent emergency conditions among patients, we need an efficient and accurate DL model for real-time analysis. In this study, we propose a reliable and efficient approach for classifying GI tract abnormalities using WCE images by applying a deep Convolutional Neural Network (CNN). For this purpose, we propose a custom CNN architecture named GI Disease-Detection Network (GIDD-Net) that is designed from scratch with relatively few parameters to detect GI tract disorders more accurately and efficiently at a low computational cost. Moreover, our model successfully distinguishes GI disorders by visualizing class activation patterns as heat maps. Because the Kvasir-Capsule image dataset has a significant class imbalance problem, we exploited the synthetic oversampling technique Borderline-SMOTE (BL-SMOTE) to evenly distribute the images among the classes. The proposed model was evaluated against various metrics and achieved the following values: 98.9%, 99.8%, 98.9%, 98.9%, 98.8%, and 0.0474 for accuracy, AUC, F1-score, precision, recall, and loss, respectively. From the simulation results, it is noted that the proposed model outperforms other state-of-the-art models in all the evaluation metrics.
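To show how the class-rebalancing step mentioned above can be wired up, the sketch below applies imbalanced-learn's Borderline-SMOTE to flattened feature vectors; the synthetic data and feature dimensionality are placeholders, and in practice oversampling is applied to image features or flattened images rather than to raw frames of arbitrary size.

```python
# Borderline-SMOTE oversampling sketch for an imbalanced WCE feature set (synthetic placeholder data).
import numpy as np
from collections import Counter
from imblearn.over_sampling import BorderlineSMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))                     # 600 samples x 128-dim image features (assumed)
y = np.array([0] * 500 + [1] * 80 + [2] * 20)       # heavily imbalanced class labels

X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print("before:", Counter(y), "after:", Counter(y_res))   # minority classes upsampled to the majority count
```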
11
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470]
12
Investigating the significance of color space for abnormality detection in wireless capsule endoscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103624]
13
Chetcuti Zammit S, Sidhu R. Artificial intelligence within the small bowel: are we lagging behind? Curr Opin Gastroenterol 2022; 38:307-317. [PMID: 35645023 DOI: 10.1097/mog.0000000000000827]
Abstract
PURPOSE OF REVIEW The use of artificial intelligence in small bowel capsule endoscopy is expanding. This review focuses on the use of artificial intelligence for small bowel pathology compared with human performance, and on developments to date. RECENT FINDINGS The diagnosis and management of small bowel disease have been revolutionized by the advent of capsule endoscopy. Reading of capsule endoscopy videos, however, is time consuming, with an average reading time of 40 min. Furthermore, the fatigued human eye may miss subtle lesions, including indiscreet mucosal bulges. In recent years, artificial intelligence has made significant progress in the field of medicine, including gastroenterology. Machine learning has enabled feature extraction and, in combination with deep neural networks, image classification has now materialized for routine endoscopy for the clinician. SUMMARY Artificial intelligence is built into the NaviCam-Ankon capsule endoscopy reading system. This development will no doubt expand to other capsule endoscopy platforms and to capsule endoscopies used to visualize other parts of the gastrointestinal tract as a standard. This wireless and patient-friendly technique, combined with rapid reading platforms aided by artificial intelligence, will become an attractive and viable choice to alter how patients are investigated in the future.
Affiliation(s)
- Reena Sidhu
- Academic Department of Gastroenterology, Royal Hallamshire Hospital
- Academic Unit of Gastroenterology, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
14
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and there is an increase in research performed in the field of gastrointestinal (GI) endoscopy. In particular, the advent of convolutional neural networks, a class of deep learning methods, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 articles on the esophagus, 30 on the stomach, 29 on CE, and 63 on the colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced, with several patient-based prospective studies being conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been reported recently. Most studies of AI in the field of GI endoscopy are still in the preclinical stages because of their retrospective design using still images. Video-based prospective studies are needed to advance the field. However, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we highlight the published literature and provide the current status of, and insights into the future of, AI in GI endoscopy.
Affiliation(s)
- Yutaka Okagawa
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan; Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
15
16
Muruganantham P, Balakrishnan SM. Attention Aware Deep Learning Model for Wireless Capsule Endoscopy Lesion Classification and Localization. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00686-8]
17
Zhao PY, Han K, Yao RQ, Ren C, Du XH. Application Status and Prospects of Artificial Intelligence in Peptic Ulcers. Front Surg 2022; 9:894775. [PMID: 35784921 PMCID: PMC9244632 DOI: 10.3389/fsurg.2022.894775]
Abstract
Peptic ulcer (PU) is a common and frequently occurring disease. PU seriously threatens the lives and health of people worldwide, and applications of artificial intelligence (AI) have strongly promoted diversification and modernization in the diagnosis and treatment of PU. This minireview elaborates on the research progress of AI in the field of PU, from PU's pathogenic factor Helicobacter pylori (Hp) infection, through diagnosis and differential diagnosis, to management and complications (bleeding, obstruction, perforation, and malignant transformation). Finally, the challenges and prospects of AI application in PU are discussed. With a deeper understanding of modern medical technology, AI remains a promising option in the management of PU patients and will play an increasingly indispensable role. Achieving robustness, versatility, and diversity in multifunctional AI systems for PU, and conducting multicenter prospective clinical research as soon as possible, are the top priorities for the future.
Affiliation(s)
- Peng-yue Zhao
- Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ke Han
- Department of Gastroenterology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ren-qi Yao
- Translational Medicine Research Center, Medical Innovation Research Division and Fourth Medical Center of the Chinese PLA General Hospital, Beijing, China
- Chao Ren
- Department of Pulmonary and Critical Care Medicine, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xiao-hui Du
- Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
18
Bang CS, Lee JJ, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J Med Internet Res 2021; 23:e33267. [PMID: 34904949 PMCID: PMC8715364 DOI: 10.2196/33267]
Abstract
BACKGROUND Interpretation of capsule endoscopy images or movies is operator-dependent and time-consuming. As a result, computer-aided diagnosis (CAD) has been applied to enhance the efficacy and accuracy of the review process. Two previous meta-analyses reported the diagnostic performance of CAD models for gastrointestinal ulcers or hemorrhage in capsule endoscopy. However, the systematic reviews conducted so far are insufficient to determine the real diagnostic validity of CAD models. OBJECTIVE To evaluate the diagnostic test accuracy of CAD models for gastrointestinal ulcers or hemorrhage using wireless capsule endoscopic images. METHODS We searched core databases for studies of CAD models for the diagnosis of ulcers or hemorrhage using capsule endoscopy that presented data on diagnostic performance. A systematic review and diagnostic test accuracy meta-analysis were performed. RESULTS Overall, 39 studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of ulcers (or erosions) were .97 (95% confidence interval, .95-.98), .93 (.89-.95), .92 (.89-.94), and 138 (79-243), respectively. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of hemorrhage (or angioectasia) were .99 (.98-.99), .96 (.94-.97), .97 (.95-.99), and 888 (343-2303), respectively. Subgroup analyses showed robust results. Meta-regression showed that publication year, number of training images, and target disease (ulcers vs erosions, hemorrhage vs angioectasia) were sources of heterogeneity. No publication bias was detected. CONCLUSIONS CAD models showed high performance for the optical diagnosis of gastrointestinal ulcers and hemorrhage in wireless capsule endoscopy.
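For reference, the diagnostic odds ratio reported above relates to sensitivity (Se) and specificity (Sp) as in the identity below, written for a single 2×2 table; note that the pooled estimates in the study come from a diagnostic test accuracy meta-analysis model rather than from plugging pooled sensitivity and specificity into this formula.

```latex
\mathrm{DOR} \;=\; \frac{TP/FN}{FP/TN} \;=\; \frac{\mathrm{Se}\,\mathrm{Sp}}{(1-\mathrm{Se})(1-\mathrm{Sp})}
```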
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
19
Zhou J, Hu N, Huang ZY, Song B, Wu CC, Zeng FX, Wu M. Application of artificial intelligence in gastrointestinal disease: a narrative review. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:1188. [PMID: 34430629 PMCID: PMC8350704 DOI: 10.21037/atm-21-3001]
Abstract
Objective We collected evidence on the application of artificial intelligence (AI) in the gastroenterology field. The review is organized around two aspects, endoscope types and gastrointestinal diseases, and briefly summarizes the challenges and future directions in this field. Background Due to the advancement of computational power and a surge of available data, a solid foundation has been laid for the growth of AI. Specifically, various machine learning (ML) techniques have been emerging in endoscopic image analysis. To improve the accuracy and efficiency of clinicians, AI has been widely applied to gastrointestinal endoscopy. Methods The PubMed electronic database was searched using keywords including “AI”, “ML”, “deep learning (DL)”, “convolutional neural network”, and “endoscopy” (such as white light endoscopy (WLE), narrow band imaging (NBI) endoscopy, magnifying endoscopy with narrow band imaging (ME-NBI), chromoendoscopy, endocytoscopy (EC), and capsule endoscopy (CE)). Search results were assessed for relevance and then used for detailed discussion. Conclusions This review describes the basic knowledge of AI, ML, and DL, and summarizes the application of AI in various endoscopes and gastrointestinal diseases. Finally, the challenges and directions of AI in clinical application are discussed. At present, the application of AI has solved some clinical problems, but more still needs to be done.
Affiliation(s)
- Jun Zhou
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
- Na Hu
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Zhi-Yin Huang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Bin Song
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Chun-Cheng Wu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Fan-Xin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
- Min Wu
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
20
Convolution neural network for the diagnosis of wireless capsule endoscopy: a systematic review and meta-analysis. Surg Endosc 2021; 36:16-31. [PMID: 34426876 PMCID: PMC8741689 DOI: 10.1007/s00464-021-08689-3]
Abstract
Background Wireless capsule endoscopy (WCE) is considered to be a powerful instrument for the diagnosis of intestinal diseases. The convolutional neural network (CNN) is a type of artificial intelligence that has the potential to assist the detection of lesions in WCE images. We aimed to perform a systematic review of current research progress on CNN applications in WCE. Methods A search in PubMed, SinoMed, and Web of Science was conducted to collect all original publications about CNN implementation in WCE. Assessment of the risk of bias was performed with the Quality Assessment of Diagnostic Accuracy Studies-2 checklist. Pooled sensitivity and specificity were calculated by an exact binomial rendition of the bivariate mixed-effects regression model. I2 was used for the evaluation of heterogeneity. Results 16 articles with 23 independent studies were included. CNN applications in WCE were divided into detection of erosion/ulcer, gastrointestinal bleeding (GI bleeding), and polyps/cancer. The pooled sensitivity of CNN for erosion/ulcer was 0.96 (95% CI 0.91–0.98), for GI bleeding 0.97 (95% CI 0.93–0.99), and for polyps/cancer 0.97 (95% CI 0.82–0.99). The corresponding specificity of CNN for erosion/ulcer was 0.97 (95% CI 0.93–0.99), for GI bleeding 1.00 (95% CI 0.99–1.00), and for polyps/cancer 0.98 (95% CI 0.92–0.99). Conclusion Based on our meta-analysis, CNN-dependent diagnosis of erosion/ulcer, GI bleeding, and polyps/cancer approached a high level of performance because of its high sensitivity and specificity. Therefore, in future perspective, CNN has the potential to become an important assistant for the diagnosis of WCE. Supplementary Information The online version contains supplementary material available at 10.1007/s00464-021-08689-3.
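The I2 statistic used above to quantify heterogeneity is conventionally computed from Cochran's Q and its degrees of freedom (number of studies minus one), as shown below.

```latex
I^{2} \;=\; \max\!\left(0,\; \frac{Q - \mathrm{df}}{Q}\right) \times 100\%
```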
21
de Maissin A, Vallée R, Flamant M, Fondain-Bossiere M, Berre CL, Coutrot A, Normand N, Mouchère H, Coudol S, Trang C, Bourreille A. Multi-expert annotation of Crohn's disease images of the small bowel for automatic detection using a convolutional recurrent attention neural network. Endosc Int Open 2021; 9:E1136-E1144. [PMID: 34222640 PMCID: PMC8216776 DOI: 10.1055/a-1468-3964]
Abstract
Background and study aims Computer-aided diagnostic tools using deep neural networks are efficient for detection of lesions in endoscopy but require a huge number of images. The impact of the quality of annotation has not been tested yet. Here we describe a multi-expert annotated dataset of images extracted from capsule endoscopy recordings of Crohn's disease patients, and the impact of the quality of annotations on the accuracy of a recurrent attention neural network. Methods Capsule images were annotated first by a reader and then reviewed by three experts in inflammatory bowel disease. Concordance between experts was evaluated by Fleiss' kappa, and all discordant images were read again by all the endoscopists to obtain a consensus annotation. A recurrent attention neural network developed for the study was tested before and after the consensus annotation. Available neural networks (ResNet and VGGNet) were also tested under the same conditions. Results The final dataset included 3498 images: 2124 non-pathological (60.7%), 1360 pathological (38.9%), and 14 (0.4%) inconclusive. Agreement of the experts was good for distinguishing pathological from non-pathological images, with a kappa of 0.79 (P < 0.0001). The accuracy of our classifier and the available neural networks increased after the consensus annotation, with a precision of 93.7%, sensitivity of 93%, and specificity of 95%. Conclusions The accuracy of the neural network increased with improved annotations, suggesting that the number of images needed for the development of these systems could be reduced by using a well-designed dataset.
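A minimal sketch of the inter-rater agreement computation described above, using statsmodels' implementation of Fleiss' kappa on synthetic annotations; the four raters and binary pathological/non-pathological labels mirror the study design, but the data are invented.

```python
# Fleiss' kappa sketch for multi-rater image annotation (synthetic labels, 4 raters, 2 categories).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)                       # latent label per image
# each of the 4 raters mostly agrees with the latent label, with some noise
ratings = np.array([np.where(rng.random(200) < 0.9, truth, 1 - truth) for _ in range(4)]).T

table, _ = aggregate_raters(ratings)                       # images x categories count table
print("Fleiss' kappa:", round(fleiss_kappa(table), 3))
```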
Affiliation(s)
- Astrid de Maissin
- CHD La Roche Sur Yon, department of gastroenterology, La Roche Sur Yon, France
- Remi Vallée
- Nantes University, CNRS, LS2N UMR 6004, Nantes, France
- Mathurin Flamant
- Clinique Jules Verne, department of gastroenterology, Nantes, France
- Marie Fondain-Bossiere
- CHU Nantes, Institut des Maladies de l’Appareil Digestif, CIC Inserm 1413, Nantes University, Nantes, France
- Catherine Le Berre
- CHU Nantes, Institut des Maladies de l’Appareil Digestif, CIC Inserm 1413, Nantes University, Nantes, France
- Sandrine Coudol
- CHU de Nantes, INSERM CIC 1413, Pôle Hospitalo-Universitaire 11: Santé Publique, Clinique des données, Nantes, France
- Caroline Trang
- CHU Nantes, Institut des Maladies de l’Appareil Digestif, CIC Inserm 1413, Nantes University, Nantes, France
- Arnaud Bourreille
- CHU Nantes, Institut des Maladies de l’Appareil Digestif, CIC Inserm 1413, Nantes University, Nantes, France
22
Bhandari P, Longcroft-Wheaton G, Libanio D, Pimentel-Nunes P, Albeniz E, Pioche M, Sidhu R, Spada C, Anderloni A, Repici A, Haidry R, Barthet M, Neumann H, Antonelli G, Testoni A, Ponchon T, Siersema PD, Fuccio L, Hassan C, Dinis-Ribeiro M. Revising the European Society of Gastrointestinal Endoscopy (ESGE) research priorities: a research progress update. Endoscopy 2021; 53:535-554. [PMID: 33822332 DOI: 10.1055/a-1397-3005]
Abstract
BACKGROUND One of the aims of the European Society of Gastrointestinal Endoscopy (ESGE) is to encourage high quality endoscopic research at a European level. In 2016, the ESGE research committee published a set of research priorities. As endoscopic research is flourishing, we aimed to review the literature and determine whether endoscopic research over the last 4 years had managed to address any of our previously published priorities. METHODS As the previously published priorities were grouped under seven different domains, a working party with at least two European experts was created for each domain to review all the priorities under that domain. A structured review form was developed to standardize the review process. The group conducted an extensive literature search relevant to each of the priorities and then graded the priorities into three categories: (1) no longer a priority (well-designed trial, incorporated in national/international guidelines or adopted in routine clinical practice); (2) remains a priority (i. e. the above criterion was not met); (3) redefine the existing priority (i. e. the priority was too vague with the research question not clearly defined). RESULTS The previous ESGE research priorities document published in 2016 had 26 research priorities under seven domains. Our review of these priorities has resulted in seven priorities being removed from the list, one priority being partially removed, another seven being redefined to make them more precise, with eleven priorities remaining unchanged. This is a reflection of a rapid surge in endoscopic research, resulting in 27 % of research questions having already been answered and another 27 % requiring redefinition. CONCLUSIONS Our extensive review process has led to the removal of seven research priorities from the previous (2016) list, leaving 19 research priorities that have been redefined to make them more precise and relevant for researchers and funding bodies to target.
Collapse
Affiliation(s)
- Pradeep Bhandari
- Department of Gastroenterology, Portsmouth University Hospital NHS Trust, Portsmouth, UK
| | | | - Diogo Libanio
- Gastroenterology Department, Portuguese Oncology Institute of Porto, Porto, Portugal.,Center for Research in Health Technologies and Information Systems (CINTESIS), Faculty of Medicine, Porto, Portugal
| | - Pedro Pimentel-Nunes
- Gastroenterology Department, Portuguese Oncology Institute of Porto, Porto, Portugal.,Center for Research in Health Technologies and Information Systems (CINTESIS), Faculty of Medicine, Porto, Portugal
| | - Eduardo Albeniz
- Gastroenterology Department, Endoscopy Unit, Complejo Hospitalario de Navarra, Navarrabiomed-UPNA-IdiSNA, Pamplona, Spain
| | - Mathieu Pioche
- Gastroenterology Division, Edouard Herriot Hospital, Lyon, France
| | - Reena Sidhu
- Academic Department of Gastroenterology, Royal Hallamshire Hospital, Sheffield, UK
| | - Cristiano Spada
- Digestive Endoscopy and Gastroenterology, Fondazione Poliambulanza, Brescia, Italy.,Università Cattolica del Sacro Cuore, Rome, Italy
| | - Andrea Anderloni
- Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli, Ariccia, Rome, Italy
| | - Alessandro Repici
- Department of Biomedical Sciences, Humanitas University, Milan, Italy.,Digestive Endoscopy Unit, IRCSS Humanitas Research Hospital, Milan, Italy
| | - Rehan Haidry
- Department of Gastroenterology, University College London Hospitals, London, UK
| | - Marc Barthet
- Department of Gastroenterology, Hôpital Nord, Assistance publique des hôpitaux de Marseille, Marseille, France
| | - Helmut Neumann
- Department of Medicine I, University Medical Center Mainz, Mainz, Germany.,GastroZentrum Lippe, Bad Salzuflen, Germany
| | - Giulio Antonelli
- Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli, Ariccia, Rome, Italy.,Nuovo Regina Margherita Hospital, Rome, Italy.,Department of Translational and Precision Medicine, "Sapienza" University of Rome, Rome, Italy
| | | | - Thierry Ponchon
- Gastroenterology Division, Edouard Herriot Hospital, Lyon, France
| | - Peter D Siersema
- Department of Gastroenterology and Hepatology, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Lorenzo Fuccio
- Department of Medical and Surgical Sciences, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
| | | | - Mario Dinis-Ribeiro
- Gastroenterology Department, Portuguese Oncology Institute of Porto, Porto, Portugal; Center for Research in Health Technologies and Information Systems (CINTESIS), Faculty of Medicine, Porto, Portugal
| |
Collapse
|
23
|
3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00328-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Wireless capsule endoscopy (WCE) moves through the human body capturing video of the small bowel, and every frame of that video must be analysed, which makes the diagnosis of gastrointestinal infections a tedious task for the physician. This tiresome assignment has fuelled researchers' efforts to develop automated techniques for detecting gastrointestinal infections. Segmentation of stomach infections is challenging because lesion regions have low contrast and irregular shape and size. To handle this challenging task, this work proposes a new deep semantic segmentation model for 3D segmentation of different types of stomach infections. The segmentation model employs the DeepLabv3 architecture with a ResNet-50 backbone; it is trained with ground-truth masks and performs accurate pixel-wise classification in the testing phase. Because the different types of stomach lesions appear similar, accurate classification is difficult; this is addressed by extracting deep features from the global input images using a pre-trained ResNet-50 model. Furthermore, recent advances in uncertainty estimation and model interpretability are applied to the classification of the different types of stomach infections. The classification results estimate the uncertainty related to the vital features in the input and show how uncertainty and interpretability might be modeled in ResNet-50 for the classification of the different types of stomach infections. The proposed model achieved prediction scores of up to 90%, supporting the performance of the method.
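As a concrete illustration of the segmentation pipeline described above (a DeepLabv3 head over a ResNet-50 encoder, trained against ground-truth masks for pixel-wise classification), the following minimal PyTorch sketch shows how such a model can be assembled from off-the-shelf components. It is not the authors' implementation; the number of classes, input resolution, loss, and optimiser settings are assumptions made purely for illustration.

# Minimal sketch of a DeepLabv3 + ResNet-50 segmentation model for lesion masks.
# NOT the paper's code; NUM_CLASSES, input size and optimiser are assumed values.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # e.g. background + three hypothetical lesion types

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss()          # pixel-wise classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B, 3, H, W) float tensor; masks: (B, H, W) long tensor of class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example forward pass on a dummy batch
dummy = torch.randn(2, 3, 256, 256)
with torch.no_grad():
    pred = model(dummy)["out"].argmax(dim=1)   # per-pixel class prediction

Per-pixel cross-entropy against the ground-truth masks is the standard training objective for this kind of semantic segmentation; the uncertainty and interpretability analysis described in the abstract would sit on top of the separate ResNet-50 classifier and is not shown here.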
Collapse
|
24
|
Small Bowel Capsule Endoscopy and artificial intelligence: First or second reader? Best Pract Res Clin Gastroenterol 2021; 52-53:101742. [PMID: 34172256 DOI: 10.1016/j.bpg.2021.101742] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Accepted: 03/17/2021] [Indexed: 01/31/2023]
Abstract
Several machine learning algorithms have been developed in recent years with the aim of improving the feasibility of SBCE (Small Bowel Capsule Endoscopy) while ensuring high diagnostic accuracy. Whereas earlier algorithms were hampered by low performance and unsatisfactory accuracy, deep learning systems have raised expectations of effective AI (Artificial Intelligence) application in SBCE reading. Automatic detection and characterization of lesions, such as angioectasias, erosions and ulcers, would significantly shorten reading time as well as improve reader attention during SBCE review in routine activity. It is debated whether AI should be used as a first or second reader. This issue should be further investigated by measuring the accuracy and cost-effectiveness of AI systems. Currently, AI has mostly been evaluated as a first reader. However, second reading may play an important role in SBCE training, as well as in better characterizing lesions about which the first reader was uncertain.
Collapse
|
25
|
A Petri Dish for Histopathology Image Analysis. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-77211-6_2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
26
|
Sumiyama K, Futakuchi T, Kamba S, Matsui H, Tamai N. Artificial intelligence in endoscopy: Present and future perspectives. Dig Endosc 2021; 33:218-230. [PMID: 32935376 DOI: 10.1111/den.13837] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Accepted: 09/04/2020] [Indexed: 02/08/2023]
Abstract
Artificial intelligence (AI) has been attracting considerable attention as an important scientific topic in the field of medicine. Deep-learning (DL) technologies have been applied more dominantly than other traditional machine-learning methods. They have demonstrated an excellent capability to extract visual features of objects, even those unnoticeable to humans, and to analyze huge amounts of information within short periods. The amount of research applying DL-based models to real-time computer-aided diagnosis (CAD) systems has been increasing steadily in the GI endoscopy field. An array of published data has already demonstrated the advantages of DL-based CAD models in the detection and characterization of various neoplastic lesions, regardless of the level of the GI tract. Although the diagnostic performances and study designs vary widely, owing to a lack of academic standards for fairly assessing the capability of AI in GI endoscopic diagnosis, the superiority of CAD models has been demonstrated for almost all applications studied so far. Most of the challenges associated with AI in the endoscopy field are general problems for AI models used in the real world outside of medical fields. Solutions have been seriously explored, and some have been tested in the endoscopy field. Given that AI has become the basic technology for making machines react to their environment, AI would represent a major technological paradigm shift, not only for diagnosis but also for treatment. In the near future, autonomous endoscopic diagnosis might no longer be just a dream, as we are witnessing with the advent of autonomously driven electric vehicles.
Collapse
Affiliation(s)
- Kazuki Sumiyama
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
| | - Toshiki Futakuchi
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
| | - Shunsuke Kamba
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
| | - Hiroaki Matsui
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
| | - Naoto Tamai
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
| |
Collapse
|
27
|
Nayyar Z, Attique Khan M, Alhussein M, Nazir M, Aurangzeb K, Nam Y, Kadry S, Irtaza Haider S. Gastric Tract Disease Recognition Using Optimized Deep Learning Features. COMPUTERS, MATERIALS & CONTINUA 2021; 68:2041-2056. [DOI: 10.32604/cmc.2021.015916] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Accepted: 02/13/2021] [Indexed: 08/25/2024]
|
28
|
Barash Y, Azaria L, Soffer S, Margalit Yehuda R, Shlomi O, Ben-Horin S, Eliakim R, Klang E, Kopylov U. Ulcer severity grading in video capsule images of patients with Crohn's disease: an ordinal neural network solution. Gastrointest Endosc 2021; 93:187-192. [PMID: 32535191 DOI: 10.1016/j.gie.2020.05.066] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Accepted: 05/26/2020] [Indexed: 02/07/2023]
Abstract
BACKGROUND AND AIMS Capsule endoscopy (CE) is an important modality for diagnosis and follow-up of Crohn's disease (CD). The severity of ulcers at endoscopy is significant for predicting the course of CD. Deep learning has been proven accurate in detecting ulcers on CE. However, endoscopic classification of ulcers by deep learning has not been attempted. The aim of our study was to develop a deep learning algorithm for automated grading of CD ulcers on CE. METHODS We retrospectively collected CE images of CD ulcers from our CE database. In experiment 1, the severity of each ulcer was graded by 2 capsule readers based on the PillCam CD classification (grades 1-3 from mild to severe), and the inter-reader variability was evaluated. In experiment 2, a consensus reading by 3 capsule readers was used to train an ordinal convolutional neural network (CNN) to automatically grade images of ulcers, and the resulting algorithm was tested against the consensus reading. A pretraining stage included training the network on images of normal mucosa and ulcerated mucosa. RESULTS Overall, our dataset included 17,640 CE images from 49 patients; 7391 images with mucosal ulcers and 10,249 normal images. A total of 2598 randomly selected pathologic images were further graded from 1 to 3 according to ulcer severity in the 2 different experiments. In experiment 1, overall inter-reader agreement occurred for 31% of the images (345 of 1108) and 76% (752 of 989) for distinction of grades 1 and 3. In experiment 2, the algorithm was trained on 1242 images. It achieved an overall agreement for consensus reading of 67% (166 of 248) and 91% (158 of 173) for distinction of grades 1 and 3. The classification accuracy of the algorithm was 0.91 (95% confidence interval, 0.867-0.954) for grade 1 versus grade 3 ulcers, 0.78 (95% confidence interval, 0.716-0.844) for grade 2 versus grade 3, and 0.624 (95% confidence interval, 0.547-0.701) for grade 1 versus grade 2. CONCLUSIONS CNN achieved high accuracy in detecting severe CD ulcerations. CNN-assisted CE readings in patients with CD can potentially facilitate and improve diagnosis and monitoring in these patients.
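The ordinal grading described above (grades 1-3, trained against a consensus reading) can be framed as a cumulative, threshold-based classification problem. The PyTorch sketch below shows one common way to build such an ordinal head on a ResNet-50 backbone; it is an illustrative assumption, not the network used in the cited study, and the backbone choice, feature dimension, cutpoints and decision rule are placeholders.

# Illustrative ordinal (cumulative-link style) grading head on a ResNet-50 backbone.
# NOT the network from the cited study; backbone, feature size and thresholds are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_GRADES = 3  # PillCam CD grades 1 (mild) to 3 (severe)

class OrdinalGrader(nn.Module):
    def __init__(self, num_grades=NUM_GRADES):
        super().__init__()
        backbone = resnet50(weights=None)          # could first be pretrained on normal vs. ulcerated frames
        backbone.fc = nn.Identity()                # keep the 2048-d pooled features
        self.backbone = backbone
        self.score = nn.Linear(2048, 1)            # single latent severity score
        self.cutpoints = nn.Parameter(torch.zeros(num_grades - 1))  # learned grade boundaries

    def forward(self, x):
        s = self.score(self.backbone(x))           # (B, 1)
        return torch.sigmoid(s - self.cutpoints)   # (B, K-1): P(grade > k) for k = 1..K-1

def cumulative_targets(grades):
    """Map integer grades in {1..3} to (B, K-1) binary targets [grade>1, grade>2]."""
    return (grades.unsqueeze(1) > torch.arange(1, NUM_GRADES)).float()

def predict_grade(probs):
    """Count how many cumulative thresholds are exceeded: result is in {1..3}."""
    return 1 + (probs > 0.5).sum(dim=1)

model = OrdinalGrader()
criterion = nn.BCELoss()
images = torch.randn(4, 3, 224, 224)              # dummy batch of capsule frames
labels = torch.tensor([1, 2, 3, 1])
loss = criterion(model(images), cumulative_targets(labels))
grades = predict_grade(model(images))             # tensor of predicted grades, one per image

Because a single shared severity score is compared against learned cutpoints, predictions are obtained simply by counting how many cutpoints the score exceeds, which respects the ordinal structure of the 1-3 grading scale rather than treating the grades as unrelated classes.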
Collapse
Affiliation(s)
- Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Liran Azaria
- DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Shelly Soffer
- DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Reuma Margalit Yehuda
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Oranit Shlomi
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Shomron Ben-Horin
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Rami Eliakim
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Uri Kopylov
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| |
Collapse
|
29
|
Atsawarungruangkit A, Elfanagely Y, Asombang AW, Rupawala A, Rich HG. Understanding deep learning in capsule endoscopy: Can artificial intelligence enhance clinical practice? Artif Intell Gastrointest Endosc 2020; 1:33-43. [DOI: 10.37126/aige.v1.i2.33] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 10/01/2020] [Accepted: 10/13/2020] [Indexed: 02/06/2023] Open
Abstract
Wireless capsule endoscopy (WCE) enables physicians to examine the gastrointestinal tract by transmitting images wirelessly from a disposable capsule to a data recorder. Although WCE is the least invasive endoscopy technique for diagnosing gastrointestinal disorders, interpreting a WCE study requires significant time, effort, and training. Analysis of images by artificial intelligence, through advances such as machine or deep learning, has been increasingly applied to medical imaging. There has been substantial interest in using deep learning to detect various gastrointestinal disorders based on WCE images. This article discusses the basic knowledge of deep learning, applications of deep learning in WCE, and the implementation of deep learning models in a clinical setting. We anticipate continued research investigating the use of deep learning in interpreting WCE studies to generate predictive algorithms and aid in the diagnosis of gastrointestinal disorders.
Collapse
Affiliation(s)
- Amporn Atsawarungruangkit
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| | - Yousef Elfanagely
- Department of Internal Medicine, Brown University, Providence, RI 02903, United States
| | - Akwi W Asombang
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| | - Abbas Rupawala
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| | - Harlan G Rich
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| |
Collapse
|