1
Oukdach Y, Garbaz A, Kerkaou Z, El Ansari M, Koutti L, El Ouafdi AF, Salihoun M. UViT-Seg: An Efficient ViT and U-Net-Based Framework for Accurate Colorectal Polyp Segmentation in Colonoscopy and WCE Images. J Imaging Inform Med 2024; 37:2354-2374. [PMID: 38671336] [PMCID: PMC11522253] [DOI: 10.1007/s10278-024-01124-8]
Abstract
Colorectal cancer (CRC) is one of the most prevalent cancers worldwide. Accurate localization of colorectal polyps in endoscopy images is pivotal for timely detection and removal, contributing significantly to CRC prevention. Manual analysis of images generated by gastrointestinal screening technologies is a tedious task for doctors, so computer vision-assisted cancer detection could serve as an efficient tool for polyp segmentation. Numerous efforts have been dedicated to automating polyp localization, with the majority of studies relying on convolutional neural networks (CNNs) to learn features from polyp images. Despite their success in polyp segmentation tasks, CNNs are limited in precisely determining polyp location and shape because they learn only local image features. Because gastrointestinal images vary widely in both high- and low-level features, a framework that can learn both kinds of polyp features is desirable. This paper introduces UViT-Seg, a framework designed for polyp segmentation in gastrointestinal images. Built on an encoder-decoder architecture, UViT-Seg employs two distinct feature extraction methods: a vision transformer in the encoder captures long-range semantic information, while a CNN module integrating squeeze-excitation and dual attention mechanisms captures low-level features, focusing on critical image regions. Experimental evaluations on five public datasets (CVC-Clinic, ColonDB, Kvasir-SEG, ETIS-LaribDB, and Kvasir Capsule-SEG) demonstrate UViT-Seg's effectiveness in polyp localization. To confirm its generalization performance, the model is also tested on datasets not used in training. Benchmarked against common segmentation methods and state-of-the-art polyp segmentation approaches, the proposed model yields promising results; for instance, it achieves a mean Dice coefficient of 0.915 and a mean intersection over union of 0.902 on the CVC Colon dataset. Furthermore, UViT-Seg is efficient, requiring fewer computational resources for both training and testing, which positions it well for real-world deployment scenarios.
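The Dice coefficient and intersection over union reported above are the standard overlap metrics for segmentation masks. As a reference for how they are computed, here is a minimal sketch over flattened binary masks (illustrative only, not the authors' code):

```python
def dice_coefficient(pred, target):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 sequences.
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def iou(pred, target):
    # Intersection over union (Jaccard index) for the same binary masks.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0
```

A perfectly predicted mask scores 1.0 on both metrics; for partially overlapping masks, IoU is always the stricter of the two.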
Affiliation(s)
- Yassine Oukdach: LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Anass Garbaz: LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Zakaria Kerkaou: LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Mohamed El Ansari: Informatics and Applications Laboratory, Department of Computer Sciences, Faculty of Science, Moulay Ismail University, B.P 11201, Meknès, 52000, Morocco
- Lahcen Koutti: LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Ahmed Fouad El Ouafdi: LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Mouna Salihoun: Faculty of Medicine and Pharmacy of Rabat, Mohammed V University of Rabat, Rabat, 10000, Morocco
2
George AA, Tan JL, Kovoor JG, Lee A, Stretton B, Gupta AK, Bacchi S, George B, Singh R. Artificial intelligence in capsule endoscopy: development status and future expectations. Mini-invasive Surg 2024. [DOI: 10.20517/2574-1225.2023.102]
Abstract
In this review, we aim to illustrate state-of-the-art artificial intelligence (AI) applications in the field of capsule endoscopy. AI has made significant strides in gastrointestinal imaging, particularly in capsule endoscopy, a non-invasive procedure for capturing gastrointestinal tract images. However, manual analysis of capsule endoscopy videos is labour-intensive and error-prone, prompting the development of automated computational algorithms and AI models. While currently serving as a supplementary observer, AI has the capacity to evolve into an autonomous, integrated reading system, potentially reducing capsule reading time significantly while surpassing human accuracy. We searched the Embase, PubMed, Medline, and Cochrane databases from inception to 6 July 2023 for studies investigating the use of AI for capsule endoscopy and screened retrieved records for eligibility. Quantitative and qualitative data were extracted and synthesised to identify current themes. The search collected 824 articles, from which 291 duplicates and 31 abstracts were removed. After a double-screening process and full-text review, 106 publications were included in the review. Themes pertaining to AI for capsule endoscopy included active gastrointestinal bleeding, erosions and ulcers, vascular lesions and angiodysplasias, polyps and tumours, inflammatory bowel disease, coeliac disease, hookworms, bowel preparation assessment, and multiple-lesion detection. This review provides current insights into the impact of AI on capsule endoscopy as of 2023. AI holds the potential for faster and more precise readings and the prospect of autonomous image analysis. However, careful consideration of diagnostic requirements and potential challenges is crucial. The untapped potential of vision transformer technology hints at further evolution and even greater patient benefit.
3
Jiang B, Dorosan M, Leong JWH, Ong MEH, Lam SSW, Ang TL. Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution. Singapore Med J 2024; 65:133-140. [PMID: 38527297] [PMCID: PMC11060635] [DOI: 10.4103/singaporemedj.smj-2023-187]
Abstract
INTRODUCTION Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities. METHODS We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network-based models pretrained on large-scale open-domain data to extract spatial features of CE images that were then used in a dense feed-forward neural network classifier. We then combined the open-source Kvasir-Capsule dataset (n = 43) and locally collected CE data (n = 29). RESULTS Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM model based on a pre-trained ResNet50 architecture had an area under the receiver operating characteristic and precision-recall curves of 0.969±0.008 and 0.843±0.041, respectively. The best ADM model, also based on ResNet50, had top-1 and top-2 accuracies of 84.03±0.051 and 94.78±0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding. CONCLUSION Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflow and resource constraints.
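The area under the receiver operating characteristic curve used to compare the bowel preparation models equals the probability that a randomly chosen positive frame is scored above a randomly chosen negative one. A minimal illustrative computation of that rank statistic (not the authors' evaluation code):

```python
def auroc(scores, labels):
    # AUROC via the Mann-Whitney U statistic: the fraction of
    # (positive, negative) pairs where the positive example scores
    # higher, counting score ties as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A value of 1.0 means every abnormal frame outranks every normal frame; 0.5 is chance-level discrimination.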
Affiliation(s)
- Bochao Jiang: Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Michael Dorosan: Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Justin Wen Hao Leong: Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Marcus Eng Hock Ong: Health Services and Systems Research, Duke-NUS Medical School, Singapore; Department of Emergency Medicine, Singapore General Hospital, Singapore
- Sean Shao Wei Lam: Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Tiing Leong Ang: Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
4
Malik H, Anees T, Al-Shamaylehs AS, Alharthi SZ, Khalil W, Akhunzada A. Deep Learning-Based Classification of Chest Diseases Using X-rays, CT Scans, and Cough Sound Images. Diagnostics (Basel) 2023; 13:2772. [PMID: 37685310] [PMCID: PMC10486427] [DOI: 10.3390/diagnostics13172772]
Abstract
Chest disease refers to a variety of lung disorders, including lung cancer (LC), COVID-19, pneumonia (PNEU), tuberculosis (TB), and numerous other respiratory disorders. The symptoms of these chest diseases (fever, cough, sore throat, etc.) are similar, which might mislead radiologists and health experts when classifying them. Chest X-rays (CXR), cough sounds, and computed tomography (CT) scans are utilized by researchers and doctors to identify chest diseases such as LC, COVID-19, PNEU, and TB. The objective of this work is to identify nine different types of chest diseases, including COVID-19, edema (EDE), LC, PNEU, pneumothorax (PNEUTH), normal, atelectasis (ATE), and consolidation lung (COL). To this end, we designed a novel deep learning (DL)-based chest disease detection network (DCDD_Net) that uses CXR, CT scan, and cough sound images to identify these nine disease types. The scalogram method is used to convert cough sounds into images. Before training the proposed DCDD_Net model, borderline SMOTE (BL-SMOTE) is applied to balance the CXR, CT scan, and cough sound images across the nine classes. The proposed DCDD_Net model is trained and evaluated on 20 publicly available benchmark chest disease datasets of CXR, CT scan, and cough sound images. The classification performance of DCDD_Net is compared with four baseline models (InceptionResNet-V2, EfficientNet-B0, DenseNet-201, and Xception) as well as state-of-the-art (SOTA) classifiers. DCDD_Net achieved an accuracy of 96.67%, a precision of 96.82%, a recall of 95.76%, an F1-score of 95.61%, and an area under the curve (AUC) of 99.43%, outperforming the four baseline models on most performance evaluation metrics; statistical evaluations of the datasets using McNemar and ANOVA tests further showed the model to be resilient. Thus, the proposed DCDD_Net model can provide significant assistance to radiologists and medical experts.
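Borderline SMOTE, used above to balance the classes, is a variant of SMOTE that restricts synthesis to minority samples near the class boundary. The core interpolation step shared by both variants can be sketched as follows (plain SMOTE, illustrative only; real pipelines typically use a library such as imbalanced-learn):

```python
import random

def smote_samples(minority, k=2, n_new=4, seed=0):
    # Plain SMOTE: create synthetic minority points by interpolating
    # between a random minority point and one of its k nearest minority
    # neighbours. (Borderline SMOTE would first filter the candidate
    # points to those lying near the class boundary.)
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # Sort by squared distance to a; index 0 is a itself, so skip it.
        neighbours = sorted(
            minority,
            key=lambda b: sum((x - y) ** 2 for x, y in zip(a, b)))[1:k + 1]
        b = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original feature region.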
Affiliation(s)
- Hassaan Malik: School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
- Tayyaba Anees: School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
- Ahmad Sami Al-Shamaylehs: Department of Networks and Cybersecurity, Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
- Salman Z. Alharthi: Department of Information System, College of Computers and Information Systems, Al-Lith Campus, Umm AL-Qura University, P.O. Box 7745, AL-Lith 21955, Saudi Arabia
- Wajeeha Khalil: Department of Computer Science and Information Technology, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Adnan Akhunzada: College of Computing & IT, University of Doha for Science and Technology, Doha P.O. Box 24449, Qatar
5
Musha A, Hasnat R, Mamun AA, Ping EP, Ghosh T. Computer-Aided Bleeding Detection Algorithms for Capsule Endoscopy: A Systematic Review. Sensors (Basel) 2023; 23:7170. [PMID: 37631707] [PMCID: PMC10459126] [DOI: 10.3390/s23167170]
Abstract
Capsule endoscopy (CE) is a widely used medical imaging tool for diagnosing gastrointestinal tract abnormalities such as bleeding. However, CE captures a huge number of image frames, making manual inspection a time-consuming and tedious task for medical experts. To address this issue, researchers have focused on computer-aided bleeding detection systems that automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used to perform the review, and 147 full-text scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper concludes by providing future directions for computer-aided bleeding detection research.
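Many of the reviewed detectors work in the HSV color space because bleeding manifests as strongly saturated red hues. A minimal illustration of that idea using the standard library (the threshold values are illustrative assumptions, not taken from any reviewed paper):

```python
import colorsys

def is_bleeding_like(r, g, b, sat_min=0.5, val_min=0.2):
    # Convert an 8-bit RGB pixel to HSV and flag saturated, bright red
    # hues. Hue near 0 or 1 corresponds to red; the hue, saturation, and
    # value thresholds here are illustrative, not from a specific paper.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    red_hue = h < 0.05 or h > 0.95
    return red_hue and s >= sat_min and v >= val_min
```

Pixel-level rules like this are typically the first stage of a pipeline; the flagged regions are then passed to a trained classifier.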
Affiliation(s)
- Ahmmad Musha: Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Rehnuma Hasnat: Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Abdullah Al Mamun: Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Em Poh Ping: Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Tonmoy Ghosh: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
6
A Robust Deep Model for Classification of Peptic Ulcer and Other Digestive Tract Disorders Using Endoscopic Images. Biomedicines 2022; 10:2195. [PMID: 36140296] [PMCID: PMC9496137] [DOI: 10.3390/biomedicines10092195]
Abstract
Accurate patient disease classification and detection through deep-learning (DL) models is increasingly contributing to the area of biomedical imaging. The most frequent gastrointestinal (GI) tract ailments are peptic ulcers and stomach cancer. Conventional endoscopy is a painful and hectic procedure for the patient, while Wireless Capsule Endoscopy (WCE) is a useful technology for diagnosing GI problems through painless gut imaging. However, accurately and efficiently investigating the thousands of images captured during a WCE procedure remains a challenge, because existing deep models do not achieve significant accuracy in WCE image analysis. To prevent emergency conditions among patients, an efficient and accurate DL model for real-time analysis is needed. In this study, we propose a reliable and efficient approach for classifying GI tract abnormalities in WCE images using a deep Convolutional Neural Network (CNN). For this purpose, we propose a custom CNN architecture named GI Disease-Detection Network (GIDD-Net), designed from scratch with relatively few parameters to detect GI tract disorders more accurately and efficiently at low computational cost. Moreover, our model distinguishes GI disorders by visualizing class activation patterns as heat maps. Because the Kvasir-Capsule image dataset has a significant class imbalance problem, we exploited the synthetic oversampling technique borderline SMOTE (BL-SMOTE) to evenly distribute images among the classes. The proposed model was evaluated against various metrics and achieved 98.9% accuracy, 99.8% AUC, 98.9% F1-score, 98.9% precision, 98.8% recall, and a loss of 0.0474. The simulation results show that the proposed model outperforms other state-of-the-art models on all evaluation metrics.
7
Vajravelu A, Selvan KT, Jamil MMBA, Anitha J, Diez IDLT. Machine learning techniques to detect bleeding frame and area in wireless capsule endoscopy video. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-213099]
Abstract
Wireless Capsule Endoscopy (WCE) allows direct visual inspection of a patient's full digestive system without invasion or pain, at the price of a lengthy examination of a large number of photographs by physicians. This research presents a new approach to color feature extraction that differentiates bleeding frames from normal ones and localizes bleeding areas. The proposal has two parts. First, we use the full color information of the WCE images and a pixel-level clustering approach to obtain cluster centers that characterize WCE images as visual words; we then classify the status of a WCE frame using a support vector machine (SVM) and k-nearest neighbours (KNN). The classification accuracy is 95.75% with an AUC of 0.9771, validating the promising bleeding-classification performance of the suggested approach. Second, we present a two-step approach for extracting saliency maps to emphasize bleeding locations: a distinct color channel mixer builds the first-stage saliency map, and the second-stage saliency map is obtained from optical contrast. After a suitable fusion approach and thresholding, we locate the bleeding spots. Quantitative and qualitative studies demonstrate that our approaches can correctly distinguish bleeding sites from their neighborhoods.
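The frame-classification step above votes among nearby training examples. A self-contained sketch of k-nearest-neighbour classification over color feature vectors (illustrative, not the authors' implementation; the SVM branch is omitted):

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    # Rank training vectors by squared Euclidean distance to the query,
    # then return the majority label among the k closest ones.
    order = sorted(
        range(len(train)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], query)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```

With cluster-center features computed per frame, a function like this would be called once per frame to label it bleeding or normal.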
Affiliation(s)
- Ashok Vajravelu: Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn, Malaysia
- K.S. Tamil Selvan: Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, India
- Jude Anitha: Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India
- Isabel de la Torre Diez: Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Spain
8
Raut V, Gunjan R, Shete VV, Eknath UD. Gastrointestinal tract disease segmentation and classification in wireless capsule endoscopy using intelligent deep learning model. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2099298]
Affiliation(s)
- Vrushali Raut: Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
- Reena Gunjan: Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
- Virendra V. Shete: Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
- Upasani Dhananjay Eknath: Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
9
Investigating the significance of color space for abnormality detection in wireless capsule endoscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103624]
10
11
DFCA-Net: Dual Feature Context Aggregation Network for Bleeding Areas Segmentation in Wireless Capsule Endoscopy Images. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00689-5]
12
Goel N, Kaur S, Gunjan D, Mahapatra SJ. Dilated CNN for abnormality detection in wireless capsule endoscopy images. Soft Comput 2022. [DOI: 10.1007/s00500-021-06546-y]
13
Efficient scheme for WCE image compression based on strategic chroma subsampling and encoding. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103184]
14
Amiri Z, Hassanpour H, Beghdadi A. Feature extraction for abnormality detection in capsule endoscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103219]
15
Liu Y, Miao Q, Surawech C, Zheng H, Nguyen D, Yang G, Raman SS, Sung K. Deep Learning Enables Prostate MRI Segmentation: A Large Cohort Evaluation With Inter-Rater Variability Analysis. Front Oncol 2021; 11:801876. [PMID: 34993152] [PMCID: PMC8724207] [DOI: 10.3389/fonc.2021.801876]
Abstract
Whole-prostate gland (WPG) segmentation plays a significant role in prostate volume measurement, treatment, and biopsy planning. This study evaluated a previously developed automatic WPG segmentation, deep attentive neural network (DANN), on a large, continuous patient cohort to test its feasibility in a clinical setting. With IRB approval and HIPAA compliance, the study cohort included 3,698 3T MRI scans acquired between 2016 and 2020. In total, 335 MRI scans were used to train the model, and 3,210 and 100 were used to conduct the qualitative and quantitative evaluation of the model, respectively. In addition, DANN-enabled prostate volume estimation was evaluated on 50 MRI scans in comparison with manual prostate volume estimation. For qualitative evaluation, visual grading by two abdominal radiologists was used to assess WPG segmentation, and DANN demonstrated acceptable or excellent performance in over 96% of the testing cohort on the WPG and on each prostate sub-portion (apex, midgland, or base). The two radiologists reached substantial agreement on WPG and midgland segmentation (κ = 0.75 and 0.63) and moderate agreement on apex and base segmentation (κ = 0.56 and 0.60). For quantitative evaluation, DANN demonstrated a Dice similarity coefficient of 0.93 ± 0.02, significantly higher than baseline methods such as DeepLab v3+ and UNet (both p values < 0.05). For the volume measurement, 96% of the evaluation cohort showed differences between the DANN-enabled and manual volume measurements within the 95% limits of agreement. In conclusion, the study showed that DANN achieved sufficient and consistent WPG segmentation on a large, continuous study cohort, demonstrating its great potential to serve as a tool to measure prostate volume.
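The inter-rater agreement reported above uses Cohen's kappa, which corrects raw agreement for the agreement expected by chance from each rater's label frequencies. A minimal illustrative computation (not the study's code):

```python
def cohens_kappa(rater_a, rater_b):
    # kappa = (p_observed - p_expected) / (1 - p_expected), where
    # p_expected is derived from each rater's marginal label frequencies.
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_obs = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement, and values between 0.6 and 0.8, as for the WPG grades above, are conventionally read as substantial.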
Affiliation(s)
- Yongkai Liu: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States; Physics and Biology in Medicine Interdisciplinary Program (IDP), David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Qi Miao: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States; Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang City, China
- Chuthaporn Surawech: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States; Department of Radiology, Division of Diagnostic Radiology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Haoxin Zheng: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States; Department of Computer Science, Henry Samueli School of Engineering and Applied Science, University of California, Los Angeles, CA, United States
- Dan Nguyen: Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States
- Guang Yang: National Heart and Lung Institute, Imperial College London, London, United Kingdom
- Steven S. Raman: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Kyunghyun Sung: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States; Physics and Biology in Medicine Interdisciplinary Program (IDP), David Geffen School of Medicine, University of California, Los Angeles, CA, United States
16
Bang CS, Lee JJ, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J Med Internet Res 2021; 23:e33267. [PMID: 34904949] [PMCID: PMC8715364] [DOI: 10.2196/33267]
Abstract
BACKGROUND Interpretation of capsule endoscopy images or movies is operator-dependent and time-consuming. As a result, computer-aided diagnosis (CAD) has been applied to enhance the efficacy and accuracy of the review process. Two previous meta-analyses reported the diagnostic performance of CAD models for gastrointestinal ulcers or hemorrhage in capsule endoscopy; however, the systematic reviews conducted so far are insufficient to determine the real diagnostic validity of CAD models. OBJECTIVE To evaluate the diagnostic test accuracy of CAD models for gastrointestinal ulcers or hemorrhage using wireless capsule endoscopic images. METHODS We searched core databases for studies based on CAD models for the diagnosis of ulcers or hemorrhage using capsule endoscopy that presented data on diagnostic performance, and performed a systematic review and diagnostic test accuracy meta-analysis. RESULTS Overall, 39 studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of ulcers (or erosions) were 0.97 (95% confidence interval 0.95-0.98), 0.93 (0.89-0.95), 0.92 (0.89-0.94), and 138 (79-243), respectively. The corresponding values for the diagnosis of hemorrhage (or angioectasia) were 0.99 (0.98-0.99), 0.96 (0.94-0.97), 0.97 (0.95-0.99), and 888 (343-2303). Subgroup analyses showed robust results. Meta-regression found publication year, number of training images, and target disease (ulcers vs erosions, hemorrhage vs angioectasia) to be sources of heterogeneity. No publication bias was detected. CONCLUSIONS CAD models showed high performance for the optical diagnosis of gastrointestinal ulcer and hemorrhage in wireless capsule endoscopy.
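The pooled sensitivity, specificity, and diagnostic odds ratio summarized above all derive from each study's 2x2 confusion matrix. As a reference for the definitions, a minimal sketch with illustrative counts (not data from the meta-analysis):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Sensitivity: fraction of diseased cases the model flags positive.
    sensitivity = tp / (tp + fn)
    # Specificity: fraction of healthy cases the model clears as negative.
    specificity = tn / (tn + fp)
    # Diagnostic odds ratio: odds of a positive test among the diseased
    # divided by the odds of a positive test among the healthy.
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor
```

High diagnostic odds ratios such as the pooled 888 for hemorrhage arise when both false positives and false negatives are rare relative to the correct calls.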
Affiliation(s)
- Chang Seok Bang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Jae Jun Lee: Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
17
Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008] [PMCID: PMC8567482] [DOI: 10.3748/wjg.v27.i40.6794]
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most of the medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is no exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett's esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients after resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle, and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Affiliation(s)
- Paul T Kröner
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Megan ML Engels
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
- Benjamin S Glicksberg
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Kipp W Johnson
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Obaie Mzaik
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Jeanin E van Hooft
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, Amsterdam 2300, The Netherlands
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
- Hashem B El-Serag
- Section of Gastroenterology and Hepatology, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Chayakrit Krittanawong
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
|
18
|
Amiri Z, Hassanpour H, Beghdadi A. A Computer-Aided Method for Digestive System Abnormality Detection in WCE Images. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:7863113. [PMID: 34707798 PMCID: PMC8545542 DOI: 10.1155/2021/7863113] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 09/25/2021] [Accepted: 10/06/2021] [Indexed: 12/01/2022]
Abstract
Wireless capsule endoscopy (WCE) is a powerful tool for the diagnosis of gastrointestinal diseases. The output of this tool is a video about eight hours long, containing about 8000 frames, and reviewing all of the video frames is a difficult task for a physician. In this paper, a new abnormality detection system for WCE images is proposed. The proposed system has four main steps: (1) preprocessing, (2) region of interest (ROI) extraction, (3) feature extraction, and (4) classification. In ROI extraction, distinct areas are first highlighted and nondistinct areas are faded using the joint normal distribution; distinct areas are then extracted as an ROI segment by applying a threshold. The main idea is to extract abnormal areas in each frame, so the method can be used to extract various lesions in WCE images. In the feature extraction step, three different types of features (color, texture, and shape) are employed. Finally, the features are classified using a support vector machine. The proposed system was tested on the Kvasir-Capsule dataset and can detect multiple lesions from WCE frames with high accuracy.
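The four-step pipeline described in this abstract can be illustrated with a toy NumPy sketch. This is a simplification under stated assumptions, not the authors' code: the joint-normal-distribution ROI model is replaced by a distance-from-mean-colour threshold, and the SVM by a nearest-mean classifier.

```python
import numpy as np

def extract_roi(frame, k=1.5):
    """Step 2 stand-in: flag pixels whose colour deviates strongly from the
    frame's mean colour (the paper models distinctness with a joint normal
    distribution instead)."""
    deviation = np.linalg.norm(frame - frame.reshape(-1, 3).mean(axis=0), axis=2)
    return deviation > k * deviation.mean()

def color_features(frame, mask):
    """Step 3 stand-in: mean RGB over the ROI (the paper also uses texture
    and shape features)."""
    return frame[mask].mean(axis=0) if mask.any() else np.zeros(3)

def nearest_mean_classify(feat, class_means):
    """Step 4 stand-in for the SVM: label of the closest class mean."""
    labels = list(class_means)
    dists = [np.linalg.norm(feat - class_means[name]) for name in labels]
    return labels[int(np.argmin(dists))]

# Toy frame: a reddish "lesion" patch on a pale background.
frame = np.full((8, 8, 3), 0.5)
frame[2:5, 2:5] = [0.9, 0.2, 0.2]
mask = extract_roi(frame)
feat = color_features(frame, mask)
label = nearest_mean_classify(feat, {"normal": np.array([0.5, 0.5, 0.5]),
                                     "lesion": np.array([0.9, 0.2, 0.2])})
```

The class means here are hypothetical stand-ins for a trained classifier; the point is only the preprocess → ROI → features → classify flow.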
Affiliation(s)
- Zahra Amiri
- Image Processing and Data Mining Lab, Shahrood University of Technology, Shahrood, Iran
- Hamid Hassanpour
- Image Processing and Data Mining Lab, Shahrood University of Technology, Shahrood, Iran
- Azeddine Beghdadi
- Department of Computer Science and Engineering, University Sorbonne Paris Nord, Villetaneuse, France
|
19
|
Jain S, Seal A, Ojha A, Yazidi A, Bures J, Tacheci I, Krejcar O. A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images. Comput Biol Med 2021; 137:104789. [PMID: 34455302 DOI: 10.1016/j.compbiomed.2021.104789] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Revised: 08/18/2021] [Accepted: 08/18/2021] [Indexed: 12/22/2022]
Abstract
Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some of them focus on specific anomalies such as bleeding, polyps, or lesions; however, relatively few generic methods have been proposed to detect all of those common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN) based model, 'WCENet', is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of four categories: polyp, vascular, inflammatory, or normal. If the image is classified into one of the abnormal categories, it is processed in the second phase for anomaly localization, where a fusion of Grad-CAM++ and a custom SegNet is used for anomalous region segmentation. The WCENet classifier attains an accuracy of 98% and an area under the receiver operating characteristic curve of 99%. The WCENet segmentation model obtains a frequency-weighted intersection over union of 81% and an average Dice score of 56% on the KID dataset, outperforming nine different state-of-the-art conventional machine learning and deep learning models. The proposed model demonstrates potential for clinical applications.
Affiliation(s)
- Samir Jain
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Ayan Seal
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Aparajita Ojha
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Anis Yazidi
- Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway; Department of Plastic and Reconstructive Surgery, Oslo University Hospital, Oslo, Norway; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Jan Bures
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ilja Tacheci
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ondrej Krejcar
- Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic; Malaysia Japan International Institute of Technology, Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100, Kuala Lumpur, Malaysia
|
20
|
Li BN, Wang X, Wang R, Zhou T, Gao R, Ciaccio EJ, Green PH. Celiac Disease Detection From Videocapsule Endoscopy Images Using Strip Principal Component Analysis. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:1396-1404. [PMID: 31751282 DOI: 10.1109/tcbb.2019.2953701] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The purpose of this study was to implement principal component analysis (PCA) on videocapsule endoscopy (VE) images to develop a new computerized tool for celiac disease recognition. Three PCA algorithms were implemented for feature extraction and sparse representation. A novel strip PCA (SPCA) with nongreedy L1-norm maximization is proposed for VE image analysis. The extracted principal components were interpreted by a non-parametric k-nearest neighbor (k-NN) method for automated celiac disease classification. A benchmark dataset of 460 images (240 from celiac disease patients with small intestinal villous atrophy versus 220 control patients lacking villous atrophy) was constructed from the clinical VE series. It was found that the newly developed SPCA with nongreedy L1-norm maximization was most efficient for computerized celiac disease recognition, having a robust performance with an average recognition accuracy of 93.9 percent. Furthermore, SPCA also has a reduced computation time as compared with other methods. Therefore, it is likely that SPCA will be a helpful adjunct for the diagnosis of celiac disease.
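The PCA-feature-extraction-plus-k-NN idea from this abstract can be sketched in a few lines. Note this uses classical L2 PCA via SVD; the paper's SPCA instead maximizes an L1 norm over image strips for robustness, so this is only an illustrative baseline with toy data, not the authors' algorithm.

```python
import numpy as np

def fit_pca(X, n_components=2):
    """Classical L2 PCA via SVD; returns the data mean and top components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def project(X, mu, W):
    """Express samples in the learned principal subspace."""
    return (X - mu) @ W.T

def knn_predict(train_feats, train_labels, query_feat, k=3):
    """Non-parametric k-NN interpretation of the principal components."""
    order = np.argsort(np.linalg.norm(train_feats - query_feat, axis=1))[:k]
    votes = [train_labels[i] for i in order]
    return max(set(votes), key=votes.count)

# Toy data: two well-separated 5-D clusters standing in for the two classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3.0, 0.1, (10, 5)), rng.normal(-3.0, 0.1, (10, 5))])
labels = ["celiac"] * 10 + ["control"] * 10
mu, W = fit_pca(X)
feats = project(X, mu, W)
query = project(rng.normal(3.0, 0.1, (1, 5)), mu, W)[0]
pred = knn_predict(feats, labels, query)
```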
|
21
|
Lan L, Ye C. Recurrent generative adversarial networks for unsupervised WCE video summarization. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106971] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
22
|
Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355 DOI: 10.2174/1573405616666200928144626] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/22/2022]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), not favored by physicians and patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. Furthermore, manual assessment of the captured images is impractical even for an expert physician, because thoroughly analyzing thousands of images is time-consuming. Hence there is a need for Computer-Aided Diagnosis (CAD) methods to help doctors analyze images, and many researchers have proposed techniques for automated recognition and classification of abnormality in captured images. METHODS In this article, existing methods for automated classification, segmentation and detection of several GI diseases are discussed. The paper gives comprehensive detail on these state-of-the-art methods. Furthermore, the literature is divided into several subsections based on preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques and deep-learning-based techniques. Finally, issues, challenges and limitations are also addressed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review article combines information related to a number of GI disease diagnosis methods in one place. The article will facilitate researchers in developing new algorithms and approaches for early detection of GI diseases with more promising results than the existing ones in the literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
|
23
|
Rathnamala S, Jenicka S. Automated bleeding detection in wireless capsule endoscopy images based on color feature extraction from Gaussian mixture model superpixels. Med Biol Eng Comput 2021; 59:969-987. [PMID: 33837919 DOI: 10.1007/s11517-021-02352-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 03/19/2021] [Indexed: 12/22/2022]
Abstract
Wireless capsule endoscopy is the commonly employed modality in the treatment of gastrointestinal tract pathologies. However, the time taken for interpretation of these images is very high due to the large volume of images generated. Automated detection of disorders with these images can facilitate faster clinical interventions. In this paper, we propose an automated system based on Gaussian mixture model superpixels for bleeding detection and segmentation of candidate regions. The proposed system is realized with a classic binary support vector machine classifier trained with seven features including color and texture attributes extracted from the Gaussian mixture model superpixels of the WCE images. On detection of bleeding images, bleeding regions are segmented from them, by incrementally grouping the superpixels based on deltaE color differences. Tested with standard datasets, this system exhibits best performance compared to the state-of-the-art approaches with respect to classification accuracy, feature selection, computational time, and segmentation accuracy. The proposed system achieves 99.88% accuracy, 99.83% sensitivity, and 100% specificity signifying the effectiveness of the proposed system in bleeding detection with very few classification errors.
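The deltaE-based incremental grouping of superpixels described above can be sketched as follows. This assumes superpixel mean colours are already in CIELAB (the RGB-to-Lab conversion and the Gaussian-mixture-model superpixel computation are omitted); the CIE76 deltaE formula is simply Euclidean distance in Lab space.

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def grow_region(seed_idx, superpixel_means, threshold=10.0):
    """Incrementally absorb superpixels whose mean Lab colour lies within
    `threshold` deltaE of the growing region's running mean colour."""
    region = [seed_idx]
    remaining = [i for i in range(len(superpixel_means)) if i != seed_idx]
    changed = True
    while changed:
        changed = False
        mean = np.mean([superpixel_means[i] for i in region], axis=0)
        for i in list(remaining):
            if delta_e76(mean, superpixel_means[i]) < threshold:
                region.append(i)
                remaining.remove(i)
                changed = True
    return sorted(region)

# Mean Lab colours of four hypothetical superpixels: 0, 1 and 3 are reddish
# ("bleeding"), 2 is pale mucosa far away in colour space.
sp_means = [[40, 55, 30], [42, 57, 31], [80, 0, 0], [41, 54, 29]]
bleeding_region = grow_region(0, sp_means)
```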
Affiliation(s)
- S Rathnamala
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar District, Kariapatti, Tamil Nadu, 626115, India
- S Jenicka
- Department of CSE, Sethu Institute of Technology, Virudhunagar District, Kariapatti, Tamil Nadu, 626115, India
|
24
|
Ghosh T, Chakareski J. Deep Transfer Learning for Automated Intestinal Bleeding Detection in Capsule Endoscopy Imaging. J Digit Imaging 2021; 34:404-417. [PMID: 33728563 PMCID: PMC8290011 DOI: 10.1007/s10278-021-00428-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2020] [Revised: 03/04/2020] [Accepted: 01/18/2021] [Indexed: 12/21/2022] Open
Abstract
PURPOSE The objective of this paper was to develop a computer-aided diagnostic (CAD) tool for automated analysis of capsule endoscopic (CE) images, more precisely, to detect small intestinal abnormalities such as bleeding. METHODS In particular, we explore a convolutional neural network (CNN)-based deep learning framework to identify bleeding and non-bleeding CE images, where a pre-trained AlexNet neural network is used to train a transfer learning CNN that carries out the identification. Moreover, bleeding zones in a bleeding-identified image are delineated using deep learning-based semantic segmentation that leverages a SegNet deep neural network. RESULTS To evaluate the performance of the proposed framework, we carry out experiments on two publicly available clinical datasets and achieve 98.49% and 88.39% F1 scores on the capsule endoscopy.org and KID datasets, respectively. For bleeding zone identification, 94.42% global accuracy and 90.69% weighted intersection over union (IoU) are achieved. CONCLUSION Finally, our performance results are compared to other recently developed state-of-the-art methods, and consistent performance advances are demonstrated in terms of performance measures for bleeding image and bleeding zone detection. Relative to the present and established practice of manual inspection and annotation of CE images by a physician, our framework enables considerable annotation time and human labor savings in bleeding detection in CE images, while providing the additional benefits of bleeding zone delineation and increased detection accuracy. Moreover, the overall cost of CE enabled by our framework will also be much lower due to the reduction of manual labor, which can make CE affordable for a larger population.
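The metrics reported in this entry (binary F1 for bleeding-image identification, frequency-weighted IoU for zone segmentation) follow standard definitions and can be computed from predictions with a few lines of NumPy. This is generic metric code, not code from the paper.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1 used to evaluate bleeding-image identification."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def weighted_iou(mask_true, mask_pred, n_classes=2):
    """Per-class IoU weighted by each class's pixel frequency in the ground truth."""
    total, score = mask_true.size, 0.0
    for c in range(n_classes):
        t, p = mask_true == c, mask_pred == c
        inter, union = np.sum(t & p), np.sum(t | p)
        if union:
            score += (np.sum(t) / total) * (inter / union)
    return score
```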
Affiliation(s)
- Tonmoy Ghosh
- Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, Alabama 35401, USA
- Jacob Chakareski
- Department of Informatics, College of Computing, New Jersey Institute of Technology, Newark, New Jersey 07103, USA
|
25
|
3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00328-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Wireless capsule endoscopy (WCE) moves through the human body and captures video of the small bowel; because all frames of the video must be analyzed, the diagnosis of gastrointestinal infections is a tedious task for the physician. This tiresome assignment has fuelled researchers' efforts to present automated techniques for gastrointestinal infection detection. The segmentation of stomach infections is a challenging task because the lesion regions have low contrast and irregular shape and size. To handle this challenging task, this work proposes a new deep semantic segmentation model for 3D segmentation of different types of stomach infections. In the segmentation model, DeepLabv3 is employed with a ResNet-50 backbone. The model is trained with ground-truth masks and accurately performs pixel-wise classification in the testing phase. Similarity among the different types of stomach lesions makes accurate classification difficult; this is addressed in the reported research by extracting deep features from global input images using a pre-trained ResNet-50 model. Furthermore, the latest advances in uncertainty estimation and model interpretability are applied to the classification of different types of stomach infections: the classification results estimate uncertainty related to the vital features in the input and show how uncertainty and interpretability might be modeled in ResNet-50. The proposed model achieved prediction scores of up to 90%, authenticating the method's performance.
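Uncertainty of the kind this entry describes is commonly summarized as the predictive entropy of the mean class distribution over several stochastic forward passes (e.g., Monte Carlo dropout). The sketch below is that generic recipe, not the authors' implementation.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean class distribution over T stochastic forward
    passes (Monte Carlo dropout style). Higher entropy means the model is
    less certain about the predicted lesion class."""
    mean_p = np.asarray(mc_probs, float).mean(axis=0)
    return float(-np.sum(mean_p * np.log(mean_p + 1e-12)))

# Five stochastic passes that agree vs. five that are split 50/50.
confident = predictive_entropy([[0.99, 0.01]] * 5)   # low uncertainty
uncertain = predictive_entropy([[0.50, 0.50]] * 5)   # maximal uncertainty
```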
|
26
|
Su B, Yu S, Li X, Gong Y, Li H, Ren Z, Xia Y, Wang H, Zhang Y, Yao W, Wang J, Tang J. Autonomous Robot for Removing Superficial Traumatic Blood. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE-JTEHM 2021; 9:2600109. [PMID: 33598368 PMCID: PMC7880304 DOI: 10.1109/jtehm.2021.3056618] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 01/16/2021] [Accepted: 01/29/2021] [Indexed: 11/09/2022]
Abstract
Objective: Removing blood from an incision and locating the incision spot is a key task during surgery; otherwise, excessive blood loss will endanger the patient's life. However, repetitive manual blood removal adds considerable workload and contributes to surgeon fatigue. It is therefore valuable to design a robotic system that can automatically remove blood from the incision surface. Methods: In this paper, we design a robotic system to fulfill the surgical task of blood removal. The system consists of a pair of dual cameras, a 6-DoF robotic arm, an aspirator whose handle is fixed to the robotic arm, and a pump connected to the aspirator. Further, a path-planning algorithm is designed to generate the path that the aspirator tip should follow to remove blood. Results: In a group of simulated bleeding experiments on ex vivo porcine tissue, the contour of the blood region is detected, and the spatial coordinates of the detected blood contour are then reconstructed. The BRR robot thoroughly cleans the blood running out of the incision. Conclusions: This study contributes the first result on designing an autonomous blood-removal medical robot. The surgical blood-removal skill, which is performed manually by surgeons today, is alternatively grasped by the proposed BRR medical robot.
Affiliation(s)
- Baiquan Su
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Shi Yu
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xintong Li
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yi Gong
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Han Li
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Zifeng Ren
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yijing Xia
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- He Wang
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yucheng Zhang
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Wei Yao
- Department of Gastroenterology, Peking University Third Hospital, Beijing 100191, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center, Biomedical Engineering, Beihang University, Beijing 100086, China
- Jie Tang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
|
27
|
Caroppo A, Leone A, Siciliano P. Deep transfer learning approaches for bleeding detection in endoscopy images. Comput Med Imaging Graph 2021; 88:101852. [PMID: 33493998 DOI: 10.1016/j.compmedimag.2020.101852] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 12/17/2020] [Accepted: 12/18/2020] [Indexed: 12/17/2022]
Abstract
Wireless capsule endoscopy is a non-invasive, wireless imaging tool that has developed rapidly over the last several years. One of the main limiting factors of this technology is that it produces a huge number of images, whose analysis by a doctor is an extremely time-consuming process. In this research area, the problem has been addressed through the development of Computer-aided Diagnosis systems, thanks to which the automatic inspection and analysis of images acquired by the capsule have clearly improved. Recently, a big advance in the classification of endoscopic images has been achieved with the emergence of deep learning methods. The proposed expert system employs three pre-trained deep convolutional neural networks for feature extraction. In order to construct efficient feature sets, the features from the VGG19, InceptionV3 and ResNet50 models are selected and fused using the minimum Redundancy Maximum Relevance method and different fusion rules. Finally, supervised machine learning algorithms classify the images, using the extracted features, into two categories: bleeding and non-bleeding images. For performance evaluation, a series of experiments is performed on two standard benchmark datasets. The proposed architecture outclasses the single deep learning architectures, with average accuracies in detecting bleeding regions of 97.65% and 95.70% on well-known state-of-the-art datasets across three different fusion rules; the best combination in terms of accuracy and training time is obtained using mean-value pooling as the fusion rule and a Support Vector Machine as the classifier.
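The fusion rules this entry compares can be illustrated generically. The sketch below shows mean-value pooling (the rule reported as best) next to max pooling, assuming the mRMR-selected feature vectors from the three backbones have already been reduced to a common length; the feature values are hypothetical.

```python
import numpy as np

def mean_pool_fuse(*feature_vectors):
    """Mean-value pooling fusion rule: element-wise average of the feature
    vectors, one per backbone (e.g. VGG19, InceptionV3, ResNet50)."""
    return np.stack(feature_vectors).mean(axis=0)

def max_pool_fuse(*feature_vectors):
    """An alternative fusion rule: element-wise maximum."""
    return np.stack(feature_vectors).max(axis=0)

# Toy 4-D "mRMR-selected" features from three backbones.
f_vgg = np.array([0.2, 0.8, 0.1, 0.5])
f_inc = np.array([0.4, 0.6, 0.3, 0.5])
f_res = np.array([0.6, 0.4, 0.2, 0.5])
fused_mean = mean_pool_fuse(f_vgg, f_inc, f_res)
fused_max = max_pool_fuse(f_vgg, f_inc, f_res)
```

The fused vector would then be handed to a conventional classifier such as an SVM.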
Affiliation(s)
- Andrea Caroppo
- Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
- Alessandro Leone
- Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
- Pietro Siciliano
- Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
|
28
|
Liaqat A, Khan MA, Sharif M, Mittal M, Saba T, Manic KS, Al Attar FNH. Gastric Tract Infections Detection and Classification from Wireless Capsule Endoscopy using Computer Vision Techniques: A Review. Curr Med Imaging 2021; 16:1229-1242. [PMID: 32334504 DOI: 10.2174/1573405616666200425220513] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Revised: 01/14/2020] [Accepted: 01/30/2020] [Indexed: 11/22/2022]
Abstract
Recent facts and figures published in various studies in the US show that approximately 27,510 new cases of gastric infections are diagnosed, and the mortality rate in diagnosed cases is quite high. The early detection of these infections can save precious human lives. As the manual diagnosis of these infections is time-consuming and expensive, automated Computer-Aided Diagnosis (CAD) systems are required to help endoscopy specialists in their clinics. Generally, an automated method for gastric infection detection using Wireless Capsule Endoscopy (WCE) comprises the following steps: contrast preprocessing, feature extraction, segmentation of infected regions, and classification into relevant categories. These steps involve various challenges that reduce detection and recognition accuracy and increase computation time. In this review, the authors focus on the importance of WCE in medical imaging, the role of endoscopy for bleeding-related infections, and the scope of endoscopy. Further, the general steps are presented, highlighting the importance of each. A detailed discussion and future directions are provided at the end.
Affiliation(s)
- Amna Liaqat
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mamta Mittal
- Department of Computer Science & Engineering, G.B. Pant Govt. Engineering College, New Delhi, India
- Tanzila Saba
- Department of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- K. Suresh Manic
- Department of Electrical & Computer Engineering, National University of Science & Technology, Muscat, Oman
|
29
|
Pannala R, Krishnan K, Melson J, Parsi MA, Schulman AR, Sullivan S, Trikudanathan G, Trindade AJ, Watson RR, Maple JT, Lichtenstein DR. Artificial intelligence in gastrointestinal endoscopy. VIDEOGIE : AN OFFICIAL VIDEO JOURNAL OF THE AMERICAN SOCIETY FOR GASTROINTESTINAL ENDOSCOPY 2020; 5:598-613. [PMID: 33319126 PMCID: PMC7732722 DOI: 10.1016/j.vgie.2020.08.013] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI)-based applications have transformed several industries and are widely used in various consumer products and services. In medicine, AI is primarily being used for image classification and natural language processing and has great potential to affect image-based specialties such as radiology, pathology, and gastroenterology (GE). This document reviews the reported applications of AI in GE, focusing on endoscopic image analysis. METHODS The MEDLINE database was searched through May 2020 for relevant articles by using key words such as machine learning, deep learning, artificial intelligence, computer-aided diagnosis, convolutional neural networks, GI endoscopy, and endoscopic image analysis. References and citations of the retrieved articles were also evaluated to identify pertinent studies. The manuscript was drafted by 2 authors and reviewed in person by members of the American Society for Gastrointestinal Endoscopy Technology Committee and subsequently by the American Society for Gastrointestinal Endoscopy Governing Board. RESULTS Deep learning techniques such as convolutional neural networks have been used in several areas of GI endoscopy, including colorectal polyp detection and classification, analysis of endoscopic images for diagnosis of Helicobacter pylori infection, detection and depth assessment of early gastric cancer, dysplasia in Barrett's esophagus, and detection of various abnormalities in wireless capsule endoscopy images. CONCLUSIONS The implementation of AI technologies across multiple GI endoscopic applications has the potential to transform clinical practice favorably and improve the efficiency and accuracy of current diagnostic methods.
Key Words
- ADR, adenoma detection rate
- AI, artificial intelligence
- AMR, adenoma miss rate
- ANN, artificial neural network
- BE, Barrett’s esophagus
- CAD, computer-aided diagnosis
- CADe, CAD studies for colon polyp detection
- CADx, CAD studies for colon polyp classification
- CI, confidence interval
- CNN, convolutional neural network
- CRC, colorectal cancer
- DL, deep learning
- GI, gastroenterology
- HD-WLE, high-definition white light endoscopy
- HDWL, high-definition white light
- ML, machine learning
- NBI, narrow-band imaging
- NPV, negative predictive value
- PIVI, preservation and Incorporation of Valuable Endoscopic Innovations
- SVM, support vector machine
- VLE, volumetric laser endomicroscopy
- WCE, wireless capsule endoscopy
- WL, white light
Affiliation(s)
- Rahul Pannala
- Department of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, Arizona
- Kumar Krishnan
- Division of Gastroenterology, Department of Internal Medicine, Harvard Medical School and Massachusetts General Hospital, Boston, Massachusetts
- Joshua Melson
- Division of Digestive Diseases, Department of Internal Medicine, Rush University Medical Center, Chicago, Illinois
- Mansour A Parsi
- Section for Gastroenterology and Hepatology, Tulane University Health Sciences Center, New Orleans, Louisiana
- Allison R Schulman
- Department of Gastroenterology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan
- Shelby Sullivan
- Division of Gastroenterology and Hepatology, University of Colorado School of Medicine, Aurora, Colorado
- Guru Trikudanathan
- Department of Gastroenterology, Hepatology and Nutrition, University of Minnesota, Minneapolis, Minnesota
- Arvind J Trindade
- Department of Gastroenterology, Zucker School of Medicine at Hofstra/Northwell, Long Island Jewish Medical Center, New Hyde Park, New York
- Rabindra R Watson
- Department of Gastroenterology, Interventional Endoscopy Services, California Pacific Medical Center, San Francisco, California
- John T Maple
- Division of Digestive Diseases and Nutrition, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
- David R Lichtenstein
- Division of Gastroenterology, Boston Medical Center, Boston University School of Medicine, Boston, Massachusetts
|
30
|
Xing X, Yuan Y, Meng MQH. Zoom in Lesions for Better Diagnosis: Attention Guided Deformation Network for WCE Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4047-4059. [PMID: 32746146 DOI: 10.1109/tmi.2020.3010102] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Wireless capsule endoscopy (WCE) is a novel imaging tool that allows noninvasive visualization of the entire gastrointestinal (GI) tract without causing discomfort to patients. Convolutional neural networks (CNNs), though they perform favorably against traditional machine learning methods, show limited capacity in WCE image classification due to the small lesions and background interference. To overcome these limits, we propose a two-branch Attention Guided Deformation Network (AGDN) for WCE image classification. Specifically, the attention maps of branch1 are utilized to guide the amplification of lesion regions on the input images of branch2, thus leading to better representation and inspection of the small lesions. Moreover, we devise and insert Third-order Long-range Feature Aggregation (TLFA) modules into the network. By capturing long-range dependencies and aggregating contextual features, TLFAs endow the network with a global contextual view and stronger feature representation and discrimination capability. Furthermore, we propose a novel Deformation based Attention Consistency (DAC) loss to refine the attention maps and achieve the mutual promotion of the two branches. Finally, the global feature embeddings from the two branches are fused to make image label predictions. Extensive experiments show that the proposed AGDN outperforms state-of-the-art methods with an overall classification accuracy of 91.29% on two public WCE datasets. The source code is available at https://github.com/hathawayxxh/WCE-AGDN.
31. Jain S, Seal A, Ojha A, Krejcar O, Bureš J, Tachecí I, Yazidi A. Detection of abnormality in wireless capsule endoscopy images using fractal features. Comput Biol Med 2020; 127:104094. [PMID: 33152668 DOI: 10.1016/j.compbiomed.2020.104094]
Abstract
One of the most recent non-invasive technologies to examine the gastrointestinal tract is wireless capsule endoscopy (WCE). As there are thousands of endoscopic images in an 8-15 h long video, an evaluator has to pay constant attention for a relatively long time (60-120 min). Therefore, the possibility of the presence of pathological findings in a few images (displayed for evaluation for a few seconds only) brings a significant risk of missing the pathology, with all the negative consequences for the patient. Hence, manually reviewing a video to identify abnormal images is not only a tedious and time-consuming task that overwhelms human attention but is also error-prone. In this paper, a method is proposed for the automatic detection of abnormal WCE images. The differential box counting method is used for the extraction of the fractal dimension (FD) of WCE images, and a random forest based ensemble classifier is used for the identification of abnormal frames. The FD is a well-known technique for the extraction of features related to texture, smoothness, and roughness. In this paper, FDs are extracted from pixel blocks of WCE images and are fed to the classifier for identification of images with abnormalities. To determine a suitable pixel-block size for FD feature extraction, various block sizes are considered and fed into six frequently used classifiers separately, and a block size of 7×7 is empirically found to give the best performance. The selection of the random forest ensemble classifier is made through the same empirical study. The performance of the proposed method is evaluated on two datasets containing WCE frames. Results demonstrate that the proposed method outperforms some of the state-of-the-art methods, with AUCs of 85% and 99% on Dataset-I and Dataset-II, respectively.
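The differential box counting step described in this abstract can be sketched as follows. This is a minimal illustrative implementation of the generic DBC algorithm, not the authors' code; the gray-level count and the range of box sizes are assumptions.

```python
import numpy as np

def fractal_dimension_dbc(block, gray_levels=256):
    """Estimate the fractal dimension of a square grayscale block
    using the differential box-counting (DBC) method."""
    m = block.shape[0]                       # e.g. a 7x7 pixel block
    sizes = list(range(2, m // 2 + 1))       # grid sizes s, so that r = s/m
    log_nr, log_inv_r = [], []
    for s in sizes:
        h = max(1.0, s * gray_levels / m)    # box height along the intensity axis
        n_boxes = 0
        for i in range(0, m - s + 1, s):     # slide a non-overlapping s x s grid
            for j in range(0, m - s + 1, s):
                cell = block[i:i + s, j:j + s]
                # boxes needed to cover the intensity span of this cell
                n_boxes += int(np.ceil(cell.max() / h) - np.ceil(cell.min() / h)) + 1
        log_nr.append(np.log(n_boxes))
        log_inv_r.append(np.log(m / s))      # log(1/r)
    # FD is the slope of log(N_r) versus log(1/r)
    slope, _ = np.polyfit(log_inv_r, log_nr, 1)
    return slope
```

For a perfectly flat block the estimate approaches 2, the theoretical dimension of a smooth surface; rougher texture pushes it toward 3.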
Affiliation(s)
- Samir Jain
- PDPM Indian Institute of Information Technology Design and Manufacturing, Jabalpur, 482005, India
- Ayan Seal
- PDPM Indian Institute of Information Technology Design and Manufacturing, Jabalpur, 482005, India; Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic
- Aparajita Ojha
- PDPM Indian Institute of Information Technology Design and Manufacturing, Jabalpur, 482005, India
- Ondrej Krejcar
- Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic; Malaysia Japan International Institute of Technology, Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100, Kuala Lumpur, Malaysia
- Jan Bureš
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove, University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ilja Tachecí
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove, University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Anis Yazidi
- Artificial Intelligence Lab, Oslo Metropolitan University, Norway
32. Jani KK, Srivastava R. A Survey on Medical Image Analysis in Capsule Endoscopy. Curr Med Imaging 2020; 15:622-636. [PMID: 32008510 DOI: 10.2174/1573405614666181102152434]
Abstract
BACKGROUND AND OBJECTIVE Capsule Endoscopy (CE) is a non-invasive, patient-friendly alternative to the conventional endoscopy procedure. However, CE produces a 6 to 8 h long video, posing a tedious challenge to a gastroenterologist for abnormality detection. The major challenges for an expert are the lengthy videos, the need for constant concentration, and the subjectivity of the abnormality. To address these challenges while achieving high diagnostic accuracy, the design and development of automated abnormality detection systems is a must. Machine learning and computer vision techniques are devised to develop such automated systems. METHODS The study presents a review of quality research papers published in the IEEE, Scopus, and ScienceDirect databases, with search criteria of capsule endoscopy, engineering, and journal papers. The initial search retrieved 144 publications. After evaluating all articles, 62 publications pertaining to image analysis were selected. RESULTS This paper presents a rigorous review comprising all aspects of medical image analysis concerning capsule endoscopy, namely video summarization and redundant image elimination, image enhancement and interpretation, segmentation and region identification, computer-aided abnormality detection in capsule endoscopy, and image and video compression. The study provides a comparative analysis of the various approaches, experimental setups, performance, strengths, and limitations of the aspects stated above. CONCLUSIONS The analyzed image analysis techniques for capsule endoscopy have not yet overcome all current challenges, mainly due to the lack of datasets and the complex nature of the gastrointestinal tract.
Affiliation(s)
- Kuntesh Ketan Jani
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
- Rajeev Srivastava
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
33. Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis. Gastrointest Endosc 2020; 92:831-839.e8. [PMID: 32334015 DOI: 10.1016/j.gie.2020.04.039]
Abstract
BACKGROUND AND AIMS Deep learning is an innovative algorithm based on neural networks. Wireless capsule endoscopy (WCE) is considered the criterion standard for detecting small-bowel diseases. Manual examination of WCE is time-consuming and can benefit from automatic detection using artificial intelligence (AI). We aimed to perform a systematic review of the current literature pertaining to deep learning implementation in WCE. METHODS We conducted a search in PubMed for all original publications on the subject of deep learning applications in WCE published between January 1, 2016 and December 15, 2019. Evaluation of the risk of bias was performed using tailored Quality Assessment of Diagnostic Accuracy Studies-2. Pooled sensitivity and specificity were calculated. Summary receiver operating characteristic curves were plotted. RESULTS Of the 45 studies retrieved, 19 studies were included. All studies were retrospective. Deep learning applications for WCE included detection of ulcers, polyps, celiac disease, bleeding, and hookworm. Detection accuracy was above 90% for most studies and diseases. Pooled sensitivity and specificity for ulcer detection were .95 (95% confidence interval [CI], .89-.98) and .94 (95% CI, .90-.96), respectively. Pooled sensitivity and specificity for bleeding or bleeding source were .98 (95% CI, .96-.99) and .99 (95% CI, .97-.99), respectively. CONCLUSIONS Deep learning has achieved excellent performance for the detection of a range of diseases in WCE. Notwithstanding, current research is based on retrospective studies with a high risk of bias. Thus, future prospective, multicenter studies are necessary for this technology to be implemented in the clinical use of WCE.
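As a toy illustration of the pooled metrics quoted above: per-study sensitivity and specificity come from each study's confusion-matrix counts, and a crude pooled estimate can be formed by summing counts across studies. The review itself uses a formal bivariate meta-analysis model, not this naive pooling, and the counts below are invented for illustration.

```python
def sens_spec(tp, fn, tn, fp):
    """Per-study sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def naive_pooled(studies):
    """Pool (tp, fn, tn, fp) tuples by summing counts across studies,
    a crude stand-in for the bivariate random-effects model used in
    formal meta-analyses."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    return sens_spec(tp, fn, tn, fp)
```

For example, two hypothetical studies with counts (90, 10, 95, 5) and (80, 20, 90, 10) pool to a sensitivity of 0.85 and a specificity of 0.925.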
34. Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767]
35. Automated Classification of Blood Loss from Transurethral Resection of the Prostate Surgery Videos Using Deep Learning Technique. Appl Sci (Basel) 2020. [DOI: 10.3390/app10144908]
Abstract
Transurethral resection of the prostate (TURP) is the surgical removal of obstructing prostate tissue. The total bleeding area is used to determine the performance of TURP surgery. Although the traditional method for the detection of bleeding areas provides accurate results, it cannot detect them in time for surgical diagnosis. Moreover, it is easy even for experienced physicians to misjudge bleeding areas because a red-light pattern arising from the surgical cutting loop often appears in the images. Recently, automatic computer-aided techniques and deep learning have been broadly used in medical image recognition; they can effectively extract the desired features to reduce the burden on physicians and increase the accuracy of diagnosis. In this study, we integrated two state-of-the-art deep learning techniques for recognizing and extracting the red-light areas arising from the cutting loop in TURP surgery. First, the ResNet-50 model was used to recognize the red-light pattern appearing in the chipped frames of the surgery videos. Then, the proposed Res-Unet model was used to segment the areas with the red-light pattern and remove them. Finally, the hue, saturation, and value (HSV) color space was used to classify four levels of blood loss in images without the red-light pattern. The experiments have shown that the proposed Res-Unet model achieves higher accuracy than other segmentation algorithms in classifying images with and without red light, and is able to extract the red-light patterns and effectively remove them from TURP surgery images. The proposed approaches are capable of obtaining level classifications of blood loss, which are helpful for physicians in diagnosis.
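A minimal sketch of an HSV-based blood-loss grading step like the final stage described above. The hue window, the saturation/brightness cut-offs, and the ratio thresholds are invented for illustration and are not taken from the paper.

```python
import colorsys
import numpy as np

def blood_loss_level(rgb_image, thresholds=(0.05, 0.15, 0.30)):
    """Classify a frame into four blood-loss levels (0-3) from the fraction
    of 'blood-red' pixels, judged in HSV space. All thresholds here are
    illustrative assumptions, not the paper's values."""
    h, w, _ = rgb_image.shape
    red = 0
    for i in range(h):
        for j in range(w):
            r, g, b = rgb_image[i, j] / 255.0
            hue, sat, val = colorsys.rgb_to_hsv(r, g, b)
            # red hues sit near 0 (or wrap around near 1); require some
            # saturation and brightness so dark/washed-out pixels don't count
            if (hue < 0.05 or hue > 0.95) and sat > 0.5 and val > 0.2:
                red += 1
    ratio = red / (h * w)
    level = sum(ratio >= t for t in thresholds)   # count thresholds exceeded
    return level, ratio
```

A fully red frame lands in the highest level, a red-free frame in level 0; real systems would vectorize this loop.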
36. Hajabdollahi M, Esfandiarpoor R, Najarian K, Karimi N, Samavi S, Reza Soroushmehr SM. Low Complexity CNN Structure for Automatic Bleeding Zone Detection in Wireless Capsule Endoscopy Imaging. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:7227-7230. [PMID: 31947501 DOI: 10.1109/embc.2019.8857751]
Abstract
Wireless capsule endoscopy (WCE) is a swallowable device used for screening different parts of the human digestive system. Automatic WCE image analysis methods reduce the duration of the screening procedure and alleviate the burden of manual screening by medical experts. Recent studies widely employ convolutional neural networks (CNNs) for automatic analysis of WCE images; however, these studies do not consider CNN's structural and computational complexities. In this paper, we address the problem of simplifying the CNN's structure. A low complexity CNN structure for bleeding zone detection is proposed which takes a single patch as input and then outputs a segmented patch of the same size. The proposed network is inspired by the FCN paradigm with a simplified structure. Since it is based on image patches, the resulting network benefits from moderate-sized intermediate feature maps. Moreover, the problem of redundant computations in patch-based methods is circumvented by non-overlapping patch processing. The proposed method is evaluated using the publicly available KID dataset for WCE image analysis. Experimental results show that the proposed network has better accuracy and AUC than previous structures while requiring less computational operations.
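The non-overlapping patch processing that the abstract credits with avoiding redundant computation can be sketched as follows; `predict_patch` stands in for the trained patch-level CNN, and the patch size is an assumption.

```python
import numpy as np

def segment_by_patches(image, predict_patch, patch=32):
    """Tile an image into non-overlapping patches, run a patch-level
    segmentation model on each, and stitch the outputs back together.
    Edge remainders that don't fill a whole patch are ignored in this sketch."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    for i in range(0, h - patch + 1, patch):      # non-overlapping grid:
        for j in range(0, w - patch + 1, patch):  # each pixel is visited once
            out[i:i + patch, j:j + patch] = predict_patch(
                image[i:i + patch, j:j + patch])
    return out
```

Because patches never overlap, each pixel is processed exactly once, unlike sliding-window schemes that recompute shared regions.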
37. Guo X, Yuan Y. Semi-supervised WCE image classification with adaptive aggregated attention. Med Image Anal 2020; 64:101733. [PMID: 32574987 DOI: 10.1016/j.media.2020.101733]
Abstract
Accurate abnormality classification in Wireless Capsule Endoscopy (WCE) images is crucial for early gastrointestinal (GI) tract cancer diagnosis and treatment, while it remains challenging due to the limited annotated dataset, the huge intra-class variances and the high degree of inter-class similarities. To tackle these dilemmas, we propose a novel semi-supervised learning method with Adaptive Aggregated Attention (AAA) module for automatic WCE image classification. Firstly, a novel deformation field based image preprocessing strategy is proposed to remove the black background and circular boundaries in WCE images. Then we propose a synergic network to learn discriminative image features, consisting of two branches: an abnormal regions estimator (the first branch) and an abnormal information distiller (the second branch). The first branch utilizes the proposed AAA module to capture global dependencies and incorporate context information to highlight the most meaningful regions, while the second branch mainly focuses on these calculated attention regions for accurate and robust abnormality classification. Finally, these two branches are jointly optimized by minimizing the proposed discriminative angular (DA) loss and Jensen-Shannon divergence (JS) loss with labeled data as well as unlabeled data. Comprehensive experiments have been conducted on the public CAD-CAP WCE dataset. The proposed method achieves 93.17% overall accuracy in a fourfold cross-validation, verifying its effectiveness for WCE image classification. The source code is available at https://github.com/Guo-Xiaoqing/SSL_WCE.
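One concrete piece of the training objective above is the Jensen-Shannon divergence between the two branches' prediction distributions; a minimal numpy version is below (the smoothing constant is an implementation choice, not from the paper).

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.
    Symmetric, non-negative, and bounded above by log(2) (natural log)."""
    p = np.asarray(p, dtype=np.float64) + eps   # smooth to avoid log(0)
    q = np.asarray(q, dtype=np.float64) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)                           # the mixture distribution
    kl = lambda a, b: np.sum(a * np.log(a / b)) # KL divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Minimizing this term pulls the two branches' outputs toward agreement, which is the mutual-promotion effect the abstract describes.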
Affiliation(s)
- Xiaoqing Guo
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Yixuan Yuan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
38. Kundu AK, Fattah SA, Wahid KA. Multiple Linear Discriminant Models for Extracting Salient Characteristic Patterns in Capsule Endoscopy Images for Multi-Disease Detection. IEEE J Transl Eng Health Med 2020; 8:3300111. [PMID: 32190429 PMCID: PMC7062148 DOI: 10.1109/jtehm.2020.2964666]
Abstract
Background: Computer-aided disease detection schemes from wireless capsule endoscopy (WCE) videos have received great attention from researchers for reducing physicians' burden due to the time-consuming and risky manual review process. While single-disease classification schemes have been widely studied in the past, developing a unified scheme capable of detecting multiple gastrointestinal (GI) diseases is very challenging due to the highly irregular behavior of diseased images in terms of color patterns. Method: In this paper, a computer-aided method is developed to detect multiple GI diseases from WCE videos, utilizing a linear discriminant analysis (LDA) based region of interest (ROI) separation scheme followed by a probabilistic model fitting approach. Commonly in the training phase, as pixel-labeled images are available only in small numbers, only the image-level annotations are used for detecting diseases in WCE images, whereas pixel-level knowledge, although a major source for learning the disease characteristics, is left unused. To learn the characteristic disease patterns from pixel-labeled images, a set of LDA models is trained and later used to extract the salient ROI from WCE images in both the training and testing stages. The intensity patterns of the ROI are then modeled by a suitable probability distribution, and the fitted parameters of the distribution are utilized as features in a supervised cascaded classification scheme. Results: To validate the proposed multi-disease detection scheme, a set of pixel-labeled images of bleeding, ulcer, and tumor is used to extract the LDA models; then, a large WCE dataset is used for training and testing. A high level of accuracy is achieved even with a small number of pixel-labeled images. Conclusion: The proposed scheme is therefore expected to help physicians in reviewing a large number of WCE images to diagnose different GI diseases.
Affiliation(s)
- Amit Kumar Kundu
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Shaikh Anowarul Fattah
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Khan A Wahid
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
39. Khan MA, Kadry S, Alhaisoni M, Nam Y, Zhang Y, Rajinikanth V, Sarfraz MS. Computer-Aided Gastrointestinal Diseases Analysis From Wireless Capsule Endoscopy: A Framework of Best Features Selection. IEEE Access 2020; 8:132850-132859. [DOI: 10.1109/access.2020.3010448]
40. Jia X, Xing X, Yuan Y, Xing L, Meng MQH. Wireless Capsule Endoscopy: A New Tool for Cancer Screening in the Colon With Deep-Learning-Based Polyp Recognition. Proc IEEE 2020; 108:178-197. [DOI: 10.1109/jproc.2019.2950506]
41. Wang S, Xing Y, Zhang L, Gao H, Zhang H. A systematic evaluation and optimization of automatic detection of ulcers in wireless capsule endoscopy on a large dataset using deep convolutional neural networks. Phys Med Biol 2019; 64:235014. [PMID: 31645019 DOI: 10.1088/1361-6560/ab5086]
Abstract
Compared with conventional gastroscopy which is invasive and painful, wireless capsule endoscopy (WCE) can provide noninvasive examination of gastrointestinal (GI) tract. The WCE video can effectively support physicians to reach a diagnostic decision while a huge number of images need to be analyzed (more than 50 000 frames per patient). In this paper, we propose a computer-aided diagnosis method called second glance (secG) detection framework for automatic detection of ulcers based on deep convolutional neural networks that provides both classification confidence and bounding box of lesion area. We evaluated its performance on a large dataset that consists of 1504 patient cases (the largest WCE ulcer dataset to our best knowledge, 1076 cases with ulcers, 428 normal cases). We use 15 781 ulcer frames from 753 ulcer cases and 17 138 normal frames from 300 normal cases for training. Validation dataset consists of 2040 ulcer frames from 108 cases and 2319 frames from 43 normal cases. For test, we use 4917 ulcer frames from 215 ulcer cases and 5007 frames from 85 normal cases. Test results demonstrate the 0.9469 ROC-AUC of the proposed secG detection framework outperforms state-of-the-art detection frameworks including Faster-RCNN (0.9014) and SSD-300 (0.8355), which implies the effectiveness of our method. From the ulcer size analysis, we find the detection of ulcers is highly related to the size. For ulcers with size larger than 1% of the full image size, the sensitivity exceeds 92.00%. For ulcers that are smaller than 1% of the full image size, the sensitivity is around 85.00%. The overall sensitivity, specificity and accuracy are 89.71%, 90.48% and 90.10%, at a threshold value of 0.6706, which implies the potential of the proposed method to suppress oversights and to reduce the burden of physicians.
Affiliation(s)
- Sen Wang
- Key Laboratory of Particle and Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, People's Republic of China; Department of Engineering Physics, Tsinghua University, Beijing 100084, People's Republic of China
42. Kundu AK, Fattah SA. Probability density function based modeling of spatial feature variation in capsule endoscopy data for automatic bleeding detection. Comput Biol Med 2019; 115:103478. [PMID: 31698239 DOI: 10.1016/j.compbiomed.2019.103478]
Abstract
Wireless capsule endoscopy (WCE) is a video technology to inspect abnormalities, like bleeding, in the gastrointestinal tract. To avoid a complex and long manual review process, automatic bleeding detection schemes have been developed that mainly utilize features extracted from WCE images. In feature-based bleeding detection schemes, either global features are used, which produce averaged characteristics that ignore the effect of smaller bleeding regions, or local features are utilized, which cause a large feature dimension. In this paper, pixels of interest (POI) in a given WCE image are determined using a linear separation scheme, local spatial features are then extracted from the POI, and finally a suitable characteristic probability density function (PDF) is fitted over the resulting feature space. The proposed PDF-model-fitting approach not only reduces the computational complexity but also offers a more consistent representation of a class. A detailed analysis is carried out to find the most suitable PDF, and fitting a Rayleigh PDF model to the local spatial features is found to be best suited for bleeding detection. For classification, the fitted PDF parameters are used as features in a supervised support vector machine classifier. Pixels residing in the close vicinity of the POI are further classified with the help of an unsupervised clustering-based scheme to extract more precise bleeding regions. A large number of WCE images obtained from 30 publicly available WCE videos are used for performance evaluation, and the effects on classification performance of changes in PDF models, block statistics, color spaces, and classifiers are experimentally analyzed. The proposed scheme shows satisfactory performance in terms of sensitivity (97.55%), specificity (96.59%), and accuracy (96.77%), and the results obtained by the proposed method outperform those reported for some state-of-the-art methods.
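Fitting a Rayleigh PDF to a local feature sample, as in the scheme above, has a closed-form maximum-likelihood estimate; the sketch below shows the fitted scale being used as a classifier feature. Function names are ours for illustration, not the paper's.

```python
import numpy as np

def rayleigh_feature(samples):
    """Fit a Rayleigh PDF to a feature sample by maximum likelihood and
    return the fitted scale parameter, which then serves as a classifier
    feature. For Rayleigh, the MLE has the closed form
    sigma^2 = (1/2N) * sum(x_i^2)."""
    x = np.asarray(samples, dtype=np.float64)
    return np.sqrt(np.mean(x ** 2) / 2.0)

def rayleigh_pdf(x, sigma):
    """Rayleigh density, useful for checking goodness of fit."""
    return (x / sigma ** 2) * np.exp(-x ** 2 / (2 * sigma ** 2))
```

Because the fit collapses a whole feature sample into a single parameter, the resulting feature vector stays small regardless of how many POI a frame contains, which is the dimensionality advantage the abstract points to.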
Affiliation(s)
- Amit Kumar Kundu
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Bangladesh.
- Shaikh Anowarul Fattah
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Bangladesh
43. Deep Convolutional Neural Network for Ulcer Recognition in Wireless Capsule Endoscopy: Experimental Feasibility and Optimization. Comput Math Methods Med 2019; 2019:7546215. [PMID: 31641370 PMCID: PMC6766681 DOI: 10.1155/2019/7546215]
Abstract
Wireless capsule endoscopy (WCE) has developed rapidly over the last several years and now enables physicians to examine the gastrointestinal tract without surgical operation. However, a large number of images must be analyzed to obtain a diagnosis. Deep convolutional neural networks (CNNs) have demonstrated impressive performance in different computer vision tasks. Thus, in this work, we aim to explore the feasibility of deep learning for ulcer recognition and optimize a CNN-based ulcer recognition architecture for WCE images. By analyzing the ulcer recognition task and characteristics of classic deep learning networks, we propose a HAnet architecture that uses ResNet-34 as the base network and fuses hyper features from the shallow layer with deep features in deeper layers to provide final diagnostic decisions. 1,416 independent WCE videos are collected for this study. The overall test accuracy of our HAnet is 92.05%, and its sensitivity and specificity are 91.64% and 92.42%, respectively. According to our comparisons of F1, F2, and ROC-AUC, the proposed method performs better than several off-the-shelf CNN models, including VGG, DenseNet, and Inception-ResNet-v2, and classical machine learning methods with handcrafted features for WCE image classification. Overall, this study demonstrates that recognizing ulcers in WCE images via the deep CNN method is feasible and could help reduce the tedious image reading work of physicians. Moreover, our HAnet architecture tailored for this problem gives a fine choice for the design of network structure.
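The F1 and F2 comparisons mentioned above are instances of the general F-beta score; a short reference implementation:

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score. F1 (beta=1) weighs precision and recall equally,
    while F2 (beta=2) weighs recall more heavily, which suits screening
    tasks where a missed ulcer is costlier than a false alarm."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, at precision 0.8 and recall 0.4 the F2 score is noticeably lower than at precision 0.4 and recall 0.8, reflecting its recall emphasis.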
44. The future of capsule endoscopy in clinical practice: from diagnostic to therapeutic experimental prototype capsules. Gastroenterology Review 2019; 15:179-193. [PMID: 33005262 PMCID: PMC7509905 DOI: 10.5114/pg.2019.87528]
Abstract
Capsule endoscopy (CE) is indicated as a first-line clinical examination for the detection of small-bowel pathology, and there is an ever-growing drive for it to become a method for the screening of the entire gastrointestinal tract (GI). Although CE's main function is diagnosis, the research for therapeutic capabilities has intensified to make therapeutic capsule endoscopy (TCE) a target within reach. This manuscript presents the research evolution of CE and TCE through the last 5 years and describes notable problems, as well as clinical and technological challenges to overcome. This review also reports the state-of-the-art of capsule devices with a focus on CE research prototypes promising an enhanced diagnostic yield (DY) and treatment. Lastly, this article provides an overview of the research progress made in software for enhancing DY by increasing the accuracy of abnormality detection and lesion localisation.
45. Hajabdollahi M, Esfandiarpoor R, Khadivi P, Soroushmehr S, Karimi N, Najarian K, Samavi S. Segmentation of bleeding regions in wireless capsule endoscopy for detection of informative frames. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101565]
46. Cummins G, Cox BF, Ciuti G, Anbarasan T, Desmulliez MPY, Cochran S, Steele R, Plevris JN, Koulaouzidis A. Gastrointestinal diagnosis using non-white light imaging capsule endoscopy. Nat Rev Gastroenterol Hepatol 2019; 16:429-447. [PMID: 30988520 DOI: 10.1038/s41575-019-0140-z]
Abstract
Capsule endoscopy (CE) has proved to be a powerful tool in the diagnosis and management of small bowel disorders since its introduction in 2001. However, white light imaging (WLI) is the principal technology used in clinical CE at present, and therefore, CE is limited to mucosal inspection, with diagnosis remaining reliant on visible manifestations of disease. The introduction of WLI CE has motivated a wide range of research to improve its diagnostic capabilities through integration with other sensing modalities. These developments have the potential to overcome the limitations of WLI through enhanced detection of subtle mucosal microlesions and submucosal and/or transmural pathology, providing novel diagnostic avenues. Other research aims to utilize a range of sensors to measure physiological parameters or to discover new biomarkers to improve the sensitivity, specificity and thus the clinical utility of CE. This multidisciplinary Review summarizes research into non-WLI CE devices by organizing them into a taxonomic structure on the basis of their sensing modality. The potential of these capsules to realize clinically useful virtual biopsy and computer-aided diagnosis (CADx) is also reported.
Affiliation(s)
- Gerard Cummins
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Marc P Y Desmulliez
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Sandy Cochran
- School of Engineering, University of Glasgow, Glasgow, UK
- Robert Steele
- School of Medicine, University of Dundee, Dundee, UK
- John N Plevris
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
47
Pogorelov K, Suman S, Azmadi Hussin F, Saeed Malik A, Ostroukhova O, Riegler M, Halvorsen P, Hooi Ho S, Goh KL. Bleeding detection in wireless capsule endoscopy videos - Color versus texture features. J Appl Clin Med Phys 2019; 20:141-154. [PMID: 31251460 PMCID: PMC6698770 DOI: 10.1002/acm2.12662] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2017] [Revised: 10/15/2018] [Accepted: 05/26/2019] [Indexed: 12/22/2022] Open
Abstract
Wireless capsule endoscopy (WCE) is an effective technology that can be used to diagnose various lesions and abnormalities of the gastrointestinal (GI) tract. Because the capsule takes a long time to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making it a tedious job for clinical experts to visually check each and every frame of a complete patient's video footage. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach combines color information, an essential feature for the initial detection of frames with bleeding, with texture, which extracts further information from the lesion captured in the frames and allows the system to distinguish finely between borderline cases. The detection algorithm utilizes machine-learning-based classification methods; it can efficiently distinguish between bleeding and nonbleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames. The experimental studies demonstrate the performance of the proposed bleeding detection method in terms of detection accuracy, matching or exceeding state-of-the-art approaches. This research also presents a broad comparison of a number of different state-of-the-art features and classification methods, allowing an efficient and flexible WCE video processing system to be built.
Affiliation(s)
- Konstantin Pogorelov
- Department of Communication Systems, Simula Research Laboratory, Fornebu, Norway
- Shipra Suman
- Center of Intelligent Signal & Imaging Research Group, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
- Fawnizu Azmadi Hussin
- Center of Intelligent Signal & Imaging Research Group, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
- Aamir Saeed Malik
- Center of Intelligent Signal & Imaging Research Group, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
- Olga Ostroukhova
- Research Institute of Multiprocessor Computation Systems n.a. A.V. Kalyaev, Russia
- Michael Riegler
- Department of Communication Systems, Simula Research Laboratory, Fornebu, Norway
- Pål Halvorsen
- Department of Communication Systems, Simula Research Laboratory, Fornebu, Norway
- Shiaw Hooi Ho
- Department of Medicine, University of Malaya Medical Center, Kuala Lumpur, Malaysia
- Khean-Lee Goh
- Department of Medicine, University of Malaya Medical Center, Kuala Lumpur, Malaysia
48
Vasilakakis M, Koulaouzidis A, Yung DE, Plevris JN, Toth E, Iakovidis DK. Follow-up on: optimizing lesion detection in small bowel capsule endoscopy and beyond: from present problems to future solutions. Expert Rev Gastroenterol Hepatol 2019; 13:129-141. [PMID: 30791780 DOI: 10.1080/17474124.2019.1553616] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
This review presents noteworthy advances in clinical and experimental Capsule Endoscopy (CE), focusing on the progress that has been reported over the last 5 years since our previous review on the subject. Areas covered: This study presents the commercially available CE platforms, as well as the advances made in optimizing the diagnostic capabilities of CE. The latter includes recent concept and prototype capsule endoscopes, medical approaches to improve diagnostic yield, and progress in software for enhancing visualization, abnormality detection, and lesion localization. Expert commentary: Currently, moving through the second decade of CE evolution, there are still several open issues and remarkable challenges to overcome.
Affiliation(s)
- Michael Vasilakakis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Anastasios Koulaouzidis
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland; Department of Clinical Sciences, Lund University, Malmö, Sweden
- Diana E Yung
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland
- John N Plevris
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland
- Ervin Toth
- Department of Clinical Sciences, Lund University, Malmö, Sweden; Section of Gastroenterology, Department of Clinical Sciences, Skåne University Hospital Malmö, Malmö, Sweden
- Dimitris K Iakovidis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
49
Sharif M, Attique Khan M, Rashid M, Yasmin M, Afza F, Tanik UJ. Deep CNN and geometric features-based gastrointestinal tract diseases detection and classification from wireless capsule endoscopy images. J EXP THEOR ARTIF IN 2019. [DOI: 10.1080/0952813x.2019.1572657] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Affiliation(s)
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Rashid
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Farhat Afza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Urcun John Tanik
- Department of Computer Science and Information Systems, Texas A&M University-Commerce, USA
50
Liu D, Rao N, Mei X, Jiang H, Li Q, Luo C, Li Q, Zeng C, Zeng B, Gan T. Annotating Early Esophageal Cancers Based on Two Saliency Levels of Gastroscopic Images. J Med Syst 2018; 42:237. [PMID: 30327890 DOI: 10.1007/s10916-018-1063-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2018] [Accepted: 09/06/2018] [Indexed: 02/05/2023]
Abstract
Early diagnosis of esophageal cancer can greatly improve the survival rate of patients. At present, the lesion annotation of early esophageal cancers (EEC) in gastroscopic images is generally performed by medical personnel in a clinic. To reduce the effect of subjectivity and fatigue in manual annotation, computer-aided annotation is required. However, automated annotation of EEC lesions in images is a challenging task owing to the fine-grained variability in the appearance of EEC lesions. This study modifies the traditional EEC annotation framework and utilizes visually salient information to develop a two-saliency-level-based lesion annotation (TSL-BLA) method for EEC annotation in gastroscopic images. Unlike existing methods, the proposed framework has a strong ability to constrain false-positive outputs. Moreover, TSL-BLA places additional emphasis on the annotation of small EEC lesions. A total of 871 gastroscopic images from 231 patients were used to validate TSL-BLA: 365 of those images contain 434 EEC lesions and 506 images do not contain any lesions. 101 small lesion regions were extracted from the 434 lesions to further validate the performance of TSL-BLA. The experimental results show that the mean detection rate and Dice similarity coefficient of TSL-BLA were 97.24% and 75.15%, respectively. Compared with other state-of-the-art methods, TSL-BLA shows better performance, with strong superiority when annotating small EEC lesions, fewer false-positive outputs, and a fast running speed. The proposed method therefore has good application prospects in aiding clinical EEC diagnosis.
Affiliation(s)
- Dingyun Liu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Nini Rao
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Xinming Mei
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China; Institute of Electronic and Information Engineering of UESTC in Guangdong, Dongguan, China
- Hongxiu Jiang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Quanchi Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- ChengSi Luo
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Qian Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Chengshi Zeng
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Bing Zeng
- School of Communication and Information Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Tao Gan
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, China