1. Lazo JF, Rosa B, Catellani M, Fontana M, Mistretta FA, Musi G, de Cobelli O, de Mathelin M, De Momi E. Semi-Supervised Bladder Tissue Classification in Multi-Domain Endoscopic Images. IEEE Trans Biomed Eng 2023; 70:2822-2833. [PMID: 37037233] [DOI: 10.1109/tbme.2023.3265679]
Abstract
OBJECTIVE Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides distinct visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images form an unpaired dataset, i.e., there is no exact equivalent for every image across the NBI and WLI domains. METHOD We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN that performs unpaired image-to-image translation; and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we performed a detailed quantitative and qualitative analysis with the help of specialists. CONCLUSION The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89, respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94, respectively. The quality of the generated images is reliable enough to deceive specialists. SIGNIFICANCE This study shows the potential of semi-supervised GAN-based bladder tissue classification when annotations are limited in multi-domain data.
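The cycle-consistency constraint underlying the unpaired WLI-to-NBI translation step can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' implementation; the generators `g_wli_to_nbi` and `f_nbi_to_wli` are hypothetical identity stand-ins used only to show how the L1 cycle loss is formed.

```python
import numpy as np

def cycle_consistency_loss(x, g, f):
    """L1 cycle loss: an image translated WLI -> NBI -> WLI should
    return (approximately) to itself."""
    return float(np.mean(np.abs(f(g(x)) - x)))

# Hypothetical stand-ins for the two learned generators.
g_wli_to_nbi = lambda img: img.copy()
f_nbi_to_wli = lambda img: img.copy()

wli_image = np.ones((8, 8, 3))  # dummy WLI "image"
loss = cycle_consistency_loss(wli_image, g_wli_to_nbi, f_nbi_to_wli)
print(loss)  # 0.0 for a perfect reconstruction
```

In a real cycle GAN this term is added to the adversarial losses of both generators, pushing the translation to preserve tissue content even though no paired images exist.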
2. Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. Int J Intell Syst 2023. [DOI: 10.1155/2023/8616939]
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are among the most widespread and dangerous diseases and thus need timely health care, with early detection to reduce deaths. Endoscopy is an effective technique for diagnosing GI diseases, but it produces videos containing thousands of frames. It is difficult for a gastroenterologist to analyse all the images, and keeping track of all the frames takes a long time. Artificial intelligence systems address this challenge by analysing thousands of images with high speed and good accuracy. Hence, systems with three different methodologies are developed in this work. The first methodology diagnoses endoscopy images of GI diseases using VGG-16 + SVM and DenseNet-121 + SVM. The second methodology diagnoses endoscopy images of GI diseases by an artificial neural network (ANN) based on features fused between VGG-16 and DenseNet-121, before and after dimensionality reduction by principal component analysis (PCA). The third methodology uses an ANN based on features fused between VGG-16 and handcrafted features, and between DenseNet-121 and handcrafted features. Here, the handcrafted features combine the gray-level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) methods. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology data set. The ANN reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively, based on the fused VGG-16 and handcrafted features.
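The fusion-then-reduction step of the second methodology can be illustrated compactly. This is a minimal NumPy sketch under assumed feature shapes, not the paper's pipeline; the random matrices stand in for VGG-16 and DenseNet-121 feature vectors, and PCA is performed via SVD.

```python
import numpy as np

def fuse_and_reduce(feats_a, feats_b, k):
    """Concatenate two feature matrices (samples x dims) and project
    the fused features onto the top-k principal components."""
    fused = np.concatenate([feats_a, feats_b], axis=1)
    centered = fused - fused.mean(axis=0)
    # Rows of vt are the principal directions; keep the first k.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=(10, 32))   # stand-in for VGG-16 features
dense_feats = rng.normal(size=(10, 16)) # stand-in for DenseNet-121 features
reduced = fuse_and_reduce(cnn_feats, dense_feats, k=5)
print(reduced.shape)  # (10, 5)
```

The reduced matrix would then feed the ANN classifier; in practice the feature dimensions are far larger and `k` is a tuned hyperparameter.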
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Jalawi Sulaiman Alshudukhi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Badiea Abdulkarem Mohammed
- Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
3. Aliyi S, Dese K, Raj H. Detection of Gastrointestinal Tract Disorders Using Deep Learning Methods from Colonoscopy Images and Videos. Sci Afr 2023. [DOI: 10.1016/j.sciaf.2023.e01628]
4. Das D, Biswas SK, Bandyopadhyay S. Detection of Diabetic Retinopathy using Convolutional Neural Networks for Feature Extraction and Classification (DRFEC). Multimed Tools Appl 2022; 82:1-59. [PMID: 36467440] [PMCID: PMC9708148] [DOI: 10.1007/s11042-022-14165-4]
Abstract
Diabetic Retinopathy (DR) is caused by Diabetes Mellitus, which leads to the development of various lesions in the human retina. These lesions impair vision and, in severe cases, DR can lead to blindness. DR is observed in about 80% of patients who have had diabetes for 10-15 years. The manual process of periodic DR diagnosis and detection for necessary treatment is time-consuming and unreliable due to the unavailability of resources and expert opinion. Therefore, computerized diagnostic systems using Deep Learning (DL) Convolutional Neural Network (CNN) architectures are proposed to learn DR patterns from fundus images and identify the severity of the disease. This paper proposes a comprehensive evaluation of 26 state-of-the-art DL networks for deep feature extraction and classification of DR fundus images. In this evaluation, ResNet50 showed the highest overfitting and Inception V3 the lowest when trained on Kaggle's EyePACS fundus image dataset. EfficientNetB4 is the most optimal, efficient and reliable DL algorithm for detection of DR, followed by InceptionResNetV2, NasNetLarge and DenseNet169. EfficientNetB4 achieved a training accuracy of 99.37% and the highest validation accuracy of 79.11%. DenseNet201 achieved the highest training accuracy of 99.58% but a validation accuracy of 76.80%, which is lower than that of the top-4 best performing models.
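The overfitting comparison reported above amounts to inspecting the gap between training and validation accuracy. A minimal sketch using the two pairs of figures quoted in the abstract:

```python
# Train/validation accuracies (in percent) reported in the abstract.
reported = {
    "EfficientNetB4": (99.37, 79.11),
    "DenseNet201": (99.58, 76.80),
}

def generalization_gap(train_acc, val_acc):
    """A simple overfitting indicator: the train-validation gap."""
    return round(train_acc - val_acc, 2)

gaps = {m: generalization_gap(tr, va) for m, (tr, va) in reported.items()}
print(gaps)  # EfficientNetB4 gap 20.26 < DenseNet201 gap 22.78
```

DenseNet201's higher training accuracy paired with a larger gap is exactly why the paper prefers EfficientNetB4 despite the lower raw training score.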
Affiliation(s)
- Dolly Das
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Cachar, Silchar, Assam 788010 India
- Saroj Kumar Biswas
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Cachar, Silchar, Assam 788010 India
- Sivaji Bandyopadhyay
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Cachar, Silchar, Assam 788010 India
5. Li JW, Chia T, Fock KM, Chong KDW, Wong YJ, Ang TL. Artificial intelligence and polyp detection in colonoscopy: Use of a single neural network to achieve rapid polyp localization for clinical use. J Gastroenterol Hepatol 2021; 36:3298-3307. [PMID: 34327729] [DOI: 10.1111/jgh.15642]
Abstract
BACKGROUND AND AIM Artificial intelligence has been extensively studied to assist clinicians in polyp detection, but such systems usually require extensive processing power, making them prohibitively expensive and hindering wide adoption. The current study used a fast object detection algorithm, YOLOv3, to achieve real-time polyp detection on a laptop. In addition, we evaluated and classified the causes of false detections to further improve accuracy. METHODS The YOLOv3 algorithm was trained and validated with 6038 and 2571 polyp images, respectively. Videos from live colonoscopies in a tertiary center and from public databases were used for the training and validation sets. The algorithm was tested on 10 unseen videos from the CVC-VideoClinicDB dataset. Only bounding boxes with an intersection over union of > 0.3 were considered positive predictions. RESULTS The polyp detection rate in our study was 100%, with the algorithm able to detect every polyp in each video. Sensitivity, specificity, and F1 score were 74.1%, 85.1%, and 83.3, respectively. The algorithm achieved a speed of 61.2 frames per second (fps) on a desktop RTX2070 GPU and 27.2 fps on a laptop GTX2060 GPU. Nearly a quarter of false negatives occurred when the polyps were at the corner of an image. Image blurriness accounted for approximately 3% and 9% of false positive and false negative detections, respectively. CONCLUSION The YOLOv3 algorithm can achieve real-time polyp detection with high accuracy and speed on a desktop GPU, making it low cost and accessible to most endoscopy centers worldwide.
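The positive-prediction criterion (intersection over union > 0.3) can be made concrete with a small helper. This is a generic IoU sketch for axis-aligned `(x1, y1, x2, y2)` boxes, not code from the study:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)
score = iou(pred, truth)
print(score >= 0.3)  # True: this detection counts as a positive prediction
```

A detection whose best IoU against every ground-truth box falls at or below the threshold is scored as a false positive under this scheme.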
Affiliation(s)
- James Weiquan Li
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
- Kwong Ming Fock
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
- Yu Jun Wong
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore; Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
6. A CNN-based methodology for cow heat analysis from endoscopic images. Appl Intell 2021. [DOI: 10.1007/s10489-021-02910-5]
7. Beyersdorffer P, Kunert W, Jansen K, Miller J, Wilhelm P, Burgert O, Kirschniak A, Rolinger J. Detection of adverse events leading to inadvertent injury during laparoscopic cholecystectomy using convolutional neural networks. Biomed Tech (Berl) 2021; 66:413-421. [PMID: 33655738] [DOI: 10.1515/bmt-2020-0106]
Abstract
Uncontrolled movements of laparoscopic instruments can lead to inadvertent injury of adjacent structures. The risk becomes evident when the dissecting instrument is located outside the field of view of the laparoscopic camera. Technical solutions that ensure patient safety are therefore desirable. The present work evaluated the feasibility of automated binary classification of laparoscopic image data using Convolutional Neural Networks (CNN) to determine whether the dissecting instrument is located within the laparoscopic image section. A unique set of images was generated from six laparoscopic cholecystectomies in a surgical training environment to configure and train the CNN. By using a preliminary version of the neural network, the annotation of the training image files could be automated and accelerated. A combination of oversampling and selective data augmentation was used to enlarge the fully labeled image data set and prevent loss of accuracy due to imbalanced class volumes. The same approach was subsequently applied to the comprehensive, fully annotated Cholec80 database. The described process yielded extensive and balanced training image data sets. The performance of the CNN-based binary classifiers was evaluated on separate test sets from both databases. On our recorded data, an accuracy of 0.88 with regard to the safety-relevant classification was achieved. The subsequent evaluation on the Cholec80 data set yielded an accuracy of 0.84. These results demonstrate the feasibility of binary classification of laparoscopic image data for the detection of adverse events in a surgical training environment using a specifically configured CNN architecture.
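The class-balancing idea (oversampling the minority class until the two class volumes match) can be sketched as follows; the frame counts are hypothetical and the snippet ignores the selective-augmentation half of the authors' approach:

```python
import random

def oversample(minority, majority, seed=0):
    """Duplicate minority-class samples (with replacement) until the
    two classes have equal volume."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return minority + extra, majority

in_view = list(range(900))      # hypothetical "instrument visible" frames
out_of_view = list(range(120))  # hypothetical safety-relevant minority class
balanced_minority, _ = oversample(out_of_view, in_view)
print(len(balanced_minority))  # 900
```

Without such balancing, a classifier can reach high accuracy by always predicting the majority class, which is useless for the safety-relevant minority case.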
Affiliation(s)
| | - Wolfgang Kunert
- Department of Surgery and Transplantation, Tübingen University Hospital, Tübingen, Germany
| | - Kai Jansen
- Department of Surgery and Transplantation, Tübingen University Hospital, Tübingen, Germany
| | - Johanna Miller
- Department of Surgery and Transplantation, Tübingen University Hospital, Tübingen, Germany
| | - Peter Wilhelm
- Department of Surgery and Transplantation, Tübingen University Hospital, Tübingen, Germany
| | - Oliver Burgert
- Department of Medical Informatics, Reutlingen University, Reutlingen, Germany
| | - Andreas Kirschniak
- Department of Surgery and Transplantation, Tübingen University Hospital, Tübingen, Germany
| | - Jens Rolinger
- Department of Surgery and Transplantation, Tübingen University Hospital, Tübingen, Germany
| |
8. Sánchez-Peralta LF, Bote-Curiel L, Picón A, Sánchez-Margallo FM, Pagador JB. Deep learning to find colorectal polyps in colonoscopy: A systematic literature review. Artif Intell Med 2020; 108:101923. [PMID: 32972656] [DOI: 10.1016/j.artmed.2020.101923]
Abstract
Colorectal cancer has a high incidence worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold standard procedure for diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization and segmentation. Through a systematic search, 35 works were retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; reviews seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with a strong presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. For detection and localization tasks, the most used reporting metric is recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Even despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated and publicly available database, which also includes the most convenient metrics to report results. Finally, it is also important to highlight that future efforts should focus on proving the clinical value of the deep learning based methods, by increasing the adenoma detection rate.
Affiliation(s)
- Luis Bote-Curiel
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain.
- Artzai Picón
- Tecnalia, Parque Científico y Tecnológico de Bizkaia, C/ Astondo bidea, Edificio 700, 48160 Derio, Spain.
- J Blas Pagador
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain.
9. Thambawita V, Jha D, Hammer HL, Johansen HD, Johansen D, Halvorsen P, Riegler MA. An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning Applied to Gastrointestinal Tract Abnormality Classification. ACM Trans Comput Healthcare 2020. [DOI: 10.1145/3386295]
Abstract
Precise and efficient automated identification of gastrointestinal (GI) tract diseases can help doctors treat more patients and improve the rate of disease detection and identification. Currently, automatic analysis of diseases in the GI tract is a hot topic in both computer science and medical journals. Nevertheless, the evaluation of such automatic analyses is often incomplete or simply wrong. Algorithms are often only tested on small and biased datasets, and cross-dataset evaluations are rarely performed. A clear understanding of evaluation metrics and machine learning models across datasets is crucial to bring research in the field to a new quality level. Toward this goal, we present comprehensive evaluations of five distinct machine learning models, using global features and deep neural networks, that can classify 16 different key types of GI tract conditions, including pathological findings, anatomical landmarks, polyp removal conditions, and normal findings, from images captured by common GI tract examination instruments. In our evaluation, we introduce performance hexagons built on six performance metrics (recall, precision, specificity, accuracy, F1-score, and the Matthews correlation coefficient) to demonstrate how to determine the real capabilities of models rather than evaluating them superficially. Furthermore, we perform cross-dataset evaluations using different datasets for training and testing. With these cross-dataset evaluations, we demonstrate the challenge of actually building a generalizable model that could be used across different hospitals. Our experiments clearly show that more sophisticated performance metrics and evaluation methods are needed to obtain reliable models, rather than depending on evaluations of splits of the same dataset; that is, the performance metrics should always be interpreted together rather than relying on a single metric.
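The warning against single-metric evaluation is easy to demonstrate: a classifier can look strong on accuracy while the Matthews correlation coefficient exposes its weakness on the minority class. A minimal sketch with made-up confusion-matrix counts:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A classifier that looks strong on accuracy alone...
tp, tn, fp, fn = 90, 5, 5, 0
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(round(accuracy, 2), round(mcc(tp, tn, fp, fn), 2))  # 0.95 0.69
```

Accuracy of 0.95 hides the fact that half of the negative cases are misclassified; MCC, which weighs all four confusion-matrix cells, drops to about 0.69 and reveals the imbalance.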
Affiliation(s)
- Debesh Jha
- SimulaMet and UiT—The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UiT—The Arctic University of Norway, Tromsø, Norway
- Pål Halvorsen
- SimulaMet and Oslo Metropolitan University, Oslo, Norway
10. Machine Learning-Based Analysis of Sperm Videos and Participant Data for Male Fertility Prediction. Sci Rep 2019; 9:16770. [PMID: 31727961] [PMCID: PMC6856178] [DOI: 10.1038/s41598-019-53217-y]
Abstract
Methods for automatic analysis of clinical data are usually targeted towards a specific modality and do not make use of all relevant data available. In the field of male human reproduction, clinical and biological data are not used to their fullest potential. Manual evaluation of a semen sample using a microscope is time-consuming and requires extensive training. Furthermore, the validity of manual semen analysis has been questioned due to limited reproducibility and often high inter-personnel variation. The existing computer-aided sperm analyzer systems are not recommended for routine clinical use due to methodological challenges caused by the consistency of the semen sample. Thus, there is a need for an improved methodology. We use modern and classical machine learning techniques together with a dataset consisting of 85 videos of human semen samples and related participant data to automatically predict sperm motility. The techniques used range from simple linear regression to more sophisticated methods using convolutional neural networks. Our results indicate that sperm motility prediction based on deep learning from sperm motility videos is rapid to perform and consistent. Adding participant data did not improve the algorithms' performance. In conclusion, machine learning-based automatic analysis may become a valuable tool in male infertility investigation and research.
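The simple-linear-regression baseline mentioned above can be sketched with ordinary least squares; the per-video features and motility targets below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares: predict motility from per-video features."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

rng = np.random.default_rng(1)
features = rng.normal(size=(85, 3))           # hypothetical per-video features
motility = features @ [2.0, -1.0, 0.5] + 40   # synthetic target, intercept 40
coef = fit_linear(features, motility)
print(np.round(coef, 1))  # recovers [40, 2, -1, 0.5] on noiseless data
```

In the paper's setting, the CNN-based models replace the hand-picked features with representations learned directly from the video frames.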
11. Xu J, Jing M, Wang S, Yang C, Chen X. A review of medical image detection for cancers in digestive system based on artificial intelligence. Expert Rev Med Devices 2019; 16:877-889. [PMID: 31530047] [DOI: 10.1080/17434440.2019.1669447]
Abstract
Introduction: At present, cancer imaging examination relies mainly on manual reading by doctors, which requires a high standard of professional skill, clinical experience, and concentration. However, the increasing amount of medical imaging data has brought more and more challenges to radiologists. The detection of digestive system cancer (DSC) based on artificial intelligence (AI) can provide a solution for automatic analysis of medical images and assist doctors in achieving high-precision intelligent diagnosis of cancers. Areas covered: The main goal of this paper is to introduce the main research methods for AI-based detection of DSC and provide relevant reference for researchers. Meanwhile, it summarizes the main problems existing in these methods and provides guidance for future research. Expert commentary: The automatic classification, recognition, and segmentation of DSC can be realized through machine learning and deep learning methods, which mine internal information of images that is difficult for humans to discover. In the diagnosis of DSC, using AI to assist radiologists can achieve rapid and effective cancer detection and save doctors' diagnosis time. These can lay the foundation for better clinical diagnosis, treatment planning and accurate quantitative evaluation of DSC.
Affiliation(s)
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Mengjie Jing
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shiming Wang
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Cuiping Yang
- Department of Gastroenterology, Ruijin North Hospital of Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
12. Karako K, Chen Y, Tang W. On medical application of neural networks trained with various types of data. Biosci Trends 2018; 12:553-559. [PMID: 30555113] [DOI: 10.5582/bst.2018.01264]
Abstract
Neural networks have garnered attention over the past few years. A neural network is a typical machine learning model used to identify visual patterns. Neural networks are used to solve a wide variety of problems, including image recognition and time series prediction. In addition, neural networks have been applied to medicine over the past few years. This paper classifies the ways in which neural networks have been applied to medicine based on the type of data used to train those networks. Applications of neural networks to medicine can be categorized into two types: automated diagnosis and physician aids. Considering the number of patients per physician, neural networks could be used to diagnose diseases related to the vascular system, heart, brain, spinal column, head, neck, and tumors/cancer in three fields: vascular and interventional radiology, interventional cardiology, and neuroradiology. Lastly, this paper also considers areas of medicine where neural networks can be effectively applied in the future.
Affiliation(s)
- Kenji Karako
- Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo
- Yu Chen
- Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo
- Wei Tang
- Department of International Trial, Center for Clinical Sciences; Hospital International Health Care Center, National Center for Global Health and Medicine
13. de Lange T, Halvorsen P, Riegler M. Methodology to develop machine learning algorithms to improve performance in gastrointestinal endoscopy. World J Gastroenterol 2018; 24:5057-5062. [PMID: 30568383] [PMCID: PMC6288655] [DOI: 10.3748/wjg.v24.i45.5057]
Abstract
Assisted diagnosis using artificial intelligence has been a holy grail in medical research for many years, and recent developments in computer hardware have enabled the narrower area of machine learning to equip clinicians with potentially useful tools for computer-assisted diagnosis (CAD) systems. However, training and assessing a computer's ability to diagnose like a human are complex tasks, and successful outcomes depend on various factors. We have focused our work on gastrointestinal (GI) endoscopy because it is a cornerstone for diagnosis and treatment of diseases of the GI tract. About 2.8 million luminal GI (esophageal, stomach, colorectal) cancers are detected globally every year, and although substantial technical improvements in endoscopes have been made over the last 10-15 years, a major limitation of endoscopic examinations remains operator variation. This translates into substantial inter-observer variation in the detection and assessment of mucosal lesions, causing, among other things, an average polyp miss-rate of 20% in the colon and the subsequent development of a number of post-colonoscopy colorectal cancers. CAD systems might eliminate this variation and lead to more accurate diagnoses. In this editorial, we point out some of the current challenges in the development of efficient computer-based digital assistants. We give examples of proposed tools using various techniques, identify current challenges, and give suggestions for the development and assessment of future CAD systems.
Affiliation(s)
- Thomas de Lange
- Department of Transplantation, Oslo University Hospital, Oslo 0424, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo 0316, Norway
- Pål Halvorsen
- Center for Digital Engineering Simula Metropolitan, Fornebu 1364, Norway
- Department for Informatics, University of Oslo, Oslo 0316, Norway
- Michael Riegler
- Center for Digital Engineering Simula Metropolitan, Fornebu 1364, Norway
- Department for Informatics, University of Oslo, Oslo 0316, Norway
14. Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy. Nat Biomed Eng 2018; 2:741-748. [PMID: 31015647] [DOI: 10.1038/s41551-018-0301-3]
Abstract
The detection and removal of precancerous polyps via colonoscopy is the gold standard for the prevention of colon cancer. However, the detection rate of adenomatous polyps can vary significantly among endoscopists. Here, we show that a machine-learning algorithm can detect polyps in clinical colonoscopies, in real time and with high sensitivity and specificity. We developed the deep-learning algorithm using data from 1,290 patients, and validated it on 27,113 newly collected colonoscopy images from 1,138 patients with at least one detected polyp (per-image sensitivity, 94.38%; per-image specificity, 95.92%; area under the receiver operating characteristic curve, 0.984), on a public database of 612 polyp-containing images (per-image sensitivity, 88.24%), on 138 colonoscopy videos with histologically confirmed polyps (per-image sensitivity, 91.64%; per-polyp sensitivity, 100%), and on 54 unaltered full-range colonoscopy videos without polyps (per-image specificity, 95.40%). By using a multi-threaded processing system, the algorithm can process at least 25 frames per second with a latency of 76.80 ± 5.60 ms in real-time video analysis. The software may aid endoscopists while performing colonoscopies, and help assess differences in polyp and adenoma detection performance among endoscopists.