1. Wang Y, Wei S, Zuo R, Kam M, Opfermann JD, Sunmola I, Hsieh MH, Krieger A, Kang JU. Automatic and real-time tissue sensing for autonomous intestinal anastomosis using hybrid MLP-DC-CNN classifier-based optical coherence tomography. Biomed Opt Express 2024; 15:2543-2560. [PMID: 38633079] [PMCID: PMC11019703] [DOI: 10.1364/boe.521652]
Abstract
Anastomosis is a common and critical part of reconstructive procedures in gastrointestinal, urologic, and gynecologic surgery. Autonomous surgical robots such as the smart tissue autonomous robot (STAR) system have demonstrated improved efficiency and consistency in laparoscopic small bowel anastomosis over the current da Vinci surgical system. However, the STAR workflow requires auxiliary manual monitoring during the suturing procedure to avoid missed or wrong stitches. To eliminate this monitoring task for the operators, we integrated an optical coherence tomography (OCT) fiber sensor with the suture tool and developed an automatic tissue classification algorithm for detecting missed or wrong stitches in real time. The classification results were updated and sent to the control loop of the STAR robot in real time. The suture tool was guided to approach the target by a dual-camera system; if the tissue inside the tool jaw was inconsistent with the desired suture pattern, a warning message was generated. The proposed hybrid multilayer perceptron dual-channel convolutional neural network (MLP-DC-CNN) classification platform can automatically classify eight different abdominal tissue types that require different suture strategies for anastomosis. The MLP uses numerous (∼1955) handcrafted features, including optical properties and morphological features of one-dimensional (1D) OCT A-line signals. The DC-CNN fully exploits intensity-based features and depth-resolved tissue attenuation coefficients. A decision fusion technique leverages the information collected from both classifiers to further increase accuracy. The algorithm was evaluated on 69,773 testing A-lines. The results showed that our model can classify 1D OCT signals of small bowels in real time with an accuracy of 90.06%, a precision of 88.34%, and a sensitivity of 87.29%.
The refresh rate of the displayed A-line signals was 300 Hz, the maximum sensing depth of the fiber was 3.6 mm, and the running time of the image processing algorithm was ∼1.56 s for 1,024 A-lines. The proposed fully automated tissue sensing model outperformed single CNN, MLP, or SVM classifiers with optimized architectures, showing the complementarity of different feature sets and network architectures in classifying intestinal OCT A-line signals. It can potentially reduce manual involvement in robotic laparoscopic surgery, a crucial step towards a fully autonomous STAR system.
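The decision-fusion step described in this abstract can be illustrated with a minimal sketch: each classifier emits class probabilities, and a weighted average of the two probability vectors yields the fused prediction. The equal weights and the example logits below are illustrative assumptions, not the paper's learned fusion rule.

```python
import math

def softmax(logits):
    """Convert raw classifier scores to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_decisions(mlp_logits, cnn_logits, w_mlp=0.5, w_cnn=0.5):
    """Weighted soft-voting fusion of two classifiers' outputs.

    The 0.5/0.5 weights are placeholders; the paper's fusion rule
    may weight the two classifiers differently.
    """
    p_mlp = softmax(mlp_logits)
    p_cnn = softmax(cnn_logits)
    fused = [w_mlp * a + w_cnn * b for a, b in zip(p_mlp, p_cnn)]
    return fused.index(max(fused)), fused

# Example: the MLP slightly prefers class 2, the CNN strongly prefers
# class 1; the fused decision follows the more confident classifier.
label, probs = fuse_decisions([0.1, 1.0, 1.2], [0.0, 3.0, 0.5])
```

The fused vector remains a valid probability distribution, so thresholds and warnings can be applied to it exactly as to a single classifier's output.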
Affiliation(s)
- Yaning Wang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Shuwen Wei
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Ruizhi Zuo
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael Kam
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Justin D. Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Idris Sunmola
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael H. Hsieh
- Division of Urology, Children’s National Hospital, 111 Michigan Ave NW, Washington, D.C. 20010, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jin U. Kang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
2. Li H, Wang Z, Guan Z, Miao J, Li W, Yu P, Molina Jimenez C. UCFNNet: Ulcerative colitis evaluation based on fine-grained lesion learner and noise suppression gating. Comput Methods Programs Biomed 2024; 247:108080. [PMID: 38382306] [DOI: 10.1016/j.cmpb.2024.108080]
Abstract
BACKGROUND AND OBJECTIVE Ulcerative colitis (UC) is a chronic disease characterized by recurrent symptoms and significant morbidity; its exact cause remains unknown. The choice of treatment for UC depends on the severity and location of the disease in each patient. Developing a fully automated method for evaluating UC from endoscopic images is therefore crucial for guiding treatment plans and facilitating early prevention. METHODS We propose a network for ulcerative colitis evaluation based on a fine-grained lesion learner and noise suppression gating (UCFNNet). UCFNNet contains three novel modules. First, a fine-grained lesion feature learner (FG-LF Learner) integrates local features with a Softmax category prediction (SCP) module to improve feature accuracy in small lesion areas. Second, a graph convolutional feature combiner (GCFC) connects features across adjacent convolutional layers and incorporates short connections between input and output, mitigating feature loss during transmission. Third, a noise suppression gating (NS gating) technique combines a grid attention mechanism with a feature gating (FG) module to prioritize significant lesion features and suppress irrelevant and noisy regions in the input feature map. RESULTS We evaluate the proposed network on both privately collected and publicly available datasets. On the private dataset, UC evaluation achieves an accuracy (ACC) of 89.57%, Matthews correlation coefficient (MCC) of 85.52%, precision of 89.26%, recall of 89.48%, and F1-score of 89.78%. On the public dataset, it achieves an ACC of 85.47%, MCC of 80.42%, precision of 85.62%, recall of 84.00%, and F1-score of 84.53%, surpassing state-of-the-art techniques.
CONCLUSION Our proposed model introduces three innovative algorithmic modules and outperforms current state-of-the-art methods in ACC and F1-score, indicating superior performance over traditional machine learning and existing deep learning methods and good prospects for application. The model also demonstrates good interpretability. The source code is available at github.com/YinLeRenNB/UCFNNet.
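The noise-suppression gating idea, multiplying feature responses by a learned relevance gate so that noisy regions are damped toward zero, can be sketched in a few lines. The per-position gate scores below are stand-ins for what UCFNNet's grid-attention branch would produce; this illustrates the generic gating mechanism, not the paper's exact module.

```python
import math

def sigmoid(x):
    """Squash a score into the (0, 1) gating range."""
    return 1.0 / (1.0 + math.exp(-x))

def gate_features(features, gate_scores):
    """Suppress features whose gate score is low.

    `gate_scores` stand in for a learned per-position relevance map;
    in UCFNNet these would come from the grid-attention branch
    (an assumption of this sketch, not the paper's formulation).
    """
    return [f * sigmoid(g) for f, g in zip(features, gate_scores)]

# Positions with strongly negative gate scores are damped toward zero,
# while confidently relevant positions pass through almost unchanged.
out = gate_features([1.0, 1.0, 1.0], [4.0, 0.0, -4.0])
```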
Affiliation(s)
- Haiyan Li
- School of Information, Yunnan University, Kunming 650504, China
- Zhixin Wang
- School of Information, Yunnan University, Kunming 650504, China
- Zheng Guan
- School of Information, Yunnan University, Kunming 650504, China
- Jiarong Miao
- Department of Gastroenterology, the First Affiliated Hospital of Kunming Medical University, Kunming, China
- Weihua Li
- School of Information, Yunnan University, Kunming 650504, China
- Pengfei Yu
- School of Information, Yunnan University, Kunming 650504, China
3. Kapuria S, Minot P, Kapusta A, Ikoma N, Alambeigi F. A Novel Dual Layer Cascade Reliability Framework for an Informed and Intuitive Clinician-AI Interaction in Diagnosis of Colorectal Cancer Polyps. IEEE J Biomed Health Inform 2024. [PMID: 38194408] [PMCID: PMC11231060] [DOI: 10.1109/jbhi.2024.3350082]
Abstract
We present a novel Cascade Reliability Framework (CRF) that integrates two independent cascade layers of reliability (i.e., variational temperature scaling and conformal prediction) with a pre-trained machine learning (ML) model in order to provide clinicians with a more reliable and tunable tool for early-stage diagnosis of colorectal cancer (CRC) polyps. The conformal prediction layer generates predictive sets that are guaranteed to contain the true polyp type with an adjustable error rate tuned by clinicians, while the confidence calibration layer generates meaningful confidence estimates for each predicted label. Together, these layers give clinicians additional information and error-tuning ability to make informed and intuitive decisions from the outputs of the pre-trained ML model. Utilizing a novel vision-based tactile sensor and unique 3D-printed CRC polyp phantoms, we evaluated the trustworthiness of the proposed architecture, and particularly the dual outputs of four different CRF models integrated with two different pre-trained ML models (i.e., ResNet18 and Dilated Residual Network), to highlight the model-agnostic nature of the architecture. To thoroughly assess the performance of the proposed approach, we used reliability diagrams and metrics such as accuracy, coverage, and average set size, while also addressing inter-class performance. Results demonstrate that the calibrated CRF models handle non-ideal inputs with noise and blur well. Moreover, using conformal prediction with a user-defined error rate across various experiments, we show how clinicians can intuitively interact with a pre-trained ML model to make informed decisions and minimize the risk of misdiagnosing CRC polyps.
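The conformal-prediction layer described above can be sketched with plain split conformal prediction for classification: held-out calibration data fix a nonconformity threshold at a clinician-chosen error rate alpha, and each new case receives the set of polyp types guaranteed to contain the truth with probability at least 1 − alpha. This is the generic split-conformal recipe, not the authors' variational temperature-scaled variant.

```python
import math

def conformal_threshold(cal_probs, cal_labels, alpha):
    """Split conformal prediction for classification.

    Nonconformity score = 1 - p(true class). The threshold is the
    ceil((n+1)(1-alpha))-th smallest calibration score, which gives
    the standard >= 1-alpha marginal coverage guarantee.
    """
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    n = len(scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return scores[k]

def prediction_set(probs, threshold):
    """All classes whose nonconformity score falls within the threshold."""
    return [c for c, p in enumerate(probs) if 1.0 - p <= threshold]

# Four calibration cases (class probabilities, true labels) fix the
# threshold; alpha = 0.25 requests >= 75% coverage of the true label.
t = conformal_threshold([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]],
                        [0, 0, 1, 0], alpha=0.25)
sets = prediction_set([0.7, 0.3], t)
```

Lowering alpha widens the predictive sets, which is exactly the error-tuning knob the framework exposes to clinicians.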
4. Tokutake K, Morelos-Gomez A, Hoshi KI, Katouda M, Tejima S, Endo M. Artificial intelligence for the prevention and prediction of colorectal neoplasms. J Transl Med 2023; 21:431. [PMID: 37400891] [DOI: 10.1186/s12967-023-04258-5]
Abstract
BACKGROUND Colonoscopy is useful as a cancer screening test. However, in countries with limited medical resources, there are restrictions on the widespread use of endoscopy, so non-invasive screening methods to determine whether a patient requires a colonoscopy are desired. Here, we investigated whether artificial intelligence (AI) can predict colorectal neoplasia. METHODS We used data from physical exams and blood analyses to determine the incidence of colorectal polyps. However, these features exhibit highly overlapping classes; a kernel density estimator (KDE)-based transformation improved the separability of the classes. RESULTS With an adequate polyp size threshold, the optimal machine learning (ML) models achieved Matthews correlation coefficients (MCC) of 0.37 and 0.39 for the men's and women's datasets, respectively. The models exhibit higher discrimination than the fecal occult blood test, which yields MCCs of 0.047 and 0.074 for men and women, respectively. CONCLUSION The ML model can be chosen according to the desired polyp size discrimination threshold, may suggest further colorectal screening, and may indicate possible adenoma size. The KDE feature transformation could serve to score each biomarker and background factor (health lifestyle) to suggest measures against colorectal adenoma growth. All the information the AI model provides can lower the workload for healthcare providers and be implemented in health care systems with scarce resources. Furthermore, risk stratification may help optimize the efficiency of resources for screening colonoscopy.
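The KDE-based transformation is described only at a high level; one plausible reading (our assumption, not the paper's stated method) is that each overlapping raw feature is re-expressed as a log-likelihood ratio between Gaussian KDEs fitted to the two classes, which stretches exactly the regions where the class densities differ:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density estimate built from Gaussian kernels."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

def kde_llr_transform(x, pos_samples, neg_samples, bandwidth=1.0, eps=1e-12):
    """Map a raw biomarker value to the log-likelihood ratio between
    the two class-conditional KDEs. Values > 0 favour the positive
    (polyp) class; this scoring interpretation is an assumption made
    for illustration, not the paper's exact transformation."""
    f_pos = gaussian_kde(pos_samples, bandwidth)
    f_neg = gaussian_kde(neg_samples, bandwidth)
    return math.log(f_pos(x) + eps) - math.log(f_neg(x) + eps)

# A value near the positive-class cluster gets a positive score.
score = kde_llr_transform(5.0, pos_samples=[4.5, 5.0, 5.5],
                          neg_samples=[0.0, 0.5, 1.0])
```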
Affiliation(s)
- Kohjiro Tokutake
- Department of Gastroenterology, Nagano Red Cross Hospital, 5-22-1 Wakasato, Nagano, 380-8582, Japan
- Ken-Ichi Hoshi
- Department of Health Checkup Center, Nagano Red Cross Hospital, 5-22-1 Wakasato, Nagano, 380-8582, Japan
- Michio Katouda
- Research Organization for Information Science & Technology, 2-32-3, Kitashinagawa, Shinagawa-ku, Tokyo, 140-0001, Japan
- Syogo Tejima
- Research Organization for Information Science & Technology, 2-32-3, Kitashinagawa, Shinagawa-ku, Tokyo, 140-0001, Japan
- Morinobu Endo
- Research Initiative for Supra-Materials, Shinshu University, 4-17-1 Wakasato, Nagano, 380-8553, Japan
5. Haja SA, Mahadevappa V. Advancing glaucoma detection with convolutional neural networks: a paradigm shift in ophthalmology. Rom J Ophthalmol 2023; 67:222-237. [PMID: 37876506] [PMCID: PMC10591431] [DOI: 10.22336/rjo.2023.39]
Abstract
A leading cause of irreversible vision loss, glaucoma needs early detection for effective management, and intraocular pressure (IOP) is a significant risk factor. Convolutional neural networks (CNNs) demonstrate exceptional capabilities in analyzing retinal fundus images, a non-invasive and cost-effective imaging technique widely used in glaucoma diagnosis. By learning from large datasets of annotated images, CNNs can identify subtle changes in the optic nerve head and retinal structures indicative of glaucoma, enabling early and precise diagnosis and empowering clinicians to implement timely interventions. Optical coherence tomography (OCT), another valuable diagnostic tool for glaucoma evaluation, provides high-resolution cross-sectional images of the retina; CNNs can effectively analyze OCT scans and extract meaningful features, facilitating the identification of structural abnormalities associated with glaucoma. Visual field testing, performed using devices like the Humphrey Field Analyzer, is crucial for assessing functional vision loss in glaucoma. The integration of CNNs with retinal fundus images, OCT scans, visual field testing, and IOP measurements represents a transformative approach to glaucoma detection. These advanced technologies have the potential to revolutionize ophthalmology by enabling early detection, personalized management, and improved patient outcomes; CNNs also facilitate remote expert opinions and enhance treatment monitoring. Overcoming challenges such as data scarcity and interpretability can optimize CNN utilization in glaucoma diagnosis, and measuring retinal nerve fiber layer thickness proves valuable as a diagnostic marker. CNN implementation reduces healthcare costs and improves access to quality eye care. Future research should focus on optimizing architectures and incorporating novel biomarkers.
This review paves the way for innovative CNN-based glaucoma detection methods. Abbreviations: CNN = Convolutional Neural Networks, AI = Artificial Intelligence, IOP = Intraocular Pressure, OCT = Optical Coherence Tomography, CLSO = Confocal Scanning Laser Ophthalmoscopy, AUC-ROC = Area Under the Receiver Operating Characteristic Curve, RNFL = Retinal Nerve Fiber Layer, RNN = Recurrent Neural Networks, VF = Visual Field, AP = Average Precision, MD = Mean Defect, sLV = square root of Loss Variance, NN = Neural Network, WHO = World Health Organization.
Affiliation(s)
- Shafeeq Ahmed Haja
- Department of Ophthalmology, Bangalore Medical College and Research Institute, India
- Vidyadevi Mahadevappa
- Department of Ophthalmology, Bangalore Medical College and Research Institute, India
6. Wang KN, Zhuang S, Ran QY, Zhou P, Hua J, Zhou GQ, He X. DLGNet: A dual-branch lesion-aware network with the supervised Gaussian Mixture model for colon lesions classification in colonoscopy images. Med Image Anal 2023; 87:102832. [PMID: 37148864] [DOI: 10.1016/j.media.2023.102832]
Abstract
Colorectal cancer is one of the malignant tumors with the highest mortality, owing to the lack of obvious early symptoms: it is usually in an advanced stage by the time it is discovered. Automatic and accurate classification of early colon lesions is therefore of great significance for clinically estimating the status of colon lesions and formulating appropriate diagnostic programs. However, classifying full-stage colon lesions is challenging due to the large inter-class similarities and intra-class differences of the images. In this work, we propose a novel dual-branch lesion-aware neural network (DLGNet) that classifies intestinal lesions by exploring the intrinsic relationships between diseases. It is composed of four modules: a lesion location module, a dual-branch classification module, an attention guidance module, and an inter-class Gaussian loss function. Specifically, the dual-branch module integrates the original image with the lesion patch obtained by the lesion localization module to explore and interact with lesion-specific features from global and local perspectives. The attention guidance module directs the model to disease-specific features by learning long-range dependencies through spatial and channel attention after network feature learning. Finally, the proposed inter-class Gaussian loss function assumes that each feature extracted by the network follows an independent Gaussian distribution and makes inter-class clustering more compact, thereby improving the discriminative ability of the network. Extensive experiments on 2568 collected colonoscopy images yield an average accuracy of 91.50%, and the proposed method surpasses state-of-the-art methods. This study is the first to classify colon lesions at each stage and achieves promising colon disease classification performance. To motivate the community, we have made our code publicly available via https://github.com/soleilssss/DLGNet.
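The inter-class Gaussian loss is only sketched in this abstract; a center-loss-style surrogate conveys the intent, pulling features toward their class mean (compactness) while penalizing class centers that sit closer than a margin (separation). The quadratic distances and the margin below are our assumptions for illustration, not DLGNet's exact Gaussian formulation.

```python
def compactness_separation_loss(features, labels, margin=1.0):
    """Illustrative surrogate for an inter-class clustering loss.

    intra: mean squared distance of each feature to its class center.
    inter: hinge penalty when two class centers are closer than `margin`.
    A center-loss-style sketch of the idea, not the paper's loss.
    """
    classes = sorted(set(labels))
    centers = {}
    for c in classes:
        pts = [f for f, y in zip(features, labels) if y == c]
        centers[c] = [sum(col) / len(pts) for col in zip(*pts)]
    intra = sum(sum((a - b) ** 2 for a, b in zip(f, centers[y]))
                for f, y in zip(features, labels)) / len(features)
    inter = 0.0
    pairs = 0
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            d2 = sum((a - b) ** 2 for a, b in zip(centers[ci], centers[cj]))
            inter += max(0.0, margin - d2)
            pairs += 1
    return intra + (inter / pairs if pairs else 0.0)

# Two tight, well-separated clusters incur almost no penalty.
loss = compactness_separation_loss([[0.0, 0.0], [0.2, 0.0],
                                    [5.0, 5.0], [5.2, 5.0]],
                                   [0, 0, 1, 1])
```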
Affiliation(s)
- Kai-Ni Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Shuaishuai Zhuang
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qi-Yong Ran
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ping Zhou
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Jie Hua
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Liyang People's Hospital, Liyang Branch Hospital of Jiangsu Province Hospital, Liyang, China
- Guang-Quan Zhou
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Xiaopu He
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
7. Schulz D, Heilmaier M, Phillip V, Treiber M, Mayr U, Lahmer T, Mueller J, Demir IE, Friess H, Reichert M, Schmid RM, Abdelhafez M. Accurate prediction of histological grading of intraductal papillary mucinous neoplasia using deep learning. Endoscopy 2023; 55:415-422. [PMID: 36323331] [DOI: 10.1055/a-1971-1274]
Abstract
BACKGROUND Risk stratification and recommendation for surgery for intraductal papillary mucinous neoplasm (IPMN) are currently based on consensus guidelines. Risk stratification from presurgical histology is only potentially decisive owing to the low sensitivity of fine-needle aspiration. In this study, we developed and validated a deep learning-based method to distinguish IPMN with low grade dysplasia from IPMN with high grade dysplasia/invasive carcinoma using endoscopic ultrasound (EUS) images. METHODS For model training, we acquired a total of 3355 EUS images from 43 patients who underwent pancreatectomy from March 2015 to August 2021. All patients had histologically proven IPMN. We used transfer learning to fine-tune a convolutional neural network to classify "low grade IPMN" versus "high grade IPMN/invasive carcinoma." Our test set consisted of 1823 images from 27 patients: 11 recruited retrospectively, 7 prospectively, and 9 externally. We compared our results with predictions based on international consensus guidelines. RESULTS Our approach classified low grade from high grade/invasive carcinoma in the test set with an accuracy of 99.6% (95%CI 99.5%-99.9%). Our deep learning model achieved superior accuracy in predicting the histological outcome compared with any individual guideline, whose accuracies ranged between 51.8% (95%CI 31.9%-71.3%) and 70.4% (95%CI 49.8%-86.2%). CONCLUSION This pilot study demonstrated that deep learning on IPMN EUS images can predict the histological outcome with high accuracy.
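The accuracies above come with 95% confidence intervals; for a test set of n images, such an interval can be computed with the Wilson score method. This is a standard choice, not necessarily the authors' exact method, and an image-level interval ignores the clustering of images within patients.

```python
import math

def wilson_interval(correct, total, z=1.96):
    """Wilson score confidence interval for a proportion, e.g.
    classification accuracy on a test set (z = 1.96 for 95%)."""
    if total == 0:
        raise ValueError("total must be positive")
    p = correct / total
    denom = 1 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total
                                   + z * z / (4 * total * total))
    return centre - half, centre + half

# Hypothetical count: ~99.6% accuracy on the 1823 test images.
lo, hi = wilson_interval(1816, 1823)
```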
Affiliation(s)
- Dominik Schulz
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Markus Heilmaier
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Veit Phillip
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Matthias Treiber
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Ulrich Mayr
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Tobias Lahmer
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Julius Mueller
- Klinik für Innere Medizin II, Universitätsklinikum Freiburg, Freiburg, Germany
- Ihsan Ekin Demir
- Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Helmut Friess
- Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Maximilian Reichert
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Roland M Schmid
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Mohamed Abdelhafez
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
8. ELKarazle K, Raman V, Then P, Chua C. Detection of Colorectal Polyps from Colonoscopy Using Machine Learning: A Survey on Modern Techniques. Sensors (Basel) 2023; 23:1225. [PMID: 36772263] [PMCID: PMC9953705] [DOI: 10.3390/s23031225]
Abstract
Given the increased interest in utilizing artificial intelligence as an assistive tool in the medical sector, colorectal polyp detection and classification using deep learning techniques have been an active area of research in recent years. The motivation for researching this topic is that physicians miss polyps from time to time due to fatigue and lack of experience in carrying out the procedure. Unidentified polyps can cause further complications and ultimately lead to colorectal cancer (CRC), one of the leading causes of cancer mortality. Although various techniques have been presented recently, several key issues, such as the lack of sufficient training data, white-light reflection, and blur, affect the performance of such methods. This paper presents a survey of recently proposed methods for detecting polyps from colonoscopy. The survey covers benchmark dataset analysis, evaluation metrics, common challenges, standard methods of building polyp detectors, and a review of the latest work in the literature. We conclude by providing a precise analysis of the gaps and trends discovered in the reviewed literature for future work.
Affiliation(s)
- Khaled ELKarazle
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Valliappan Raman
- Department of Artificial Intelligence and Data Science, Coimbatore Institute of Technology, Coimbatore 641014, India
- Patrick Then
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Caslon Chua
- Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3122, Australia
9. Turan M, Durmus F. UC-NfNet: Deep learning-enabled assessment of ulcerative colitis from colonoscopy images. Med Image Anal 2022; 82:102587. [DOI: 10.1016/j.media.2022.102587]
10. Double-Balanced Loss for Imbalanced Colorectal Lesion Classification. Comput Math Methods Med 2022; 2022:1691075. [PMID: 35979050] [PMCID: PMC9377973] [DOI: 10.1155/2022/1691075]
Abstract
Colorectal cancer has a high incidence rate in countries around the world, and patient survival is improved by early detection. With the development of deep learning-based object detection, computer-aided diagnosis of colonoscopy images has become a reality, which can effectively reduce missed diagnoses and misdiagnoses. In medical image recognition, the assumption that training samples follow an independent and identical distribution (IID) is key to the high accuracy of deep learning; in most cases, however, medical image classes are imbalanced. This paper proposes a new loss function, the double-balanced loss function, to mitigate the impact of dataset imbalance on classification accuracy. It introduces the effects of sample size and sample difficulty into the loss calculation, addressing both sample-size imbalance and sample-difficulty imbalance, and is combined with deep learning to build a medical diagnosis model for colorectal cancer. Experiments on three colorectal white-light endoscopic image datasets verify that the proposed double-balanced loss function performs better on the imbalanced classification of colorectal medical images.
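The abstract names the two ingredients, a sample-size weight and a sample-difficulty weight, without giving the formula. A common way to combine them (our assumed form, not necessarily the paper's) multiplies a class-balanced "effective number" weight by a focal modulation of the cross-entropy:

```python
import math

def double_balanced_loss(prob_true, class_count, beta=0.999, gamma=2.0):
    """Per-sample loss weighted for both kinds of imbalance:
    - class size: effective-number weight (1-beta)/(1-beta^n_c),
      so rare classes get larger weights;
    - difficulty: focal factor (1-p)^gamma, emphasising hard samples.
    This combination illustrates the idea; the paper's exact
    double-balanced formulation may differ.
    """
    size_w = (1 - beta) / (1 - beta ** class_count)
    focal = (1 - prob_true) ** gamma
    return size_w * focal * (-math.log(max(prob_true, 1e-12)))

# A hard sample from a rare class is weighted far above an easy
# sample from a common class.
loss_rare_hard = double_balanced_loss(0.6, class_count=50)
loss_common_easy = double_balanced_loss(0.9, class_count=5000)
```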
11. Carteri RB, Grellert M, Borba DL, Marroni CA, Fernandes SA. Machine learning approaches using blood biomarkers in non-alcoholic fatty liver diseases. Artif Intell Gastroenterol 2022; 3:80-87. [DOI: 10.35712/aig.v3.i3.80]
Abstract
The high prevalence of nonalcoholic fatty liver disease (NAFLD) is an important public health concern. Early diagnosis of NAFLD, and of potential progression to nonalcoholic steatohepatitis (NASH), could reduce further advance of the disease and improve patient outcomes. Aiming to support patient diagnosis and predict specific outcomes, interest in artificial intelligence (AI) methods in hepatology has dramatically increased, especially with the application of less-invasive biomarkers. In this review, our objective was twofold: first, we presented the most frequent blood biomarkers in NAFLD and NASH; second, we reviewed recent literature on the use of machine learning (ML) methods to predict NAFLD and NASH in large cohorts. Strikingly, these studies provide insights into ML application to NAFLD patient prognostics, and ranked blood biomarkers provide a recognizable signature that allows cost-effective NAFLD prediction and also differentiates NASH patients. Future studies should consider the limitations in the current literature and expand the application of these algorithms to different populations, fortifying an already promising tool in medical science.
Affiliation(s)
- Randhall B Carteri
- Department of Nutrition, Methodist University Center - IPA, Porto Alegre 90420-060, Rio Grande do Sul, Brazil
- Department of Health and Behaviour, Catholic University of Pelotas, Pelotas 96015-560, Rio Grande do Sul, Brazil
- Mateus Grellert
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis 88040-900, Santa Catarina, Brazil
- Daniela Luisa Borba
- Postgraduate Program in Hepatology, Federal University of Health Sciences of Porto Alegre, Porto Alegre 90050-170, Rio Grande do Sul, Brazil
- Claudio Augusto Marroni
- Department of Gastroenterology and Hepatology, Federal University of Health Sciences of Porto Alegre, Porto Alegre 90050-170, Rio Grande do Sul, Brazil
- Sabrina Alves Fernandes
- Postgraduate Program in Hepatology, Federal University of Health Sciences of Porto Alegre, Porto Alegre 90050-170, Rio Grande do Sul, Brazil
12. Luca M, Ciobanu A. Polyp detection in video colonoscopy using deep learning. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-219276]
Abstract
Automatic processing of video colonoscopy is a challenge, and further development of computer-assisted diagnosis is very helpful in assessing the correctness of the exam, in e-learning and training, and in compiling statistics on polyp malignancy and surveillance. New devices and programming languages are emerging, and deep learning has already begun to furnish astonishing results in the quest for high-speed, optimal polyp detection software. This paper presents a successful attempt at detecting intestinal polyps in real-time video colonoscopy with deep learning, using MobileNet.
Affiliation(s)
- Mihaela Luca
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania
- Adrian Ciobanu
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania
13.
Kumar N, Sharma M, Singh VP, Madan C, Mehandia S. An empirical study of handcrafted and dense feature extraction techniques for lung and colon cancer classification from histopathological images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103596]
14.
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018 PMCID: PMC9086187 DOI: 10.3389/fgene.2022.844391]
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists, and over time, unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method in assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance is not yet comparable to that of an expert endoscopist. Here, we propose a multiple classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models can better learn and extract different information within the image; our ensemble classifier can therefore derive a more reliable decision than each individual classifier. The extracted combined information inherits ResNet's advantage of residual connections, while also extracting objects covered by occlusions through the depth-wise separable convolution layers of the Xception model. We applied our strategy to still frames extracted from a colonoscopy video. It outperformed other state-of-the-art techniques with a performance measure greater than 95% on every evaluation metric. Our method will help researchers and gastroenterologists develop clinically applicable, computationally guided tools for colonoscopy screening, and may be extended to other clinical diagnoses that rely on images.
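The decision-fusion step described in the abstract can be illustrated with a minimal sketch, assuming simple weighted averaging of softmax probabilities from the two branches (function names and toy logits are illustrative, not the authors' code):

```python
import math

def softmax(logits):
    # numerically stable softmax over one sample's logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(logits_a, logits_b, w_a=0.5):
    # decision-level fusion: weighted average of the two classifiers' probabilities
    pa, pb = softmax(logits_a), softmax(logits_b)
    return [w_a * a + (1 - w_a) * b for a, b in zip(pa, pb)]

# toy logits: a ResNet-style branch favours class 0, an Xception-style branch class 1
fused = fuse([2.0, 0.1, -1.0], [0.5, 1.5, -0.5])
prediction = max(range(len(fused)), key=fused.__getitem__)
```

With equal weights the branch that is more confident dominates the fused decision, which is the intuition behind consulting multiple classifiers.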
Affiliation(s)
- Pallabi Sharma
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora
- Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai
- Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States
- MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States
15.
Detection and Classification of Colorectal Polyp Using Deep Learning. BioMed Research International 2022; 2022:2805607. [PMID: 35463989 PMCID: PMC9033358 DOI: 10.1155/2022/2805607]
Abstract
Colorectal cancer (CRC) is the third most deadly cancer in the world, and its incidence continues to rise, so timely and accurate diagnosis is required to save patients' lives. Cancer grows from polyps, which can be either cancerous or noncancerous; if cancerous polyps are detected accurately and removed in time, the dangerous consequences of cancer can be reduced to a large extent. Colonoscopy is used to detect the presence of colorectal polyps. However, manual examinations performed by experts are prone to various errors. Therefore, some researchers have utilized machine and deep learning-based models to automate the diagnosis process; however, existing models suffer from overfitting and gradient vanishing problems. To overcome these problems, a convolutional neural network (CNN)-based deep learning model is proposed. Initially, guided image filtering and dynamic histogram equalization are used to filter and enhance the colonoscopy images. Thereafter, a Single Shot MultiBox Detector (SSD) is used to efficiently detect colorectal polyps in the colonoscopy images. Finally, fully connected layers with dropout are used to classify the polyp classes. Extensive experimental results on a benchmark dataset show that the proposed model achieves significantly better results than competitive models, detecting and classifying colorectal polyps from colonoscopy images with 92% accuracy.
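The contrast-enhancement step mentioned above can be illustrated with classical global histogram equalization (the paper uses a dynamic variant; this plain-Python sketch shows only the basic CDF-remapping idea):

```python
def equalize(pixels, levels=256):
    """Classical global histogram equalization on a flat list of gray levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the gray levels
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:                     # flat image: nothing to stretch
        return list(pixels)
    # remap each level through the normalized CDF
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

enhanced = equalize([0, 0, 255, 255])
```

Mid-range gray levels are spread across the full dynamic range, which is what makes low-contrast polyp regions easier for a detector to see.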
16.
Artificial intelligence-enhanced white-light colonoscopy with attention guidance predicts colorectal cancer invasion depth. Gastrointest Endosc 2021; 94:627-638.e1. [PMID: 33852902 DOI: 10.1016/j.gie.2021.03.936]
Abstract
BACKGROUND AND AIMS Endoscopic submucosal dissection (ESD) and EMR are applied in treating superficial colorectal neoplasms but are contraindicated by deeply invasive colorectal cancer (CRC). The invasion depth of neoplasms can be examined by an automated artificial intelligence (AI) system to determine the applicability of ESD and EMR. METHODS A deep convolutional neural network with a tumor localization branch to guide invasion depth classification was constructed on the GoogLeNet architecture. The model was trained using 7734 nonmagnified white-light colonoscopy (WLC) images supplemented by image augmentation from 657 lesions labeled with histopathologic analysis of invasion depth. An independent testing dataset consisting of 1634 WLC images from 156 lesions was used to validate the model. RESULTS For predicting noninvasive and superficially invasive neoplasms, the model achieved an overall accuracy of 91.1% (95% confidence interval [CI], 89.6%-92.4%), with 91.2% sensitivity (95% CI, 88.8%-93.3%) and 91.0% specificity (95% CI, 89.0%-92.7%) at an optimal cutoff of .41 and the area under the receiver operating characteristic (AUROC) curve of .970 (95% CI, .962-.978). Inclusion of the advanced CRC data significantly increased the sensitivity in differentiating superficial neoplasms from deeply invasive early CRC to 65.3% (95% CI, 61.9%-68.8%) with an AUROC curve of .729 (95% CI, .699-.759), similar to experienced endoscopists (.691; 95% CI, .624-.758). CONCLUSIONS We have developed an AI-enhanced attention-guided WLC system that differentiates noninvasive or superficially submucosal invasive neoplasms from deeply invasive CRC with high accuracy, sensitivity, and specificity.
17.
Jain S, Seal A, Ojha A, Yazidi A, Bures J, Tacheci I, Krejcar O. A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images. Comput Biol Med 2021; 137:104789. [PMID: 34455302 DOI: 10.1016/j.compbiomed.2021.104789]
Abstract
Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some of them focus on specific anomalies such as bleeding, polyps, or lesions; however, relatively few generic methods have been proposed to detect all of these common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN) based model, 'WCENet', is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of four categories: polyp, vascular, inflammatory, or normal. If the image falls into one of the abnormal categories, it is processed in the second phase for anomaly localization, where a fusion of Grad-CAM++ and a custom SegNet segments the anomalous region. The WCENet classifier attains an accuracy of 98% and an area under the receiver operating characteristic curve of 99%. The WCENet segmentation model obtains a frequency-weighted intersection over union of 81% and an average Dice score of 56% on the KID dataset, outperforming nine state-of-the-art conventional machine learning and deep learning models on that dataset. The proposed model demonstrates potential for clinical applications.
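The two-phase design can be sketched as a simple dispatch, with hypothetical `classify` and `localize` callables standing in for the attention CNN and the Grad-CAM++/SegNet fusion:

```python
def wcenet_pipeline(image, classify, localize):
    """Phase 1: four-way classification; phase 2: localization for abnormal frames only."""
    label = classify(image)           # 'polyp' | 'vascular' | 'inflammatory' | 'normal'
    if label == "normal":
        return label, None            # normal frames skip the segmentation phase
    return label, localize(image)     # e.g. a Grad-CAM++/SegNet-style mask

# toy stand-ins for the two models
label, mask = wcenet_pipeline(
    "frame-1",
    classify=lambda img: "polyp",
    localize=lambda img: [[0, 1], [1, 0]],
)
```

Skipping localization for normal frames is what makes the two-phase design cheaper than segmenting every image.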
Affiliation(s)
- Samir Jain
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Ayan Seal
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Aparajita Ojha
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Anis Yazidi
- Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway; Department of Plastic and Reconstructive Surgery, Oslo University Hospital, Oslo, Norway; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Jan Bures
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ilja Tacheci
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ondrej Krejcar
- Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic; Malaysia Japan International Institute of Technology, Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100, Kuala Lumpur, Malaysia
18.
Yoo BS, D'Souza SM, Houston K, Patel A, Lau J, Elmahdi A, Parekh PJ, Johnson D. Artificial intelligence and colonoscopy - enhancements and improvements. Artif Intell Gastrointest Endosc 2021; 2:157-167. [DOI: 10.37126/aige.v2.i4.157]
Abstract
Artificial intelligence is a technology that processes and analyzes information with reproducibility and accuracy. Its application in medicine, especially in the field of gastroenterology, has great potential to facilitate the diagnosis of various disease states. Currently, the role of artificial intelligence in colonoscopy revolves around enhanced polyp detection and characterization. The aim of this article is to review the current and potential future applications of artificial intelligence for enhancing the quality of detection of colorectal neoplasia.
Affiliation(s)
- Byung Soo Yoo
- Department of Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Steve M D'Souza
- Department of Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Kevin Houston
- Department of Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Ankit Patel
- Department of Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- James Lau
- Department of Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Alsiddig Elmahdi
- Department of Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Parth J Parekh
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23505, United States
- David Johnson
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23505, United States
19.
Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening. J Digit Imaging 2021; 33:619-631. [PMID: 31848896 DOI: 10.1007/s10278-019-00269-1]
Abstract
Transfer learning using deep pre-trained convolutional neural networks is increasingly used to solve a large number of problems in the medical field. Despite being trained on images from an entirely different domain, these networks are flexible enough to adapt to problems in a new domain. Transfer learning involves fine-tuning a pre-trained network with optimal values of hyperparameters such as learning rate, batch size, and number of training epochs. The process of training the network identifies the features relevant to solving a specific problem, and adapting the pre-trained network to a different problem requires fine-tuning until relevant features are obtained. This is facilitated by the large number of filters present in the convolutional layers of the pre-trained network. Only a few of these filters are useful for solving a problem in a different domain, while the others are irrelevant and may only reduce the efficacy of the network. By minimizing the number of filters required to solve the problem, the efficiency of training the network can therefore be improved. In this study, we consider identifying relevant filters in the pre-trained networks AlexNet and VGG-16 to detect cervical cancer from cervix images. This paper presents a novel hybrid transfer learning technique in which a CNN is built and trained from scratch, with initial weights taken from only those filters identified as relevant in AlexNet and VGG-16. This study used 2198 cervix images, with 1090 belonging to the negative class and 1108 to the positive class. Our experiment using hybrid transfer learning achieved an accuracy of 91.46%.
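The filter-selection idea can be sketched as follows, assuming relevance is scored by mean absolute activation on target-domain images (the scoring rule and all names here are illustrative; the paper's exact criterion may differ):

```python
def select_relevant_filters(filters, activations, keep=2):
    # score each pretrained filter by its mean absolute activation on the
    # target-domain images; keep the top-`keep` as initial weights for the new CNN
    scores = [(sum(abs(a) for a in acts) / len(acts), i)
              for i, acts in enumerate(activations)]
    scores.sort(reverse=True)
    kept = sorted(i for _, i in scores[:keep])
    return [filters[i] for i in kept]

# four toy filters; only two respond strongly to the new-domain patches
relevant = select_relevant_filters(
    ["f0", "f1", "f2", "f3"],
    [[0.1, 0.2], [5.0, 4.0], [0.0, 0.1], [2.0, 1.0]],
)
```

Initializing a smaller network with only the retained filters is what reduces the training cost relative to fine-tuning the full pre-trained model.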
20.
Computer-Aided Colon Polyp Detection on High Resolution Colonoscopy Using Transfer Learning Techniques. Sensors 2021; 21:5315. [PMID: 34450756 PMCID: PMC8402119 DOI: 10.3390/s21165315]
Abstract
Colonoscopies reduce the incidence of colorectal cancer through early recognition and resection of colon polyps. However, the polyp miss rate is as high as 26% in conventional colonoscopy, and the search for methods to decrease it is now a paramount task. A number of algorithms and systems have been developed to enhance polyp detection, but few are suitable for real-time detection or classification due to their limited computational ability. Recent studies indicate that automated colon polyp detection systems are developing at an astonishing speed, yet real-time detection with classification remains a largely unexplored field. Newer image pattern recognition algorithms with convolutional neural network (CNN) transfer learning have shed light on this topic. We propose a study using real-time colonoscopies with the CNN transfer learning approach. Several multi-class classifiers were trained, with mAP ranging from 38% to 49%. Based on an Inception v2 model, a detector adopting a Faster R-CNN was trained; its mAP was 77%, an improvement of 35% over the same type of multi-class classifier. Our results therefore indicate that the polyp detection model can attain high accuracy, but polyp type classification still leaves room for improvement.
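Detection metrics such as mAP rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal generic sketch of the IoU computation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted polyp box typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and mAP averages precision over recall levels under that matching rule.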
21.
Doherty T, McKeever S, Al-Attar N, Murphy T, Aura C, Rahman A, O'Neill A, Finn SP, Kay E, Gallagher WM, Watson RWG, Gowen A, Jackman P. Feature fusion of Raman chemical imaging and digital histopathology using machine learning for prostate cancer detection. Analyst 2021; 146:4195-4211. [PMID: 34060548 DOI: 10.1039/d1an00075f]
Abstract
The diagnosis of prostate cancer is challenging due to the heterogeneity of its presentations, leading to the overdiagnosis and overtreatment of clinically unimportant disease. Accurate diagnosis can directly benefit a patient's quality of life and prognosis. Towards addressing this issue, we present a learning model for the automatic identification of prostate cancer. While many prostate cancer studies have adopted Raman spectroscopy approaches, none have utilised the combination of Raman Chemical Imaging (RCI) and other imaging modalities. This study uses multimodal images formed from stained Digital Histopathology (DP) and unstained RCI. The approach was developed and tested on a set of 178 clinical samples from 32 patients, containing a range of non-cancerous, Gleason grade 3 (G3) and grade 4 (G4) tissue microarray samples. For each histological sample, there is a pathologist-labelled DP-RCI image pair. The hypothesis tested was whether multimodal image models can outperform single-modality baseline models in terms of diagnostic accuracy. Binary non-cancer/cancer models and the more challenging G3/G4 differentiation were investigated. Regarding G3/G4 classification, the multimodal approach achieved a sensitivity of 73.8% and specificity of 88.1%, while the baseline DP model showed a sensitivity and specificity of 54.1% and 84.7% respectively. The multimodal approach demonstrated a statistically significant 12.7% AUC advantage over the baseline, with a value of 85.8% compared to 73.1%, also outperforming models based solely on RCI and on mean and median Raman spectra. Feature fusion of DP and RCI does not improve the more trivial task of tumour identification but does deliver an observed advantage in G3/G4 discrimination. Building on these promising findings, future work could include the acquisition of larger datasets for enhanced model generalization.
Affiliation(s)
- Trevor Doherty
- Technological University Dublin, School of Computer Science, City Campus, Grangegorman Lower, Dublin 7, Ireland.
22.
Bardhi O, Sierra-Sosa D, Garcia-Zapirain B, Bujanda L. Deep Learning Models for Colorectal Polyps. Information 2021; 12:245. [DOI: 10.3390/info12060245]
Abstract
Colorectal cancer is one of the main causes of cancer incidence and cancer deaths worldwide. Undetected colon polyps, whether benign or malignant, lead to late diagnosis of colorectal cancer, and computer-aided devices have helped to decrease the polyp miss rate. The application of deep learning algorithms and techniques has escalated during the last decade, and many scientific studies have been published on detecting, localizing, and classifying colon polyps. We present here a brief review of the latest published studies and compare their accuracy with our own results, obtained by training and testing a convolutional neural network and autoencoder model on three independent datasets. A train/validate/test split of 75%, 15%, and 15%, respectively, was performed for each dataset. An accuracy of 0.937 was achieved for CVC-ColonDB, 0.951 for CVC-ClinicDB, and 0.967 for ETIS-LaribPolypDB. Our results suggest slight improvements compared to the algorithms used to date.
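The 75/15/15 split used above can be sketched with a seeded shuffle (a generic recipe, not the authors' code):

```python
import random

def split_dataset(items, train=0.75, val=0.15, seed=42):
    # shuffle deterministically, then slice into train/validation/test partitions
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1000))
```

Fixing the seed makes the partition reproducible, which matters when accuracies from different models are compared on the same datasets.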
23.
Franklin MM, Schultz FA, Tafoya MA, Kerwin AA, Broehm CJ, Fischer EG, Gullapalli RR, Clark DP, Hanson JA, Martin DR. A Deep Learning Convolutional Neural Network Can Differentiate Between Helicobacter Pylori Gastritis and Autoimmune Gastritis With Results Comparable to Gastrointestinal Pathologists. Arch Pathol Lab Med 2021; 146:117-122. [PMID: 33861314 DOI: 10.5858/arpa.2020-0520-oa]
Abstract
CONTEXT.— Pathology studies using convolutional neural networks (CNNs) have focused on neoplasms, while studies in inflammatory pathology are rare. We previously demonstrated a CNN that differentiates reactive gastropathy, Helicobacter pylori gastritis (HPG), and normal gastric mucosa. OBJECTIVE.— To determine whether a CNN can differentiate the following 2 gastric inflammatory patterns: autoimmune gastritis (AG) and HPG. DESIGN.— Gold standard diagnoses were blindly established by 2 gastrointestinal (GI) pathologists. One hundred eighty-seven cases were scanned for analysis by HALO-AI. All levels and tissue fragments per slide were included for analysis. The cases were randomized, 112 (60%; 60 HPG, 52 AG) in the training set and 75 (40%; 40 HPG, 35 AG) in the test set. A HALO-AI correct area distribution (AD) cutoff of 50% or more was required to credit the CNN with the correct diagnosis. The test set was blindly reviewed by pathologists with different levels of GI pathology expertise as follows: 2 GI pathologists, 2 general surgical pathologists, and 2 residents. Each pathologist rendered their preferred diagnosis, HPG or AG. RESULTS.— At the HALO-AI AD percentage cutoff of 50% or more, the CNN results were 100% concordant with the gold standard diagnoses. On average, AG cases had 84.7% HALO-AI AG AD and HPG cases had 87.3% HALO-AI HPG AD. The GI pathologists, general surgical pathologists, and residents were on average 100%, 86%, and 57% concordant with the gold standard diagnoses, respectively. CONCLUSIONS.— A CNN can distinguish between cases of HPG and AG with accuracy equal to GI pathologists.
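The ≥50% area-distribution cutoff can be sketched with a hypothetical helper (the function name and the "indeterminate" fallback are illustrative, not part of HALO-AI):

```python
def call_diagnosis(area_dist, cutoff=0.5):
    # credit the model with a diagnosis only when the winning class
    # covers at least `cutoff` of the classified tissue area
    label = max(area_dist, key=area_dist.get)
    return label if area_dist[label] >= cutoff else "indeterminate"

# the abstract's average area distributions for an autoimmune gastritis case
ag_call = call_diagnosis({"AG": 0.847, "HPG": 0.153})
```

Requiring a majority of the tissue area to agree makes the slide-level call robust to small misclassified fragments.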
Affiliation(s)
- Michael M Franklin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Fred A Schultz
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Marissa A Tafoya
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Audra A Kerwin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Cory J Broehm
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Edgar G Fischer
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Rama R Gullapalli
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Douglas P Clark
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Joshua A Hanson
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- David R Martin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
24.
Bari BS, Islam MN, Rashid M, Hasan MJ, Razman MAM, Musa RM, Ab Nasir AF, P.P. Abdul Majeed A. A real-time approach of diagnosing rice leaf disease using deep learning-based faster R-CNN framework. PeerJ Comput Sci 2021; 7:e432. [PMID: 33954231 PMCID: PMC8049121 DOI: 10.7717/peerj-cs.432]
Abstract
Diseases of rice leaves often threaten the sustainable production of rice, affecting many farmers around the world. Early diagnosis and appropriate remedy of rice leaf infection are crucial to facilitating healthy growth of the plants and ensuring adequate supply and food security for a rapidly increasing population. Machine-driven disease diagnosis systems could therefore mitigate the limitations of conventional leaf disease diagnosis techniques, which are often time-consuming, inaccurate, and expensive, and computer-assisted rice leaf disease diagnosis systems are becoming very popular. However, several limitations mar the efficacy and usage of such systems: strong image backgrounds, vague symptom edges, dissimilarity in image-capturing weather, lack of real-field rice leaf image data, variation in symptoms from the same infection, multiple infections producing similar symptoms, and the lack of an efficient real-time system. To mitigate these problems, a faster region-based convolutional neural network (Faster R-CNN) was employed in the present research for real-time detection of rice leaf diseases. The Faster R-CNN algorithm introduces an advanced RPN architecture that addresses object location very precisely to generate candidate regions. The robustness of the Faster R-CNN model is enhanced by training it with publicly available online datasets and our own real-field rice leaf dataset. The proposed deep-learning-based approach was observed to be effective in the automatic diagnosis of three discriminative rice leaf diseases, rice blast, brown spot, and hispa, with accuracies of 98.09%, 98.85%, and 99.17%, respectively; moreover, the model was able to identify a healthy rice leaf with an accuracy of 99.25%.
The results obtained herein demonstrated that the Faster R-CNN model offers a high-performing rice leaf infection identification system that could diagnose the most common rice diseases more precisely in real-time.
Affiliation(s)
- Bifta Sama Bari
- Faculty of Electrical & Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Md Nahidul Islam
- Faculty of Electrical & Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Mamunur Rashid
- Faculty of Electrical & Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Md Jahid Hasan
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Mohd Azraai Mohd Razman
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Rabiu Muazu Musa
- Centre for Fundamental and Continuing Education, Universiti Malaysia Terengganu, Kuala Nerus, Terengganu, Malaysia
- Ahmad Fakhri Ab Nasir
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Centre for Software Development & Integrated Computing, Universiti Malaysia Pahang, Pahang Darul Makmur, Pekan, Malaysia
- Anwar P.P. Abdul Majeed
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Centre for Software Development & Integrated Computing, Universiti Malaysia Pahang, Pahang Darul Makmur, Pekan, Malaysia
25.
Cui P, Shu T, Lei J, Chen W. Nerve recognition in percutaneous transforaminal endoscopic discectomy using convolutional neural network. Med Phys 2021; 48:2279-2288. [PMID: 33683736 DOI: 10.1002/mp.14822]
Abstract
PURPOSE Percutaneous transforaminal endoscopic discectomy (PTED) is one of the most common minimally invasive surgical methods used clinically in recent years. In this study, we developed a computer-aided detection system (CADS) based on a convolutional neural network (CNN) to automatically recognize nerve and dura mater images during PTED surgery. METHODS We collected surgical videos from 65 patients with lumbar disc herniation who underwent PTED, converted the videos into images, and randomly divided the images into a training dataset, a validation dataset, and a test dataset. The training and validation datasets comprised 10,454 images containing nerve and dura mater from 50 randomly selected patients; the test dataset contained 12,000 images from the remaining 15 patients. RESULTS Sensitivity, specificity, and accuracy reached 90.90%, 93.68%, and 92.29%, respectively. The CADS recognized the nerve and dura mater with no significant difference (P > 0.05) between patients in the test dataset. In comparison with clinicians of different levels, the performance of the CADS was lower than that of a spinal endoscopist but significantly higher than that of general surgeons; with the assistance of the CADS, the performance of the general surgeons approached that of the spinal endoscopist. CONCLUSIONS A CNN can recognize nerve and dura mater images well in PTED surgery and can help general surgeons improve their ability to recognize tissues during the operation.
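The reported sensitivity, specificity, and accuracy follow directly from binary confusion-matrix counts; a generic sketch (the counts below are made up for illustration, not the study's data):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# illustrative counts only
sens, spec, acc = confusion_metrics(tp=90, fp=6, tn=94, fn=10)
```

Reporting all three together, as the study does, guards against a classifier that scores well on accuracy alone simply because one class dominates the test set.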
Affiliation(s)
- Peng Cui
- Biomedical Information Engineering Lab, The University of Aizu, Aizu-Wakamatsu City, Fukushima, 965-8580, Japan
- Tao Shu
- Department of Spine Surgery, Pinghu Hospital, Health Science Center, Shenzhen University, Shenzhen, 518116, China
- Jun Lei
- Department of Orthopedics, Zhongnan Hospital, Wuhan University, Wuhan, Hubei, 430071, China
- Wenxi Chen
- Biomedical Information Engineering Lab, The University of Aizu, Aizu-Wakamatsu City, Fukushima, 965-8580, Japan
|
26
|
Prediction of the histology of colorectal neoplasm in white light colonoscopic images using deep learning algorithms. Sci Rep 2021; 11:5311. [PMID: 33674628 PMCID: PMC7935886 DOI: 10.1038/s41598-021-84299-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2020] [Accepted: 02/15/2021] [Indexed: 01/01/2023] Open
Abstract
The treatment plan for colorectal neoplasm differs based on histology. Although new endoscopic imaging systems have been developed, there are clear diagnostic thresholds and requirements for using them. To overcome these limitations, we trained convolutional neural networks (CNNs) with endoscopic images and developed a computer-aided diagnostic (CAD) system that predicts the pathologic histology of colorectal adenoma. We retrospectively collected colonoscopic images from two tertiary hospitals and labeled 3400 images into one of 4 classes according to the final histology: normal, low-grade dysplasia, high-grade dysplasia, and adenocarcinoma. We implemented a CAD system based on ensemble learning with three CNN models that transfer the knowledge learned from common digital photography images to the colonoscopic image domain. The deep learning models were trained to classify colorectal adenoma into these 4 classes. We compared the outcomes of the CNN models to those of two endoscopist groups with different years of experience, and visualized the model predictions using Class Activation Mapping. In our multi-center study, the CNN-CAD system identified the histology of colorectal adenoma with a sensitivity of 77.25%, a specificity of 92.42%, a positive predictive value of 77.16%, and a negative predictive value of 92.58%, averaged over the 4 classes, with a mean diagnostic time of 0.12 s per image. Our experiments demonstrate that the CNN-CAD showed performance similar to that of endoscopic experts and outperformed that of trainees. The model visualization results also showed reasonable regions of interest that explain the classification decisions of the CAD system. We suggest that the CNN-CAD system can predict the histology of colorectal adenoma.
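The ensemble above combines three transfer-learned CNNs. A common fusion rule for such ensembles is soft voting, i.e. averaging the models' class-probability outputs before taking the argmax; the abstract does not specify the fusion rule, so this sketch assumes it:

```python
def soft_vote(prob_vectors):
    """Average class-probability vectors from several models; return (argmax class, mean vector).

    prob_vectors: one probability vector per model, all over the same classes
    (e.g. normal, low-grade dysplasia, high-grade dysplasia, adenocarcinoma).
    """
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    mean = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    predicted_class = max(range(n_classes), key=mean.__getitem__)
    return predicted_class, mean
```

Averaging probabilities rather than hard labels lets a confident model outvote two uncertain ones, which is often why ensembles of heterogeneous CNNs outperform any single member.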
|
27
|
|
28
|
Use of artificial intelligence for detection of gastric lesions by magnetically controlled capsule endoscopy. Gastrointest Endosc 2021; 93:133-139.e4. [PMID: 32470426 DOI: 10.1016/j.gie.2020.05.027] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Accepted: 05/01/2020] [Indexed: 02/08/2023]
Abstract
BACKGROUND AND AIMS Magnetically controlled capsule endoscopy (MCE) has become an efficient diagnostic modality for gastric diseases. We developed a novel automatic gastric lesion detection system to assist in diagnosis and reduce inter-physician variation. This study aimed to evaluate the diagnostic capability of the computer-aided detection system for MCE images. METHODS We developed a novel automatic gastric lesion detection system based on a convolutional neural network (CNN) and a faster region-based convolutional neural network (Faster R-CNN). A total of 1,023,955 MCE images from 797 patients were used to train and test the system. These images were divided into 7 categories (erosion, polyp, ulcer, submucosal tumor, xanthoma, normal mucosa, and invalid images). The primary endpoint was the sensitivity of the system. RESULTS The system detected gastric focal lesions with 96.2% sensitivity (95% confidence interval [CI], 95.7%-96.5%), 76.2% specificity (95% CI, 75.97%-76.3%), 16.0% positive predictive value (95% CI, 15.7%-16.3%), 99.7% negative predictive value (95% CI, 99.74%-99.79%), and 77.1% accuracy (95% CI, 76.9%-77.3%) (sensitivity was 99.3% for erosions; 96.5% for polyps; 89.3% for ulcers; 87.2% for submucosal tumors; 90.6% for xanthomas; 67.8% for normal mucosa; and 96.1% for invalid images). Analysis of the receiver operating characteristic curve showed that the area under the curve for all positive images was 0.84. Image processing time was 44 milliseconds per image for the system and 0.38 ± 0.29 seconds per image for clinicians (P < .001). The kappa value for two repeated reads was 1. CONCLUSIONS The CNN and Faster R-CNN-based diagnostic system showed good performance in diagnosing gastric focal lesions in MCE images.
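The abstract reports 95% confidence intervals for sensitivity, specificity, and related proportions. The exact interval method is not stated there; a standard choice for binomial proportions such as sensitivity = TP/(TP+FN) is the Wilson score interval, sketched below:

```python
import math

def wilson_interval(successes, total, z=1.96):
    """95% Wilson score interval for a binomial proportion.

    Better behaved than the normal approximation near 0 or 1,
    which matters for proportions like a 99.7% negative predictive value.
    """
    if total <= 0:
        raise ValueError("total must be positive")
    p = successes / total
    denom = 1 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total
                                   + z * z / (4 * total * total))
    return centre - half, centre + half
```

Note how the interval tightens as the sample grows, which is why CIs computed over a million images are as narrow as those quoted above.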
|
29
|
Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 08/28/2020] [Accepted: 08/28/2020] [Indexed: 12/17/2022]
|
30
|
Enhanced Image-Based Endoscopic Pathological Site Classification Using an Ensemble of Deep Learning Models. SENSORS 2020; 20:s20215982. [PMID: 33105736 PMCID: PMC7660061 DOI: 10.3390/s20215982] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Revised: 10/19/2020] [Accepted: 10/21/2020] [Indexed: 12/14/2022]
Abstract
Colorectal cancer and gastric cancer are increasingly common and are two of the most frequent causes of cancer death worldwide. Therefore, the early detection and treatment of these types of cancer are crucial for saving lives. With advances in technology and image processing techniques, computer-aided diagnosis (CAD) systems have been developed and applied in several medical settings to assist doctors in diagnosing diseases using imaging technology. In this study, we propose a CAD method to preclassify in vivo endoscopic images into negative (images without evidence of disease) and positive (images that possibly include pathological sites such as a polyp, or suspected regions with complex vascular information) cases. The goal of our study is to help doctors focus on the positive frames of an endoscopic sequence rather than the negative frames; consequently, we can enhance performance and reduce the workload of doctors in the diagnosis procedure. Although previous studies have addressed this problem, they were mostly based on a single classification model, which limits classification performance. We therefore propose the use of multiple classification models based on ensemble learning techniques to enhance the performance of pathological site classification. Through experiments with an open database, we confirmed that an ensemble of multiple deep learning-based models with different network architectures enhances the performance of pathological site classification in a CAD system compared with state-of-the-art methods.
|
31
|
Wimmer G, Häfner M, Uhl A. Improving CNN training on endoscopic image data by extracting additionally training data from endoscopic videos. Comput Med Imaging Graph 2020; 86:101798. [PMID: 33075676 DOI: 10.1016/j.compmedimag.2020.101798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2019] [Revised: 06/23/2020] [Accepted: 09/24/2020] [Indexed: 02/07/2023]
Abstract
In this work we present a technique to deal with one of the biggest problems in applying convolutional neural networks (CNNs) to computer-assisted endoscopic image diagnosis: the insufficient amount of training data. Based on patches from endoscopic images of colonic polyps with given label information, our proposed technique acquires additional (labeled) training data by tracking the area shown in the patches through the corresponding endoscopic videos and extracting additional image patches from frames showing these areas. Similar to widely used augmentation strategies, additional training data are produced by adding images with different orientations, scales, and points of view than the original images. Contrary to augmentation techniques, however, we do not artificially produce image data but use real image data from videos under different image recording conditions (different viewpoints and image qualities). By means of the proposed method, and by filtering out all extracted images with insufficient image quality, we are able to increase the amount of labeled image data by a factor of 39. We show that the proposed method clearly and consistently improves the performance of CNNs.
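The pipeline above filters out extracted frames with insufficient image quality, but the abstract does not state the criterion. The sketch below substitutes a common sharpness heuristic, the variance of the Laplacian response, purely as an assumed stand-in; the threshold and names are illustrative:

```python
def laplacian_variance(img):
    """Sharpness score for a 2-D grayscale image (list of rows):
    variance of the 4-neighbour Laplacian over interior pixels."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def keep_frame(img, threshold=10.0):
    """Reject blurry frames: low Laplacian variance means few sharp edges."""
    return laplacian_variance(img) >= threshold
```

Blurry or motion-smeared video frames produce a nearly flat Laplacian response, so thresholding its variance is a cheap way to discard low-quality frames before adding them to the training set.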
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Jakob-Haringerstrasse 2, Salzburg 5020, Austria
- Michael Häfner
- Department of Gastroenterology and Hepatology, St. Elisabeth Hospital, Landstraßer Hauptstraße 4a, Wien A-1030, Austria
- Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Jakob-Haringerstrasse 2, Salzburg 5020, Austria
|
32
|
Abdolahi M, Salehi M, Shokatian I, Reiazi R. Artificial intelligence in automatic classification of invasive ductal carcinoma breast cancer in digital pathology images. Med J Islam Repub Iran 2020; 34:140. [PMID: 33437736 PMCID: PMC7787039 DOI: 10.34171/mjiri.34.140] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2019] [Indexed: 12/21/2022] Open
Abstract
Background: Breast cancer is one of the most common causes of death in women. Early diagnosis and detection of invasive ductal carcinoma (IDC) are key to its treatment. Computer-aided approaches have great potential to improve diagnostic accuracy. In this paper, we propose a deep learning-based method for the automatic classification of IDC in whole-slide images (WSI) of breast cancer. Furthermore, different types of deep neural network training, such as training from scratch and transfer learning, were evaluated for classifying IDC. Methods: In total, 277,524 image patches of 50×50 pixels from the original images were used for model training. In the first approach, we trained a simple convolutional neural network (the baseline model) on these images. In the second approach, we used the pre-trained VGG-16 CNN model via feature extraction and fine-tuning for the classification of breast pathology images. Results: Our baseline model achieved a better result for the automatic classification of IDC in terms of F-measure and accuracy (83%, 85%) in comparison with the original paper on this dataset, and a result comparable with a newer study that introduced an accepted-rejected pooling layer. Transfer learning via feature extraction also yielded better results (81%, 81%) in comparison with handcrafted features, and better classification results in comparison with the baseline model. Conclusion: The experimental results demonstrate that deep learning approaches yielded better results than handcrafted features. In addition, transfer learning in histopathology image analysis yielded significant results in much less time compared with training from scratch.
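The F-measure and accuracy pairs reported above derive from the standard binary confusion counts; for reference, the definitions are:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F-measure from binary confusion counts
    (e.g. IDC-positive vs IDC-negative patches)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F-measure is the harmonic mean of precision and recall.
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}
```

On imbalanced patch sets (most tissue is non-IDC), the F-measure is the more informative of the two figures, since accuracy can stay high even when positives are missed.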
Affiliation(s)
- Mohammad Abdolahi
- Department of Radiation Technology, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Mohammad Salehi
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Medical Image and Signal Processing Research Core, Iran University of Medical Sciences, Tehran, Iran
- Student Research Committee, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Iman Shokatian
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Medical Image and Signal Processing Research Core, Iran University of Medical Sciences, Tehran, Iran
- Student Research Committee, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Reza Reiazi
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Medical Image and Signal Processing Research Core, Iran University of Medical Sciences, Tehran, Iran
|
33
|
Sierra F, Gutierrez Y, Martinez F. An online deep convolutional polyp lesion prediction over Narrow Band Imaging (NBI). ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:2412-2415. [PMID: 33018493 DOI: 10.1109/embc44109.2020.9176534] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Polyps, abnormal protuberances along the intestinal tract, are the main biomarker for diagnosing gastrointestinal cancer. During routine colonoscopies such polyps are localized and coarsely characterized according to microvascular and surface textural patterns. Narrow-band imaging (NBI) sequences have emerged as a complementary technique to enhance the description of suspicious mucosal surfaces according to their blood vessel architecture. Nevertheless, a high number of misleading polyp characterizations, together with expert dependency during evaluation, reduce the possibility of effective disease treatment. Additionally, challenges during colonoscopy, such as abrupt camera motions, intensity changes, and artifacts, complicate the diagnosis task. This work introduces a robust frame-level convolutional strategy capable of characterizing and predicting hyperplastic, adenomatous, and serrated polyps over NBI sequences. The proposed strategy was evaluated on a total of 76 videos, achieving an average accuracy of 90.79% in distinguishing among these three classes. Remarkably, the approach achieves 100% accuracy in differentiating intermediate serrated polyps, whose evaluation is challenging even for expert gastroenterologists. The approach was also favorable for supporting polyp resection decisions, achieving a perfect score on the evaluated dataset. Clinical relevance: The proposed approach supports observable histological characterization of polyps during a routine colonoscopy, avoiding misclassification of potential masses that could evolve into cancer.
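A frame-level strategy over an NBI sequence must ultimately yield a per-video call. One simple aggregation, assumed here for illustration rather than taken from the abstract, is a majority vote over the frame predictions:

```python
from collections import Counter

def video_label(frame_predictions):
    """Aggregate per-frame class predictions
    (e.g. 'hyperplastic', 'adenoma', 'serrated') into one video-level label
    by majority vote."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]
    return label
```

Voting over many frames smooths out the occasional misclassified frame caused by camera motion or specular artifacts.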
|
34
|
Zorron Cheng Tao Pu L, Maicas G, Tian Y, Yamamura T, Nakamura M, Suzuki H, Singh G, Rana K, Hirooka Y, Burt AD, Fujishiro M, Carneiro G, Singh R. Computer-aided diagnosis for characterization of colorectal lesions: comprehensive software that includes differentiation of serrated lesions. Gastrointest Endosc 2020; 92:891-899. [PMID: 32145289 DOI: 10.1016/j.gie.2020.02.042] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Accepted: 02/19/2020] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS Endoscopy guidelines recommend adhering to policies such as resect and discard only if the optical biopsy is accurate. However, accuracy in predicting histology can vary greatly. Computer-aided diagnosis (CAD) for characterization of colorectal lesions may help with this issue. In this study, CAD software developed at the University of Adelaide (Australia) that includes serrated polyp differentiation was validated with Japanese images on narrow-band imaging (NBI) and blue-laser imaging (BLI). METHODS CAD software developed using machine learning and densely connected convolutional neural networks was modeled with NBI colorectal lesion images (Olympus 190 series - Australia) and validated for NBI (Olympus 290 series) and BLI (Fujifilm 700 series) with Japanese datasets. All images were correlated with histology according to the modified Sano classification. The CAD software was trained with Australian NBI images and tested with separate sets of images from Australia (NBI) and Japan (NBI and BLI). RESULTS An Australian dataset of 1235 polyp images was used as training, testing, and internal validation sets. A Japanese dataset of 20 polyp images on NBI and 49 polyp images on BLI was used as external validation sets. The CAD software had a mean area under the curve (AUC) of 94.3% for the internal set and 84.5% and 90.3% for the external sets (NBI and BLI, respectively). CONCLUSIONS The CAD achieved AUCs comparable with experts and similar results with NBI and BLI. Accurate CAD prediction was achievable, even when the predicted endoscopy imaging technology was not part of the training set.
Affiliation(s)
- Leonardo Zorron Cheng Tao Pu
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia; Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Gabriel Maicas
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Yu Tian
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia; South Australian Health and Medical Research Institute, Adelaide, South Australia, Australia
- Takeshi Yamamura
- Department of Endoscopy, Nagoya University Hospital, Nagoya, Aichi, Japan
- Masanao Nakamura
- Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Hiroto Suzuki
- Department of Endoscopy, Nagoya University Hospital, Nagoya, Aichi, Japan
- Gurfarmaan Singh
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Khizar Rana
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Yoshiki Hirooka
- Department of Liver, Biliary Tract and Pancreas Diseases, Fujita Health University, Toyoake, Aichi, Japan
- Alastair D Burt
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Mitsuhiro Fujishiro
- Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Gustavo Carneiro
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Rajvinder Singh
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia; Department of Gastroenterology and Hepatology, Lyell McEwin Hospital, Adelaide, South Australia, Australia
|
35
|
Toward automated severe pharyngitis detection with smartphone camera using deep learning networks. Comput Biol Med 2020; 125:103980. [PMID: 32871294 PMCID: PMC7440230 DOI: 10.1016/j.compbiomed.2020.103980] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2020] [Revised: 08/18/2020] [Accepted: 08/18/2020] [Indexed: 02/06/2023]
Abstract
PURPOSE Severe pharyngitis is frequently associated with inflammation caused by streptococcal pharyngitis, which can cause immune-mediated and post-infectious complications. The recent global pandemic of coronavirus disease (COVID-19) encourages the use of telemedicine for patients with respiratory symptoms. This study therefore proposes automated detection of severe pharyngitis using a deep learning framework with self-taken throat images. METHODS A dataset composed of two classes, 131 throat images with pharyngitis and 208 normal throat images, was collected. Before training the classifier, we constructed a cycle-consistency generative adversarial network (CycleGAN) to augment the training dataset. The ResNet50, Inception-v3, and MobileNet-v2 architectures were trained with transfer learning and validated using a randomly selected test dataset. The performance of the models was evaluated based on accuracy and the area under the receiver operating characteristic curve (ROC-AUC). RESULTS The CycleGAN-based synthetic images reflected the characteristic features of pharyngitis. Using the synthetic throat images, the deep learning model demonstrated a significant improvement in the accuracy of pharyngitis diagnosis. ResNet50 with GAN-based augmentation showed the best ROC-AUC of 0.988 for pharyngitis detection in the test dataset. In 4-fold cross-validation using ResNet50, the highest detection accuracy and ROC-AUC achieved were 95.3% and 0.992, respectively. CONCLUSION The deep learning model for smartphone-based pharyngitis screening allows fast identification of severe pharyngitis, with the potential for timely diagnosis. In the recent COVID-19 pandemic, this framework will help patients with upper respiratory symptoms by improving diagnostic convenience and reducing transmission.
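The ROC-AUC figures quoted above can be computed directly from classifier scores: AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting half (the Mann-Whitney formulation). A small sketch:

```python
def roc_auc(scores_pos, scores_neg):
    """ROC-AUC as P(positive score > negative score), ties counted as 0.5.

    O(n*m) pairwise version; fine for illustration, rank-based
    implementations are used for large samples.
    """
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.988 therefore means that in almost 99% of pharyngitis/normal image pairs, the model scores the pharyngitis image higher.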
|
36
|
Application of A Convolutional Neural Network in The Diagnosis of Gastric Mesenchymal Tumors on Endoscopic Ultrasonography Images. J Clin Med 2020; 9:jcm9103162. [PMID: 33003602 PMCID: PMC7600226 DOI: 10.3390/jcm9103162] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 09/18/2020] [Accepted: 09/27/2020] [Indexed: 12/14/2022] Open
Abstract
Background and Aims: Endoscopic ultrasonography (EUS) is a useful diagnostic modality for evaluating gastric mesenchymal tumors; however, differentiating gastrointestinal stromal tumors (GISTs) from benign mesenchymal tumors such as leiomyomas and schwannomas remains challenging. For this reason, we developed a convolutional neural network computer-aided diagnosis (CNN-CAD) system that can analyze gastric mesenchymal tumors on EUS images. Methods: A total of 905 EUS images of gastric mesenchymal tumors (pathologically confirmed GIST, leiomyoma, and schwannoma) were used as a training dataset. Validation was performed using 212 EUS images of gastric mesenchymal tumors. This test dataset was interpreted by three experienced and three junior endoscopists. Results: The sensitivity, specificity, and accuracy of the CNN-CAD system for differentiating GISTs from non-GIST tumors were 83.0%, 75.5%, and 79.2%, respectively. Its diagnostic specificity and accuracy were significantly higher than those of two experienced and one junior endoscopists. In the further sequential analysis to differentiate leiomyoma from schwannoma in non-GIST tumors, the final diagnostic accuracy of the CNN-CAD system was 75.5%, which was significantly higher than that of two experienced and one junior endoscopists. Conclusions: Our CNN-CAD system showed high accuracy in diagnosing gastric mesenchymal tumors on EUS images. It may complement the current clinical practices in the EUS diagnosis of gastric mesenchymal tumors.
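The sequential analysis above is a two-stage decision: first GIST vs non-GIST, then leiomyoma vs schwannoma within the non-GIST cases. A structural sketch, with placeholder classifier callables standing in for the trained CNN stages described in the paper:

```python
def two_stage_diagnosis(image, is_gist, non_gist_subtype):
    """Stage 1: GIST vs non-GIST; stage 2: subtype only the non-GIST cases.

    `is_gist` and `non_gist_subtype` are placeholder callables
    (hypothetical stand-ins for the CNN-CAD stages, not the authors' API).
    """
    if is_gist(image):
        return "GIST"
    # Only non-GIST tumors reach the finer leiomyoma/schwannoma decision.
    return non_gist_subtype(image)
```

Cascading the decisions lets each stage specialize on a binary problem, at the cost that a stage-1 error can never be corrected downstream.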
|
37
|
Wang KW, Dong M. Potential applications of artificial intelligence in colorectal polyps and cancer: Recent advances and prospects. World J Gastroenterol 2020; 26:5090-5100. [PMID: 32982111 PMCID: PMC7495038 DOI: 10.3748/wjg.v26.i34.5090] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 07/01/2020] [Accepted: 08/12/2020] [Indexed: 02/06/2023] Open
Abstract
Since its advent, artificial intelligence (AI) technology has been studied continuously and has developed rapidly. AI assistant systems are expected to improve the quality of automatic polyp detection and classification. They could also help prevent endoscopists from missing polyps and enable accurate optical diagnosis. These functions could result in a higher adenoma detection rate and decrease the cost of polypectomy for hyperplastic polyps. In addition, AI performs well in the staging, diagnosis, and segmentation of colorectal cancer. This article provides an overview of recent research on the application of AI to colorectal polyps and cancer and highlights the advances achieved.
Affiliation(s)
- Ke-Wei Wang
- Department of Gastrointestinal Surgery, the First Affiliated Hospital of China Medical University, Shenyang 110001, Liaoning Province, China
- Ming Dong
- Department of Gastrointestinal Surgery, the First Affiliated Hospital of China Medical University, Shenyang 110001, Liaoning Province, China
|
38
|
Igarashi S, Sasaki Y, Mikami T, Sakuraba H, Fukuda S. Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet. Comput Biol Med 2020; 124:103950. [PMID: 32798923 DOI: 10.1016/j.compbiomed.2020.103950] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 07/28/2020] [Accepted: 07/28/2020] [Indexed: 02/06/2023]
Abstract
BACKGROUND Machine learning has led to several endoscopic studies on the automated localization of digestive lesions and prediction of cancer invasion depth. Collecting training and validation datasets for a disease in each digestive organ under similar image capture conditions is the first step in system development. This data cleansing task places a great burden on experienced endoscopists. Thus, this study classified upper gastrointestinal (GI) organ images obtained via routine esophagogastroduodenoscopy (EGD) into precise anatomical categories using AlexNet. METHOD In total, 85,246 raw upper GI endoscopic images from 441 patients with gastric cancer were collected retrospectively. The images were manually classified into 14 categories: 0) white-light (WL) stomach with indigo carmine (IC); 1) WL esophagus with iodine; 2) narrow-band (NB) esophagus; 3) NB stomach with IC; 4) NB stomach; 5) WL duodenum; 6) WL esophagus; 7) WL stomach; 8) NB oral-pharynx-larynx; 9) WL oral-pharynx-larynx; 10) WL scaling paper; 11) specimens; 12) WL muscle fibers during endoscopic submucosal dissection (ESD); and 13) others. AlexNet, a deep convolutional neural network, was trained using 49,174 images and validated using 36,072 independent images. RESULTS The accuracy rates on the training and validation datasets were 0.993 and 0.965, respectively. CONCLUSIONS A simple anatomical organ classifier based on AlexNet was developed and found to be effective for the data cleansing task in the collection of EGD images. Moreover, it could be useful to both expert and non-expert endoscopists, as well as engineers, in retrospectively assessing upper GI images.
Affiliation(s)
- Shohei Igarashi
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Yoshihiro Sasaki
- Department of Medical Informatics, Hirosaki University Hospital, 53 Hon-cho, Hirosaki, 036-8563, Japan
- Tatsuya Mikami
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Hirotake Sakuraba
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Shinsaku Fukuda
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
|
39
|
Sánchez-Peralta LF, Bote-Curiel L, Picón A, Sánchez-Margallo FM, Pagador JB. Deep learning to find colorectal polyps in colonoscopy: A systematic literature review. Artif Intell Med 2020; 108:101923. [PMID: 32972656 DOI: 10.1016/j.artmed.2020.101923] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 03/03/2020] [Accepted: 07/01/2020] [Indexed: 02/07/2023]
Abstract
Colorectal cancer has a high incidence rate worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold standard procedure for the diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization, and segmentation. Through a systematic search, 35 works were retrieved. This systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; reviews seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. For detection and localization tasks, the most used metric for reporting is recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Even despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated, and publicly available database, which also includes the most convenient metrics to report results. Finally, future efforts should focus on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate.
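Intersection over Union, the metric the review highlights for segmentation and localization, reduces for axis-aligned bounding boxes to a few lines:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap width/height clamp to zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

Unlike recall, IoU penalizes both missed lesion area and over-segmentation, which is why the two metrics tend to be reported for different tasks.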
Affiliation(s)
- Luis Bote-Curiel
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain
- Artzai Picón
- Tecnalia, Parque Científico y Tecnológico de Bizkaia, C/ Astondo bidea, Edificio 700, 48160 Derio, Spain
- J Blas Pagador
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain
|
40
|
Patel K, Li K, Tao K, Wang Q, Bansal A, Rastogi A, Wang G. A comparative study on polyp classification using convolutional neural networks. PLoS One 2020; 15:e0236452. [PMID: 32730279 PMCID: PMC7392235 DOI: 10.1371/journal.pone.0236452] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 07/07/2020] [Indexed: 12/31/2022] Open
Abstract
Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a 'polyp'. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the type of polyp is critical to prevent cancer and change outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variations in texture and appearance, and overlapping morphology between polyps. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of state-of-the-art general object classification models for polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences composed of two types of polyps: hyperplastic and adenomatous. Our results demonstrate that state-of-the-art CNN models can successfully classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
Affiliation(s)
- Krushi Patel
- School of Engineering, University of Kansas, Lawrence, KS, United States of America
- Kaidong Li
- School of Engineering, University of Kansas, Lawrence, KS, United States of America
- Ke Tao
- The First Hospital of Jilin University, Changchun, China
- Quan Wang
- The First Hospital of Jilin University, Changchun, China
- Ajay Bansal
- The University of Kansas Medical Center, Kansas City, KS, United States of America
- Amit Rastogi
- The University of Kansas Medical Center, Kansas City, KS, United States of America
- Guanghui Wang
- School of Engineering, University of Kansas, Lawrence, KS, United States of America
41
Mostafiz R, Rahman MM, Uddin MS. Gastrointestinal polyp classification through empirical mode decomposition and neural features. SN APPLIED SCIENCES 2020. [DOI: 10.1007/s42452-020-2944-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
42
The Road Not Taken with Pyrrole-Imidazole Polyamides: Off-Target Effects and Genomic Binding. Biomolecules 2020; 10:biom10040544. [PMID: 32260120 PMCID: PMC7226143 DOI: 10.3390/biom10040544] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Revised: 03/16/2020] [Accepted: 03/19/2020] [Indexed: 12/20/2022] Open
Abstract
The high sequence specificity of minor groove-binding N-methylpyrrole-N-methylimidazole polyamides has enabled significant advances in cancer and disease biology, yet there have been few comprehensive reports on their off-target effects, most likely as a consequence of the lack of available tools for evaluating genomic binding, an essential aspect that has gone seriously underexplored. Compared to other N-heterocycles, the specificity of these polyamides for the DNA minor groove and primary base pair recognition requires the development of new analytical methods for assessing their off-target effects, methods that are missing in the field today. This review aims to highlight current progress in deciphering the off-target effects of these N-heterocyclic molecules and suggests new ways that next-generation sequencing can be used to address off-target effects.
43
Rex DK. Can we do resect and discard with artificial intelligence-assisted colon polyp “optical biopsy?”. ACTA ACUST UNITED AC 2020. [DOI: 10.1016/j.tgie.2019.150638] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
44
Ellebrecht DB, Latus S, Schlaefer A, Keck T, Gessert N. Towards an Optical Biopsy during Visceral Surgical Interventions. Visc Med 2020; 36:70-79. [PMID: 32355663 DOI: 10.1159/000505938] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 01/13/2020] [Indexed: 12/24/2022] Open
Abstract
Background Cancer will replace cardiovascular diseases as the most frequent cause of death. Therefore, the goals of cancer treatment are prevention strategies, early detection by cancer screening, and ideal stage therapy. From an oncological point of view, complete tumor resection is a significant prognostic factor. Optical coherence tomography (OCT) and confocal laser microscopy (CLM) are two techniques that have the potential to complement intraoperative frozen section analysis as in vivo, real-time optical biopsies. Summary In this review we present both procedures and review the progress of their evaluation for intraoperative application in visceral surgery. There are promising studies evaluating OCT and CLM for visceral surgery; however, application during routine visceral surgical interventions is still lacking. Key Message OCT and CLM are not competing but complementary approaches to tissue analysis, supplementing intraoperative frozen section analysis. Although the intraoperative application of OCT and CLM is at an early stage, they are two promising techniques for intraoperative in vivo, real-time tissue examination. Additionally, deep learning strategies provide a significant supplement for automated tissue detection.
Affiliation(s)
- David Benjamin Ellebrecht
- LungenClinic Grosshansdorf, Department of Thoracic Surgery, Grosshansdorf, Germany; University Medical Center Schleswig-Holstein, Campus Lübeck, Department of Surgery, Lübeck, Germany
- Sarah Latus
- Hamburg University of Technology, Institute of Medical Technology, Hamburg, Germany
- Alexander Schlaefer
- Hamburg University of Technology, Institute of Medical Technology, Hamburg, Germany
- Tobias Keck
- University Medical Center Schleswig-Holstein, Campus Lübeck, Department of Surgery, Lübeck, Germany
- Nils Gessert
- Hamburg University of Technology, Institute of Medical Technology, Hamburg, Germany
45
Deep principal dimension encoding for the classification of early neoplasia in Barrett's Esophagus with volumetric laser endomicroscopy. Comput Med Imaging Graph 2020; 80:101701. [PMID: 32044547 DOI: 10.1016/j.compmedimag.2020.101701] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 12/20/2019] [Accepted: 01/14/2020] [Indexed: 02/07/2023]
Abstract
Barrett cancer is a treatable disease when detected at an early stage. However, current screening protocols are often not effective at finding the disease early. Volumetric Laser Endomicroscopy (VLE) is a promising new imaging tool for finding dysplasia in Barrett's esophagus (BE) at an early stage by acquiring cross-sectional images of the microscopic structure of BE up to 3 mm deep. However, interpretation of VLE scans is difficult for medical doctors due to both the size and subtlety of the gray-scale data. Therefore, algorithms that can accurately find cancerous regions are very valuable for the interpretation of VLE data. In this study, we propose a fully automatic multi-step Computer-Aided Detection (CAD) algorithm that optimally leverages the effectiveness of deep learning strategies by encoding the principal dimension in VLE data. Additionally, we show that combining the encoded dimensions with conventional machine learning techniques further improves results while maintaining interpretability. Furthermore, we train and validate our algorithm on a new histopathologically validated set of in vivo VLE snapshots, and an independent test set is used to assess the performance of the model. Finally, we compare the performance of our algorithm against previous state-of-the-art systems. With the encoded principal dimension, we obtain an Area Under the Curve (AUC) and F1 score of 0.93 and 87.4%, respectively, on the test set. We show this is a significant improvement over the state-of-the-art of 0.89 and 83.1%, respectively, thereby demonstrating the effectiveness of our approach.
46
Moccia S, Romeo L, Migliorelli L, Frontoni E, Zingaretti P. Supervised CNN Strategies for Optical Image Segmentation and Classification in Interventional Medicine. INTELLIGENT SYSTEMS REFERENCE LIBRARY 2020. [DOI: 10.1007/978-3-030-42750-4_8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
47
Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology 2020; 158:76-94.e2. [PMID: 31593701 DOI: 10.1053/j.gastro.2019.08.058] [Citation(s) in RCA: 280] [Impact Index Per Article: 70.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 08/22/2019] [Accepted: 08/24/2019] [Indexed: 02/07/2023]
Abstract
Since 2010, substantial progress has been made in artificial intelligence (AI) and its application to medicine. In gastroenterology, AI is being explored for endoscopic analysis of lesions, for detection of cancer, and to facilitate the analysis of inflammatory lesions or gastrointestinal bleeding during wireless capsule endoscopy. AI is also being tested to assess liver fibrosis and to differentiate patients with pancreatic cancer from those with pancreatitis. AI might also be used to establish prognoses of patients or predict their response to treatments, based on multiple factors. We review the ways in which AI may help physicians make a diagnosis or establish a prognosis and discuss its limitations, knowing that further randomized controlled studies will be required before approval of AI techniques by the health authorities.
Affiliation(s)
- Catherine Le Berre
- Institut des Maladies de l'Appareil Digestif, Nantes University Hospital, France; Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France
- Sabeur Aridhi
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Marie-Dominique Devignes
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Laure Fournier
- Université Paris-Descartes, Institut National de la Santé et de la Recherche Médicale, Unité Mixte de Recherche S970, Assistance Publique-Hôpitaux de Paris, Paris, France
- Malika Smaïl-Tabbone
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Silvio Danese
- Inflammatory Bowel Disease Center and Department of Biomedical Sciences, Humanitas Clinical and Research Center, Humanitas University, Milan, Italy
- Laurent Peyrin-Biroulet
- Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France.
48
Wang S, Xing Y, Zhang L, Gao H, Zhang H. A systematic evaluation and optimization of automatic detection of ulcers in wireless capsule endoscopy on a large dataset using deep convolutional neural networks. Phys Med Biol 2019; 64:235014. [PMID: 31645019 DOI: 10.1088/1361-6560/ab5086] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Compared with conventional gastroscopy, which is invasive and painful, wireless capsule endoscopy (WCE) provides noninvasive examination of the gastrointestinal (GI) tract. WCE video can effectively support physicians in reaching a diagnostic decision, but a huge number of images needs to be analyzed (more than 50,000 frames per patient). In this paper, we propose a computer-aided diagnosis method called the second glance (secG) detection framework for automatic detection of ulcers based on deep convolutional neural networks, which provides both a classification confidence and a bounding box for the lesion area. We evaluated its performance on a large dataset consisting of 1504 patient cases (the largest WCE ulcer dataset to the best of our knowledge; 1076 cases with ulcers, 428 normal cases). We use 15,781 ulcer frames from 753 ulcer cases and 17,138 normal frames from 300 normal cases for training. The validation dataset consists of 2040 ulcer frames from 108 cases and 2319 frames from 43 normal cases. For testing, we use 4917 ulcer frames from 215 ulcer cases and 5007 frames from 85 normal cases. Test results demonstrate that the ROC-AUC of the proposed secG detection framework (0.9469) outperforms state-of-the-art detection frameworks including Faster-RCNN (0.9014) and SSD-300 (0.8355), which implies the effectiveness of our method. From the ulcer size analysis, we find that the detection of ulcers is highly related to their size. For ulcers larger than 1% of the full image size, the sensitivity exceeds 92.00%; for ulcers smaller than 1% of the full image size, the sensitivity is around 85.00%. The overall sensitivity, specificity and accuracy are 89.71%, 90.48% and 90.10% at a threshold value of 0.6706, which implies the potential of the proposed method to suppress oversights and reduce the burden on physicians.
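The sensitivity, specificity, and accuracy figures reported above follow the standard confusion-matrix definitions, sketched here for reference (an illustrative helper, not the authors' code):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fp/tn/fn: true/false positive and true/false negative counts
    obtained after thresholding the classifier's confidence scores.
    """
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

Varying the decision threshold trades sensitivity against specificity, which is what the reported ROC-AUC summarizes across all thresholds.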
Affiliation(s)
- Sen Wang
- Key Laboratory of Particle and Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, People's Republic of China. Department of Engineering Physics, Tsinghua University, Beijing 100084, People's Republic of China
49
Ding Z, Shi H, Zhang H, Meng L, Fan M, Han C, Zhang K, Ming F, Xie X, Liu H, Liu J, Lin R, Hou X. Gastroenterologist-Level Identification of Small-Bowel Diseases and Normal Variants by Capsule Endoscopy Using a Deep-Learning Model. Gastroenterology 2019; 157:1044-1054.e5. [PMID: 31251929 DOI: 10.1053/j.gastro.2019.06.025] [Citation(s) in RCA: 170] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Revised: 06/02/2019] [Accepted: 06/17/2019] [Indexed: 02/06/2023]
Abstract
BACKGROUND & AIMS Capsule endoscopy has revolutionized investigation of the small bowel. However, this technique produces a video that is 8-10 hours long, so analysis is time consuming for gastroenterologists. Deep convolutional neural networks (CNNs) can recognize specific images among a large variety. We aimed to develop a CNN-based algorithm to assist in the evaluation of small bowel capsule endoscopy (SB-CE) images. METHODS We collected 113,426,569 images from 6970 patients who had SB-CE at 77 medical centers from July 2016 through July 2018. A CNN-based auxiliary reading model was trained to differentiate abnormal from normal images using 158,235 SB-CE images from 1970 patients. Images were categorized as normal, inflammation, ulcer, polyps, lymphangiectasia, bleeding, vascular disease, protruding lesion, lymphatic follicular hyperplasia, diverticulum, parasite, and other. The model was further validated in 5000 patients (none of whom overlapped with the 1970 patients in the training set); the same patients were evaluated by conventional analysis and CNN-based auxiliary analysis by 20 gastroenterologists. If there was agreement in image categorization between the conventional analysis and the CNN model, no further evaluation was performed. If there was disagreement, the gastroenterologists re-evaluated the image to confirm or reject the CNN categorization. RESULTS In the SB-CE images from the validation set, 4206 abnormalities in 3280 patients were identified after final consensus evaluation. The CNN-based auxiliary model identified abnormalities with 99.88% sensitivity in the per-patient analysis (95% CI, 99.67-99.96) and 99.90% sensitivity in the per-lesion analysis (95% CI, 99.74-99.97). Conventional reading by the gastroenterologists identified abnormalities with 74.57% sensitivity (95% CI, 73.05-76.03) in the per-patient analysis and 76.89% in the per-lesion analysis (95% CI, 75.58-78.15).
The mean reading time per patient was 96.6 ± 22.53 minutes by conventional reading and 5.9 ± 2.23 minutes by CNN-based auxiliary reading (P < .001). CONCLUSIONS We validated the ability of a CNN-based algorithm to identify abnormalities in SB-CE images. The CNN-based auxiliary model identified abnormalities with higher levels of sensitivity and significantly shorter reading times than conventional analysis by gastroenterologists. This algorithm provides an important tool to help gastroenterologists analyze SB-CE images more efficiently and more accurately.
Affiliation(s)
- Zhen Ding
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Huiying Shi
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hao Zhang
- Ankon Medical Technologies Co, Ltd, Shanghai, China
- Lingjun Meng
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Mengke Fan
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Chaoqun Han
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Kun Zhang
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Fanhua Ming
- Ankon Medical Technologies Co, Ltd, Shanghai, China
- Xiaoping Xie
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hao Liu
- Ankon Medical Technologies Co, Ltd, Shanghai, China
- Jun Liu
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Rong Lin
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China.
- Xiaohua Hou
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
50
Elguindi S, Zelefsky MJ, Jiang J, Veeraraghavan H, Deasy JO, Hunt MA, Tyagi N. Deep learning-based auto-segmentation of targets and organs-at-risk for magnetic resonance imaging only planning of prostate radiotherapy. Phys Imaging Radiat Oncol 2019; 12:80-86. [PMID: 32355894 PMCID: PMC7192345 DOI: 10.1016/j.phro.2019.11.006] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Revised: 11/20/2019] [Accepted: 11/22/2019] [Indexed: 01/06/2023] Open
Abstract
BACKGROUND AND PURPOSE Magnetic resonance (MR)-only radiation therapy for prostate treatment provides superior contrast for defining targets and organs-at-risk (OARs). This study aims to develop a deep learning model that leverages this advantage to automate the contouring process. MATERIALS AND METHODS Six structures (bladder, rectum, urethra, penile bulb, rectal spacer, and prostate and seminal vesicles) were contoured and reviewed by a radiation oncologist on axial T2-weighted MR image sets from 50 patients, which constituted the expert delineations. The data were split into 40/10 training and validation sets to train a two-dimensional fully convolutional neural network, DeepLabV3+, using transfer learning. The T2-weighted image sets were pre-processed into 2D false-color images to leverage the weights of convolutional layers pre-trained on natural images. Independent testing was performed on an additional 50 patients' MR scans. Performance was compared against a U-Net deep learning method. Algorithms were evaluated using the volumetric Dice similarity coefficient (VDSC) and surface Dice similarity coefficient (SDSC). RESULTS When comparing VDSC, DeepLabV3+ significantly outperformed U-Net for all structures except the urethra (P < 0.001). Average VDSC was 0.93 ± 0.04 (bladder), 0.83 ± 0.06 (prostate and seminal vesicles [CTV]), 0.74 ± 0.13 (penile bulb), 0.82 ± 0.05 (rectum), 0.69 ± 0.10 (urethra), and 0.81 ± 0.1 (rectal spacer). Average SDSC was 0.92 ± 0.1 (bladder), 0.85 ± 0.11 (prostate and seminal vesicles [CTV]), 0.80 ± 0.22 (penile bulb), 0.87 ± 0.07 (rectum), 0.85 ± 0.25 (urethra), and 0.83 ± 0.26 (rectal spacer). CONCLUSION The deep learning-based model produced contours that show promise for streamlining an MR-only planning workflow in treating prostate cancer.
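The volumetric Dice similarity coefficients reported above follow the standard definition 2|A∩B| / (|A| + |B|); the sketch below is an illustrative helper over flat binary voxel masks, not the authors' implementation:

```python
def dice_coefficient(pred_mask, gt_mask):
    """Volumetric Dice between binary masks given as flat sequences of 0s and 1s."""
    intersection = sum(p and g for p, g in zip(pred_mask, gt_mask))
    total = sum(pred_mask) + sum(gt_mask)
    # By convention, two empty masks are a perfect match.
    return 2 * intersection / total if total else 1.0
```

Dice weights the overlap against the average mask size, so it is more forgiving of small boundary errors on large structures (e.g. bladder) than on small ones (e.g. urethra), which is consistent with the per-structure scores reported above.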
Affiliation(s)
- Sharif Elguindi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Michael J. Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Joseph O. Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Margie A. Hunt
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States