1
Tan J, Yuan J, Fu X, Bai Y. Colonoscopy polyp classification via enhanced scattering wavelet convolutional neural network. PLoS One 2024;19:e0302800. [PMID: 39392783; PMCID: PMC11469526; DOI: 10.1371/journal.pone.0302800]
Abstract
Colorectal cancer (CRC) is among the most common cancers and has a high death rate. Colonoscopy is the best way to screen for CRC and has been shown to lower the risk of the disease. As a result, computer-aided polyp classification techniques are applied to identify colorectal cancer. However, visually categorizing polyps is difficult because different polyps appear under different lighting conditions. Unlike previous works, this article presents the Enhanced Scattering Wavelet Convolutional Neural Network (ESWCNN), a polyp classification technique that combines a Convolutional Neural Network (CNN) and the Scattering Wavelet Transform (SWT) to improve polyp classification performance. The method concatenates learnable image filters and wavelet filters on each input channel. The scattering wavelet filters can extract common spectral features at various scales and orientations, while the learnable filters can capture spatial image features that wavelet filters may miss. A network architecture for ESWCNN is designed based on these principles and trained and tested on colonoscopy datasets (two public datasets and one private dataset). An n-fold cross-validation experiment for three classes (adenoma, hyperplastic, serrated) achieved a classification accuracy of 96.4%, and two-class polyp classification (positive and negative) achieved 94.8% accuracy. In the three-class setting, correct classification rates of 96.2% for adenomas, 98.71% for hyperplastic polyps, and 97.9% for serrated polyps were achieved. In the two-class experiment, the proposed method reached an average sensitivity of 96.7% with 93.1% specificity. Furthermore, the model is compared with state-of-the-art general classification models and commonly used CNNs; six end-to-end CNN-based models were trained on two datasets of video sequences.
The experimental results demonstrate that the proposed ESWCNN method can effectively classify polyps with higher accuracy and efficacy compared to the state-of-the-art CNN models. These findings can provide guidance for future research in polyp classification.
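The core idea of ESWCNN, fixed wavelet-style filters concatenated with learnable filters on each input channel, can be illustrated with a small NumPy sketch. The Gabor-like kernels, filter sizes, and random "learnable" weights below are illustrative assumptions, not the paper's actual scattering implementation.

```python
import numpy as np

def conv2d(img, kern):
    """Valid-mode 2-D cross-correlation with explicit loops."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def gabor_like(size, theta, freq=0.5):
    """A fixed oriented wavelet-style filter (real part of a Gabor)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2.0 * (0.5 * size) ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def hybrid_features(img, n_orient=4, n_learnable=4, size=5, seed=0):
    """Concatenate fixed wavelet responses with 'learnable' filter responses."""
    rng = np.random.default_rng(seed)
    fixed = [gabor_like(size, np.pi * k / n_orient) for k in range(n_orient)]
    learned = [rng.standard_normal((size, size)) * 0.1 for _ in range(n_learnable)]
    maps = [conv2d(img, f) for f in fixed + learned]
    return np.stack(maps)  # (n_orient + n_learnable, H-size+1, W-size+1)

img = np.random.default_rng(42).random((16, 16))
feats = hybrid_features(img)
print(feats.shape)  # (8, 12, 12)
```

In a trained network, the "learned" kernels would be optimized by backpropagation while the wavelet bank stays fixed; here both are simply evaluated once to show the concatenated feature stack.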
Affiliation(s)
- Jun Tan
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Guangdong Province Key Laboratory of Computational Science, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiamin Yuan
- Health Construction Administration Center, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, Guangdong, China
- The Second Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine (TCM), Guangzhou, Guangdong, China
- Xiaoyong Fu
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yilin Bai
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- China Southern Airlines, Guangzhou, Guangdong, China
2
Ding M, Yan J, Chao G, Zhang S. Application of artificial intelligence in colorectal cancer screening by colonoscopy: Future prospects (Review). Oncol Rep 2023;50:199. [PMID: 37772392; DOI: 10.3892/or.2023.8636]
Abstract
Colorectal cancer (CRC) has become a severe global health concern, with the third-highest incidence and second-highest mortality rate of all cancers. The burden of CRC is expected to surge by 60% by 2030. Fortunately, effective early evidence-based screening could significantly reduce the incidence and mortality of CRC. Colonoscopy is the core screening method for CRC, with high popularity and accuracy. Yet the accuracy of colonoscopy in CRC screening depends on the experience and state of the operating physician, and it is challenging to maintain a consistently high diagnostic rate. Artificial intelligence (AI)-assisted colonoscopy can compensate for these shortcomings and improve the accuracy, efficiency, and quality of colonoscopy screening. The unique advantages of AI, such as continuously advancing high-performance computing capabilities and innovative deep-learning architectures, highlight its role in colonoscopy screening and in controlling CRC morbidity and mortality.
Affiliation(s)
- Menglu Ding
- The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
- Junbin Yan
- The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
- Guanqun Chao
- Department of General Practice, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, Zhejiang 310000, P.R. China
- Shuo Zhang
- The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
3
Huang D, Xu X, Du P, Feng Y, Zhang X, Lu H, Liu Y. Radiomics-based T-staging of hollow organ cancers. Front Oncol 2023;13:1191519. [PMID: 37719013; PMCID: PMC10499612; DOI: 10.3389/fonc.2023.1191519]
Abstract
Cancer growing in hollow organs has become a serious threat to human health, and accurate T-staging of hollow organ cancers is a major concern in the clinic. With the rapid development of medical imaging technologies, radiomics has become a reliable tool for T-staging. Because hollow organ cancers share similar growth characteristics, radiomics studies of these cancers can serve as a common reference. Feature-based and deep learning-based methods are two critical research focuses in radiomics, so this paper reviews both families of T-staging methods. In conclusion, existing radiomics studies may underestimate the hollow organ wall during segmentation and the depth of invasion during staging. This survey is expected to provide promising directions for future research in this realm.
Affiliation(s)
- Dong Huang
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
- Xiaopan Xu
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
- Peng Du
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
- Yuefei Feng
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
- Xi Zhang
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
- Hongbing Lu
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
- Yang Liu
- School of Biomedical Engineering, Air Force Medical University, Shaanxi, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, Shaanxi, China
4
Gan P, Li P, Xia H, Zhou X, Tang X. The application of artificial intelligence in improving colonoscopic adenoma detection rate: Where are we and where are we going. Gastroenterol Hepatol 2023;46:203-213. [PMID: 35489584; DOI: 10.1016/j.gastrohep.2022.03.009]
Abstract
Colorectal cancer (CRC) is one of the most common malignant tumors in the world. Colonoscopy is the crucial examination technique in CRC screening programs for the early detection of precursor lesions and the treatment of early colorectal cancer, which can significantly reduce the morbidity and mortality of CRC. However, pooled polyp miss rates during colonoscopic examination are as high as 22%. Artificial intelligence (AI) provides a promising way to improve the colonoscopic adenoma detection rate (ADR): it might assist endoscopists in avoiding missed polyps and offer an accurate optical diagnosis of suspected lesions. Herein, we describe some of the milestone studies on AI for colonoscopy and the future directions for applying AI to improve colonoscopic ADR.
Affiliation(s)
- Peiling Gan
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Peiling Li
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Huifang Xia
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xian Zhou
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xiaowei Tang
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, China
5
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Spatio-temporal classification for polyp diagnosis. Biomed Opt Express 2023;14:593-607. [PMID: 36874484; PMCID: PMC9979670; DOI: 10.1364/boe.473446]
Abstract
Colonoscopy remains the gold standard investigation for colorectal cancer screening, as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance can vary during a procedure, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
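The abstract does not detail the two spatio-temporal methods, so the sketch below shows only a generic way to exploit temporal context: smoothing per-frame adenoma probabilities with a causal moving average so a single unstable frame does not flip the prediction. The window size and scores are hypothetical, not from the paper.

```python
import numpy as np

def temporal_smooth(frame_probs, window=5):
    """Causal moving average of per-frame adenoma probabilities.

    frame_probs: 1-D array of per-frame model outputs in [0, 1].
    Each smoothed score averages the current frame with up to
    `window - 1` preceding frames, damping transient misclassifications.
    """
    probs = np.asarray(frame_probs, dtype=float)
    out = np.empty_like(probs)
    for t in range(len(probs)):
        lo = max(0, t - window + 1)
        out[t] = probs[lo:t + 1].mean()
    return out

# A noisy per-frame sequence with one spurious low-confidence frame.
raw = np.array([0.9, 0.85, 0.2, 0.88, 0.92])
smoothed = temporal_smooth(raw, window=3)
print(smoothed.round(3))
```

The dip at the third frame is pulled back toward its neighbours, which is the stability benefit the paper attributes to spatio-temporal information.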
Affiliation(s)
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Odin Vision, London W1W 7TY, UK
- Omer F. Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Laurence Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
6
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and research in gastrointestinal (GI) endoscopy is increasing. In particular, the advent of convolutional neural networks, a class of deep learning methods, has the potential to revolutionize GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 on the esophagus, 30 on the stomach, 29 on CE, and 63 on the colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced, with several patient-based prospective studies conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been reported recently. Most studies of AI in GI endoscopy are still at the preclinical stage because of retrospective designs using still images; video-based prospective studies are needed to advance the field. Nevertheless, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we highlight the published literature and provide the current status of, and insights into the future of, AI in GI endoscopy.
Affiliation(s)
- Yutaka Okagawa
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
7
Detection and Classification of Colorectal Polyp Using Deep Learning. Biomed Res Int 2022;2022:2805607. [PMID: 35463989; PMCID: PMC9033358; DOI: 10.1155/2022/2805607]
Abstract
Colorectal cancer (CRC) is the third most dangerous cancer in the world, and its incidence is increasing day by day, so timely and accurate diagnosis is required to save patients' lives. CRC grows from polyps, which can be either cancerous or noncancerous; if cancerous polyps are detected accurately and removed in time, the dangerous consequences of the cancer can be reduced to a large extent. Colonoscopy is used to detect the presence of colorectal polyps, but manual examinations performed by experts are prone to various errors. Therefore, some researchers have used machine and deep learning-based models to automate the diagnosis process. However, existing models suffer from overfitting and gradient vanishing problems. To overcome these problems, a convolutional neural network (CNN)-based deep learning model is proposed. Initially, guided image filtering and dynamic histogram equalization are used to filter and enhance the colonoscopy images. Thereafter, a Single Shot MultiBox Detector (SSD) is used to efficiently detect colorectal polyps in colonoscopy images. Finally, fully connected layers with dropout are used to classify the polyp classes. Extensive experimental results on a benchmark dataset show that the proposed model achieves significantly better results than competitive models, detecting and classifying colorectal polyps from colonoscopy images with 92% accuracy.
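The enhancement stage described above can be approximated with plain global histogram equalization. The sketch below is a simplified stand-in, not the paper's exact method (the paper uses guided filtering plus dynamic histogram equalization, which partitions the histogram before equalizing).

```python
import numpy as np

def hist_equalize(img_u8):
    """Global histogram equalization for an 8-bit grayscale image.

    The cumulative histogram serves as a monotone intensity mapping
    that spreads the pixel values over the full 0-255 range, boosting
    contrast in dim or washed-out colonoscopy frames.
    """
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # count at the darkest present level
    n = img_u8.size
    lut = np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255).astype(np.uint8)
    return lut[img_u8]

# A low-contrast image: values confined to [100, 120].
rng = np.random.default_rng(0)
low = rng.integers(100, 121, size=(32, 32)).astype(np.uint8)
eq = hist_equalize(low)
print(low.min(), low.max(), "->", eq.min(), eq.max())
```

After equalization the narrow intensity band is stretched to cover 0-255, which is the effect the preprocessing step relies on before detection.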
8
Classification of the Confocal Microscopy Images of Colorectal Tumor and Inflammatory Colitis Mucosa Tissue Using Deep Learning. Diagnostics (Basel) 2022;12:288. [PMID: 35204379; PMCID: PMC8870781; DOI: 10.3390/diagnostics12020288]
Abstract
Confocal microscopy image analysis is a useful method for neoplasm diagnosis. Many ambiguous cases are difficult to distinguish with the naked eye, thus leading to high inter-observer variability and significant time investments for learning this method. We aimed to develop a deep learning-based neoplasm classification model that classifies confocal microscopy images of 10× magnified colon tissues into three classes: neoplasm, inflammation, and normal tissue. ResNet50 with data augmentation and transfer learning approaches was used to efficiently train the model with limited training data. A class activation map was generated by using global average pooling to confirm which areas had a major effect on the classification. The proposed method achieved an accuracy of 81%, which was 14.05% more accurate than three machine learning-based methods and 22.6% better than the predictions made by four endoscopists. ResNet50 with data augmentation and transfer learning can be utilized to effectively identify neoplasm, inflammation, and normal tissue in confocal microscopy images. The proposed method outperformed three machine learning-based methods and identified the area that had a major influence on the results. Inter-observer variability and the time required for learning can be reduced if the proposed model is used with confocal microscopy image analysis for diagnosis.
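The class activation map mentioned above has a simple closed form: with global average pooling, the map for a class is the weighted sum of the final convolutional feature maps, using that class's fully connected weights. Below is a minimal sketch with toy activations; the shapes and weights are illustrative, not taken from the study.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Class activation map: weighted sum of the last conv feature maps.

    feature_maps: (K, H, W) activations feeding global average pooling.
    class_weights: (K,) weights of the target class in the final
    fully connected layer.  CAM[h, w] = sum_k w_k * F_k[h, w].
    """
    return np.tensordot(class_weights, feature_maps, axes=1)

# Toy example: channel 0 fires in the top-left, channel 1 in the
# bottom-right; the target class weights channel 0 heavily.
F = np.zeros((2, 4, 4))
F[0, :2, :2] = 1.0
F[1, 2:, 2:] = 1.0
w = np.array([2.0, 0.5])
cam = class_activation_map(F, w)
i, j = np.unravel_index(cam.argmax(), cam.shape)
print((i, j), cam[i, j])  # hottest response in the top-left region
```

Upsampling the resulting map to the input resolution gives the heatmap used to confirm which tissue regions drove the neoplasm/inflammation/normal decision.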
9
Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021;27:8103-8122. [PMID: 35068857; PMCID: PMC8704267; DOI: 10.3748/wjg.v27.i47.8103]
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy are an area of rapid research and development. These systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined, owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these current limitations, ambitious attempts to expand the technology further, by developing more complex systems capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination, including polypectomy procedures, are at the concept stage. Further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process of regulatory organizations and societies, and to support physicians and patients in accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration in colonoscopy and offers suggestions for future research.
Affiliation(s)
- Mahsa Taghiakbari
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
- Yuichi Mori
- Clinical Effectiveness Research Group, University of Oslo, Oslo 0450, Norway
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama 224-8503, Japan
- Daniel von Renteln
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
10
Deep Learning Approaches to Colorectal Cancer Diagnosis: A Review. Appl Sci (Basel) 2021. [DOI: 10.3390/app112210982]
Abstract
Unprecedented breakthroughs in the development of graphical processing systems have led to great potential for deep learning (DL) algorithms in analyzing visual anatomy from high-resolution medical images. Recently, in digital pathology, the use of DL technologies has drawn a substantial amount of attention for use in the effective diagnosis of various cancer types, especially colorectal cancer (CRC), which is regarded as one of the dominant causes of cancer-related deaths worldwide. This review provides an in-depth perspective on recently published research articles on DL-based CRC diagnosis and prognosis. Overall, we provide a retrospective synopsis of simple image-processing-based and machine learning (ML)-based computer-aided diagnosis (CAD) systems, followed by a comprehensive appraisal of use cases with different types of state-of-the-art DL algorithms for detecting malignancies. We first list multiple standardized and publicly available CRC datasets from two imaging types: colonoscopy and histopathology. Secondly, we categorize the studies based on the different types of CRC detected (tumor tissue, microsatellite instability, and polyps), and we assess the data preprocessing steps and the adopted DL architectures before presenting the optimum diagnostic results. CRC diagnosis with DL algorithms is still in the preclinical phase, and therefore, we point out some open issues and provide some insights into the practicability and development of robust diagnostic systems in future health care and oncology.
11
Cai YW, Dong FF, Shi YH, Lu LY, Chen C, Lin P, Xue YS, Chen JH, Chen SY, Luo XB. Deep learning driven colorectal lesion detection in gastrointestinal endoscopic and pathological imaging. World J Clin Cases 2021;9:9376-9385. [PMID: 34877273; PMCID: PMC8610875; DOI: 10.12998/wjcc.v9.i31.9376]
Abstract
Colorectal cancer has the second highest incidence among malignant tumors and is the fourth leading cause of cancer deaths in China. Early diagnosis and treatment of colorectal cancer improve the 5-year survival rate and reduce medical costs. Current diagnostic methods for early colorectal cancer include stool and blood tests, endoscopy, and computer-aided endoscopy. In this paper, research on deep learning-based image analysis and prediction of colorectal cancer lesions is reviewed, with the goal of providing a reference for the early diagnosis of colorectal cancer lesions by combining computer technology, 3D modeling, 5G remote technology, endoscopic robot technology, and surgical navigation technology. The findings supplement existing research and provide insights to improve the cure rate and reduce the mortality of colorectal cancer.
Affiliation(s)
- Yu-Wen Cai
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Fang-Fen Dong
- Department of Medical Technology and Engineering, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Yu-Heng Shi
- Computer Science and Engineering College, University of Alberta, Edmonton T6G 2R3, Canada
- Li-Yuan Lu
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Chen Chen
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Ping Lin
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Yu-Shan Xue
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Jian-Hua Chen
- Endoscopy Center, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou 350014, Fujian Province, China
- Su-Yu Chen
- Endoscopy Center, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou 350014, Fujian Province, China
- Xiong-Biao Luo
- Department of Computer Science, Xiamen University, Xiamen 361005, Fujian, China
12
Mitsala A, Tsalikidis C, Pitiakoudis M, Simopoulos C, Tsaroucha AK. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. Curr Oncol 2021;28:1581-1607. [PMID: 33922402; PMCID: PMC8161764; DOI: 10.3390/curroncol28030149]
Abstract
The development of artificial intelligence (AI) algorithms has permeated the medical field with great success. The widespread use of AI technology in diagnosing and treating several types of cancer, especially colorectal cancer (CRC), is now attracting substantial attention. CRC, which represents the third most commonly diagnosed malignancy in both men and women, is considered a leading cause of cancer-related deaths globally. Our review aims to provide in-depth knowledge and analysis of AI applications in CRC screening, diagnosis, and treatment based on the current literature. We also explore the role of recent advances in AI systems in medical diagnosis and therapy, with several promising results. CRC is a highly preventable disease, and AI-assisted techniques in routine screening represent a pivotal step toward reducing the incidence of this malignancy. So far, computer-aided detection and characterization systems have been developed to increase the detection rate of adenomas. Furthermore, CRC treatment is entering a new era with robotic surgery and novel computer-assisted drug delivery techniques. At the same time, healthcare is rapidly moving toward precision or personalized medicine. Machine learning models have the potential to contribute to individual-based cancer care and transform the future of medicine.
Affiliation(s)
- Athanasia Mitsala
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Correspondence: Tel.: +30-6986423707
- Christos Tsalikidis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Michail Pitiakoudis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Constantinos Simopoulos
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Alexandra K. Tsaroucha
- Laboratory of Experimental Surgery & Surgical Research, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
13
Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021;25:514-525. [PMID: 32750912; DOI: 10.1109/jbhi.2020.2997760]
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
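The abstract does not specify how the three subnetworks' feature maps are fused, so the sketch below shows one common pattern consistent with the description: upsample the coarse global features to the local resolution and concatenate along the channel axis. The shapes and the nearest-neighbour upsampling choice are assumptions, not MCNet's actual design.

```python
import numpy as np

def upsample_nn(fmap, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_scales(global_feat, local_feat):
    """Fuse a coarse global map with a fine local map by channel concat.

    global_feat: (Cg, H/2, W/2) high-level semantic context at low resolution.
    local_feat:  (Cl, H, W) local appearance detail at full resolution.
    """
    up = upsample_nn(global_feat, 2)
    return np.concatenate([up, local_feat], axis=0)

g = np.random.default_rng(1).random((8, 16, 16))   # global subnetwork output
l = np.random.default_rng(2).random((4, 32, 32))   # local subnetwork output
fused = fuse_scales(g, l)
print(fused.shape)  # (12, 32, 32)
```

A segmentation head would then predict per-pixel lesion labels from the fused stack, so both global structure and local spatial detail inform each pixel.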
14
Comparison of deep learning and conventional machine learning methods for classification of colon polyp types. EuroBiotech J 2021. [DOI: 10.2478/ebtj-2021-0006]
Abstract
Determination of polyp type requires tissue biopsy during colonoscopy followed by histopathological examination of the microscopic images, which is tremendously time-consuming and costly. The first aim of this study was to design a computer-aided diagnosis system that classifies polyp types from colonoscopy images (optical biopsy) without the need for tissue biopsy. For this purpose, two different approaches were designed, based on conventional machine learning (ML) and on deep learning. First, classification was performed with a random forest using features obtained from the histogram of oriented gradients descriptor. Second, a simple convolutional neural network (CNN)-based architecture was built and trained on colonoscopy images containing colon polyps. The performance of these approaches on two-category (adenoma & serrated vs. hyperplastic) and three-category (adenoma vs. hyperplastic vs. serrated) classification was investigated. Furthermore, the effect of imaging modality on classification was examined using white-light and narrow-band imaging. The performance of these approaches was compared with the results obtained by 3 novice and 4 expert doctors. Two-category results showed that the conventional ML approach performed significantly better than the simple CNN-based approach in both narrow-band and white-light imaging, with accuracy reaching almost 95% for white-light imaging and surpassing the correct classification rate of all 7 doctors. In the three-category task, the simple CNN architecture outperformed both the conventional ML approach and the doctors. This study shows the feasibility of using conventional machine learning or deep learning-based approaches for automatic classification of colon polyp types from colonoscopy images.
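The conventional ML branch above pairs a histogram-of-oriented-gradients descriptor with a random forest. The forest is omitted here, but the descriptor for a single cell can be sketched in a few lines of NumPy; the bin count and normalisation are illustrative choices, not the study's exact settings.

```python
import numpy as np

def hog_descriptor(img, n_bins=8):
    """A minimal histogram-of-oriented-gradients descriptor for one cell.

    Gradients are taken with central differences; each pixel votes its
    gradient magnitude into one of `n_bins` unsigned orientation bins.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned, [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())         # magnitude-weighted votes
    total = hist.sum()
    return hist / total if total > 0 else hist         # L1-normalised

# A vertical step edge produces horizontal gradients, so bin 0 dominates.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = hog_descriptor(img)
print(h.argmax())
```

In a full pipeline, such per-cell histograms from an image grid are concatenated into one feature vector and fed to the random forest classifier.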
Collapse
|
15
|
|
16
|
Misawa M, Kudo SE, Mori Y, Maeda Y, Ogawa Y, Ichimasa K, Kudo T, Wakamura K, Hayashi T, Miyachi H, Baba T, Ishida F, Itoh H, Oda M, Mori K. Current status and future perspective on artificial intelligence for lower endoscopy. Dig Endosc 2021; 33:273-284. [PMID: 32969051 DOI: 10.1111/den.13847] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Revised: 09/03/2020] [Accepted: 09/16/2020] [Indexed: 12/23/2022]
Abstract
The global incidence and mortality rate of colorectal cancer remain high. Colonoscopy is regarded as the gold-standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for non-experts to perform optical biopsy adequately. Third, recording of some quality indicators (e.g. cecal intubation, bowel preparation, and withdrawal speed), which are related to the adenoma detection rate, is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep learning, a data-driven machine learning technique, has made the development of computer-aided systems easier than with conventional machine learning techniques, and deep learning is currently considered the standard artificial intelligence engine of computer-aided diagnosis for colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits such as helping to educate poorly performing endoscopists and supporting real-time clinical decision-making. In this review, we focus on computer-aided diagnosis during colonoscopy reported by gastroenterologists and discuss its status, limitations, and future prospects.
Collapse
Affiliation(s)
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan; Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway
| | - Yasuharu Maeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Yushi Ogawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Katsuro Ichimasa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Toyoki Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Kunihiko Wakamura
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Takemasa Hayashi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Hideyuki Miyachi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Toshiyuki Baba
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Fumio Ishida
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Hayato Itoh
- Graduate School of Informatics, Nagoya University, Aichi, Japan
| | - Masahiro Oda
- Graduate School of Informatics, Nagoya University, Aichi, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Aichi, Japan
| |
Collapse
|
17
|
Wimmer G, Häfner M, Uhl A. Improving CNN training on endoscopic image data by extracting additionally training data from endoscopic videos. Comput Med Imaging Graph 2020; 86:101798. [PMID: 33075676 DOI: 10.1016/j.compmedimag.2020.101798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2019] [Revised: 06/23/2020] [Accepted: 09/24/2020] [Indexed: 02/07/2023]
Abstract
In this work, we present a technique to address one of the biggest problems for the application of convolutional neural networks (CNNs) in computer-assisted endoscopic image diagnosis: the insufficient amount of training data. Based on patches from endoscopic images of colonic polyps with given label information, our proposed technique acquires additional (labeled) training data by tracking the area shown in the patches through the corresponding endoscopic videos and by extracting additional image patches from frames of these areas. Thus, similar to the widely used augmentation strategies, additional training data is produced by adding images with different orientations, scales and points of view than the original images. However, contrary to augmentation techniques, we do not artificially produce image data but use real image data from videos recorded under different conditions (different viewpoints and image qualities). By means of our proposed method, and by filtering out all extracted images with insufficient image quality, we are able to increase the amount of labeled image data by a factor of 39. We show that our proposed method clearly and consistently improves the performance of CNNs.
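The patch-tracking idea above — locating a labeled patch in later video frames and keeping only high-quality matches — can be sketched with a brute-force normalized cross-correlation search. This is a toy illustration under assumed parameters (patch size, score threshold), not the authors' tracking pipeline.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def track_and_extract(frames, ref_patch, min_score=0.8):
    """Locate ref_patch in each frame by exhaustive NCC search; keep only
    matches above min_score, mimicking a simple quality filter."""
    ph, pw = ref_patch.shape
    extracted = []
    for frame in frames:
        best_score, best_ij = -1.0, (0, 0)
        for i in range(frame.shape[0] - ph + 1):
            for j in range(frame.shape[1] - pw + 1):
                s = ncc(frame[i:i + ph, j:j + pw], ref_patch)
                if s > best_score:
                    best_score, best_ij = s, (i, j)
        if best_score >= min_score:                  # discard low-quality matches
            i, j = best_ij
            extracted.append(frame[i:i + ph, j:j + pw].copy())
    return extracted

# Toy demo: the labeled patch reappears unchanged inside a later frame.
rng = np.random.default_rng(1)
ref = rng.random((6, 6))
frame = rng.random((16, 16))
frame[4:10, 7:13] = ref                              # plant the patch
patches = track_and_extract([frame], ref)
```

Each extracted patch would then be added to the training set with the label of the original reference patch.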
Collapse
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Jakob-Haringerstrasse 2, Salzburg 5020, Austria.
| | - Michael Häfner
- Department of Gastroenterology and Hepatology, St. Elisabeth Hospital, Landstraßer Hauptstraße 4a, Vienna A-1030, Austria
| | - Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Jakob-Haringerstrasse 2, Salzburg 5020, Austria
| |
Collapse
|
18
|
Itoh H, Nimura Y, Mori Y, Misawa M, Kudo SE, Hotta K, Ohtsuka K, Saito S, Saito Y, Ikematsu H, Hayashi Y, Oda M, Mori K. Robust endocytoscopic image classification based on higher-order symmetric tensor analysis and multi-scale topological statistics. Int J Comput Assist Radiol Surg 2020; 15:2049-2059. [PMID: 32935249 DOI: 10.1007/s11548-020-02255-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Accepted: 09/02/2020] [Indexed: 10/23/2022]
Abstract
PURPOSE An endocytoscope is a new type of endoscope that enables users to perform conventional endoscopic observation and ultramagnified observation at the cell level. Although endocytoscopy is expected to improve the cost-effectiveness of colonoscopy, endocytoscopic image diagnosis requires much knowledge and high-level experience from physicians. To circumvent this difficulty, we developed a robust endocytoscopic (EC) image classification method for the construction of a computer-aided diagnosis (CAD) system, since real-time CAD can resolve accuracy issues and reduce interobserver variability. METHOD We propose a novel feature extraction method by introducing higher-order symmetric tensor analysis to the computation of multi-scale topological statistics on an image, and we integrate this feature extraction with EC image classification. We experimentally evaluate the classification accuracy of our proposed method by comparing it with three deep learning methods. We conducted this comparison using our large-scale multi-hospital dataset of about 55,000 images from over 3800 patients. RESULTS Our proposed method achieved an average 90% classification accuracy for all the images in four hospitals, whereas the best deep learning method achieved 95% classification accuracy for images from only one hospital. With a rejection option, the proposed method achieved expert-level classification accuracy. These results demonstrate the robustness of our proposed method against pit pattern variations, including differences in colours, contrasts, shapes, and hospitals. CONCLUSIONS We developed a robust EC image classification method with novel feature extraction. This method is useful for the construction of a practical CAD system, since it has sufficient generalisation ability.
Collapse
Affiliation(s)
- Hayato Itoh
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan.
| | - Yukitaka Nimura
- Information Strategy Office, Information and Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
| | - Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuduki-ku, Yokohama, 224-8503, Japan
| | - Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuduki-ku, Yokohama, 224-8503, Japan
| | - Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuduki-ku, Yokohama, 224-8503, Japan
| | - Kinichi Hotta
- Division of Endoscopy, Shizuoka Cancer Center, Shimonagakubo 1007, Nagaizumi-cho, Sunto-gun, Shizuoka, 411-8777, Japan
| | - Kazuo Ohtsuka
- Department of Gastroenterology and Hepatology, Tokyo Medical and Dental University, Yushima 1-5-45, Bunkyo-ku, Tokyo, 113-8510, Japan
| | - Shoichi Saito
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Ariake 3-8-31, Koto-ku, Tokyo, 135-8550, Japan
| | - Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, Tsukiji 5-1-1, Chuo-ku, Tokyo, 104-0045, Japan
| | - Hiroaki Ikematsu
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwanoha 6-5-1, Kashiwa, 277-8577, Japan
| | - Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
| | - Masahiro Oda
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
| |
Collapse
|
19
|
Fu Y, Xue P, Ji H, Cui W, Dong E. Deep model with Siamese network for viable and necrotic tumor regions assessment in osteosarcoma. Med Phys 2020; 47:4895-4905. [DOI: 10.1002/mp.14397] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2019] [Revised: 07/01/2020] [Accepted: 07/10/2020] [Indexed: 01/06/2023] Open
Affiliation(s)
- Yu Fu
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Peng Xue
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Huizhong Ji
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Wentao Cui
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Enqing Dong
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| |
Collapse
|
20
|
A CNN CADx System for Multimodal Classification of Colorectal Polyps Combining WL, BLI, and LCI Modalities. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10155040] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Colorectal polyps are critical indicators of colorectal cancer (CRC). Blue Laser Imaging and Linked Color Imaging are two modalities that allow improved visualization of the colon. In conjunction with the Blue Laser Imaging (BLI) Adenoma Serrated International Classification (BASIC), endoscopists are capable of distinguishing benign and pre-malignant polyps. Despite these advancements, a high misclassification rate for pre-malignant colorectal polyps still prevails under this classification. This work proposes a computer-aided diagnosis (CADx) system that exploits the additional information contained in two novel imaging modalities, enabling more informative decision-making during colonoscopy. We train and benchmark six commonly used CNN architectures and compare the results with 19 endoscopists who employed the standard clinical classification model (BASIC). The proposed CADx system for classifying colorectal polyps achieves an area under the curve (AUC) of 0.97. Furthermore, we incorporate visual explanatory information together with a probability score, jointly computed from White Light, Blue Laser Imaging, and Linked Color Imaging. Our CADx system for automatic polyp malignancy classification facilitates future advances towards patient safety and may reduce time-consuming and costly histology assessment.
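One simple way to compute a single probability score jointly from several modalities, as the CADx system above does, is late fusion: averaging per-modality softmax outputs. The fusion rule and logit values below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_modalities(logits_by_modality):
    """Late fusion: average the per-modality class probabilities into one
    malignancy score (an assumed averaging scheme, for illustration only)."""
    probs = np.stack([softmax(l) for l in logits_by_modality])
    return probs.mean(axis=0)

# Hypothetical per-modality CNN logits for [benign, pre-malignant].
wl  = np.array([2.0, 0.1])   # White Light
bli = np.array([1.5, 0.5])   # Blue Laser Imaging
lci = np.array([0.2, 1.8])   # Linked Color Imaging
p = fuse_modalities([wl, bli, lci])
print(p)   # fused class probabilities, summing to 1
```

Averaging probabilities (rather than logits) keeps each modality's contribution bounded, so one overconfident network cannot dominate the fused score.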
Collapse
|
21
|
Yang Q, Guo Y, Ou X, Wang J, Hu C. Automatic T Staging Using Weakly Supervised Deep Learning for Nasopharyngeal Carcinoma on MR Images. J Magn Reson Imaging 2020; 52:1074-1082. [PMID: 32583578 DOI: 10.1002/jmri.27202] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 05/07/2020] [Accepted: 05/07/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Recent studies have shown that deep learning can help with automatic tumor staging. However, automatic nasopharyngeal carcinoma (NPC) staging is difficult due to the lack of large, slice-level annotated datasets. PURPOSE To develop a weakly-supervised deep-learning method to predict NPC patients' T stage without additional annotations. STUDY TYPE Retrospective. POPULATION/SUBJECTS In all, 1138 cases with NPC from 2010 to 2012 were enrolled, including a training set (n = 712) and a validation set (n = 426). FIELD STRENGTH/SEQUENCE 1.5T, T1-weighted images (T1WI), T2-weighted images (T2WI), contrast-enhanced T1-weighted images (CE-T1WI). ASSESSMENT We used a weakly-supervised deep-learning network to achieve automated T staging of NPC. T usually refers to the size and extent of the main tumor. The training set was employed to construct the deep-learning model, and the performance of the automated T staging model was evaluated in the validation set. The accuracy of the model was assessed by the receiver operating characteristic (ROC) curve. To further assess the performance of the deep-learning-based T score, progression-free survival (PFS) and overall survival (OS) analyses were performed. STATISTICAL TESTS The Sklearn package in Python was applied to calculate the area under the curve (AUC) of the ROC. The survcomp package was used for calculations and comparisons between C-indexes. The software SPSS was employed to conduct survival analysis and chi-square tests. RESULTS The accuracy of the deep-learning model was 75.59% in the validation set. The average AUC of the ROC curves for the different stages was 0.943. There were no significant differences between the C-indexes of PFS and OS from the deep-learning model and those from TNM staging, with P values of 0.301 and 0.425, respectively. DATA CONCLUSION This weakly-supervised deep-learning approach can perform fully automated T staging of NPC and achieve good prognostic performance.
LEVEL OF EVIDENCE 3 Technical Efficacy Stage: 2 J. Magn. Reson. Imaging 2020;52:1074-1082.
Collapse
Affiliation(s)
- Qing Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Ying Guo
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Xiaomin Ou
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Chaosu Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| |
Collapse
|
22
|
Adenocarcinoma Recognition in Endoscopy Images Using Optimized Convolutional Neural Networks. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10051650] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Colonoscopy, the endoscopic examination of the colon using a camera, is considered the most effective method for diagnosis of colorectal cancer. Colonoscopy is performed by a medical doctor who visually inspects the colon to find protruding or cancerous polyps. In some situations, these polyps are difficult to find by the human eye, which may lead to a misdiagnosis. In recent years, deep learning has revolutionized the field of computer vision due to its exemplary performance. This study proposes a Convolutional Neural Network (CNN) architecture for classifying colonoscopy images as normal, adenomatous polyps, or adenocarcinoma. The main objective of this study is to aid medical practitioners in the correct diagnosis of colorectal cancer. Our proposed CNN architecture consists of 43 convolutional layers and one fully-connected layer. We trained and evaluated the proposed network architecture on a colonoscopy image dataset with 410 test subjects provided by Gachon University Hospital. Our experimental results showed an accuracy of 94.39% over the 410 test subjects.
Collapse
|
23
|
Nogueira-Rodríguez A, López-Fernández H, Glez-Peña D. Deep Learning Techniques for Real Time Computer-Aided Diagnosis in Colorectal Cancer. Advances in Intelligent Systems and Computing 2020. [DOI: 10.1007/978-3-030-23946-6_27] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
24
|
Yang K, Zhou B, Yi F, Chen Y, Chen Y. Colorectal Cancer Diagnostic Algorithm Based on Sub-Patch Weight Color Histogram in Combination of Improved Least Squares Support Vector Machine for Pathological Image. J Med Syst 2019; 43:306. [PMID: 31410693 DOI: 10.1007/s10916-019-1429-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2019] [Accepted: 07/25/2019] [Indexed: 12/18/2022]
Abstract
In order to improve the diagnostic accuracy for colon cancer, a novel classification algorithm based on a sub-patch weight color histogram and an improved SVM is proposed, which has good approximation ability for complex pathological images. Our proposed algorithm combines a wavelet kernel SVM with color histograms to classify pathological images. First, the pathological image is divided into non-overlapping sub-patches, and the features of each sub-patch histogram are extracted. The global and local features are then fused by the sub-patch weighting algorithm. Next, a ReliefF-based forward selection algorithm is used to integrate color features and texture features so as to enhance the characterization capability for tumor cells. Finally, a Morlet wavelet kernel-based least squares support vector machine method is adopted to enhance the generalization ability of the model for small-sample, non-linear and high-dimensional pattern classification problems. Experimental results show that the proposed pathological diagnostic algorithm can achieve higher accuracy compared with existing comparison algorithms.
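The sub-patch weighting step described above — fusing a global color histogram with weighted per-sub-patch histograms — can be sketched as follows. The grid size, bin count, and weighting factor are assumed for illustration and are not the authors' settings.

```python
import numpy as np

def subpatch_weighted_histogram(img, grid=4, bins=8, w_local=0.5):
    """Fuse a global color histogram with weighted per-sub-patch histograms
    (illustrative sketch of the sub-patch weighting idea; weights assumed)."""
    h, w, _ = img.shape
    ph, pw = h // grid, w // grid

    def hist(region):
        v, _ = np.histogram(region, bins=bins, range=(0, 256))
        return v / max(v.sum(), 1)                   # normalise to a distribution

    global_h = hist(img)                             # whole-image color statistics
    local = []
    for i in range(grid):                            # grid x grid non-overlapping sub-patches
        for j in range(grid):
            local.append(hist(img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]))
    local = np.concatenate(local)
    # Weighted fusion of global and local descriptors into one feature vector.
    return np.concatenate([(1 - w_local) * global_h, w_local * local])

img = (np.arange(32 * 32 * 3).reshape(32, 32, 3) % 256).astype(np.uint8)
feat = subpatch_weighted_histogram(img)
print(feat.shape)   # 8 global bins + 16 patches x 8 bins -> (136,)
```

A feature vector of this kind would then be passed to the paper's wavelet kernel LS-SVM (or any other classifier) for training.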
Collapse
Affiliation(s)
- Kai Yang
- Department of Radiological Intervention, Shanghai Sixth People's Hospital East Campus Affiliated to Shanghai University of Medicine & Health Science, Shanghai, 201306, China; Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
| | - Bi Zhou
- Department of Radiological Intervention, Shanghai Sixth People's Hospital East Campus Affiliated to Shanghai University of Medicine & Health Science, Shanghai, 201306, China
| | - Fei Yi
- Department of Radiological Intervention, Shanghai Sixth People's Hospital East Campus Affiliated to Shanghai University of Medicine & Health Science, Shanghai, 201306, China
| | - Yan Chen
- Department of Radiological Intervention, Shanghai Sixth People's Hospital East Campus Affiliated to Shanghai University of Medicine & Health Science, Shanghai, 201306, China
| | - Yingsheng Chen
- Department of Radiological Intervention, Shanghai Sixth People's Hospital East Campus Affiliated to Shanghai University of Medicine & Health Science, Shanghai, 201306, China; Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
| |
Collapse
|
25
|
Cummins G, Cox BF, Ciuti G, Anbarasan T, Desmulliez MPY, Cochran S, Steele R, Plevris JN, Koulaouzidis A. Gastrointestinal diagnosis using non-white light imaging capsule endoscopy. Nat Rev Gastroenterol Hepatol 2019; 16:429-447. [PMID: 30988520 DOI: 10.1038/s41575-019-0140-z] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Capsule endoscopy (CE) has proved to be a powerful tool in the diagnosis and management of small bowel disorders since its introduction in 2001. However, white light imaging (WLI) is the principal technology used in clinical CE at present, and therefore, CE is limited to mucosal inspection, with diagnosis remaining reliant on visible manifestations of disease. The introduction of WLI CE has motivated a wide range of research to improve its diagnostic capabilities through integration with other sensing modalities. These developments have the potential to overcome the limitations of WLI through enhanced detection of subtle mucosal microlesions and submucosal and/or transmural pathology, providing novel diagnostic avenues. Other research aims to utilize a range of sensors to measure physiological parameters or to discover new biomarkers to improve the sensitivity, specificity and thus the clinical utility of CE. This multidisciplinary Review summarizes research into non-WLI CE devices by organizing them into a taxonomic structure on the basis of their sensing modality. The potential of these capsules to realize clinically useful virtual biopsy and computer-aided diagnosis (CADx) is also reported.
Collapse
Affiliation(s)
- Gerard Cummins
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK.
| | | | - Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
| | | | - Marc P Y Desmulliez
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
| | - Sandy Cochran
- School of Engineering, University of Glasgow, Glasgow, UK
| | - Robert Steele
- School of Medicine, University of Dundee, Dundee, UK
| | - John N Plevris
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
| | | |
Collapse
|
26
|
Kudo SE, Mori Y, Misawa M, Takeda K, Kudo T, Itoh H, Oda M, Mori K. Artificial intelligence and colonoscopy: Current status and future perspectives. Dig Endosc 2019; 31:363-371. [PMID: 30624835 DOI: 10.1111/den.13340] [Citation(s) in RCA: 70] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 12/04/2018] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND AIM Application of artificial intelligence in medicine is now attracting substantial attention. In the field of gastrointestinal endoscopy, computer-aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality regarding automated polyp detection and characterization (i.e. predicting the polyp's pathology). It could help prevent endoscopists from missing polyps as well as provide a precise optical diagnosis for those detected. Ultimately, these functions that CAD provides could produce a higher adenoma detection rate and reduce the cost of polypectomy for hyperplastic polyps. METHODS AND RESULTS Currently, research on automated polyp detection has been limited to experimental assessments using an algorithm based on ex vivo videos or static images. Performance for clinical use was reported to have >90% sensitivity with acceptable specificity. In contrast, research on automated polyp characterization seems to surpass that for polyp detection. Prospective studies of in vivo use of artificial intelligence technologies have been reported by several groups, some of which showed a >90% negative predictive value for differentiating diminutive (≤5 mm) rectosigmoid adenomas, which exceeded the threshold for optical biopsy. CONCLUSION We introduce the potential of using CAD for colonoscopy and describe the most recent conditions for regulatory approval for artificial intelligence-assisted medical devices.
Collapse
Affiliation(s)
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Kenichi Takeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Toyoki Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
| | - Hayato Itoh
- Graduate School of Informatics, Nagoya University, Aichi, Japan
| | - Masahiro Oda
- Graduate School of Informatics, Nagoya University, Aichi, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Aichi, Japan
| |
Collapse
|
27
|
Wimmer G, Gadermayr M, Wolkersdörfer G, Kwitt R, Tamaki T, Tischendorf J, Häfner M, Yoshida S, Tanaka S, Merhof D, Uhl A. Quest for the best endoscopic imaging modality for computer-assisted colonic polyp staging. World J Gastroenterol 2019; 25:1197-1209. [PMID: 30886503 PMCID: PMC6421240 DOI: 10.3748/wjg.v25.i10.1197] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Revised: 02/13/2019] [Accepted: 02/15/2019] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Previous studies have shown that high-definition endoscopy, high-magnification endoscopy and image enhancement technologies, such as chromoendoscopy and digital chromoendoscopy [narrow-band imaging (NBI), i-Scan], facilitate the detection and classification of colonic polyps during endoscopic sessions. However, there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps. In this work, we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging. AIM To assess which endoscopic imaging modalities are best suited for the computer-assisted staging of colonic polyps. METHODS In our experiments, we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions. For this purpose, we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database. The image databases were obtained using different imaging modalities. Two databases were obtained by high-definition endoscopy in combination with i-Scan technology (one with chromoendoscopy and one without chromoendoscopy). Three databases were obtained by high-magnification endoscopy (two using narrow-band imaging and one using chromoendoscopy). The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis. RESULTS Which imaging modalities achieve high results is generally dependent on the feature extraction method. For the high-definition image databases, we achieved overall classification rates of up to 79.2% with chromoendoscopy and 88.9% without chromoendoscopy. In the case of the database obtained by high-magnification chromoendoscopy, the classification rates were up to 81.4%. For the combination of high-magnification endoscopy with NBI, results of up to 97.4% for one database and up to 84% for the other were achieved. Non-neoplastic lesions were generally classified more accurately than neoplastic lesions. It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and partly have a stronger effect on the staging results than the imaging modality used. CONCLUSION Chromoendoscopy has a negative impact on the results of the methods. NBI is better suited than chromoendoscopy. High-definition and high-magnification endoscopy are equally suited.
Collapse
Affiliation(s)
- Georg Wimmer
- Department of Computer Sciences, University of Salzburg, Salzburg 5020, Austria
| | - Michael Gadermayr
- Interdisciplinary Imaging and Vision Institute Aachen, RWTH Aachen, Aachen 52074, Germany
| | - Gernot Wolkersdörfer
- Department of Internal Medicine I, Paracelsus Medical University/Salzburger Landeskliniken (SALK), Salzburg 5020, Austria
| | - Roland Kwitt
- Department of Computer Sciences, University of Salzburg, Salzburg 5020, Austria
| | - Toru Tamaki
- Department of Information Engineering, Graduate School of Engineering, Hiroshima University, Hiroshima 7398527, Japan
| | - Jens Tischendorf
- Internal Medicine and Gastroenterology, University Hospital Aachen, Würselen 52146, Germany
| | - Michael Häfner
- Department of Gastroenterology and Hepatology, Krankenhaus St. Elisabeth, Vienna 1080, Austria
| | - Shigeto Yoshida
- Department of Endoscopy and Medicine, Graduate School of Biomedical and Health Science, Hiroshima University, Hiroshima 7348551, Japan
| | - Shinji Tanaka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima 7348551, Japan
| | - Dorit Merhof
- Interdisciplinary Imaging and Vision Institute Aachen, RWTH Aachen, Aachen 52074, Germany
| | - Andreas Uhl
- Department of Computer Sciences, University of Salzburg, Salzburg 5020, Austria
| |
Collapse
|
28
|
Ayadi W, Elhamzi W, Charfi I, Atri M. A hybrid feature extraction approach for brain MRI classification based on Bag-of-words. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.10.010] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
|
29
|
Wimmer G, Gadermayr M, Kwitt R, Häfner M, Tamaki T, Yoshida S, Tanaka S, Merhof D, Uhl A. Training of polyp staging systems using mixed imaging modalities. Comput Biol Med 2018; 102:251-259. [PMID: 29773226 DOI: 10.1016/j.compbiomed.2018.05.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2018] [Revised: 04/24/2018] [Accepted: 05/01/2018] [Indexed: 02/08/2023]
Abstract
BACKGROUND In medical image data sets, the number of images is usually quite small. The small number of training samples does not allow classifiers to be trained properly, which leads to massive overfitting to the training data. In this work, we investigate whether increasing the number of training samples by merging datasets from different imaging modalities can effectively improve predictive performance. Further, we investigate whether the features extracted by the employed image representations differ between imaging modalities and whether domain adaptation helps to overcome these differences. METHOD We employ twelve feature extraction methods to differentiate between non-neoplastic and neoplastic lesions. Experiments are performed using four different classifier training strategies, each with a different combination of training data. The specifically designed setup for these experiments enables a fair comparison between the four training strategies. RESULTS Combining high-definition with high-magnification training data and chromoscopic with non-chromoscopic training data partly improved the results. Domain adaptation had only a small effect on the results compared with using non-adapted training data. CONCLUSION Merging datasets from different imaging modalities turned out to be partially beneficial for combining high-definition endoscopic data with high-magnification endoscopic data and for combining chromoscopic with non-chromoscopic data. NBI and chromoendoscopy, on the other hand, are mostly too different with respect to the extracted features to combine images of these two modalities for classifier training.
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
- Roland Kwitt
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria
- Michael Häfner
- St. Elisabeth Hospital, Landstraßer Hauptstraße 4a, A-1030 Vienna, Austria
- Toru Tamaki
- Hiroshima University, 1-4-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8527, Japan
- Shigeto Yoshida
- Hiroshima University, 1-4-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8527, Japan
- Shinji Tanaka
- Hiroshima University, 1-4-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8527, Japan
- Dorit Merhof
- RWTH Aachen University, Templergraben 55, 52056 Aachen, Germany
- Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
|
30
|
Wimmer G, Vécsei A, Häfner M, Uhl A. Fisher encoding of convolutional neural network features for endoscopic image classification. J Med Imaging (Bellingham) 2018; 5:034504. [PMID: 30840751 PMCID: PMC6152583 DOI: 10.1117/1.jmi.5.3.034504] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Accepted: 08/21/2018] [Indexed: 12/14/2022] Open
Abstract
We propose an approach for the automated diagnosis of celiac disease (CD) and colonic polyps (CP) based on applying Fisher encoding to the activations of convolutional layers. In our experiments, three different convolutional neural network (CNN) architectures (AlexNet, VGG-f, and VGG-16) are applied to three endoscopic image databases (one CD database and two CP databases). For each network architecture, we perform experiments using a version of the net that is pretrained on the ImageNet database, as well as a version of the net that is trained on a specific endoscopic image database. The Fisher representations of convolutional layer activations are classified using support vector machines. Additionally, experiments are performed by concatenating the Fisher representations of several layers to combine the information of these layers. We will show that our proposed CNN-Fisher approach clearly outperforms other CNN- and non-CNN-based approaches and that our approach requires no training on the target dataset, which results in substantial time savings compared with other CNN-based approaches.
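The core encoding step lends itself to a compact sketch. Below is a minimal NumPy illustration of Fisher-vector encoding of convolutional-layer activations under a diagonal-covariance GMM, keeping only the gradient with respect to the component means (a common simplification) plus the usual power and L2 normalisation; the function name and interface are ours, not the paper's exact pipeline.

```python
import numpy as np

def fisher_vector(descriptors, weights, means, variances):
    """Fisher encoding of local descriptors under a diagonal-covariance GMM.

    Only the gradient with respect to the component means is used here,
    which is a common simplification of the full Fisher vector.
    descriptors : (N, D) array of local features (e.g. conv activations)
    weights     : (K,)   GMM mixture weights
    means       : (K, D) GMM means
    variances   : (K, D) GMM diagonal variances
    returns     : (K * D,) Fisher vector, power- and L2-normalised
    """
    X = np.asarray(descriptors, dtype=float)
    N, D = X.shape

    # Log-likelihood of each descriptor under each Gaussian component.
    diff = X[:, None, :] - means[None, :, :]             # (N, K, D)
    log_p = (-0.5 * np.sum(diff ** 2 / variances, axis=2)
             - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
             + np.log(weights))                           # (N, K)

    # Soft-assignment posteriors gamma_{nk} via a stable softmax.
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)             # (N, K)

    # Gradient w.r.t. the means, aggregated over all descriptors.
    G = np.einsum("nk,nkd->kd", gamma, diff / np.sqrt(variances))
    G /= N * np.sqrt(weights)[:, None]                    # (K, D)

    fv = G.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalisation
    norm = np.linalg.norm(fv)
    return fv / norm if norm > 0 else fv
```

The resulting fixed-length vector is what would be fed to the support vector machine, optionally after concatenating the encodings of several layers.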
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Salzburg, Austria
- Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Salzburg, Austria
|
31
|
Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy. Med Image Anal 2018; 48:230-243. [PMID: 29990688 DOI: 10.1016/j.media.2018.06.005] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Revised: 05/04/2018] [Accepted: 06/07/2018] [Indexed: 02/07/2023]
Abstract
Colorectal cancer is the fourth leading cause of cancer deaths worldwide and the second leading cause in the United States. The risk of colorectal cancer can be mitigated by the identification and removal of premalignant lesions through optical colonoscopy. Unfortunately, conventional colonoscopy misses more than 20% of the polyps that should be removed, due in part to poor contrast of lesion topography. Imaging depth and tissue topography during a colonoscopy is difficult because of the size constraints of the endoscope and the deforming mucosa. Most existing methods make unrealistic assumptions which limit accuracy and sensitivity. In this paper, we present a method that avoids these restrictions, using a joint deep convolutional neural network-conditional random field (CNN-CRF) framework for monocular endoscopy depth estimation. Estimated depth is used to reconstruct the topography of the surface of the colon from a single image. We train the unary and pairwise potential functions of a CRF in a CNN on synthetic data, generated by developing an endoscope camera model and rendering over 200,000 images of an anatomically realistic colon. We validate our approach with real endoscopy images from a porcine colon, transferred to a synthetic-like domain via adversarial training, with ground truth from registered computed tomography measurements. The CNN-CRF approach estimates depths with a relative error of 0.152 for synthetic endoscopy images and 0.242 for real endoscopy images. We show that the estimated depth maps can be used for reconstructing the topography of the mucosa from conventional colonoscopy images. This approach can easily be integrated into existing endoscopy systems and provides a foundation for improving computer-aided detection algorithms for detection, segmentation and classification of lesions.
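For readers reproducing the evaluation, the reported relative errors (0.152 synthetic, 0.242 real) correspond to a simple per-pixel metric. The sketch below assumes the common mean |d_est - d_gt| / d_gt definition; the paper's exact formulation may differ, so treat this as an illustrative stand-in.

```python
import numpy as np

def mean_relative_error(depth_est, depth_gt, eps=1e-8):
    """Mean relative depth error over all pixels: mean(|d_est - d_gt| / d_gt).

    depth_est, depth_gt : arrays of the same shape (estimated and ground-truth
    depth maps); eps guards against division by zero at empty pixels.
    """
    d_est = np.asarray(depth_est, dtype=float)
    d_gt = np.asarray(depth_gt, dtype=float)
    return float(np.mean(np.abs(d_est - d_gt) / (d_gt + eps)))
```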
|
32
|
van der Sommen F, Curvers WL, Nagengast WB. Novel Developments in Endoscopic Mucosal Imaging. Gastroenterology 2018; 154:1876-1886. [PMID: 29462601 DOI: 10.1053/j.gastro.2018.01.070] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Revised: 12/28/2017] [Accepted: 01/06/2018] [Indexed: 12/20/2022]
Abstract
Endoscopic techniques such as high-definition and optical chromoendoscopy have had an enormous impact on endoscopy practice. Since these techniques allow assessment of the most subtle morphological mucosal abnormalities, further improvement in endoscopic practice lies in increasing the detection efficacy of endoscopists. Several new developments could assist in this. First, web-based training tools could improve the skills of endoscopists in detecting and classifying lesions. Second, incorporation of computer-aided detection will be the next step in raising the endoscopic quality of the captured data. These systems will aid the endoscopist in interpreting the increasing amount of visual information in endoscopic images by providing real-time objective second reading. In addition, developments in the field of molecular imaging open opportunities to add functional imaging data, visualizing biological parameters of the gastrointestinal tract, to white-light morphology imaging. For the successful implementation of the abovementioned techniques, a true multi-disciplinary approach is of vital importance.
Affiliation(s)
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Wouter L Curvers
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, The Netherlands
- Wouter B Nagengast
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
|
33
|
Sonoyama S, Hirakawa T, Tamaki T, Kurita T, Raytchev B, Kaneda K, Koide T, Yoshida S, Kominami Y, Tanaka S. Transfer learning for Bag-of-Visual words approach to NBI endoscopic image classification. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:785-8. [PMID: 26736379 DOI: 10.1109/embc.2015.7318479] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
We address the problem of classifying endoscopic images taken by different (e.g., old and new) endoscopes. Our proposed method formulates the problem as a constrained optimization that estimates a linear transformation between feature vectors (Bag-of-Visual-Words histograms) in a transfer learning framework. Experimental results show that the proposed method works much better than the case without feature transformation.
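A minimal stand-in for the estimated feature transformation: ridge-regularised least squares mapping source histograms onto paired target histograms. The closed-form solver and the regulariser `lam` are our assumptions; the paper formulates a constrained optimization rather than plain ridge regression.

```python
import numpy as np

def estimate_transform(source_hists, target_hists, lam=1e-3):
    """Estimate a linear map A such that source @ A ~ target.

    source_hists, target_hists : (N, D) paired Bag-of-Visual-Words histograms
    from the old and new endoscopes; lam is a ridge regulariser.
    """
    X = np.asarray(source_hists, dtype=float)
    Y = np.asarray(target_hists, dtype=float)
    D = X.shape[1]
    # Closed-form ridge solution: A = (X^T X + lam I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Y)
```

Once A is estimated from paired data, histograms from the old endoscope can be mapped into the new endoscope's feature space before classification.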
|
34
|
Shi B, Grimm LJ, Mazurowski MA, Baker JA, Marks JR, King LM, Maley CC, Hwang ES, Lo JY. Prediction of Occult Invasive Disease in Ductal Carcinoma in Situ Using Deep Learning Features. J Am Coll Radiol 2018; 15:527-534. [PMID: 29398498 PMCID: PMC5837927 DOI: 10.1016/j.jacr.2017.11.036] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2017] [Accepted: 11/27/2017] [Indexed: 01/23/2023]
Abstract
PURPOSE The aim of this study was to determine whether deep features extracted from digital mammograms using a pretrained deep convolutional neural network are prognostic of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. METHODS In this retrospective study, digital mammographic magnification views were collected for 99 subjects with DCIS at biopsy, 25 of which were subsequently upstaged to invasive cancer. A deep convolutional neural network model that was pretrained on nonmedical images (eg, animals, plants, instruments) was used as the feature extractor. Through a statistical pooling strategy, deep features were extracted at different levels of convolutional layers from the lesion areas, without sacrificing the original resolution or distorting the underlying topology. A multivariate classifier was then trained to predict which tumors contain occult invasive disease. This was compared with the performance of traditional "handcrafted" computer vision (CV) features previously developed specifically to assess mammographic calcifications. The generalization performance was assessed using Monte Carlo cross-validation and receiver operating characteristic curve analysis. RESULTS Deep features were able to distinguish DCIS with occult invasion from pure DCIS, with an area under the receiver operating characteristic curve of 0.70 (95% confidence interval, 0.68-0.73). This performance was comparable with the handcrafted CV features (area under the curve = 0.68; 95% confidence interval, 0.66-0.71) that were designed with prior domain knowledge. CONCLUSIONS Despite being pretrained on only nonmedical images, the deep features extracted from digital mammograms demonstrated comparable performance with handcrafted CV features for the challenging task of predicting DCIS upstaging.
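The statistical pooling strategy can be sketched as per-channel statistics over a convolutional activation map, which yields a fixed-length descriptor regardless of lesion size, without resizing or distorting the image. The specific choice of mean/std/max statistics below is an illustrative assumption, not necessarily the authors' exact pooling.

```python
import numpy as np

def statistical_pool(feature_maps):
    """Pool a convolutional activation map into a fixed-length descriptor.

    feature_maps : (C, H, W) activations from one convolutional layer over
    the lesion area; the spatial extent may vary per lesion.
    returns      : (3 * C,) descriptor of per-channel mean, std and max.
    """
    F = np.asarray(feature_maps, dtype=float)
    F = F.reshape(F.shape[0], -1)  # flatten spatial dims per channel
    return np.concatenate([F.mean(axis=1), F.std(axis=1), F.max(axis=1)])
```

Descriptors pooled this way from several layers can be concatenated and passed to the multivariate classifier.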
Affiliation(s)
- Bibo Shi
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina.
- Lars J Grimm
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina
- Maciej A Mazurowski
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina
- Jay A Baker
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina
- Jeffrey R Marks
- Department of Surgery, Duke University School of Medicine, Durham, North Carolina
- Lorraine M King
- Department of Surgery, Duke University School of Medicine, Durham, North Carolina
- Carlo C Maley
- Biodesign Center for Personalized Diagnostics and School of Life Sciences, Arizona State University, Tempe, Arizona; Centre for Evolution and Cancer, Institute of Cancer Research, London, United Kingdom
- E Shelley Hwang
- Department of Surgery, Duke University School of Medicine, Durham, North Carolina
- Joseph Y Lo
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina
|
35
|
Liu X, Wang C, Bai J, Liao G, Zhao Y. Hue-texture-embedded region-based model for magnifying endoscopy with narrow-band imaging image segmentation based on visual features. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 145:53-66. [PMID: 28552126 DOI: 10.1016/j.cmpb.2017.04.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2016] [Revised: 02/27/2017] [Accepted: 04/12/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVE Magnification endoscopy with narrow-band imaging (ME-NBI) has become a feasible tool for detecting diseases within the human gastrointestinal tract, and is increasingly used by physicians to search for pathological abnormalities associated with gastric cancer, such as precancerous lesions, early gastric cancer and advanced cancer. To improve the reliability of disease detection, computer-assisted methodologies are needed to efficiently analyze and process ME-NBI images. However, traditional computer vision methodologies, mainly segmentation methods, do not adapt well to the specific visual characteristics of the NBI scenario. METHODS In this paper, two energy functional terms based on specific visual characteristics of ME-NBI images are integrated into the framework of the Chan-Vese model to construct the Hue-texture-embedded model. On the one hand, a global hue energy functional is proposed, representing global color information extracted from the H channel (HSI color space). On the other hand, a texture energy is introduced, representing local microvascular textures extracted by the PIF with an adaptive threshold in the S channel. RESULTS The results of our model were compared with the Chan-Vese model and with manual annotations marked by physicians, using the F-measure and the false positive rate (FPR). The Hue-texture-embedded region-based model achieved an average F-measure of 0.61 and an FPR of 0.16, whereas the Chan-Vese model achieved an average F-measure of 0.52 and an FPR of 0.32. Experiments showed that the Hue-texture-embedded region-based model outperforms the Chan-Vese model in terms of efficiency, universality and lesion detection. CONCLUSIONS Better segmentation results are acquired by the Hue-texture-embedded region-based model compared with the traditional region-based active contour in five cases: chronic gastritis, intestinal metaplasia and atrophy, low grade neoplasia, high grade neoplasia and early gastric cancer.
In the future, we plan to extend the universality of the proposed methodology to segment other lesions such as intramucosal cancer. Once these issues are solved, we can proceed with the classification of clinically relevant diseases in ME-NBI images to implement a fully automatic computer-assisted diagnosis system.
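The global hue energy can be sketched as a Chan-Vese-style piecewise-constant data term evaluated on the H channel. The texture term and the curve evolution are omitted here; the function below is an illustrative reduction under those assumptions, not the authors' implementation.

```python
import numpy as np

def hue_region_energy(hue, mask):
    """Chan-Vese-style data term on the hue channel.

    Sum of squared deviations of H from its mean inside and outside the
    region mask; minimising this over masks favours two hue-homogeneous
    regions.
    hue  : (H, W) hue channel (HSI colour space)
    mask : (H, W) boolean array, True inside the contour
    """
    hue = np.asarray(hue, dtype=float)
    inside, outside = hue[mask], hue[~mask]
    c1 = inside.mean() if inside.size else 0.0   # mean hue inside
    c2 = outside.mean() if outside.size else 0.0  # mean hue outside
    return float(np.sum((inside - c1) ** 2) + np.sum((outside - c2) ** 2))
```

A segmentation that matches the hue structure of the image drives this term to zero, while a mismatched contour leaves residual energy.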
Affiliation(s)
- Xiaoqi Liu
- College of Computer Science, Chongqing University, Chongqing 400044, China.
- Chengliang Wang
- College of Computer Science, Chongqing University, Chongqing 400044, China; Key Laboratory of Dependable Service Computing in Cyber Physical Society, Chongqing University, Ministry of Education, China
- Jianying Bai
- Department of Gastroenterology, Second Affiliated Hospital, Third Military Medical University, Chongqing, China
- Guobin Liao
- Department of Gastroenterology, Second Affiliated Hospital, Third Military Medical University, Chongqing, China
- Yanjun Zhao
- Computer Science Department, Troy University, Alabama, USA
|
36
|
Floer M, Meister T. Endoscopic Improvement of the Adenoma Detection Rate during Colonoscopy - Where Do We Stand in 2015? Digestion 2017; 93:202-13. [PMID: 26986225 DOI: 10.1159/000442464] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/28/2015] [Accepted: 11/14/2015] [Indexed: 02/04/2023]
Abstract
BACKGROUND The presence of colorectal adenomas is considered a major risk factor for colorectal cancer development. The implementation of screening colonoscopy programs in the Western world has led to a substantial reduction in colorectal cancer deaths. Many efforts have been made to reduce adenoma miss rates through the application of new endoscopic devices and techniques for better adenoma visualization. SUMMARY This special review gives the readership an overview of current endoscopic innovations that can aid in increasing the adenoma detection rate (ADR) during colonoscopy. These innovations include the use of devices like EndoCuff® and EndoRings® as well as new technical equipment like the third-eye endoscope® and full-spectrum endoscopy (FUSE®). KEY MESSAGE Technical improvements and newly developed accessories are able to improve the ADR. However, additional costs and a willingness to invest in potentially expensive equipment might be necessary. Investigator-dependent skills remain the backbone of adenoma detection.
Affiliation(s)
- Martin Floer
- Department of Gastroenterology, HELIOS Albert-Schweitzer-Hospital Northeim, Northeim, Germany
|
37
|
Accuracy of computer-aided diagnosis based on narrow-band imaging endocytoscopy for diagnosing colorectal lesions: comparison with experts. Int J Comput Assist Radiol Surg 2017; 12:757-766. [PMID: 28247214 DOI: 10.1007/s11548-017-1542-4] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Accepted: 02/20/2017] [Indexed: 02/08/2023]
Abstract
PURPOSE Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that the need for a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of human expert and trainee endoscopists. METHODS We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. RESULTS ECV-CAD had higher overall diagnostic accuracy than trainees (87.8 vs 63.4%; [Formula: see text]), but similar to experts (87.8 vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than trainees (93.5 vs 71.7%; [Formula: see text]) and comparable to experts (93.5 vs 90.8%; [Formula: see text]). CONCLUSIONS ECV-CAD showed better diagnostic accuracy than trainee endoscopists and was comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less-experienced endoscopists.
|
38
|
Evaluation of i-Scan Virtual Chromoendoscopy and Traditional Chromoendoscopy for the Automated Diagnosis of Colonic Polyps. ACTA ACUST UNITED AC 2017. [DOI: 10.1007/978-3-319-54057-3_6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
39
|
40
|
Zhang R, Zheng Y, Mak TWC, Yu R, Wong SH, Lau JYW, Poon CCY. Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain. IEEE J Biomed Health Inform 2016; 21:41-47. [PMID: 28114040 DOI: 10.1109/jbhi.2016.2635662] [Citation(s) in RCA: 151] [Impact Index Per Article: 18.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Although polypectomy at an early stage reduces CRC incidence, 90% of the polyps are small and diminutive, where removal of them poses risks to patients that may outweigh the benefits. Correctly detecting and predicting polyp type during colonoscopy allows endoscopists to resect and discard the tissue without submitting it for histology, saving time and costs. Nevertheless, human visual observation of early-stage polyps varies. Therefore, this paper aims at developing a fully automatic algorithm to detect and classify hyperplastic and adenomatous colorectal polyps. Adenomatous polyps should be removed, whereas distal diminutive hyperplastic polyps are considered clinically insignificant and may be left in situ. A novel transfer learning application is proposed, utilizing features learned from big nonmedical datasets of 1.4-2.5 million images with a deep convolutional neural network. The endoscopic images we collected for the experiment were taken under random lighting conditions, zooming and optical magnification, including 1104 endoscopic nonpolyp images taken under both white-light and narrowband imaging (NBI) endoscopy and 826 NBI endoscopic polyp images, of which 263 images were hyperplasia and 563 were adenoma as confirmed by histology. The proposed method first identified polyp images from nonpolyp images and then predicted the polyp histology. When compared with visual inspection by endoscopists, the results of this study show that the proposed method has similar precision (87.3% versus 86.4%) but a higher recall rate (87.6% versus 77.0%) and a higher accuracy (85.9% versus 74.3%). In conclusion, automatic algorithms can assist endoscopists in identifying polyps that are adenomatous but have been incorrectly judged as hyperplasia and, therefore, enable timely resection of these polyps at an early stage before they develop into invasive cancer.
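The detect-then-classify cascade can be sketched with stand-in classifiers. Below, nearest-centroid classifiers over precomputed feature vectors replace the transferred low-level CNN features and the actual classifiers; all function names and the "none" label for non-polyp frames are our illustrative choices.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Return (labels, centroids) for a nearest-centroid classifier."""
    labels = np.unique(y)
    cents = np.stack([X[y == c].mean(axis=0) for c in labels])
    return labels, cents

def nearest_centroid_predict(X, labels, cents):
    """Assign each row of X the label of its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

def two_stage_predict(X, stage1, stage2):
    """Cascade: stage 1 flags polyp frames, stage 2 assigns histology
    (hyperplastic vs adenoma) only to flagged frames; the rest get 'none'."""
    out = np.array(["none"] * len(X), dtype=object)
    is_polyp = nearest_centroid_predict(X, *stage1) == "polyp"
    if is_polyp.any():
        out[is_polyp] = nearest_centroid_predict(X[is_polyp], *stage2)
    return out
```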
|
41
|
Jiang M, Zhang S, Huang J, Yang L, Metaxas DN. Scalable histopathological image analysis via supervised hashing with multiple features. Med Image Anal 2016; 34:3-12. [PMID: 27521299 DOI: 10.1016/j.media.2016.07.011] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2016] [Revised: 04/08/2016] [Accepted: 07/28/2016] [Indexed: 11/18/2022]
Abstract
Histopathology is crucial to diagnosis of cancer, yet its interpretation is tedious and challenging. To facilitate this procedure, content-based image retrieval methods have been developed as case-based reasoning tools. Especially, with the rapid growth of digital histopathology, hashing-based retrieval approaches are gaining popularity due to their exceptional efficiency and scalability. Nevertheless, few hashing-based histopathological image analysis methods perform feature fusion, despite the fact that it is a common practice to improve image retrieval performance. In response, we exploit joint kernel-based supervised hashing (JKSH) to integrate complementary features in a hashing framework. Specifically, hashing functions are designed based on linearly combined kernel functions associated with individual features. Supervised information is incorporated to bridge the semantic gap between low-level features and high-level diagnosis. An alternating optimization method is utilized to learn the kernel combination and hashing functions. The obtained hashing functions compress multiple high-dimensional features into tens of binary bits, enabling fast retrieval from a large database. Our approach is extensively validated on 3121 breast-tissue histopathological images by distinguishing between actionable and benign cases. It achieves 88.1% retrieval precision and 91.3% classification accuracy within 16.5 ms query time, comparing favorably with traditional methods.
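The retrieval mechanics, compressing features to binary codes and ranking by Hamming distance, can be sketched with random hyperplane projections standing in for the learned, kernel-based JKSH hash functions:

```python
import numpy as np

def hash_codes(X, hyperplanes):
    """Compress feature vectors into binary codes by hyperplane thresholding.

    X           : (N, D) feature vectors
    hyperplanes : (B, D) projection directions (random here; learned in JKSH)
    returns     : (N, B) array of 0/1 bits
    """
    return (np.asarray(X) @ hyperplanes.T > 0).astype(np.uint8)

def hamming_retrieve(query_code, db_codes, k=3):
    """Indices of the k database items closest to the query in Hamming distance."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:k]
```

Because the codes are tens of bits rather than thousands of floating-point dimensions, the whole database can be scanned in milliseconds, which is the property the paper's query times rely on.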
Affiliation(s)
- Menglin Jiang
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA.
- Junzhou Huang
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
- Lin Yang
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Dimitris N Metaxas
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
|
42
|
Lei B, Chen S, Ni D, Wang T. Discriminative Learning for Alzheimer's Disease Diagnosis via Canonical Correlation Analysis and Multimodal Fusion. Front Aging Neurosci 2016; 8:77. [PMID: 27242506 PMCID: PMC4868852 DOI: 10.3389/fnagi.2016.00077] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 03/29/2016] [Indexed: 01/03/2023] Open
Abstract
To address the challenging task of diagnosing neurodegenerative brain disease, such as Alzheimer's disease (AD) and mild cognitive impairment (MCI), we propose a novel method using discriminative feature learning and canonical correlation analysis (CCA) in this paper. Specifically, multimodal features and their CCA projections are concatenated together to represent each subject, and hence both individual and shared information of AD disease are captured. A discriminative learning with multilayer feature hierarchy is designed to further improve performance. Also, hybrid representation is proposed to maximally explore data from multiple modalities. A novel normalization method is devised to tackle the intra- and inter-subject variations from the multimodal data. Based on our extensive experiments, our method achieves an accuracy of 96.93% [AD vs. normal control (NC)], 86.57% (MCI vs. NC), and 82.75% [MCI converter (MCI-C) vs. MCI non-converter (MCI-NC)], respectively, which outperforms the state-of-the-art methods in the literature.
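The CCA step can be sketched via a whitened cross-covariance SVD; in the paper the resulting projections are concatenated with the original multimodal features to capture both shared and individual information. The regulariser and component count below are illustrative assumptions.

```python
import numpy as np

def cca_projections(X, Y, n_components=2, reg=1e-6):
    """Canonical correlation analysis via whitened cross-covariance SVD.

    X, Y : (N, Dx) and (N, Dy) paired feature matrices from two modalities.
    Returns the projections of X and Y onto their top canonical directions.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    Wx = inv_sqrt(Cxx) @ U[:, :n_components]
    Wy = inv_sqrt(Cyy) @ Vt.T[:, :n_components]
    return X @ Wx, Y @ Wy
```

Concatenating `[X, Y, X @ Wx, Y @ Wy]` per subject gives a hybrid representation in the spirit of the paper's feature construction.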
Affiliation(s)
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Shenzhen, China
- Siping Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Shenzhen, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Shenzhen, China
|
43
|
Hames SC, Ardigò M, Soyer HP, Bradley AP, Prow TW. Automated Segmentation of Skin Strata in Reflectance Confocal Microscopy Depth Stacks. PLoS One 2016; 11:e0153208. [PMID: 27088865 PMCID: PMC4835045 DOI: 10.1371/journal.pone.0153208] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 03/25/2016] [Indexed: 11/24/2022] Open
Abstract
Reflectance confocal microscopy (RCM) is a powerful tool for in-vivo examination of a variety of skin diseases. However, current use of RCM depends on qualitative examination by a human expert to look for specific features in the different strata of the skin. Developing approaches to quantify features in RCM imagery requires an automated understanding of which anatomical stratum is present in a given en-face section. This work presents an automated approach that uses a bag-of-features representation of en-face sections and a logistic regression classifier to classify sections into one of four classes (stratum corneum, viable epidermis, dermal-epidermal junction and papillary dermis). This approach was developed and tested using a dataset of 308 depth stacks from 54 volunteers in two age groups (20–30 and 50–70 years of age). The classification accuracy on the test set was 85.6%. The mean absolute errors in determining the interface depth for the stratum corneum/viable epidermis, viable epidermis/dermal-epidermal junction and dermal-epidermal junction/papillary dermis interfaces were 3.1 μm, 6.0 μm and 5.5 μm respectively. The probabilities predicted by the classifier in the test set showed that the classifier learned an effective model of the anatomy of human skin.
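The bag-of-features representation can be sketched as a normalised histogram of nearest codebook words over a section's local descriptors (the codebook would typically come from k-means on training descriptors; here it is simply passed in):

```python
import numpy as np

def bag_of_features(descriptors, codebook):
    """Encode local descriptors of one en-face section as a word histogram.

    descriptors : (N, D) local feature vectors from one section
    codebook    : (K, D) visual words (e.g. k-means centroids)
    returns     : (K,) normalised histogram, the input to the classifier
    """
    descriptors = np.asarray(descriptors, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    # Assign each descriptor to its nearest visual word.
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(d, axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

One such histogram per en-face section is then classified (here, into one of the four strata) by logistic regression.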
Affiliation(s)
- Samuel C. Hames
- Dermatology Research Centre, The University of Queensland, School of Medicine, Translational Research Institute, Brisbane, Australia
- Marco Ardigò
- San Gallicano Dermatological Institute—IRCCS, Rome, Italy
- H. Peter Soyer
- Dermatology Research Centre, The University of Queensland, School of Medicine, Translational Research Institute, Brisbane, Australia
- Andrew P. Bradley
- The University of Queensland, School of Information Technology and Electrical Engineering, Brisbane, Australia
- Tarl W. Prow
- Dermatology Research Centre, The University of Queensland, School of Medicine, Translational Research Institute, Brisbane, Australia
|
44
|
Hirakawa T, Tamaki T, Raytchev B, Kaneda K, Koide T, Yoshida S, Kominami Y, Tanaka S. Defocus-aware Dirichlet particle filter for stable endoscopic video frame recognition. Artif Intell Med 2016; 68:1-16. [PMID: 27052678 DOI: 10.1016/j.artmed.2016.03.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2015] [Revised: 03/17/2016] [Accepted: 03/17/2016] [Indexed: 10/22/2022]
Abstract
BACKGROUND AND OBJECTIVE A computer-aided system for colorectal endoscopy could provide endoscopists with valuable diagnostic support during examinations. A straightforward means of providing an objective diagnosis in real time would be to use classifiers to identify individual parts of every endoscopic video frame, but the results can be highly unstable due to out-of-focus frames. To address this problem, we propose a defocus-aware Dirichlet particle filter (D-DPF) that combines a particle filter with a Dirichlet distribution and defocus information. METHODS We develop a particle filter with a Dirichlet distribution that represents the state transition and likelihood of each video frame. We also incorporate additional defocus information by using isolated pixel ratios to sample from a Rayleigh distribution. RESULTS We tested the performance of the proposed method using synthetic and real endoscopic videos with a frame-wise classifier trained on 1671 images of colorectal endoscopy. Two synthetic videos comprising 600 frames were used for comparison with a Kalman filter and with D-DPF without defocus information, and D-DPF was shown to be more robust against the instability of frame-wise classification results. Computation time was approximately 88 ms/frame, which is sufficient for real-time applications. We applied our method to 33 endoscopic videos and showed that it can effectively smooth highly unstable probability curves under the actual defocus present in endoscopic videos. CONCLUSION The proposed D-DPF is a useful tool for smoothing the unstable results of frame-wise classification of endoscopic videos, supporting real-time diagnosis during endoscopic examinations.
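A minimal sketch of Dirichlet-transition particle filtering of frame-wise class probabilities, in the spirit of the D-DPF idea; the paper's defocus term (isolated pixel ratios fed through a Rayleigh distribution) is omitted, and the particle count and concentration parameter are illustrative assumptions:

```python
import numpy as np

def dirichlet_particle_smooth(frame_probs, n_particles=200, kappa=20.0, seed=0):
    """Smooth noisy per-frame class probabilities with a particle filter
    whose state transition is a Dirichlet distribution."""
    rng = np.random.default_rng(seed)
    n_classes = frame_probs.shape[1]
    particles = np.full((n_particles, n_classes), 1.0 / n_classes)
    smoothed = []
    for obs in frame_probs:
        # state transition: jitter each particle around its previous state
        particles = np.array([rng.dirichlet(kappa * p + 1e-6) for p in particles])
        # likelihood: agreement with the frame-wise classifier output
        w = particles @ obs
        w /= w.sum()
        smoothed.append(w @ particles)      # weighted mean = smoothed estimate
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(smoothed)

# noisy classifier output that in truth favours class 0 on every frame
rng = np.random.default_rng(1)
noisy = np.clip(np.tile([0.8, 0.1, 0.1], (30, 1)) + rng.normal(0, 0.05, (30, 3)),
                1e-3, None)
noisy /= noisy.sum(axis=1, keepdims=True)
smoothed = dirichlet_particle_smooth(noisy)
```

Over successive frames the resampling step accumulates evidence, so the smoothed curve converges toward the dominant class even when individual frames are noisy.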
Affiliation(s)
- Tsubasa Hirakawa
- Department of Information Engineering, Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan.
- Toru Tamaki
- Department of Information Engineering, Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan
- Bisser Raytchev
- Department of Information Engineering, Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan
- Kazufumi Kaneda
- Department of Information Engineering, Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan
- Tetsushi Koide
- Research Institute for Nanodevice and Bio Systems (RNBS), Hiroshima University, 1-4-2 Kagamiyama, Higashi-Hiroshima 739-8527, Japan
- Shigeto Yoshida
- Department of Gastroenterology, Hiroshima General Hospital of West Japan Railway Company, 3-1-36 Futabanosato, Higashiku, Hiroshima 732-0057, Japan
- Yoko Kominami
- Department of Endoscopy, Hiroshima University Hospital, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Shinji Tanaka
- Department of Endoscopy, Hiroshima University Hospital, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
45
Wimmer G, Tamaki T, Tischendorf JJW, Häfner M, Yoshida S, Tanaka S, Uhl A. Directional wavelet based features for colonic polyp classification. Med Image Anal 2016; 31:16-36. [PMID: 26948110 DOI: 10.1016/j.media.2016.02.001] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2015] [Revised: 02/08/2016] [Accepted: 02/09/2016] [Indexed: 01/27/2023]
Abstract
In this work, various wavelet-based methods such as the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied to the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, each acquired using a different imaging modality (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database of chromoscopy high-magnification images. To evaluate the suitability of the wavelet-based methods for the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared within a common framework. Wavelet transforms have often been applied successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information from the subbands of the wavelet-based methods. Most of the 25 approaches in total have already been published in other texture classification contexts, so a further aim is to assess and compare their classification performance within a common framework. Three of the 25 approaches are novel: they extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet-based methods are applied to our databases so that their results can be compared with those of the wavelet-based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification performance, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the shearlet transform. These three wavelet-based transforms combined with Weibull features even outperform the state-of-the-art methods on most of the databases. We also show that the Weibull distribution is better suited to modeling the subband coefficient distribution than other commonly used probability distributions such as the Gaussian and the generalized Gaussian. This work thus gives a broad summary of wavelet-based methods for colonic polyp classification, and the large number of endoscopic polyp databases used in our experiments lends high significance to the achieved results.
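Fitting Weibull distribution parameters to subband coefficients, as the abstract describes, can be sketched with the standard maximum-likelihood fixed-point iteration (the paper's exact estimator and subband layout are not specified here; this is a generic two-parameter Weibull MLE on synthetic data):

```python
import numpy as np

def weibull_fit(x, iters=100):
    """Two-parameter Weibull MLE: damped fixed-point iteration for the
    shape k, then the scale lam in closed form."""
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]                      # Weibull support is x > 0
    lnx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k = 0.5 * (k + 1.0 / ((xk * lnx).sum() / xk.sum() - lnx.mean()))
    lam = (x ** k).mean() ** (1.0 / k)
    return k, lam

# synthetic "subband coefficients" drawn from Weibull(shape=1.5, scale=2.0)
rng = np.random.default_rng(0)
coeffs = 2.0 * rng.weibull(1.5, size=20000)
k, lam = weibull_fit(coeffs)
```

The fitted (k, lam) pair per subband then serves as the feature vector for the classifier.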
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
- Toru Tamaki
- Hiroshima University, Department of Information Engineering, Graduate School of Engineering, 1-4-1 Kagamiyama, Higashi-hiroshima, Hiroshima 739-8527, Japan
- J J W Tischendorf
- Medical Department III (Gastroenterology, Hepatology and Metabolic Diseases), RWTH Aachen University Hospital, Paulwelsstr. 30, 52072 Aachen, Germany
- Michael Häfner
- St. Elisabeth Hospital, Landstraßer Hauptstraße 4a, A-1030 Vienna, Austria
- Shigeto Yoshida
- Hiroshima University Hospital, Department of Endoscopy, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Shinji Tanaka
- Hiroshima University Hospital, Department of Endoscopy, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
46
Tajbakhsh N, Gurudu SR, Liang J. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:630-44. [PMID: 26462083 DOI: 10.1109/tmi.2015.2487997] [Citation(s) in RCA: 235] [Impact Index Per Article: 29.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The proposed CAD system has been tested on two polyp databases: CVC-ColonDB, a public database containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and the ASU-Mayo database, our own collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We evaluated the system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, it achieves a sensitivity of 88.0% on CVC-ColonDB and 48% on the ASU-Mayo database. In addition, we evaluated the system using a new detection latency analysis, where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.
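A heavily simplified stand-in for the voting step described above (one deterministic vote per edge pixel cast along its inward normal, rather than the paper's probabilistic scheme; geometry and vote radius are illustrative assumptions):

```python
import numpy as np

def vote_for_center(edge_points, normals, shape, radius):
    """Each edge pixel casts one vote `radius` pixels along its unit normal;
    the accumulator peak is taken as the polyp-candidate centre."""
    acc = np.zeros(shape)
    for (y, x), (ny, nx) in zip(edge_points, normals):
        vy, vx = int(round(y + radius * ny)), int(round(x + radius * nx))
        if 0 <= vy < shape[0] and 0 <= vx < shape[1]:
            acc[vy, vx] += 1
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return peak, acc

# synthetic circular "polyp" boundary of radius 10 centred at (32, 32)
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.stack([32 + 10 * np.sin(theta), 32 + 10 * np.cos(theta)], axis=1)
normals = np.stack([-np.sin(theta), -np.cos(theta)], axis=1)   # inward normals
(cy, cx), acc = vote_for_center(pts, normals, (64, 64), radius=10)
print(cy, cx)  # → 32 32: all votes coincide at the circle centre
```

Curvilinear polyp boundaries concentrate their votes near the centre of curvature, which is why removing non-polyp edges before voting matters so much.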
47
Hiroyasu T, Hayashinuma K, Ichikawa H, Yagi N. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:789-92. [PMID: 26736380 DOI: 10.1109/embc.2015.7318480] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer in images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly, owing to the influence of noise and halation. We therefore propose a new preprocessing method that applies a non-local means filter for de-noising and contrast-limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in the images is improved by the proposed method. Furthermore, the resulting color map delineates the lesion zone more accurately.
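The denoise-then-equalize pipeline can be sketched with deliberately simplified stand-ins (a 3x3 mean filter instead of non-local means, and global histogram equalization instead of CLAHE, to keep the example dependency-free):

```python
import numpy as np

def box_denoise(img, k=3):
    """k x k mean filter: a crude stand-in for non-local means de-noising
    (NLM averages patches by similarity rather than by adjacency)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hist_equalize(img):
    """Global histogram equalization of an 8-bit image; CLAHE additionally
    operates on tiles and clips the histogram to limit contrast gain."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    return (cdf[img.astype(np.uint8)] * 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(100, 156, size=(32, 32)).astype(np.uint8)  # low-contrast frame
eq = hist_equalize(box_denoise(img).astype(np.uint8))
```

Denoising first prevents the equalization step from amplifying noise along with mucosal texture.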
48
49
Li W, Coats M, Zhang J, McKenna SJ. Discriminating dysplasia: Optical tomographic texture analysis of colorectal polyps. Med Image Anal 2015; 26:57-69. [DOI: 10.1016/j.media.2015.08.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2014] [Revised: 07/27/2015] [Accepted: 08/13/2015] [Indexed: 12/29/2022]
50
Abstract
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model to identify discriminative characteristics between different medical images, using a Pruned Dictionary based on Latent Semantic Topic description; we refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic, to evaluate how strongly the word is connected to that topic. The latent topics are learnt from the relationship between the images and words and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method that measures overall-word significance by considering the relationship between all latent topics and words. Words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets, where it showed improved retrieval accuracy and efficiency.
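The iterative ranking of overall-word significance can be sketched as power iteration over a topic-word matrix (a sketch in the spirit of PD-LST; the paper's exact update rules and normalizations are assumptions here):

```python
import numpy as np

def overall_word_significance(topic_word, iters=50):
    """Propagate significance back and forth between latent topics and
    visual words until the word ranking stabilizes."""
    # row-normalize: topic-word significance of each word within a topic
    W = topic_word / topic_word.sum(axis=1, keepdims=True)
    word_sig = np.full(W.shape[1], 1.0 / W.shape[1])
    for _ in range(iters):
        topic_sig = W @ word_sig          # topics supported by significant words
        word_sig = W.T @ topic_sig        # words prominent in significant topics
        word_sig /= word_sig.sum()
    return word_sig

rng = np.random.default_rng(0)
tw = rng.random((5, 20))      # 5 latent topics over a 20-word dictionary
tw[:, 0] += 5.0               # word 0 dominates every topic
sig = overall_word_significance(tw)
print(sig.argmax())  # → 0
```

Pruning would then keep only the words with the highest significance values as the retrieval dictionary.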