1
Okumura T, Imai K, Misawa M, Kudo SE, Hotta K, Ito S, Kishida Y, Takada K, Kawata N, Maeda Y, Yoshida M, Yamamoto Y, Minamide T, Ishiwatari H, Sato J, Matsubayashi H, Ono H. Evaluating false-positive detection in a computer-aided detection system for colonoscopy. J Gastroenterol Hepatol 2024;39:927-934. [PMID: 38273460] [DOI: 10.1111/jgh.16491]
Abstract
BACKGROUND AND AIM Computer-aided detection (CADe) systems can efficiently detect polyps during colonoscopy. However, false-positive (FP) activation is a major limitation of CADe. We aimed to compare the rate and causes of FP using CADe before and after an update designed to reduce FP. METHODS We analyzed CADe-assisted colonoscopy videos recorded between July 2022 and October 2022. The number and causes of FPs and excessive time spent by the endoscopist on FP (ET) were compared pre- and post-update using 1:1 propensity score matching. RESULTS During the study period, 191 colonoscopy videos (94 and 97 in the pre- and post-update groups, respectively) were recorded. Propensity score matching resulted in 146 videos (73 in each group). The mean number of FPs and median ET per colonoscopy were significantly lower in the post-update group than those in the pre-update group (4.2 ± 3.7 vs 18.1 ± 11.1; P < 0.001 and 0 vs 16 s; P < 0.001, respectively). Mucosal tags, bubbles, and folds had the strongest association with decreased FP post-update (pre-update vs post-update: 4.3 ± 3.6 vs 0.4 ± 0.8, 0.32 ± 0.70 vs 0.04 ± 0.20, and 8.6 ± 6.7 vs 1.6 ± 1.7, respectively). There was no significant decrease in the true positive rate (post-update vs pre-update: 95.0% vs 99.2%; P = 0.09) or the adenoma detection rate (post-update vs pre-update: 52.1% vs 49.3%; P = 0.87). CONCLUSIONS The updated CADe can reduce FP without impairing polyp detection. A reduction in FP may help relieve the burden on endoscopists.
Affiliation(s)
- Taishi Okumura
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Kenichiro Imai
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Kinichi Hotta
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Sayo Ito
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Kazunori Takada
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Noboru Kawata
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Yuki Maeda
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Masao Yoshida
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Yoichi Yamamoto
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Junya Sato
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Hiroyuki Ono
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
2
Xu C, Fan K, Mo W, Cao X, Jiao K. Dual ensemble system for polyp segmentation with submodels adaptive selection ensemble. Sci Rep 2024;14:6152. [PMID: 38485963] [PMCID: PMC10940608] [DOI: 10.1038/s41598-024-56264-2]
Abstract
Colonoscopy is one of the main methods to detect colon polyps, and it is widely used to prevent and diagnose colon cancer. With the rapid development of computer vision, deep learning-based semantic segmentation methods for colon polyps have been widely researched. However, the accuracy and stability of existing methods on colon polyp segmentation tasks leave room for further improvement. In addition, how to select appropriate sub-models in ensemble learning for the colon polyp segmentation task still needs to be explored. To solve these problems, we first exploit multiple complementary high-level semantic features through the Multi-Head Control Ensemble. Then, to solve the sub-model selection problem in training, we propose the SDBH-PSO Ensemble for sub-model selection and for optimizing ensemble weights on different datasets. The experiments were conducted on the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, ETIS-LaribPolypDB and PolypGen. The results show that the DET-Former, constructed from the Multi-Head Control Ensemble and the SDBH-PSO Ensemble, consistently improves accuracy across different datasets. In the experiments, the Multi-Head Control Ensemble demonstrated superior feature-fusion capability and the SDBH-PSO Ensemble demonstrated excellent sub-model selection capability. The sub-model selection capability of the SDBH-PSO Ensemble will retain significant reference value and practical utility as deep learning networks evolve.
Affiliation(s)
- Cun Xu
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kefeng Fan
- China Electronics Standardization Institute, Beijing, 100007, China
- Wei Mo
- Guilin University of Electronic Technology, Guilin, 541000, China
- Xuguang Cao
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kaijie Jiao
- Guilin University of Electronic Technology, Guilin, 541000, China
3
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: current challenges and future directions. Neural Netw 2024;169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automated decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers with a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
4
Gayathri R, Suchand Sandeep CS, Vijayan C, Murukeshan VM. Random lasing for bimodal imaging and detection of tumor. Biosensors (Basel) 2023;13:1003. [PMID: 38131763] [PMCID: PMC10742073] [DOI: 10.3390/bios13121003]
Abstract
The interaction of light with biological tissues is an intriguing area of research that has led to the development of numerous techniques and technologies. The randomness inherent in biological tissues can trap light through multiple scattering events and provide optical feedback to generate random lasing emission. The emerging random lasing signals carry sensitive information about the scattering dynamics of the medium, which can help in identifying abnormalities in tissues, while simultaneously functioning as an illumination source for imaging. The early detection and imaging of tumor regions are crucial for the successful treatment of cancer, which is one of the major causes of mortality worldwide. In this paper, a bimodal spectroscopic and imaging system, capable of identifying and imaging tumor polyps as small as 1 mm², is proposed and illustrated using a phantom sample for the early diagnosis of tumor growth. The far-field imaging capabilities of the developed system can enable non-contact in vivo inspections. The integration of random lasing principles with sensing and imaging modalities has the potential to provide an efficient, minimally invasive, and cost-effective means of early detection and treatment of various diseases, including cancer.
Affiliation(s)
- R. Gayathri
- Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering, Nanyang Technological University (NTU), Singapore 639798, Singapore
- C. S. Suchand Sandeep
- Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering, Nanyang Technological University (NTU), Singapore 639798, Singapore
- C. Vijayan
- Department of Physics, Indian Institute of Technology Madras (IITM), Chennai 600036, India
- V. M. Murukeshan
- Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering, Nanyang Technological University (NTU), Singapore 639798, Singapore
5
Chen H, Gao C, Li H, Li C, Wang C, Bai Z, Wu Y, Yao H, Li Y, Gao F, Shao XD, Qi X. Factors of easy and difficult cecal intubation during unsedated colonoscopy. Rev Esp Enferm Dig 2023;115:546-552. [PMID: 37114392] [DOI: 10.17235/reed.2023.9283/2022]
Abstract
BACKGROUND AND AIMS difficulty of cecal intubation should be a main indicator of the need for sedated colonoscopy and a skilled endoscopist. The present study aimed to explore the factors associated with easy and difficult cecal intubation in unsedated colonoscopy. METHODS all consecutive patients who underwent unsedated colonoscopy at our department by the same endoscopist from December 3, 2020 to August 30, 2022 were retrospectively collected. Age, gender, body mass index (BMI), reasons for colonoscopy, position change, Boston Bowel Preparation Scale score, cecal intubation time (CIT) and major colonoscopic findings were analyzed. CIT < 5 min, CIT 5-10 min and CIT > 10 min or failed cecal intubation were defined as easy, moderate and difficult cecal intubation, respectively. Logistic regression analyses were performed to identify independent factors associated with easy and difficult cecal intubation. RESULTS overall, 1,281 patients were included. The proportions of easy and difficult cecal intubation were 29.2 % (374/1,281) and 27.2 % (349/1,281), respectively. Multivariate logistic regression analysis found that age ≤ 50 years, male gender, BMI > 23.0 kg/m² and the absence of position change were independently associated with easy cecal intubation, and that age > 50 years, female gender, BMI ≤ 23.0 kg/m², position change, and insufficient bowel preparation were independently associated with difficult cecal intubation. CONCLUSIONS some convenient factors independently associated with easy and difficult cecal intubation have been identified, which will be potentially helpful to determine whether a colonoscopy should be sedated and a skilled endoscopist should be selected. The current findings should be further validated in large-scale prospective studies.
Affiliation(s)
- Hongxin Chen
- Gastroenterology, General Hospital of Northern Theater Command, China
- Cong Gao
- Gastroenterology, General Hospital of Northern Theater Command, China
- Hongyu Li
- Gastroenterology, General Hospital of Northern Theater Command, China
- Chengkun Li
- Gastroenterology, General Hospital of Northern Theater Command, China
- Chunmei Wang
- Gastroenterology, General Hospital of Northern Theater Command, China
- Zhaohui Bai
- Gastroenterology, General Hospital of Northern Theater Command, China
- Yanyan Wu
- Gastroenterology, General Hospital of Northern Theater Command, China
- Haijuan Yao
- Gastroenterology, General Hospital of Northern Theater Command, China
- Yingchao Li
- Gastroenterology, General Hospital of Northern Theater Command, China
- Fei Gao
- Gastroenterology, General Hospital of Northern Theater Command, China
- Xiao-Dong Shao
- Gastroenterology, General Hospital of Northern Theater Command, China
- Xingshun Qi
- Gastroenterology, General Hospital of Northern Theater Command, China
6
De Carvalho T, Kader R, Brandao P, González-Bueno Puyal J, Lovat LB, Mountney P, Stoyanov D. Automated colonoscopy withdrawal phase duration estimation using cecum detection and surgical tasks classification. Biomed Opt Express 2023;14:2629-2644. [PMID: 37342682] [PMCID: PMC10278633] [DOI: 10.1364/boe.485069]
Abstract
Colorectal cancer is the third most common type of cancer, with almost two million new cases worldwide. It develops from neoplastic polyps, most commonly adenomas, which can be removed during colonoscopy to prevent colorectal cancer from occurring. Unfortunately, up to a quarter of polyps are missed during colonoscopies. Studies have shown that polyp detection during a procedure correlates with the time spent searching for polyps, called the withdrawal time. The different phases of the procedure (cleaning, therapeutic, and exploration phases) make it difficult to precisely measure the withdrawal time, which should only include the exploration phase. Separating this from the other phases requires manual time measurement during the procedure, which is rarely performed. In this study, we propose a method to automatically detect the cecum, which marks the start of the withdrawal phase, and to classify the different phases of the colonoscopy, allowing precise estimation of the final withdrawal time. This is achieved using a ResNet for both detection and classification, trained with two public datasets and a private dataset composed of 96 full procedures. Out of 19 testing procedures, 18 have their withdrawal time correctly estimated, with a mean error of 5.52 seconds per minute per procedure.
Affiliation(s)
- Thomas De Carvalho
- Odin Vision, London, UK
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Rawen Kader
- Division of Surgery & Interventional Science, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Juana González-Bueno Puyal
- Odin Vision, London, UK
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Laurence B. Lovat
- Division of Surgery & Interventional Science, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
7
Development and deployment of computer-aided real-time feedback for improving quality of colonoscopy in a multi-center clinical trial. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104609]
8
Shahid B, Abbas M, Ur Rehman A, Ul Abideen Z. IAPC2: Improved and Automatic Classification of Polyp for Colorectal Cancer. 2023 International Conference on Business Analytics for Technology and Security (ICBATS) 2023. [DOI: 10.1109/icbats57792.2023.10111431]
Affiliation(s)
- Bisma Shahid
- Riphah International University, Department of Computer Science, Lahore, Pakistan
- Maria Abbas
- Riphah International University, Department of Computer Science, Lahore, Pakistan
- Abd Ur Rehman
- Riphah International University, Department of Computer Science, Lahore, Pakistan
9
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023;20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
10
Wang K, Liu L, Fu X, Liu L, Peng W. RA-DENet: Reverse Attention and Distractions Elimination Network for polyp segmentation. Comput Biol Med 2023;155:106704. [PMID: 36848801] [DOI: 10.1016/j.compbiomed.2023.106704]
Abstract
To address the problems of polyps of different shapes, sizes, and colors, low-contrast polyps, various noise distractions, and blurred edges in colonoscopy images, we propose the Reverse Attention and Distraction Elimination Network, which includes Improved Reverse Attention, Distraction Elimination, and Feature Enhancement modules. First, we input the images from the polyp image set and use the five levels of polyp features and the global polyp feature extracted from the Res2Net-based backbone as the input to the Improved Reverse Attention, obtaining augmented representations of salient and non-salient regions that capture the different shapes of polyps and distinguish low-contrast polyps from the background. Then, the augmented representations of salient and non-salient areas are fed into the Distraction Elimination to obtain a refined polyp feature free of false-positive and false-negative distractions, thereby eliminating noise. Finally, the extracted low-level polyp feature is used as the input to the Feature Enhancement to obtain an edge feature that supplements the missing edge information of the polyp. The polyp segmentation result is output by combining the edge feature with the refined polyp feature. The proposed method is evaluated on five polyp datasets and compared with current polyp segmentation models. Our model improves the mDice to 0.760 on the most challenging dataset (ETIS).
Affiliation(s)
- Kaiqi Wang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China
- Li Liu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China; Computer Technology Application Key Lab of Yunnan Province, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Xiaodong Fu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China; Computer Technology Application Key Lab of Yunnan Province, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Lijun Liu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China; Computer Technology Application Key Lab of Yunnan Province, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Wei Peng
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China; Computer Technology Application Key Lab of Yunnan Province, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
11
Satrya GB, Ramatryana INA, Shin SY. Compressive sensing of medical images based on HSV color space. Sensors (Basel) 2023;23:2616. [PMID: 36904821] [PMCID: PMC10006955] [DOI: 10.3390/s23052616]
Abstract
Recently, compressive sensing (CS) schemes have been studied as a new compression modality that exploits the sensing matrix in the measurement scheme and the reconstruction scheme to recover the compressed signal. In addition, CS is exploited in medical imaging (MI) to support efficient sampling, compression, transmission, and storage of large amounts of MI. Although CS of MI has been extensively investigated, the effect of color space in CS of MI has not yet been studied in the literature. To fulfill these requirements, this article proposes a novel CS of MI based on the hue-saturation-value (HSV) color space, using spread spectrum Fourier sampling (SSFS) and sparsity averaging with reweighted analysis (SARA). An HSV loop that performs SSFS is proposed to obtain a compressed signal. Next, HSV-SARA is proposed to reconstruct MI from the compressed signal. A set of color MIs is investigated, such as colonoscopy, magnetic resonance imaging of the brain and eye, and wireless capsule endoscopy images. Experiments were performed to show the superiority of HSV-SARA over benchmark methods in terms of signal-to-noise ratio (SNR), structural similarity (SSIM) index, and measurement rate (MR). The experiments showed that a color MI with a resolution of 256×256 pixels could be compressed by the proposed CS at an MR of 0.1, with improvements of 15.17% in SNR and 2.53% in SSIM. The proposed HSV-SARA can be a solution for color medical image compression and sampling to improve the image acquisition of medical devices.
Affiliation(s)
- I Nyoman Apraz Ramatryana
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Republic of Korea
- Soo Young Shin
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Republic of Korea
12
Hsu CM, Hsu CC, Hsu ZM, Chen TH, Kuo T. Intraprocedure artificial intelligence alert system for colonoscopy examination. Sensors (Basel) 2023;23:1211. [PMID: 36772251] [PMCID: PMC9921893] [DOI: 10.3390/s23031211]
Abstract
Colonoscopy is a valuable tool for preventing and reducing the incidence and mortality of colorectal cancer. Although several computer-aided colorectal polyp detection and diagnosis systems have been proposed for clinical application, many remain susceptible to interference problems, including low image clarity, unevenness, and low accuracy in the analysis of dynamic images; these drawbacks affect the robustness and practicality of these systems. This study proposed an intraprocedure alert system for colonoscopy examination developed on the basis of deep learning. The proposed system features blurred image detection, foreign body detection, and polyp detection modules facilitated by convolutional neural networks. The training and validation datasets included high-quality images and low-quality images, including blurred images and those containing folds, fecal matter, and opaque water. For the detection of blurred images and images containing folds, fecal matter, and opaque water, the accuracy rate was 96.2%. Furthermore, the study results indicated a per-polyp detection accuracy of 100% when the system was applied to video images. The recall rates for high-quality image frames and polyp image frames were 95.7% and 92%, respectively. The overall alert accuracy rate and the false-positive rate for low-quality video images obtained through per-frame analysis were 95.3% and 0.18%, respectively. The proposed system can be used to alert colonoscopists of the need to slow their procedural speed or to perform flushing or lumen inflation in cases where the colonoscope is being moved too rapidly, where fecal residue is present in the intestinal tract, or where the colon has been inadequately distended.
Affiliation(s)
- Chen-Ming Hsu
- Department of Gastroenterology and Hepatology, Taoyuan Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Chien-Chang Hsu
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, New Taipei 242, Taiwan
- Zhe-Ming Hsu
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, New Taipei 242, Taiwan
- Tsung-Hsing Chen
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Tony Kuo
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
13
Ali S. Where do we stand in AI for endoscopic image analysis? Deciphering gaps and future directions. NPJ Digit Med 2022;5:184. [PMID: 36539473] [PMCID: PMC9767933] [DOI: 10.1038/s41746-022-00733-3]
Abstract
Recent developments in deep learning have enabled data-driven algorithms that can reach human-level performance and beyond. The development and deployment of medical image analysis methods have several challenges, including data heterogeneity due to population diversity and different device manufacturers. In addition, more input from experts is required for a reliable method development process. While the exponential growth in clinical imaging data has enabled deep learning to flourish, data heterogeneity, multi-modality, and rare or inconspicuous disease cases still need to be explored. Because endoscopy is highly operator-dependent, with grim clinical outcomes in some disease cases, reliable and accurate automated system guidance can improve patient care. Most existing methods need to be more generalisable to unseen target data, patient population variability, and variable disease appearances. The paper reviews recent works on endoscopic image analysis with artificial intelligence (AI) and emphasises the current unmatched needs in this field. Finally, it outlines the future directions for clinically relevant complex AI solutions to improve patient outcomes.
Affiliation(s)
- Sharib Ali
- School of Computing, University of Leeds, LS2 9JT Leeds, UK
14
Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022;4:e899-e905. [PMID: 36427951] [DOI: 10.1016/s2589-7500(22)00186-8]
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review on methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, and Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. Of the final 15 principles recommended here, five affect population, one intervention, two comparator, one reference standard, and one both reference standard and comparator; four are applicable to outcome and one to study design. Principles from the literature were useful to address biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
Affiliation(s)
- Farhad Shokraneh
- King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan
- Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker
- Department of Computing, Imperial College London, London, UK
- Peter Garrett
- Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston
- Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK

15
Rao HB, Sastry NB, Venu RP, Pattanayak P. The role of artificial intelligence based systems for cost optimization in colorectal cancer prevention programs. Front Artif Intell 2022; 5:955399. [PMID: 36248620] [PMCID: PMC9563712] [DOI: 10.3389/frai.2022.955399]
Abstract
Colorectal Cancer (CRC) has seen a dramatic increase in incidence globally. In 2019, colorectal cancer accounted for 1.15 million deaths and 24.28 million disability-adjusted life-years (DALYs) worldwide. In India, the annual incidence rate (AAR) for colon cancer was 4.4 per 100,000. There has been a steady rise in the prevalence of CRC in India, which may be attributed to urbanization, mass migration of population, westernization of diet and lifestyle practices, and a rise in obesity and metabolic risk factors that place the population at a higher risk of CRC. Moreover, CRC in India differs from that described in Western countries, with a higher proportion of young patients and more patients presenting with an advanced stage. This may be due to poor access to specialized healthcare and socio-economic factors. Early identification of adenomatous colonic polyps, which are well-recognized pre-cancerous lesions, at the time of screening colonoscopy has been shown to be the most effective measure for CRC prevention. However, colonic polyps are frequently missed during colonoscopy; moreover, these screening programs require manpower, time and resources for processing resected polyps, which may hamper penetration and efficacy in mid- to low-income countries. In the last decade, significant progress has been made in the automatic detection of colonic polyps by multiple AI-based systems. With the advent of better AI methodology, the focus has shifted from mere detection to accurate discrimination and diagnosis of colonic polyps. These systems, once validated, could usher in a new era in CRC prevention programs centered around "Leave in-situ" and "Resect and discard" strategies. These strategies hinge on the specificity and accuracy of AI-based systems in correctly identifying the pathological diagnosis of the polyps, thereby providing the endoscopist with real-time information to make a clinical decision of either leaving the lesion in-situ (mucosal polyps) or resecting and discarding the polyp (hyperplastic polyps). The major advantage of employing these strategies would be cost optimization of CRC prevention programs while ensuring good clinical outcomes. The adoption of these AI-based systems in the national cancer prevention program of India, in accordance with the mandate to increase technology integration, could prove to be cost-effective and enable implementation of CRC prevention programs at the population level. This level of penetration could potentially reduce the incidence of CRC and improve patient survival by enabling early diagnosis and treatment. In this review, we highlight key advancements made in the field of AI in the identification of polyps during colonoscopy and explore the role of AI-based systems in cost optimization during the universal implementation of CRC prevention programs in the context of mid-income countries like India.
Affiliation(s)
- Harshavardhan B. Rao
- Department of Gastroenterology, M.S. Ramaiah Medical College, Ramaiah University of Applied Sciences, Bangalore, Karnataka, India
- *Correspondence: Harshavardhan B. Rao
- Nandakumar Bidare Sastry
- Department of Gastroenterology, M.S. Ramaiah Medical College, Ramaiah University of Applied Sciences, Bangalore, Karnataka, India
- Rama P. Venu
- Department of Gastroenterology, Amrita Institute of Medical Sciences and Research Centre, Kochi, Kerala, India
- Preetiparna Pattanayak
- Department of Gastroenterology, M.S. Ramaiah Medical College, Ramaiah University of Applied Sciences, Bangalore, Karnataka, India

16
Hu X, Lei J, Hu X, Sun F, Liu D. Dark-field scattering image compression using a sparse matrix. Applied Optics 2022; 61:8072-8080. [PMID: 36255930] [DOI: 10.1364/ao.460860]
Abstract
Dark-field scattering imaging is an imaging method with high contrast and high sensitivity. It has been widely employed in optical component evaluation, biomedical detection, semiconductor manufacturing, etc. However, useless background information causes data redundancy, which adds unnecessary time and space costs in processing. The problem is particularly serious in high-resolution imaging systems for large-aperture components. Dark-field scattering image compression (DFSIC), based on the compressed sparse row format, is proposed to solve this problem. The compression method realizes local data access for a sparse matrix. Experimental results show that the average time-space consumption of the DFSIC is reduced to less than 2% of the raw image structure, and is still kept below 68% in dense cases. This method provides a more efficient program implementation for dark-field scattering imaging and shows potential for large-scale optical detection applications.
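The compressed-sparse-row idea behind DFSIC can be sketched in a few lines: only the non-zero (bright scattering) pixels of a mostly-dark frame are stored, together with their column indices and per-row offsets. This is an illustrative sketch of the general CSR layout, not the authors' DFSIC implementation:

```python
# Minimal CSR encode/decode for a mostly-dark 2D image (list of rows).

def csr_encode(image):
    """Encode a dense 2D image as (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in image:
        for j, v in enumerate(row):
            if v != 0:  # keep only non-zero (scattering) pixels
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # cumulative count of stored pixels
    return values, col_idx, row_ptr

def csr_decode(values, col_idx, row_ptr, n_cols):
    """Reconstruct the dense image from its CSR representation."""
    image = []
    for r in range(len(row_ptr) - 1):
        row = [0] * n_cols
        for k in range(row_ptr[r], row_ptr[r + 1]):
            row[col_idx[k]] = values[k]
        image.append(row)
    return image

# A sparse dark-field frame: a few bright defects on a dark background.
frame = [[0, 0, 0, 9],
         [0, 5, 0, 0],
         [0, 0, 0, 0]]
vals, cols, ptrs = csr_encode(frame)
assert csr_decode(vals, cols, ptrs, 4) == frame  # lossless round trip
assert len(vals) == 2                            # 2 pixels stored, not 12
```

The `row_ptr` array is what gives CSR the "local data access" the abstract mentions: the pixels of row `r` occupy the contiguous slice `values[row_ptr[r]:row_ptr[r+1]]`, so single rows can be read without touching the rest of the frame.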
17
Arpaia P, Bracale U, Corcione F, De Benedetto E, Di Bernardo A, Di Capua V, Duraccio L, Peltrini R, Prevete R. Assessment of blood perfusion quality in laparoscopic colorectal surgery by means of Machine Learning. Sci Rep 2022; 12:14682. [PMID: 36038561] [PMCID: PMC9424219] [DOI: 10.1038/s41598-022-16030-8]
Abstract
An innovative algorithm to automatically assess blood perfusion quality of the intestinal sector in laparoscopic colorectal surgery is proposed. Traditionally, assessing the uniformity of brightness in indocyanine green-based fluorescence is a qualitative, empirical evaluation that relies heavily on the surgeon's subjective judgment; as such, assessments are strongly experience-dependent. To overcome this limitation, the proposed algorithm assesses the level and uniformity of indocyanine green used during laparoscopic surgery. The algorithm adopts a feed-forward neural network receiving as input a feature vector based on the histogram of the green band of the input image. It is used to (i) acquire information related to perfusion during laparoscopic colorectal surgery, and (ii) support the surgeon in objectively assessing the outcome of the procedure. In particular, the algorithm classifies the perfusion as adequate or inadequate. The algorithm was validated on videos captured during surgical procedures carried out at the University Hospital Federico II in Naples, Italy. The obtained results show a classification accuracy of 99.9%, with a repeatability of 1.9%. Finally, real-time operation of the proposed algorithm was tested by analyzing the video stream captured directly from an endoscope available in the operating room.
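The green-band histogram feature described in the abstract can be sketched simply: bin the green-channel intensities of the frame and normalize. The bin count and the flat pixel layout here are assumptions for illustration; the paper's exact preprocessing may differ:

```python
# Sketch: normalized histogram of the green channel of an RGB frame
# (indocyanine green fluorescence appears in this channel).

def green_histogram(rgb_pixels, n_bins=8):
    """rgb_pixels: list of (r, g, b) tuples with values 0-255.
    Returns an n_bins-long normalized histogram of the green channel."""
    counts = [0] * n_bins
    width = 256 // n_bins  # intensity range covered by each bin
    for _, g, _ in rgb_pixels:
        counts[min(g // width, n_bins - 1)] += 1
    total = len(rgb_pixels)
    return [c / total for c in counts]  # feature vector, sums to 1

pixels = [(0, 10, 0), (0, 200, 0), (0, 250, 0), (0, 30, 0)]
feat = green_histogram(pixels)
assert abs(sum(feat) - 1.0) < 1e-9
assert feat[0] == 0.5  # the two dark pixels (g=10, g=30) land in bin 0
```

A fixed-length vector like this is a natural input for a small feed-forward network, since it is independent of frame resolution.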
Affiliation(s)
- Pasquale Arpaia
- University of Naples Federico II - Interdepartmental Research Center in Health Management and Innovation in Healthcare (CIRMIS), Naples, 80131, Italy; Department of Information Technology and Electrical Engineering, University of Naples Federico II, Naples, 80125, Italy
- Umberto Bracale
- University of Naples Federico II - Interdepartmental Research Center in Health Management and Innovation in Healthcare (CIRMIS), Naples, 80131, Italy; Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, 80131, Italy
- Francesco Corcione
- University of Naples Federico II - Interdepartmental Research Center in Health Management and Innovation in Healthcare (CIRMIS), Naples, 80131, Italy; Department of Public Health, University of Naples Federico II, Naples, 80131, Italy
- Egidio De Benedetto
- University of Naples Federico II - Interdepartmental Research Center in Health Management and Innovation in Healthcare (CIRMIS), Naples, 80131, Italy; Department of Information Technology and Electrical Engineering, University of Naples Federico II, Naples, 80125, Italy
- Alessandro Di Bernardo
- Department of Information Technology and Electrical Engineering, University of Naples Federico II, Naples, 80125, Italy
- Vincenzo Di Capua
- Department of Information Technology and Electrical Engineering, University of Naples Federico II, Naples, 80125, Italy
- Luigi Duraccio
- Department of Electronics and Telecommunications, Polytechnic University of Turin, Turin, 10129, Italy
- Roberto Peltrini
- Department of Public Health, University of Naples Federico II, Naples, 80131, Italy
- Roberto Prevete
- University of Naples Federico II - Interdepartmental Research Center in Health Management and Innovation in Healthcare (CIRMIS), Naples, 80131, Italy; Department of Information Technology and Electrical Engineering, University of Naples Federico II, Naples, 80125, Italy

18
Ali H, Sharif M, Yasmin M, Rehmani MH. A shallow extraction of texture features for classification of abnormal video endoscopy frames. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103733]
19
Adjei PE, Lonseko ZM, Du W, Zhang H, Rao N. Examining the effect of synthetic data augmentation in polyp detection and segmentation. Int J Comput Assist Radiol Surg 2022; 17:1289-1302. [PMID: 35678960] [DOI: 10.1007/s11548-022-02651-x]
Abstract
PURPOSE As with several medical image analysis tasks based on deep learning, gastrointestinal image analysis is plagued by data scarcity, privacy concerns and an insufficient number of pathology samples. This study examines the generation and utility of synthetic colonoscopy images with polyps for data augmentation. METHODS We modify and train a pix2pix model to generate synthetic colonoscopy samples with polyps to augment the original dataset. Subsequently, we create a variety of datasets by varying the quantities of synthetic and traditional augmentation samples, and use them to train a U-Net network and a Faster R-CNN model for segmentation and detection of polyps, respectively. We compare the performance of the models trained on the resulting datasets in terms of F1 score, intersection over union, precision and recall. Further, we compare the performances of the models on unseen polyp datasets to assess their generalization ability. RESULTS The average F1 coefficient and intersection over union in U-Net improve with an increasing number of synthetic samples over all test datasets. The performance of the Faster R-CNN model also improves in terms of polyp detection, while the false-negative rate decreases. Further, the experimental results for polyp detection outperform similar studies in the literature on the ETIS-LaribPolypDB dataset. CONCLUSION By varying the quantities of synthetic and traditional augmentation, there is the potential to control the sensitivity of deep learning models in polyp segmentation and detection. Further, GAN-based augmentation is a viable option for improving the performance of models for polyp segmentation and detection.
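The dataset-mixing step described in METHODS can be sketched as follows. A horizontal flip stands in for "traditional augmentation", and the synthetic list stands in for pix2pix output; the function name and caps are illustrative, not the authors' code:

```python
# Sketch: build training sets with varying quantities of traditionally
# augmented and synthetic (GAN-generated) samples, as in the abstract.

def hflip(img):
    """Horizontal flip of a 2D image given as a list of rows."""
    return [row[::-1] for row in img]

def build_training_set(originals, synthetic, n_traditional, n_synthetic):
    """Combine originals with capped numbers of augmented and synthetic samples."""
    augmented = [hflip(img) for img in originals][:n_traditional]
    return originals + augmented + synthetic[:n_synthetic]

real = [[[1, 2], [3, 4]]]                      # one tiny "real" frame
fake = [[[9, 9], [9, 9]], [[8, 8], [8, 8]]]    # two "synthetic" frames
train = build_training_set(real, fake, n_traditional=1, n_synthetic=1)
assert len(train) == 3
assert hflip(hflip(real[0])) == real[0]  # flipping twice restores the image
```

Sweeping `n_synthetic` while holding the rest fixed reproduces the experimental axis the study varies: how much GAN-generated data the detector sees.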
Affiliation(s)
- Prince Ebenezer Adjei
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China; Department of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Zenebe Markos Lonseko
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Wenju Du
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Han Zhang
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Nini Rao
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China

20
Rao B H, Trieu JA, Nair P, Gressel G, Venu M, Venu RP. Artificial intelligence in endoscopy: More than what meets the eye in screening colonoscopy and endosonographic evaluation of pancreatic lesions. Artif Intell Gastrointest Endosc 2022; 3:16-30. [DOI: 10.37126/aige.v3.i3.16]
Abstract
Artificial intelligence (AI)-based tools have ushered in a new era of innovation in the field of gastrointestinal (GI) endoscopy. Despite vast improvements in endoscopic techniques and equipment, diagnostic endoscopy remains heavily operator-dependent, in particular colonoscopy and endoscopic ultrasound (EUS). Recent reports have shown that as many as 25% of colonic adenomas may be missed at colonoscopy. This can result in an increased incidence of interval colon cancer. Similarly, EUS has been shown to have high inter-observer variability and overlap in diagnoses, with relatively low specificity for pancreatic lesions. Our understanding of machine learning (ML) techniques in AI has evolved over the last decade, and their application in AI-based tools for endoscopic detection and diagnosis is being actively investigated at several centers. ML is an aspect of AI that is based on neural networks and is widely used for image classification, object detection, and semantic segmentation, which are key functional aspects of AI-related computer-aided diagnostic systems. In this review, the current status and limitations of ML, specifically for adenoma detection and endosonographic diagnosis of pancreatic lesions, are summarized from the existing literature. This will help to better understand its role as viewed through the prism of real-world application in the field of GI endoscopy.
Affiliation(s)
- Harshavardhan Rao B
- Department of Gastroenterology, Amrita Institute of Medical Sciences, Kochi 682041, Kerala, India
- Judy A Trieu
- Internal Medicine - Gastroenterology, Loyola University Medical Center, Maywood, IL 60153, United States
- Priya Nair
- Department of Gastroenterology, Amrita Institute of Medical Sciences, Kochi 682041, Kerala, India
- Gilad Gressel
- Center for Cyber Security Systems and Networks, Amrita Vishwavidyapeetham, Kollam 690546, Kerala, India
- Mukund Venu
- Internal Medicine - Gastroenterology, Loyola University Medical Center, Maywood, IL 60153, United States
- Rama P Venu
- Department of Gastroenterology, Amrita Institute of Medical Sciences, Kochi 682041, Kerala, India

21
Luca M, Ciobanu A. Polyp detection in video colonoscopy using deep learning. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-219276]
Abstract
Automatic processing of video colonoscopy is a challenge, and further development of computer-assisted diagnosis is very helpful for assessing the correctness of the exam, for e-learning and training, for statistics on polyp malignancy, and for polyp surveillance. New devices and programming languages are emerging, and deep learning has already begun to furnish astonishing results in the quest for high-speed and optimal polyp-detection software. This paper presents a successful attempt at detecting intestinal polyps in real-time video colonoscopy with deep learning, using MobileNet.
Affiliation(s)
- Mihaela Luca
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania
- Adrian Ciobanu
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania

22
Hage Chehade A, Abdallah N, Marion JM, Oueidat M, Chauvet P. Lung and colon cancer classification using medical imaging: a feature engineering approach. Phys Eng Sci Med 2022; 45:729-746. [PMID: 35670909] [DOI: 10.1007/s13246-022-01139-x]
Abstract
Lung and colon cancers account for a significant portion of cancer deaths. Their simultaneous occurrence is uncommon; however, in the absence of early diagnosis, the rate of metastasis of cancer cells between these two organs is very high. Currently, histopathological diagnosis and appropriate treatment are the only ways to improve the chances of survival and reduce cancer mortality. Using artificial intelligence in the histopathological diagnosis of colon and lung cancer can significantly help specialists identify cases of colon and lung cancers with less effort, time and cost. The objective of this study is to set up a computer-aided diagnostic system that can accurately classify five types of colon and lung tissues (two classes for colon cancer and three classes for lung cancer) by analyzing their histopathological images. Using machine learning, feature engineering and image processing techniques, six models (XGBoost, SVM, RF, LDA, MLP and LightGBM) were used to classify histopathological images of lung and colon cancers acquired from the LC25000 dataset. The main advantage of machine learning models is that they allow better interpretability of the classification model, since they are based on feature engineering; deep learning models, in contrast, are black-box networks whose workings are very difficult to understand due to the complex network design. The experimental results show that machine learning models give satisfactory results and are very precise in identifying classes of lung and colon cancer subtypes. The XGBoost model gave the best performance, with an accuracy of 99% and an F1-score of 98.8%. The implementation and development of this model will help healthcare specialists identify types of colon and lung cancers. The code will be available upon request.
Affiliation(s)
- Nassib Abdallah
- LARIS, SFR MATHSTIC, Univ Angers, Angers, France; LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France

23
A deep ensemble learning method for colorectal polyp classification with optimized network parameters. Appl Intell 2022. [DOI: 10.1007/s10489-022-03689-9]
Abstract
Colorectal Cancer (CRC), a leading cause of cancer-related deaths, can be abated by timely polypectomy. Computer-aided classification of polyps helps endoscopists to resect in time without submitting the sample for histology. Deep learning-based algorithms are promoted for computer-aided colorectal polyp classification. However, the existing methods do not provide any information on the hyperparameter settings essential for model optimisation. Furthermore, unlike the polyp types hyperplastic and adenomatous, the third type, serrated adenoma, is difficult to classify due to its hybrid nature. Moreover, automated assessment of polyps is a challenging task due to the similarities in their patterns; therefore, the strengths of individual weak learners are combined into a weighted ensemble model for accurate classification by establishing optimised hyperparameters. In contrast to existing studies on binary classification, multiclass classification requires evaluation through advanced measures. This study compared six existing Convolutional Neural Networks, in addition to transfer learning, and opted for the optimum-performing architectures only for the ensemble models. The performance evaluation of the proposed method on the UCI and PICCOLO datasets in terms of accuracy (96.3%, 81.2%), precision (95.5%, 82.4%), recall (97.2%, 81.1%), F1-score (96.3%, 81.3%) and model reliability using Cohen's Kappa Coefficient (0.94, 0.62) shows its superiority over existing models. The outcomes of experiments by other studies on the same dataset yielded 82.5% accuracy with 72.7% recall by SVM and 85.9% accuracy with 87.6% recall by other deep learning methods. The proposed method demonstrates that a weighted ensemble of optimised networks along with data augmentation significantly boosts the performance of deep learning-based CAD.
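The weighted-ensemble step the abstract describes reduces to a weighted soft vote over the base learners' class probabilities. A minimal sketch, with made-up weights and probabilities standing in for the paper's optimised CNN base learners:

```python
# Sketch: weighted soft-voting ensemble over per-model class probabilities.

def weighted_ensemble(prob_lists, weights):
    """Combine per-model class-probability vectors into one weighted vote.
    Returns (combined probabilities, index of the winning class)."""
    assert len(prob_lists) == len(weights)
    n_classes = len(prob_lists[0])
    total = sum(weights)
    combined = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights)) / total
        for c in range(n_classes)
    ]
    return combined, combined.index(max(combined))

# Three base learners scoring classes (hyperplastic, adenomatous, serrated):
models = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.8, 0.1]]
probs, label = weighted_ensemble(models, weights=[1.0, 2.0, 2.0])
assert label == 1  # the two more heavily weighted learners agree on class 1
```

Dividing by the weight total keeps the combined vector a valid probability distribution, so the same thresholding logic used for a single model still applies to the ensemble output.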
24
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018] [PMCID: PMC9086187] [DOI: 10.3389/fgene.2022.844391]
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists. Over time, unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method for assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance has not yet been comparable to that of an expert endoscopist. Here, we propose a multiple-classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models can better learn and extract various information within the image. Therefore, our ensemble classifier can derive a more consequential decision than each individual classifier. The extracted combined information inherits the ResNet's advantage of residual connections, while it also extracts objects covered by occlusions through the depth-wise separable convolution layers of the Xception model. Here, we applied our strategy to still frames extracted from a colonoscopy video. It outperformed other state-of-the-art techniques with a performance measure greater than 95% on each of the evaluated metrics. Our method will help researchers and gastroenterologists develop clinically applicable, computation-guided tools for colonoscopy screening. It may be extended to other clinical diagnoses that rely on images.
Affiliation(s)
- Pallabi Sharma
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora
- Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai
- Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States
- MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States

25
Tavanapong W, Oh J, Riegler MA, Khaleel M, Mittal B, de Groen PC. Artificial Intelligence for Colonoscopy: Past, Present, and Future. IEEE J Biomed Health Inform 2022; 26:3950-3965. [PMID: 35316197] [PMCID: PMC9478992] [DOI: 10.1109/jbhi.2022.3160098]
Abstract
During the past decades, many automated image analysis methods have been developed for colonoscopy. Real-time implementation of the most promising methods during colonoscopy has been tested in clinical trials, including several recent multi-center studies. All trials have shown results that may contribute to prevention of colorectal cancer. We summarize the past and present development of colonoscopy video analysis methods, focusing on two categories of artificial intelligence (AI) technologies used in clinical trials. These are (1) analysis and feedback for improving colonoscopy quality and (2) detection of abnormalities. Our survey includes methods that use traditional machine learning algorithms on carefully designed hand-crafted features as well as recent deep-learning methods. Lastly, we present the gap between current state-of-the-art technology and desirable clinical features and conclude with future directions of endoscopic AI technology development that will bridge the current gap.
26
Su Y, Tian X, Gao R, Guo W, Chen C, Chen C, Jia D, Li H, Lv X. Colon cancer diagnosis and staging classification based on machine learning and bioinformatics analysis. Comput Biol Med 2022; 145:105409. [PMID: 35339846] [DOI: 10.1016/j.compbiomed.2022.105409]
Abstract
Advanced metastasis makes colon cancer more difficult to treat. Identifying markers of colon cancer allows timely diagnosis of the cancer stage and improves prognosis through timely treatment. This paper uses gene expression profiling data from The Cancer Genome Atlas (TCGA) for the diagnosis of colon cancer and its staging. In this study, we first selected the gene modules most strongly correlated with cancer by Weighted Gene Co-expression Network Analysis (WGCNA), extracted characteristic genes from the differential expression results using the least absolute shrinkage and selection operator (Lasso), and performed survival analysis. The genes in the modules were then combined with the Lasso-extracted feature genes to diagnose colon cancer versus healthy controls using RF, SVM and decision trees, and colon cancer staging was diagnosed using the differentially expressed genes for each stage. Finally, Protein-Protein Interaction (PPI) networks were built for 289 genes to identify clusters of aggregated proteins for survival analysis. The RF model had the best results in cross-validation for the diagnosis of colon cancer versus controls, with an average accuracy of 99.81%, an F1 value of 0.9968, a precision of 99.88%, and a recall of 99.5%; for the diagnosis of colon cancer stages I, II, III and IV, it achieved an average accuracy of 91.5%, an F1 value of 0.7679, a precision of 86.94%, and a recall of 73.04%. Eight genes associated with colon cancer prognosis were identified: GCNT2, GLDN, SULT1B1, UGT2B15, PTGDR2, GPR15, BMP5 and CPT2.
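The Lasso step in the abstract selects genes whose coefficients survive L1 shrinkage; the core operator behind that selection is soft-thresholding. A minimal, illustrative sketch (the gene names are taken from the abstract, the coefficient values are made up; this is not the authors' WGCNA/Lasso pipeline):

```python
# Sketch: L1 soft-thresholding, the operator that zeroes out weak features
# in Lasso-style feature selection.

def soft_threshold(beta, lam):
    """Shrink a coefficient toward zero by lam; exactly zero on [-lam, lam]."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

# Hypothetical per-gene coefficients; only strong associations survive.
coefs = {"GCNT2": 1.8, "GLDN": -0.9, "NOISE1": 0.2, "NOISE2": -0.3}
selected = {g: soft_threshold(b, lam=0.5) for g, b in coefs.items()}
kept = [g for g, b in selected.items() if b != 0.0]
assert kept == ["GCNT2", "GLDN"]  # weakly associated genes are zeroed out
```

This is what makes Lasso a feature *selector* rather than just a regularizer: coefficients inside the threshold band become exactly zero, so the corresponding genes drop out of the model entirely.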
Affiliation(s)
- Ying Su
- College of Software, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Xuecong Tian
- College of Software, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Rui Gao
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Wenjia Guo
- Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China; Cloud Computing Engineering Technology Research Center of Xinjiang, Kelamayi, 834099, China
- Dongfang Jia
- College of Software, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Hongtao Li
- Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Xiaoyi Lv
- College of Software, Xinjiang University, Urumqi, 830046, Xinjiang, China; Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, Xinjiang, China

27
Picon A, Terradillos E, Sánchez-Peralta LF, Mattana S, Cicchi R, Glover BJ, Arbide N, Velasco J, Etzezarraga MC, Pavone FS, Garrote E, Saratxaga CL. Novel Pixelwise Co-Registered Hematoxylin-Eosin and Multiphoton Microscopy Image Dataset for Human Colon Lesion Diagnosis. J Pathol Inform 2022; 13:100012. [PMID: 35223136 PMCID: PMC8855324 DOI: 10.1016/j.jpi.2022.100012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 01/09/2022] [Indexed: 12/29/2022] Open
Abstract
Colorectal cancer has one of the highest incidences of any cancer worldwide. Colonoscopy relies on histopathological analysis of hematoxylin-eosin (H&E) images of the removed tissue. Novel techniques such as multiphoton microscopy (MPM) show promising results for performing real-time optical biopsies. However, clinicians are not accustomed to this imaging modality, and the correlation between MPM and H&E information is not clear. The objective of this paper is to describe and make publicly available an extensive dataset of fully co-registered H&E and MPM images that allows the research community to analyze the relationship between MPM and H&E histopathological images and the semantic gap that prevents clinicians from correctly diagnosing MPM images. The dataset provides fully scanned tissue images at 10x optical resolution (0.5 µm/px) from 50 samples of lesions obtained by colonoscopies and colectomies. The diagnostic capabilities of TPF and H&E images were compared. Additionally, TPF tiles were virtually stained into H&E images by means of a deep-learning model. A panel of 5 expert pathologists classified images from the different modalities into three classes (healthy, adenoma/hyperplastic, and adenocarcinoma). Results showed that the pathologists' performance on MPM images was 65% of their H&E performance, while the virtual staining method achieved 90%. MPM imaging can provide appropriate information for diagnosing colorectal cancer without the need for H&E staining; however, the existing semantic gap between modalities needs to be corrected.
Affiliation(s)
- Artzai Picon: TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain; University of the Basque Country UPV/EHU, Ingeniero Torres Quevedo Plaza, 1, 48013 Bilbao, Spain
- Elena Terradillos: TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain
- Luisa F Sánchez-Peralta: Centro de Cirugía de Mínima Invasión Jesús Usón, Carretera N-521, km. 41,8, 10071 Cáceres, Spain
- Sara Mattana: National Institute of Optics, National Research Council (CNR-INO), Largo E. Fermi 6, 50125 Florence, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via N. Carrara 1, Sesto Fiorentino 50019, Italy
- Riccardo Cicchi: National Institute of Optics, National Research Council (CNR-INO), Largo E. Fermi 6, 50125 Florence, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via N. Carrara 1, Sesto Fiorentino 50019, Italy
- Benjamin J Glover: Department of Surgery and Cancer, Imperial College London, London, UK
- Nagore Arbide: Osakidetza Basque Health Service, Basurto University Hospital, Department of Pathological Anatomy, Bilbao (Bizkaia), Spain
- Jacques Velasco: Osakidetza Basque Health Service, Basurto University Hospital, Department of Pathological Anatomy, Bilbao (Bizkaia), Spain
- Mª Carmen Etzezarraga: Osakidetza Basque Health Service, Basurto University Hospital, Department of Pathological Anatomy, Bilbao (Bizkaia), Spain
- Francesco S Pavone: Department of Physics, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino, Italy
- Estibaliz Garrote: TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain
- Cristina L Saratxaga: TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain
28
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
29
Song H, Ruan C, Xu Y, Xu T, Fan R, Jiang T, Cao M, Song J. Survival stratification for colorectal cancer via multi-omics integration using an autoencoder-based model. Exp Biol Med (Maywood) 2021; 247:898-909. [PMID: 34904882 DOI: 10.1177/15353702211065010] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Prognosis stratification in colorectal cancer helps to address cancer heterogeneity and contributes to the improvement of tailored treatments for colorectal cancer patients. In this study, an autoencoder-based model was implemented to predict the prognosis of colorectal cancer via the integration of multi-omics data. DNA methylation, RNA-seq, and miRNA-seq data from The Cancer Genome Atlas (TCGA) database were integrated as input for the autoencoder, and 175 transformed features were produced. The survival-related features were used to cluster the samples using k-means clustering. The autoencoder-based strategy was compared to principal component analysis (PCA)-, t-distributed stochastic neighbor embedding (t-SNE)-, non-negative matrix factorization (NMF)-, and individual Cox proportional hazards (Cox-PH)-based strategies. Using the 175 transformed features, tumor samples were clustered into two groups (G1 and G2) with significantly different survival rates. The autoencoder-based strategy performed better at identifying survival-related features than the other transformation strategies. Further, the two survival groups were robustly validated using "hold-out" validation and five validation cohorts. Gene expression profiles, miRNA profiles, DNA methylation, and signaling pathway profiles differed between the poor prognosis group (G2) and the good prognosis group (G1). miRNA-mRNA networks were constructed using six differentially expressed miRNAs (let-7c, mir-34c, mir-133b, let-7e, mir-144, and mir-106a) and 19 predicted target genes. The autoencoder-based computational framework could distinguish good prognosis samples from bad prognosis samples and facilitate a better understanding of the molecular biology of colorectal cancer.
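The compress-then-cluster idea described above can be sketched in miniature: an autoencoder squeezes the integrated features through a small bottleneck, and k-means splits the resulting codes into two groups. A one-hidden-layer linear autoencoder trained by plain gradient descent stands in for the paper's deeper network, and all data are synthetic; sizes and learning rate are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two planted "survival groups" in a 50-dimensional multi-omics feature space.
centers = rng.normal(size=(2, 50)) * 3
labels_true = rng.integers(0, 2, size=300)
X = centers[labels_true] + rng.normal(size=(300, 50))

# Linear autoencoder with a 5-unit bottleneck, trained by gradient descent
# on the mean squared reconstruction error.
W_enc = rng.normal(scale=0.1, size=(50, 5))
W_dec = rng.normal(scale=0.1, size=(5, 50))
lr = 1e-3
for _ in range(500):
    Z = X @ W_enc              # bottleneck codes, shape (300, 5)
    err = Z @ W_dec - X        # reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse_final = np.mean((X @ W_enc @ W_dec - X) ** 2)
codes = X @ W_enc
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes)
# Agreement with the planted grouping, up to an arbitrary label swap.
agree = max(np.mean(groups == labels_true), np.mean(groups != labels_true))
```

In the paper the transformed features are additionally filtered for association with survival before clustering; that step is omitted here for brevity.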
Affiliation(s)
- Hu Song: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Chengwei Ruan: Department of Anorectal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Yixin Xu: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Teng Xu: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Ruizhi Fan: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Tao Jiang: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Meng Cao: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
- Jun Song: Department of Gastrointestinal Surgery, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221002, PR China
30
Wang S, Yin Y, Wang D, Lv Z, Wang Y, Jin Y. An interpretable deep neural network for colorectal polyp diagnosis under colonoscopy. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
31
Deep Learning Approaches to Colorectal Cancer Diagnosis: A Review. Appl Sci (Basel) 2021. [DOI: 10.3390/app112210982] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Unprecedented breakthroughs in the development of graphical processing systems have led to great potential for deep learning (DL) algorithms in analyzing visual anatomy from high-resolution medical images. Recently, in digital pathology, the use of DL technologies has drawn a substantial amount of attention for use in the effective diagnosis of various cancer types, especially colorectal cancer (CRC), which is regarded as one of the dominant causes of cancer-related deaths worldwide. This review provides an in-depth perspective on recently published research articles on DL-based CRC diagnosis and prognosis. Overall, we provide a retrospective synopsis of simple image-processing-based and machine learning (ML)-based computer-aided diagnosis (CAD) systems, followed by a comprehensive appraisal of use cases with different types of state-of-the-art DL algorithms for detecting malignancies. We first list multiple standardized and publicly available CRC datasets from two imaging types: colonoscopy and histopathology. Secondly, we categorize the studies based on the different types of CRC detected (tumor tissue, microsatellite instability, and polyps), and we assess the data preprocessing steps and the adopted DL architectures before presenting the optimum diagnostic results. CRC diagnosis with DL algorithms is still in the preclinical phase, and therefore, we point out some open issues and provide some insights into the practicability and development of robust diagnostic systems in future health care and oncology.
32
Aminnejad R, Hormati A, Shafiee H, Alemi F, Hormati M, Saeidi M, Ahmadpour S, Sabouri SM, Aghaali M. Comparing the efficacy and safety of Dexmedetomidine/Ketamine with Propofol/Fentanyl for sedation in colonoscopy patients: A double-blinded randomized clinical trial. CNS Neurol Disord Drug Targets 2021; 21:724-731. [PMID: 34620069 DOI: 10.2174/1871527320666211006141406] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 08/03/2021] [Accepted: 08/26/2021] [Indexed: 11/22/2022]
Abstract
BACKGROUND In this double-blinded randomized clinical trial, we aimed to compare the safety and efficacy of a combination of dexmedetomidine and ketamine (DK) with propofol and fentanyl (PF) for sedation in colonoscopy patients. METHODS In this study, 64 patients who underwent colonoscopy were randomized into two groups: group A received PF and group B received DK for sedation. Of the 64 patients, 31 were included in the PF group and 33 in the DK group. Both groups were similar in terms of demographics. Patients' sedation scores (based on the Ramsay sedation scale) and vital signs were recorded at 2, 5, 10, and 15 minutes. Complications including apnea, hypotension, hypoxia, nausea, and vomiting, along with gastroenterologist satisfaction and patients' pain scores (based on the Wong-Baker faces pain assessment scale), were recorded on a checklist. Data were analyzed with SPSS v.18 software, using chi-square, independent t-tests, and repeated measures analysis, with p < 0.05 as the criterion for significance. RESULTS The mean sedation score was 4.82 ± 0.49 in the DK group and 5.22 ± 0.45 in the PF group (p = 0.001). Serious complications, including hypotension (p = 0.005) and apnea (p = 0.10), were more frequent in the PF group. Gastroenterologist satisfaction (p = 0.400) and patients' pain scores (p = 0.900) were similar between groups. CONCLUSION The combination of DK provides sufficient sedation with fewer complications than PF in colonoscopy patients.
Affiliation(s)
- Reza Aminnejad: Department of Anesthesiology, School of Medicine, Shahid Beheshti Hospital, Qom University of Medical Sciences, Qom, Iran
- Ahmad Hormati: Gastrointestinal and Liver Diseases Research Center, Iran University of Medical Sciences, Tehran, Iran
- Faezeh Alemi: Gastroenterology and Hepatology Diseases Research Center, Qom University of Medical Sciences, Qom, Iran
- Sajjad Ahmadpour: Gastroenterology and Hepatology Diseases Research Center, Qom University of Medical Sciences, Qom, Iran
- Seyed Mahdi Sabouri: Department of Family and Community Medicine, School of Medicine, Qom University of Medical Sciences, Qom, Iran
- Mohammad Aghaali: Department of Family and Community Medicine, School of Medicine, Qom University of Medical Sciences, Qom, Iran
33
Nogueira-Rodríguez A, Domínguez-Carbajales R, Campos-Tato F, Herrero J, Puga M, Remedios D, Rivas L, Sánchez E, Iglesias Á, Cubiella J, Fdez-Riverola F, López-Fernández H, Reboiro-Jato M, Glez-Peña D. Real-time polyp detection model using convolutional neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06496-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Colorectal cancer is a major health problem, where advances towards computer-aided diagnosis (CAD) systems to assist the endoscopist can be a promising path to improvement. Here, a deep learning model for real-time polyp detection based on a pre-trained YOLOv3 (You Only Look Once) architecture, complemented with a post-processing step based on an object-tracking algorithm to reduce false positives, is reported. The base YOLOv3 network was fine-tuned using a dataset composed of 28,576 images labelled with the locations of 941 polyps, which will be made public soon. In a frame-based evaluation using isolated images containing polyps, a general F1 score of 0.88 was achieved (recall = 0.87, precision = 0.89), with lower predictive performance on flat polyps but higher performance on sessile and pedunculated morphologies, as well as with the use of narrow-band imaging, whereas polyp size < 5 mm does not seem to have a significant impact. In a polyp-based evaluation using polyp and normal mucosa videos, with a positive criterion defined as the presence of at least one 50-frame-long (window size) segment with a ratio of 75% of frames with predicted bounding boxes (frame positivity), a sensitivity of 72.61% (95% CI 68.99–75.95) and a specificity of 83.04% (95% CI 76.70–87.92) were achieved (Youden = 0.55, diagnostic odds ratio (DOR) = 12.98). When the positive criterion is less stringent (window size = 25, frame positivity = 50%), sensitivity reaches around 90% (sensitivity = 89.91%, 95% CI 87.20–91.94; specificity = 54.97%, 95% CI 47.49–62.24; Youden = 0.45; DOR = 10.76). The object-tracking algorithm demonstrated a significant improvement in specificity while maintaining sensitivity, with only a marginal impact on computational performance. These results suggest that the model could be effectively integrated into a CAD system.
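The polyp-based positive criterion stated in this abstract (at least one 50-frame window in which 75% of frames carry a predicted bounding box) amounts to a sliding-window check over per-frame detector flags. The function below is a sketch of that stated rule, not the authors' code; the frame flags in the example are invented.

```python
def video_positive(frames, win=50, ratio=0.75):
    """frames: list of 0/1 flags, 1 = the detector predicted a box in that frame."""
    if len(frames) < win:
        return False
    hits = sum(frames[:win])          # positives in the first window
    for i in range(len(frames) - win + 1):
        if i > 0:                     # slide: drop the leftmost flag, add the new one
            hits += frames[i + win - 1] - frames[i - 1]
        if hits >= ratio * win:
            return True
    return False

# A burst of 45 detected frames inside a 60-frame clip satisfies the 75% rule.
clip = [0] * 10 + [1] * 45 + [0] * 5
```

Loosening the criterion (e.g. `win=25, ratio=0.5`, as in the abstract's second operating point) makes more videos count as positive, trading specificity for sensitivity.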
34
Terradillos E, Saratxaga CL, Mattana S, Cicchi R, Pavone FS, Andraka N, Glover BJ, Arbide N, Velasco J, Etxezarraga MC, Picon A. Analysis on the Characterization of Multiphoton Microscopy Images for Malignant Neoplastic Colon Lesion Detection under Deep Learning Methods. J Pathol Inform 2021; 12:27. [PMID: 34447607 PMCID: PMC8359734 DOI: 10.4103/jpi.jpi_113_20] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 04/29/2021] [Accepted: 06/21/2021] [Indexed: 12/22/2022] Open
Abstract
Background: Colorectal cancer has a high incidence rate worldwide, with over 1.8 million new cases and 880,792 deaths in 2018. Fortunately, early detection significantly increases the survival rate, reaching a cure rate of 90% when the disease is diagnosed at a localized stage. Colonoscopy is the gold standard technique for the detection and removal of colorectal lesions with the potential to evolve into cancer. When polyps are found in a patient, the current procedure is their complete removal. However, in this process, gastroenterologists cannot be certain of complete resection with clean margins, which are determined by histopathological analysis of the removed tissue performed in the laboratory. Aims: In this paper, we demonstrate the capability of multiphoton microscopy (MPM) technology to provide imaging biomarkers that can be extracted by deep learning techniques to identify malignant neoplastic colon lesions and distinguish them from healthy, hyperplastic, or benign neoplastic tissue, without the need for histopathological staining. Materials and Methods: To this end, we present a novel public MPM dataset containing 14,712 images obtained from 42 patients and grouped into 2 classes. A convolutional neural network is trained on this dataset, and a spatially coherent prediction scheme is applied for performance improvement. Results: We obtained a sensitivity of 0.8228 ± 0.1575 and a specificity of 0.9114 ± 0.0814 in detecting malignant neoplastic lesions. We also used this approach to estimate the network's confidence in its own predictions, obtaining a mean sensitivity of 0.8697 and a mean specificity of 0.9524 with 18.67% of the images classified as uncertain. Conclusions: This work lays the foundations for performing in vivo optical colon biopsies by combining this novel imaging technology with deep learning algorithms, hence avoiding unnecessary polyp resection and allowing in situ diagnostic assessment.
Affiliation(s)
- Sara Mattana: European Laboratory for Non-Linear Spectroscopy, Sesto Fiorentino, Italy
- Riccardo Cicchi: European Laboratory for Non-Linear Spectroscopy, Sesto Fiorentino, Italy
- Francesco S Pavone: European Laboratory for Non-Linear Spectroscopy, Sesto Fiorentino, Italy
- Nagore Andraka: Basque Foundation for Health Innovation and Research, Barakaldo, Spain
- Benjamin J Glover: Department of Surgery and Cancer, Imperial College London, London, UK
- Nagore Arbide: Department of Pathological Anatomy, Osakidetza Basque Health Service, Basurto University Hospital, Bilbao, Spain
- Jacques Velasco: Department of Pathological Anatomy, Osakidetza Basque Health Service, Basurto University Hospital, Bilbao, Spain
- Mª Carmen Etxezarraga: Department of Pathological Anatomy, Osakidetza Basque Health Service, Basurto University Hospital, Bilbao, Spain
- Artzai Picon: University of the Basque Country UPV/EHU, Bilbao, Spain
35
Durak S, Bayram B, Bakırman T, Erkut M, Doğan M, Gürtürk M, Akpınar B. Deep neural network approaches for detecting gastric polyps in endoscopic images. Med Biol Eng Comput 2021; 59:1563-1574. [PMID: 34259974 DOI: 10.1007/s11517-021-02398-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 06/18/2021] [Indexed: 12/18/2022]
Abstract
Gastrointestinal endoscopy is the primary method used for the diagnosis and treatment of gastric polyps. The early detection and removal of polyps is vitally important in preventing cancer development. Many studies indicate that a high workload can contribute to the misdiagnosis of gastric polyps, even by experienced physicians. In this study, we aimed to establish a deep learning-based computer-aided diagnosis system for automatic gastric polyp detection. A private gastric polyp dataset was generated for this purpose, consisting of 2195 endoscopic images and 3031 polyp labels. Retrospective gastrointestinal endoscopy data from Karadeniz Technical University, Farabi Hospital, were used in the study. YOLOv4, CenterNet, EfficientNet, Cross Stage ResNext50-SPP, YOLOv3, YOLOv3-SPP, Single Shot Detection, and Faster Regional CNN deep learning models were implemented and assessed to determine the most efficient model for precancerous gastric polyp detection. The dataset was split 70%/30% for training and testing of all the implemented models. YOLOv4 was the most accurate model, with an 87.95% mean average precision. We also evaluated all the deep learning models on a public gastric polyp dataset as the test data. The results show that YOLOv4 has significant potential applicability in detecting gastric polyps and can be used effectively in gastrointestinal CAD systems.
Affiliation(s)
- Serdar Durak: Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Bülent Bayram: Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Tolga Bakırman: Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Murat Erkut: Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Metehan Doğan: Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Mert Gürtürk: Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Burak Akpınar: Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
36
Liew WS, Tang TB, Lin CH, Lu CK. Automatic colonic polyp detection using integration of modified deep residual convolutional neural network and ensemble learning approaches. Comput Methods Programs Biomed 2021; 206:106114. [PMID: 33984661 DOI: 10.1016/j.cmpb.2021.106114] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 04/07/2021] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The increased incidence of colorectal cancer (CRC) and its mortality rate have attracted interest in the use of artificial intelligence (AI) based computer-aided diagnosis (CAD) tools to detect polyps at an early stage. Although these CAD tools have thus far achieved a good level of accuracy in detecting polyps, there is still room for improvement (e.g. in sensitivity). Therefore, a new CAD tool was developed in this study to detect colonic polyps accurately. METHODS In this paper, we propose a novel approach to identifying colonic polyps by integrating several techniques, including a modified deep residual network, principal component analysis, and AdaBoost ensemble learning. A powerful deep residual network architecture, ResNet-50, was modified to reduce the computational time. To keep interference to a minimum, median filtering, image thresholding, contrast enhancement, and normalisation techniques were applied to the endoscopic images used to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, were merged to train the model, which included images with and without polyps. RESULTS The proposed approach, trained with a combination of three datasets, achieved a Matthews Correlation Coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively. CONCLUSIONS These results show that our method can automatically and reproducibly classify endoscopic images and could be used to develop effective computer-aided diagnostic tools for early CRC detection.
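The back end of the pipeline described above (deep features compressed by principal component analysis, then classified by an AdaBoost ensemble) can be sketched as follows. The ResNet-50 feature extractor is replaced here by random synthetic feature vectors, and all sizes and parameters are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
feats = rng.normal(size=(400, 256))      # stand-in for deep residual features
labels = rng.integers(0, 2, size=400)    # 1 = polyp, 0 = no polyp
feats[:, :8] += labels[:, None] * 1.5    # inject a separable signal

X_tr, X_te, y_tr, y_te = train_test_split(
    feats, labels, test_size=0.3, random_state=0, stratify=labels)

# PCA compresses the feature vectors before the boosted ensemble classifies them.
pca = PCA(n_components=20).fit(X_tr)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
test_acc = clf.score(pca.transform(X_te), y_te)
```

Fitting PCA on the training split only, and merely transforming the test split, mirrors the evaluation hygiene any such pipeline needs to avoid leakage.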
Affiliation(s)
- Win Sheng Liew: Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Tong Boon Tang: Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Cheng-Hung Lin: Department of Electrical Engineering and Biomedical Engineering Research Center, Yuan Ze University, Jungli 32003, Taiwan
- Cheng-Kai Lu: Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
37
Computer-Aided Detection False Positives in Colonoscopy. Diagnostics (Basel) 2021; 11:diagnostics11061113. [PMID: 34207226 PMCID: PMC8235696 DOI: 10.3390/diagnostics11061113] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 06/08/2021] [Accepted: 06/14/2021] [Indexed: 12/24/2022] Open
Abstract
Randomized controlled trials and meta-analyses comparing colonoscopies with and without computer-aided detection (CADe) assistance showed significant increases in adenoma detection rates (ADRs) with CADe. A major limitation of CADe is its false positives (FPs), ranked 3rd in importance among 59 research questions in a modified Delphi consensus review. The definition of FPs varies. One commonly used definition counts as an FP any activation of the CADe system, irrespective of the number of frames or duration, that is not due to a polypoid or nonpolypoid lesion. Although only 0.07 to 0.2 FPs per colonoscopy were reported, video analysis studies using FPs as the primary outcome showed much higher numbers of 26 to 27 per colonoscopy. Most FPs were of short duration (91% < 0.5 s). A higher number of FPs was also associated with suboptimal bowel preparation. The appearance of FPs can lead to user fatigue. Polypectomy of FP findings increases procedure time and resource use. Re-training the CADe algorithms is one way to reduce FPs but is not practical in the clinical setting during colonoscopy. Water exchange (WE) is an emerging method that the colonoscopist can use to provide salvage cleaning during insertion. We discuss the potential of WE for reducing FPs as well as for augmenting ADRs with CADe.
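The FP definition quoted above (any CADe activation not caused by a lesion, regardless of duration) lends itself to a simple per-video tally. The helper below is an illustrative sketch with invented event timings; it also reports the fraction of short (< 0.5 s) false activations, the quantity the abstract highlights.

```python
def summarize_activations(events, short=0.5):
    """events: list of (start_s, end_s, is_lesion) CADe activation intervals."""
    fps = [(s, e) for s, e, is_lesion in events if not is_lesion]
    n_short = sum(1 for s, e in fps if (e - s) < short)
    return {
        "false_positives": len(fps),
        "short_fraction": n_short / len(fps) if fps else 0.0,
    }

# Three non-lesion activations (0.2 s, 0.3 s, and 1.0 s) and one true detection.
demo = [(1.0, 1.2, False), (5.0, 5.3, False), (8.0, 12.0, True), (20.0, 21.0, False)]
stats = summarize_activations(demo)
```

Note that under this definition the FP count depends only on whether a lesion caused the activation, so two studies counting per-activation versus per-frame events can report very different totals for the same video.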
38
Hsiao YJ, Wen YC, Lai WY, Lin YY, Yang YP, Chien Y, Yarmishyn AA, Hwang DK, Lin TC, Chang YC, Lin TY, Chang KJ, Chiou SH, Jheng YC. Application of artificial intelligence-driven endoscopic screening and diagnosis of gastric cancer. World J Gastroenterol 2021; 27:2979-2993. [PMID: 34168402 PMCID: PMC8192292 DOI: 10.3748/wjg.v27.i22.2979] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 03/10/2021] [Accepted: 04/22/2021] [Indexed: 02/06/2023] Open
Abstract
The landscape of gastrointestinal endoscopy continues to evolve as new technologies and techniques become available. The advent of image-enhanced and magnifying endoscopy marks a step toward perfecting the endoscopic screening and diagnosis of gastric lesions. Simultaneously, with the development of convolutional neural networks, artificial intelligence (AI) has made unprecedented breakthroughs in medical imaging, including ongoing trials of computer-aided detection of colorectal polyps and gastrointestinal bleeding. In the past half-decade, applications of AI systems to gastric cancer have also emerged. With AI's efficient computational power and learning capacity, endoscopists can improve their diagnostic accuracy and avoid missing or mischaracterizing gastric neoplastic changes. So far, several AI systems incorporating both traditional and novel endoscopy technologies have been developed for various purposes, with most achieving an accuracy of more than 80%. However, their feasibility, effectiveness, and safety in clinical practice remain to be seen, as there have been no clinical trials yet. Nonetheless, AI-assisted endoscopy sheds light on more accurate and sensitive ways of achieving early detection, treatment guidance, and prognosis prediction for gastric lesions. This review summarizes the current status of various AI applications in gastric cancer and pinpoints directions for future research and clinical implementation from a clinical perspective.
Affiliation(s)
- Yu-Jer Hsiao: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yuan-Chih Wen: School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Department of Medical Education, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Wei-Yi Lai: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yi-Ying Lin: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yi-Ping Yang: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Department of Internal Medicine, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Critical Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Yueh Chien: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- De-Kuang Hwang: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112201, Taiwan
- Tai-Chi Lin: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112201, Taiwan
- Yun-Chia Chang: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Ting-Yi Lin: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Department of Medicine, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
- Kao-Jung Chang: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Shih-Hwa Chiou: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Ying-Chun Jheng: Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan; Big Data Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
39
Ortega-Morán JF, Azpeitia Á, Sánchez-Peralta LF, Bote-Curiel L, Pagador B, Cabezón V, Saratxaga CL, Sánchez-Margallo FM. Medical needs related to the endoscopic technology and colonoscopy for colorectal cancer diagnosis. BMC Cancer 2021; 21:467. [PMID: 33902503 PMCID: PMC8077886 DOI: 10.1186/s12885-021-08190-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 10/13/2020] [Accepted: 04/14/2021] [Indexed: 12/14/2022]
Abstract
Background The high incidence and mortality rate of colorectal cancer require new technologies to improve its early diagnosis. This study aims to extract the medical needs related to the endoscopic technology and the colonoscopy procedure currently used for colorectal cancer diagnosis, which are essential for designing these demanded technologies. Methods Semi-structured interviews and an online survey were used. Results Six endoscopists were interviewed and 103 were surveyed, yielding demanded needs that can be divided into: a) clinical needs, for better polyp detection and classification (especially of flat polyps), location, size, margins, and penetration depth; b) computer-aided diagnosis (CAD) system needs, for additional visual information supporting polyp characterization and diagnosis; and c) operational/physical needs, related to limitations of image quality, colon lighting, flexibility of the endoscope tip, and even poor bowel preparation. Conclusions This study describes initiatives undertaken to meet the detected medical needs and the challenges that remain to be solved. The great potential of advanced optical technologies suggests their use for better polyp detection and classification, since they provide functional and structural information beyond that offered by the image enhancement technologies currently used. The inspection of tissue remaining from diminutive polyps (< 5 mm) should be addressed to reduce recurrence rates. Little progress has been made in estimating infiltration depth. Detection and classification methods should be combined into one CAD system, providing visual aids over polyps for detection and displaying a Kudo-based diagnosis suggestion to assist the endoscopist in real-time decision making. Estimated size and location of polyps should also be provided. Endoscopes with 360° vision remain a challenge not yet met by the mechanical and optical systems developed to improve colon inspection. Patients and healthcare providers should be trained to improve the patient's bowel preparation. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-021-08190-z.
Affiliation(s)
- Águeda Azpeitia
- Biobanco Vasco, Fundación Vasca de Investigaciones e Innovación Sanitaria (BIOEF), Ronda de Azkue, 1, 48902, Barakaldo, Spain
- Luis Bote-Curiel
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, Km 41.8, 10071, Cáceres, Spain
- Blas Pagador
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, Km 41.8, 10071, Cáceres, Spain
- Virginia Cabezón
- Biobanco Vasco, Fundación Vasca de Investigaciones e Innovación Sanitaria (BIOEF), Ronda de Azkue, 1, 48902, Barakaldo, Spain
- Cristina L Saratxaga
- TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Tecnológico de Bizkaia, C/Geldo. Edificio 700, E-48160, Derio, Bizkaia, Spain

40
Guo C, Kang X, Cao F, Yang J, Xu Y, Liu X, Li Y, Ma X, Fu X. Network Pharmacology and Molecular Docking on the Molecular Mechanism of Luo-hua-zi-zhu (LHZZ) Granule in the Prevention and Treatment of Bowel Precancerous Lesions. Front Pharmacol 2021; 12:629021. [PMID: 33692692 PMCID: PMC7938190 DOI: 10.3389/fphar.2021.629021] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 11/13/2020] [Accepted: 01/18/2021] [Indexed: 12/15/2022]
Abstract
The Luo-hua-zi-zhu (LHZZ) granule has been widely used for the treatment of colorectal adenoma (CRA), a precursor of colorectal cancer (CRC). However, the active components of LHZZ and its mechanism of action against CRA have not yet been elucidated. This study was designed to investigate the effect of LHZZ on CRA and explore its pharmacological mechanisms. First, a total of 24 chemical constituents were identified in the 50% aqueous methanol extract of LHZZ granule based on mass fragment patterns and a mass spectral library, using a high-resolution UPLC-Q-TOF MS/MS system. Subsequently, based on a network pharmacology study, 16 bioactive compounds and 28 targets of LHZZ associated with CRA were obtained, forming a compound-target network. Molecular docking tests showed tight docking of these compounds with the predicted target proteins. The protein–protein interaction (PPI) network identified AKT1, CASP3, TP53, and EGFR as hub targets. The Kyoto Encyclopedia of Genes and Genomes pathway network and the pathway-target-compound network revealed that the apoptosis pathway was enriched by multiple signaling pathways and multiple targets, including the hub targets. Finally, the reliability of the core targets was evaluated using molecular docking technology and in vitro studies. Our study indicates that the LHZZ granule has preventive and therapeutic effects on colorectal adenoma through multi-component, multi-target, and multi-pathway mechanisms.
Affiliation(s)
- Cui Guo
- Second Department of Oncology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Liaoning University of Traditional Chinese Medicine, Shenyang, China
- Xingdong Kang
- School of Chinese Materia Medica, Nanjing University of Chinese Medicine, Shanghai, China
- Fang Cao
- Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Jian Yang
- The Second Military Medical University, Shanghai, China
- Yimin Xu
- Second Department of Oncology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Xiaoqiang Liu
- Second Department of Oncology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Department of Pain, Shibei Hospital, Shanghai, China
- Yuan Li
- Infection Prevention and Control Department, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Xiumei Ma
- Department of Radiotherapy, Renji Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaoling Fu
- Second Department of Oncology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China

41
Masud M, Sikder N, Nahid AA, Bairagi AK, AlZain MA. A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework. Sensors 2021; 21:748. [PMID: 33499364 PMCID: PMC7865416 DOI: 10.3390/s21030748] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Received: 12/22/2020] [Revised: 01/10/2021] [Accepted: 01/18/2021] [Indexed: 12/19/2022]
Abstract
The field of medicine and healthcare has attained revolutionary advancements in the last forty years. Within this period, the actual causes of numerous diseases were unveiled, novel diagnostic methods were designed, and new medicines were developed. Even after all these achievements, diseases like cancer continue to haunt us, since we are still vulnerable to them. Cancer is the second leading cause of death globally; about one in every six people dies from it. Among the many types of cancer, the lung and colon variants are the most common and deadliest. Together, they account for more than 25% of all cancer cases. However, identifying the disease at an early stage significantly improves the chances of survival. Cancer diagnosis can be automated by using the potential of Artificial Intelligence (AI), which allows us to assess more cases in less time and at lower cost. With the help of modern Deep Learning (DL) and Digital Image Processing (DIP) techniques, this paper describes a classification framework to differentiate among five types of lung and colon tissues (two benign and three malignant) by analyzing their histopathological images. The acquired results show that the proposed framework can identify cancer tissues with up to 96.33% accuracy. Implementation of this model will help medical professionals develop an automatic and reliable system capable of identifying various types of lung and colon cancers.
Affiliation(s)
- Mehedi Masud
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Correspondence:
- Niloy Sikder
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Abdullah-Al Nahid
- Electronics and Communication Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Anupam Kumar Bairagi
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Mohammed A. AlZain
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

42

Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_308-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 12/17/2022]
43
Suzuki H, Tokai Y, Yoshio T, Tada T. Artificial intelligence for cancer detection of the upper gastrointestinal tract. Dig Endosc 2021; 33:254-262. [PMID: 33222330 DOI: 10.1111/den.13897] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 07/20/2020] [Accepted: 11/16/2020] [Indexed: 12/20/2022]
Abstract
In recent years, artificial intelligence (AI) has proven useful to physicians in the field of image recognition thanks to three elements: deep learning (specifically, convolutional neural networks, CNNs), high-performance computers, and large amounts of digitized data. In the field of gastrointestinal endoscopy, Japanese endoscopists have produced the world's first CNN-based AI systems for detecting gastric and esophageal cancers. This study reviews papers on CNN-based AI for gastrointestinal cancers and discusses the future of this technology in clinical practice. Employing AI-based endoscopes would enable early cancer detection. The diagnostic abilities of AI technology may be especially beneficial for early gastrointestinal cancers, for which endoscopists' diagnostic ability and accuracy vary. AI coupled with the expertise of endoscopists would increase the accuracy of endoscopic diagnosis.
Affiliation(s)
- Hideo Suzuki
- Department of Gastroenterology, Graduate School of Institute Clinical Medicine, University of Tsukuba, Ibaraki, Japan
- Yoshitaka Tokai
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Tomohiro Tada
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- AI Medical Service Inc., Tokyo, Japan
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan

44
PICCOLO White-Light and Narrow-Band Imaging Colonoscopic Dataset: A Performance Comparative of Models and Datasets. Appl Sci (Basel) 2020; 10:8501. [DOI: 10.3390/app10238501] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Indexed: 12/12/2022]
Abstract
Colorectal cancer is one of the world's leading causes of death. Fortunately, an early diagnosis allows for effective treatment, increasing the survival rate. Deep learning techniques have shown their utility for increasing the adenoma detection rate at colonoscopy, but a dataset is usually required so the model can automatically learn the features that characterize the polyps. In this work, we present the PICCOLO dataset, which comprises 3433 manually annotated images (2131 white-light images and 1302 narrow-band images) originating from 76 lesions in 40 patients, distributed into training (2203), validation (897), and test (333) sets while ensuring patient independence between sets. Furthermore, clinical metadata are also provided for each lesion. Four different models, obtained by combining two backbones and two encoder–decoder architectures, are trained with the PICCOLO dataset and with two other publicly available datasets for comparison. Results are provided for the test set of each dataset. Models trained with the PICCOLO dataset have a better generalization capacity, as they perform more uniformly across the test sets of all datasets rather than obtaining the best results only on their own test set. The dataset is available at the website of the Basque Biobank, so it is expected to contribute to the further development of deep learning methods for polyp detection, localisation, and classification, which would eventually result in a better and earlier diagnosis of colorectal cancer, hence improving patient outcomes.
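The patient-independent split described in this abstract (no patient contributes images to more than one set) can be sketched as follows. This is a generic illustration, not the PICCOLO authors' code; the record format, split fractions, and seed are illustrative assumptions:

```python
import random
from collections import defaultdict

def patient_level_split(images, train_frac=0.65, val_frac=0.25, seed=7):
    """Assign whole patients (not individual images) to train/val/test,
    so that no patient's images appear in more than one set."""
    by_patient = defaultdict(list)
    for img in images:
        by_patient[img["patient_id"]].append(img)

    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)

    n_train = int(len(patients) * train_frac)
    n_val = int(len(patients) * val_frac)

    split = {"train": [], "val": [], "test": []}
    for i, pid in enumerate(patients):
        if i < n_train:
            key = "train"
        elif i < n_train + n_val:
            key = "val"
        else:
            key = "test"
        split[key].extend(by_patient[pid])
    return split

# Toy example: 6 hypothetical patients with 3 images each.
images = [{"patient_id": p, "file": f"{p}_{k}.png"}
          for p in range(6) for k in range(3)]
split = patient_level_split(images)
```

Splitting at the patient level rather than the image level is what prevents near-duplicate frames of the same lesion from leaking between training and test sets.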
45
Abstract
Colorectal cancer is one of the leading causes of cancer death worldwide, but early diagnosis greatly improves survival rates. The success of deep learning has also benefited this clinical field. When training a deep learning model, it is optimized based on the selected loss function. In this work, we consider two networks (U-Net and LinkNet) and two backbones (VGG-16 and DenseNet121). We analyze the influence of seven loss functions and use principal component analysis (PCA) to determine whether a PCA-based decomposition allows defining the coefficients of a non-redundant primal loss function that can outperform the individual loss functions and different linear combinations of them. The eigenloss is defined as a linear combination of the individual losses, using the elements of the eigenvector as coefficients. Empirical results show that the proposed eigenloss improves the general performance of the individual loss functions and outperforms other linear combinations when LinkNet is used, showing potential for its application in polyp segmentation problems.
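The core idea of the eigenloss, combining individual losses with weights taken from an eigenvector of a PCA decomposition, can be sketched with NumPy. The per-sample loss matrix below is random stand-in data, and the sign convention and normalization of the weights are assumptions, not the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: per-sample values of 7 individual loss functions
# (rows = validation samples, columns = losses such as BCE, Dice, IoU...).
loss_matrix = rng.random((200, 7))

# PCA on the loss matrix: center, then eigendecompose the covariance.
centered = loss_matrix - loss_matrix.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
lead = eigvecs[:, -1]                   # eigenvector of the largest eigenvalue

# Use the leading eigenvector's components as combination coefficients;
# flip the sign if needed so the weights are predominantly positive,
# and normalize them to sum to 1 in absolute value (an assumption).
if lead.sum() < 0:
    lead = -lead
weights = lead / np.abs(lead).sum()

def eigenloss(per_loss_values, w=weights):
    """Linear combination of the individual losses with eigen-derived weights."""
    return float(np.dot(per_loss_values, w))

combined = eigenloss(loss_matrix[0])
```

In an actual training loop the same fixed weights would multiply the differentiable loss terms, so the combined objective remains differentiable.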