1. Gowthamy J, Ramesh SSS. Augmented histopathology: Enhancing colon cancer detection through deep learning and ensemble techniques. Microsc Res Tech 2025;88:298-314. [PMID: 39344821] [DOI: 10.1002/jemt.24692]
Abstract
Colon cancer poses a significant threat to human life, with a high global mortality rate. Early and accurate detection is crucial for improving treatment quality and survival. This paper presents a comprehensive approach to enhancing colon cancer detection and classification. Histopathological images are gathered from the CRC-VAL-HE-7K dataset, preprocessed to improve quality, and augmented to increase dataset size and improve model generalization. A deep learning-based transformer model is designed for efficient feature extraction, with a convolutional neural network (CNN) incorporated to enhance classification. A cross-transformation model captures long-range dependencies between regions, and an attention mechanism assigns weights to highlight crucial features. To boost classification accuracy, a Siamese network distinguishes colon cancer tissue classes based on probabilities. Optimization algorithms fine-tune model parameters, categorizing colon cancer tissues into different classes. In the experimental evaluation, the proposed model achieved the highest multi-class classification accuracy, 98.84%, outperforming existing methods in all analyses. RESEARCH HIGHLIGHTS: Deep learning-based techniques are proposed to enhance colon cancer detection and classification. Histopathological images from the CRC-VAL-HE-7K dataset are utilized. A hybrid of particle swarm optimization (PSO) and dwarf mongoose optimization (DMO) is used to tune the deep learning models.
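The highlights reference hyperparameter tuning via a hybrid PSO-DMO algorithm. As a rough, non-authoritative sketch of the PSO half of such a scheme, the snippet below searches an assumed two-dimensional hyperparameter space (learning rate, dropout) against a placeholder objective; the DMO component and the actual model training are omitted.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch for hyperparameter tuning.
# The paper hybridizes PSO with dwarf mongoose optimization (DMO); only the
# PSO half is shown, over a toy 2-D search space (learning rate, dropout).
rng = np.random.default_rng(0)

def evaluate(params):
    lr, dropout = params
    # Placeholder objective: pretend validation error is minimized near
    # lr=1e-3, dropout=0.3 (assumed values for illustration only). In
    # practice this would train and validate the actual model.
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.3) ** 2

low, high = np.array([1e-5, 0.0]), np.array([1e-1, 0.6])
n, iters, w, c1, c2 = 20, 30, 0.7, 1.5, 1.5   # swarm size, steps, PSO weights
pos = rng.uniform(low, high, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([evaluate(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([evaluate(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("best (lr, dropout):", gbest)
```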
Affiliation(s)
- J Gowthamy
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram Campus, Chennai, India
- S S Subashka Ramesh
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram Campus, Chennai, India
2. Chung GE, Lee J, Lim SH, Kang HY, Kim J, Song JH, Yang SY, Choi JM, Seo JY, Bae JH. A prospective comparison of two computer aided detection systems with different false positive rates in colonoscopy. NPJ Digit Med 2024;7:366. [PMID: 39702474] [DOI: 10.1038/s41746-024-01334-y]
Abstract
This study evaluated the impact of differing false positive (FP) rates in two computer-aided detection (CADe) systems on the clinical effectiveness of artificial intelligence (AI)-assisted colonoscopy. The primary outcomes were the adenoma detection rate (ADR) and adenomas per colonoscopy (APC). The ADRs in the control, system A (3.2% FP rate), and system B (0.6% FP rate) groups were 44.3%, 43.4%, and 50.4%, respectively, with system B showing a significantly higher ADR than the control group. The APCs for the control, A, and B groups were 0.75, 0.83, and 0.90, respectively, with system B again exceeding the control. The non-true lesion resection rates were 23.8%, 29.2%, and 21.3%, with system B having the lowest. The system with the lower FP rate improved ADR and APC without increasing the resection of non-neoplastic lesions. These findings suggest that higher FP rates negatively affect the clinical performance of AI-assisted colonoscopy.
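For readers unfamiliar with the two primary outcomes, the short sketch below computes ADR and APC from a per-procedure table; the column names and counts are illustrative assumptions, not the study's data schema.

```python
import pandas as pd

# Sketch: computing adenoma detection rate (ADR) and adenomas per colonoscopy
# (APC), the two primary outcomes of the trial. Data are invented.
records = pd.DataFrame({
    "procedure_id": [1, 2, 3, 4, 5],
    "n_adenomas":   [0, 2, 1, 0, 3],   # histologically confirmed adenomas per procedure
})

adr = (records["n_adenomas"] > 0).mean()   # share of procedures with >=1 adenoma
apc = records["n_adenomas"].mean()         # mean adenomas per procedure

print(f"ADR = {adr:.1%}, APC = {apc:.2f}")
```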
Affiliation(s)
- Goh Eun Chung
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Department of Internal Medicine, Seoul National University College of Medicine, Seoul, Korea
- Jooyoung Lee
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Seon Hee Lim
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Hae Yeon Kang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Jung Kim
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Ji Hyun Song
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Sun Young Yang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Ji Min Choi
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Ji Yeon Seo
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Jung Ho Bae
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
3. Lee J, Cho WS, Kim BS, Yoon D, Kim J, Song JH, Yang SY, Lim SH, Chung GE, Choi JM, Han YM, Kong HJ, Lee JC, Kim S, Bae JH. Impact of User's Background Knowledge and Polyp Characteristics in Colonoscopy with Computer-Aided Detection. Gut Liver 2024;18:857-866. [PMID: 39054913] [PMCID: PMC11391145] [DOI: 10.5009/gnl240068]
Abstract
Background/Aims: We investigated how interactions between humans and computer-aided detection (CADe) systems are influenced by the user's experience and polyp characteristics. Methods: We developed a CADe system using YOLOv4, trained on 16,996 polyp images from 1,914 patients and 1,800 synthesized sessile serrated lesion (SSL) images. The performance of polyp detection with CADe assistance was evaluated using a computerized test module. Eighteen participants were grouped by colonoscopy experience (nurses, fellows, and experts). The value added by CADe, stratified by polyp histopathology and detection difficulty, was analyzed. Results: The area under the curve for CADe was 0.87 (95% confidence interval [CI], 0.83 to 0.91). CADe assistance increased overall polyp detection accuracy from 69.7% to 77.7% (odds ratio [OR], 1.88; 95% CI, 1.69 to 2.09). However, accuracy decreased when CADe inaccurately detected a polyp (OR, 0.72; 95% CI, 0.58 to 0.87). The impact of CADe assistance was most prominent in the nurses (OR, 1.97; 95% CI, 1.71 to 2.27) and least prominent in the experts (OR, 1.42; 95% CI, 1.15 to 1.74). Participants demonstrated better sensitivity with CADe assistance, achieving 81.7% for adenomas and 92.4% for easy-to-detect polyps, surpassing the standalone CADe performance of 79.7% and 89.8%, respectively. For SSLs and difficult-to-detect polyps, participants' sensitivities with CADe assistance (66.5% and 71.5%, respectively) were below those of standalone CADe (81.1% and 74.4%). Compared with the other two groups (56.1% and 61.7%), the expert group showed sensitivity closest to that of standalone CADe in detecting SSLs (79.7% vs 81.1%). Conclusions: CADe assistance significantly boosts polyp detection, but its effectiveness depends on the user's experience, particularly for challenging lesions.
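The abstract reports effect sizes as odds ratios with 95% confidence intervals. As a minimal sketch of how such an OR and a Wald-type CI are derived from a 2x2 table, the snippet below uses invented counts, not the study's data.

```python
import numpy as np

# Sketch: odds ratio (OR) with a Wald 95% CI from a 2x2 table of detection
# outcomes (detected vs missed, with vs without CADe). Counts are invented.
a, b = 777, 223   # with CADe: detected, missed
c, d = 697, 303   # without CADe: detected, missed

or_ = (a * d) / (b * c)
se = np.sqrt(1/a + 1/b + 1/c + 1/d)                  # SE of log(OR)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```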
Affiliation(s)
- Jooyoung Lee
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Woo Sang Cho
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Jung Kim
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Ji Hyun Song
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Sun Young Yang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Seon Hee Lim
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Goh Eun Chung
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Ji Min Choi
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Yoo Min Han
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Hyoun-Joong Kong
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea
- Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, Korea
- Jung Chan Lee
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea
- Institute of Bioengineering, Seoul National University, Seoul, Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea
- Institute of Bioengineering, Seoul National University, Seoul, Korea
- Jung Ho Bae
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
4. Wen D, Soltan A, Trucco E, Matin RN. From data to diagnosis: skin cancer image datasets for artificial intelligence. Clin Exp Dermatol 2024;49:675-685. [PMID: 38549552] [DOI: 10.1093/ced/llae112]
Abstract
Artificial intelligence (AI) solutions for skin cancer diagnosis continue to gain momentum, edging closer towards broad clinical use. These AI models, particularly deep-learning architectures, require large digital image datasets for development. This review provides an overview of the datasets used to develop AI algorithms and highlights the importance of dataset transparency for evaluating algorithm generalizability across varying populations and settings. Current challenges for the curation of clinically valuable datasets are detailed, including dataset shifts arising from demographic variations and differences in data collection methodologies, along with inconsistencies in labelling. These shifts can lead to differential algorithm performance, compromised clinical utility, and the propagation of discriminatory biases when developed algorithms are deployed in mismatched populations. The limited representation of rare skin cancers and minoritized groups in existing datasets is highlighted, as it can further skew algorithm performance. Strategies to address these challenges are presented, including improving transparency, representation and interoperability. Federated learning and generative methods, which may improve dataset size and diversity without compromising privacy, are also examined. Lastly, we discuss model-level techniques that may address biases introduced through the use of datasets derived from routine clinical care. As the role of AI in skin cancer diagnosis becomes more prominent, ensuring the robustness of underlying datasets is increasingly important.
Affiliation(s)
- David Wen
- Department of Dermatology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK
- Andrew Soltan
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Oxford Cancer and Haematology Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Department of Oncology, University of Oxford, Oxford, UK
- Emanuele Trucco
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Rubeta N Matin
- Department of Dermatology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Artificial Intelligence Working Party Group, British Association of Dermatologists, London, UK
5. Huang Y, Yang X, Liu L, Zhou H, Chang A, Zhou X, Chen R, Yu J, Chen J, Chen C, Liu S, Chi H, Hu X, Yue K, Li L, Grau V, Fan DP, Dong F, Ni D. Segment anything model for medical images? Med Image Anal 2024;92:103061. [PMID: 38086235] [DOI: 10.1016/j.media.2023.103061]
Abstract
The Segment Anything Model (SAM) is the first foundation model for general image segmentation and has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on this so-called COSMOS 1050K dataset. Our main findings are as follows: (1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or even failed entirely in other situations. (2) SAM with the large ViT-H showed better overall performance than with the small ViT-B. (3) SAM performed better with manual hints, especially boxes, than in the Everything mode. (4) SAM could assist human annotation with high labeling quality and less time. (5) SAM was sensitive to randomness in the center-point and tight-box prompts, which could cause a serious performance drop. (6) SAM performed better than interactive methods with one or a few points, but was outpaced as the number of points increased. (7) SAM's performance correlated with different factors, including boundary complexity and intensity differences. (8) Fine-tuning SAM on specific medical tasks improved its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. Code and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope this comprehensive report helps researchers explore the potential of SAM applications in MIS and guides how to appropriately use and develop SAM.
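A minimal sketch of the box-prompt workflow the authors found most effective is shown below, using the public segment-anything package; the checkpoint filename, image path, and box coordinates are assumptions, and the DICE helper mirrors the overlap metric reported in the paper.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Sketch: prompting SAM with a tight box on a medical image, mirroring the
# finding that box prompts beat the Everything mode. Paths are assumptions.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("ct_slice.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

box = np.array([120, 80, 260, 210])  # x0, y0, x1, y1 around the target structure
masks, scores, _ = predictor.predict(box=box, multimask_output=False)

def dice(pred, gt):
    # DICE overlap between a predicted and a ground-truth boolean mask.
    return 2 * (pred & gt).sum() / (pred.sum() + gt.sum())

# gt_mask = ...  # a boolean ground-truth mask would be loaded here
# print(dice(masks[0].astype(bool), gt_mask))
```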
Affiliation(s)
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Lian Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Han Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ao Chang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xinrui Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Rusi Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Junxuan Yu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jiongquan Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Sijing Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Kejuan Yue
- Hunan First Normal University, Changsha, China
- Lei Li
- Department of Engineering Science, University of Oxford, Oxford, UK
- Vicente Grau
- Department of Engineering Science, University of Oxford, Oxford, UK
- Deng-Ping Fan
- Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Fajin Dong
- Ultrasound Department, the Second Clinical Medical College, Jinan University, China; First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
6. Choi JY, Ryu IH, Kim JK, Lee IS, Yoo TK. Development of a generative deep learning model to improve epiretinal membrane detection in fundus photography. BMC Med Inform Decis Mak 2024;24:25. [PMID: 38273286] [PMCID: PMC10811871] [DOI: 10.1186/s12911-024-02431-4]
Abstract
BACKGROUND The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages, so screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in CFP. METHODS This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. A StyleGAN2-based generative model was trained on single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned healthcare center data to the development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of the ERM. The proposed method with StyleGAN2-based augmentation outperformed typical transfer learning without a generative adversarial network, achieving an area under the receiver operating characteristic curve (AUC) of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved detection performance and helped the model focus on the location of the ERM. CONCLUSIONS We proposed an ERM detection model that synthesizes realistic CFP images with the pathological features of ERM through generative deep learning. We believe this deep learning framework will help achieve more accurate detection of ERM in limited-data settings.
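The augmentation recipe, synthesizing minority-class images with a trained generator and folding them into the real training pool, can be sketched as below. The stub generator stands in for a real trained StyleGAN2 and exists only so the pipeline runs end to end; it is not the authors' model.

```python
import torch
import torch.nn as nn

# Sketch of StyleGAN2-based augmentation: sample synthetic ERM-positive
# fundus images from a generator and mix them into the real training pool
# before fitting the classifier. Stylegan2Stub is purely illustrative.
class Stylegan2Stub(nn.Module):
    z_dim = 512
    def forward(self, z):
        # A real generator maps latents to 3xHxW images; this stub just
        # projects and reshapes noise so the pipeline below is runnable.
        return torch.tanh(z @ torch.randn(self.z_dim, 3 * 64 * 64)).view(-1, 3, 64, 64)

G = Stylegan2Stub().eval()
n_synth = 500
with torch.no_grad():
    synth_images = G(torch.randn(n_synth, G.z_dim))   # synthetic "ERM" images
synth_labels = torch.ones(n_synth, dtype=torch.long)  # all labeled ERM-positive

# real_images, real_labels = ...  # the center's CFP data would be loaded here
# train_images = torch.cat([real_images, synth_images])
# train_labels = torch.cat([real_labels, synth_labels])
```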
Affiliation(s)
- Joon Yul Choi
- Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Ik Hee Ryu
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Research and development department, VISUWORKS, Seoul, South Korea
- Jin Kuk Kim
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Research and development department, VISUWORKS, Seoul, South Korea
- In Sik Lee
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Tae Keun Yoo
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Research and development department, VISUWORKS, Seoul, South Korea
7. Kim BS, Cho M, Chung GE, Lee J, Kang HY, Yoon D, Cho WS, Lee JC, Bae JH, Kong HJ, Kim S. Density clustering-based automatic anatomical section recognition in colonoscopy video using deep learning. Sci Rep 2024;14:872. [PMID: 38195632] [PMCID: PMC10776865] [DOI: 10.1038/s41598-023-51056-6]
Abstract
Recognizing anatomical sections during colonoscopy is crucial for diagnosing colonic diseases and generating accurate reports. While recent studies have endeavored to identify anatomical regions of the colon using deep learning, the deformable anatomical characteristics of the colon make establishing a reliable localization system challenging. This study presents a system, built on 100 colonoscopy videos, that combines density clustering and deep learning. Cascaded CNN models sequentially estimate the appendix orifice (AO), the flexures, and "outside of the body," after which the DBSCAN algorithm is applied to identify anatomical sections. The clustering-based analysis integrates clinical knowledge and context based on the anatomical section within the model. Challenges posed by colonoscopy images are addressed through preprocessing that removes non-informative frames. The image data is labeled by clinicians, and the system deduces section correspondence stochastically. The model categorizes the colon into three sections: right (cecum and ascending colon), middle (transverse colon), and left (descending colon, sigmoid colon, rectum). The appearance times of anatomical boundaries were estimated with an average error of 6.31 s for the AO, 9.79 s for the hepatic flexure (HF), 27.69 s for the splenic flexure (SF), and 3.26 s for outside of the body. The proposed method can facilitate future advancements toward AI-based automatic reporting, offering time savings and standardization.
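A minimal sketch of the clustering step, assuming frame-level AO predictions have already been produced by the cascaded CNNs, is shown below; the timestamps and DBSCAN parameters are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Sketch: clustering frame-level landmark predictions along the video
# timeline with DBSCAN, so scattered false positives are discarded and a
# dense run of appendix-orifice (AO) detections marks the boundary.
ao_times = np.array([302.1, 302.4, 302.6, 303.0, 511.9,
                     302.8, 303.3, 120.5]).reshape(-1, 1)  # seconds with AO predicted

labels = DBSCAN(eps=1.0, min_samples=4).fit(ao_times).labels_  # -1 marks noise

# Keep the densest cluster; its mean timestamp estimates the AO appearance time.
clusters = [ao_times[labels == k].ravel() for k in set(labels) if k != -1]
ao_estimate = max(clusters, key=len).mean()
print(f"estimated AO time: {ao_estimate:.1f} s")
```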
Grants
- 1711179421, RS-2021-KD000006 the Korea Medical Device Development Fund grant funded by the Korean government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health and Welfare, and the Ministry of Food and Drug Safety)
- IITP-2023-2018-0-01833 the Ministry of Science and ICT, Korea under the Information Technology Research Center (ITRC) support program
Affiliation(s)
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Minwoo Cho
- Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Medicine, Seoul National University College of Medicine, Seoul, 03080, Korea
- Goh Eun Chung
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Jooyoung Lee
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Hae Yeon Kang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Woo Sang Cho
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Jung Chan Lee
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea
- Institute of Bioengineering, Seoul National University, Seoul, 08826, Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, 03080, Korea
- Jung Ho Bae
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Hyoun-Joong Kong
- Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Medicine, Seoul National University College of Medicine, Seoul, 03080, Korea
- Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, 03087, Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea
- Institute of Bioengineering, Seoul National University, Seoul, 08826, Korea
- Artificial Intelligence Institute, Seoul National University, Research Park Building 942, 2 Fl., Seoul, 08826, Korea
8. Jacobs F, D'Amico S, Benvenuti C, Gaudio M, Saltalamacchia G, Miggiano C, De Sanctis R, Della Porta MG, Santoro A, Zambelli A. Opportunities and Challenges of Synthetic Data Generation in Oncology. JCO Clin Cancer Inform 2023;7:e2300045. [PMID: 37535875] [DOI: 10.1200/cci.23.00045]
Abstract
Widespread interest in artificial intelligence (AI) in health care has focused mainly on deductive systems that analyze available real-world data to discover patterns not otherwise visible. Generative adversarial networks, a newer form of inductive AI, have recently evolved to generate high-fidelity virtual synthetic data (SD) from relatively limited real-world information. The AI system is fed a collection of real data and learns to generate new, augmented data that maintain the general characteristics of the original data set. The use of SD to enhance clinical research and protect patient privacy has drawn considerable interest in medicine and in the complex field of oncology. This article summarizes the main characteristics of this innovative technology and critically discusses how it can be used to accelerate data access for secondary purposes, providing an overview of the opportunities and challenges of SD generation for clinical cancer research and health care.
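As a concrete, hedged illustration of GAN-based tabular synthesis of the kind discussed, the sketch below uses the open-source ctgan package on an invented five-row oncology table; real use would involve far more data plus privacy and fidelity evaluation before any release.

```python
import pandas as pd
from ctgan import CTGAN

# Sketch: fitting a GAN-based synthesizer to a tiny tabular oncology dataset
# and sampling synthetic patients. Column names and values are invented.
real = pd.DataFrame({
    "age":      [61, 54, 70, 48, 66],
    "stage":    ["II", "III", "I", "II", "IV"],
    "her2_pos": [0, 1, 0, 0, 1],
    "tumor_mm": [18.0, 32.5, 9.0, 22.1, 41.3],
})

model = CTGAN(epochs=10)
model.fit(real, discrete_columns=["stage", "her2_pos"])

synthetic = model.sample(1000)  # synthetic cohort preserving joint structure
print(synthetic.head())
```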
Affiliation(s)
- Flavia Jacobs
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Chiara Benvenuti
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Mariangela Gaudio
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Chiara Miggiano
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Rita De Sanctis
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Matteo Giovanni Della Porta
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Armando Santoro
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Alberto Zambelli
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
9. Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022;3:117-141. [DOI: 10.35712/aig.v3.i5.117]
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States