1
Lee L, Lin C, Hsu CJ, Lin HH, Lin TC, Liu YH, Hu JM. Applying Deep-Learning Algorithm Interpreting Kidney, Ureter, and Bladder (KUB) X-Rays to Detect Colon Cancer. J Imaging Inform Med 2025; 38:1606-1616. [PMID: 39482492] [DOI: 10.1007/s10278-024-01309-1] [Received: 08/01/2024] [Revised: 09/26/2024] [Accepted: 10/14/2024] [Indexed: 11/03/2024]
Abstract
Early screening is crucial in reducing the mortality of colorectal cancer (CRC). Current screening methods, including fecal occult blood tests (FOBT) and colonoscopy, are limited primarily by low patient compliance and the invasive nature of the procedures. Several advanced imaging techniques, such as computed tomography (CT) and histological imaging, have been integrated with artificial intelligence (AI) to enhance the detection of CRC, but these approaches remain limited by the challenges and cost of image acquisition. The kidney, ureter, and bladder (KUB) radiograph is inexpensive and widely used for abdominal assessment in emergency settings, and it shows potential for detecting CRC when enhanced using advanced techniques. This study aimed to develop a deep learning model (DLM) to detect CRC using KUB radiographs. This retrospective study was conducted using data from the Tri-Service General Hospital (TSGH) between January 2011 and December 2020, including patients with at least one KUB radiograph. Patients were divided into development (n = 28,055), tuning (n = 11,234), and internal validation (n = 16,875) sets. An additional 15,876 patients were collected from a community hospital as the external validation set. A 121-layer DenseNet convolutional network was trained to classify KUB images for CRC detection. Model performance was evaluated using receiver operating characteristic curves, with sensitivity, specificity, and area under the curve (AUC) as metrics. The AUC, sensitivity, and specificity of the DLM were 0.738, 61.3%, and 74.4% in the internal validation set and 0.656, 47.7%, and 72.9% in the external validation set, respectively. The model performed better for high-grade CRC, with AUCs of 0.744 and 0.674 in the internal and external sets, respectively. Stratified analysis showed superior performance in females aged 55-64 with high-grade cancers.
AI-positive predictions were associated with a higher long-term risk of all-cause mortality in both validation cohorts. AI-enhanced KUB X-ray analysis can enhance CRC screening coverage and effectiveness, providing a cost-effective alternative to traditional methods. Further prospective studies are necessary to validate these findings and fully integrate this technology into clinical practice.
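The AUC, sensitivity, and specificity reported above can be illustrated with a minimal pure-Python sketch: AUC via the rank-based (Mann-Whitney) formulation, plus sensitivity/specificity at a fixed operating threshold. The labels, scores, and threshold below are invented for illustration, not data from the study.

```python
# Toy evaluation of a binary classifier's scores against ground-truth labels.

def auc_score(labels, scores):
    """AUC = P(score of a random positive > score of a random negative); ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity when scores >= threshold are called positive."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    p = sum(labels)
    n = len(labels) - p
    return tp / p, tn / n

# Hypothetical model outputs: 1 = cancer, 0 = no cancer.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
```

At threshold 0.4 this toy example yields perfect sensitivity and specificity 0.75; lowering the threshold trades specificity for sensitivity, which is exactly the tradeoff the ROC curve traces.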
Affiliation(s)
- Ling Lee
- School of Medicine, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Chin Lin
- School of Medicine, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Military Digital Medical Center, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- School of Public Health, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Chia-Jung Hsu
- School of Public Health, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Medical Informatics Office, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Heng-Hsiu Lin
- School of Public Health, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Medical Informatics Office, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Tzu-Chiao Lin
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Yu-Hong Liu
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Je-Ming Hu
- School of Medicine, National Defense Medical Center, Taipei, R.O.C, Taiwan.
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan.
- Graduate Institute of Medical Sciences, National Defense Medical Center, No 325, Section 2, Cheng-Kung Road, Neihu 114, Taipei, R.O.C, Taiwan.
2
Sasmal P, Kumar Panigrahi S, Panda SL, Bhuyan MK. Attention-guided deep framework for polyp localization and subsequent classification via polyp local and Siamese feature fusion. Med Biol Eng Comput 2025. [PMID: 40314710] [DOI: 10.1007/s11517-025-03369-z] [Received: 12/24/2024] [Accepted: 04/16/2025] [Indexed: 05/03/2025]
Abstract
Colorectal cancer (CRC) is one of the leading causes of death worldwide. This paper proposes an automated diagnostic technique to detect, localize, and classify polyps in colonoscopy video frames. The proposed model adopts the deep YOLOv4 model and incorporates spatial and contextual information, in the form of spatial attention and channel attention blocks respectively, for better localization of polyps. Finally, leveraging a fusion of deep and handcrafted features, the detected polyps are classified as adenoma or non-adenoma. Polyp shape and texture are essential features in discriminating polyp types. The proposed work therefore extracts these features using a pyramid histogram of oriented gradient (PHOG) and embedding features learned via a triplet Siamese architecture. The PHOG captures local shape information from each polyp class, whereas the Siamese network extracts intra-polyp discriminating features. The individual and cross-database performances on two databases suggest the robustness of our method in polyp localization. A competitive analysis against current state-of-the-art methods, based on significant clinical parameters, confirms that our method can be used for automated polyp localization in both real-time and offline colonoscopic video frames. Our method provides an average precision of 0.8971 and 0.9171 and an F1 score of 0.8869 and 0.8812 on the Kvasir-SEG and SUN databases, respectively. Similarly, the proposed classification framework for the detected polyps yields a classification accuracy of 96.66% on a publicly available UCI colonoscopy video dataset. Moreover, the classification framework provides an F1 score of 96.54%, validating the potential of the proposed framework in polyp localization and classification.
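The triplet objective behind a Siamese embedding branch of this kind can be sketched in a few lines: pull an anchor embedding toward a same-class (positive) example and push it away from a different-class (negative) example by at least a margin. This is a generic sketch of triplet learning, not the paper's implementation; the embeddings and margin below are invented.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Zero when the negative is at least `margin` farther than the positive."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Hypothetical 2-D embeddings of polyp image patches.
anchor   = [0.0, 0.0]
positive = [0.1, 0.0]   # same polyp class, close to the anchor
negative = [2.0, 0.0]   # other class, far from the anchor
```

Training minimizes this loss over many triplets, so that distances in the learned embedding space separate adenoma from non-adenoma patches.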
Affiliation(s)
- Pradipta Sasmal
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, West Bengal, 721302, India.
- Susant Kumar Panigrahi
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, West Bengal, 721302, India
- Swarna Laxmi Panda
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India
- M K Bhuyan
- Department of Electronics and Electrical Engineering, Indian Institute of Technology, Guwahati, Assam, 781039, India
3
Zhang MX, Liu PF, Zhang MD, Su PG, Shang HS, Zhu JT, Wang DY, Ji XY, Liao QM. Deep learning in nuclear medicine: from imaging to therapy. Ann Nucl Med 2025; 39:424-440. [PMID: 40080372] [DOI: 10.1007/s12149-025-02031-w] [Received: 11/25/2024] [Accepted: 02/24/2025] [Indexed: 03/15/2025]
Abstract
BACKGROUND Deep learning, a leading technology in artificial intelligence (AI), has shown remarkable potential in revolutionizing nuclear medicine. OBJECTIVE This review presents recent advancements in deep learning applications, particularly in nuclear medicine imaging, lesion detection, and radiopharmaceutical therapy. RESULTS Leveraging various neural network architectures, deep learning has significantly enhanced the accuracy of image reconstruction, lesion segmentation, and diagnosis, improving the efficiency of disease detection and treatment planning. The integration of deep learning with functional imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) enables more precise diagnostics while facilitating the development of personalized treatment strategies. Despite this promising outlook, limitations and challenges remain, particularly in model interpretability, generalization across diverse datasets, multimodal data fusion, and the ethical and legal issues raised by its application. CONCLUSION As technological advancements continue, deep learning is poised to drive substantial changes in nuclear medicine, particularly in precision healthcare, real-time treatment monitoring, and clinical decision-making. Future research will likely focus on overcoming these challenges and further enhancing model transparency, thus improving clinical applicability.
Affiliation(s)
- Meng-Xin Zhang
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China
- Department of Nuclear Medicine, Henan International Joint Laboratory for Nuclear Protein Regulation, The First Affiliated Hospital, Henan University College of Medicine, Ximen St, Kaifeng, 475004, Henan, China
- Peng-Fei Liu
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China
- Department of Nuclear Medicine, Henan International Joint Laboratory for Nuclear Protein Regulation, The First Affiliated Hospital, Henan University College of Medicine, Ximen St, Kaifeng, 475004, Henan, China
- Meng-Di Zhang
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China
- Department of Nuclear Medicine, Henan International Joint Laboratory for Nuclear Protein Regulation, The First Affiliated Hospital, Henan University College of Medicine, Ximen St, Kaifeng, 475004, Henan, China
- Pei-Gen Su
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China
- School of Medical Technology, Qiqihar Medical University, Qiqihar, 161006, Heilongjiang, China
- He-Shan Shang
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China
- School of Computer and Information Engineering, Henan University, Kaifeng, 475004, Henan, China
- Jiang-Tao Zhu
- Faculty of Basic Medical Subjects, Shu-Qing Medical College of Zhengzhou, Zhengzhou, 450064, Henan, China.
- Department of Surgery, Faculty of Clinical Medicine, Zhengzhou Shu-Qing Medical College, Gongming Rd, Mazhai Town, Zhengzhou, 450064, Henan, China.
- Da-Yong Wang
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China.
- Department of Nuclear Medicine, Henan International Joint Laboratory for Nuclear Protein Regulation, The First Affiliated Hospital, Henan University College of Medicine, Ximen St, Kaifeng, 475004, Henan, China.
- Xin-Ying Ji
- Department of Microbiology and Immunology, Henan Provincial Research Center of Engineering Technology for Nuclear Protein Medical Detection, Zhengzhou Health College, Zhengzhou, 45000, Henan, China.
- Department of Nuclear Medicine, Henan International Joint Laboratory for Nuclear Protein Regulation, The First Affiliated Hospital, Henan University College of Medicine, Ximen St, Kaifeng, 475004, Henan, China.
- Faculty of Basic Medical Subjects, Shu-Qing Medical College of Zhengzhou, Zhengzhou, 450064, Henan, China.
- Qi-Ming Liao
- Department of Medical Informatics and Computer, Shu-Qing Medical College of Zhengzhou, Gong-Ming Rd, Mazhai Town, Erqi District, Zhengzhou, 450064, Henan, China.
4
Chan SM, Chan D, Yip HC, Scheppach MW, Lam R, Ng SKK, Ng EKW, Chiu PW. Artificial intelligence-assisted esophagogastroduodenoscopy improves procedure quality for endoscopists in early stages of training. Endosc Int Open 2025; 13:a25476645. [PMID: 40309064] [PMCID: PMC12042994] [DOI: 10.1055/a-2547-6645] [Received: 10/04/2024] [Accepted: 02/11/2025] [Indexed: 05/02/2025]
Abstract
Background and study aims Completeness of esophagogastroduodenoscopy (EGD) varies among endoscopists, leading to a high miss rate for gastric neoplasms. This study aimed to determine the effect of the Cerebro real-time artificial intelligence (AI) system on completeness of EGD for endoscopists in early stages of training. Patients and methods The AI system was built with a convolutional neural network (CNN) and Motion Adaptive Temporal Feature Aggregation (MA-TFA). A prospective sequential cohort study was conducted. Endoscopists were taught a standardized EGD protocol covering 27 examination sites. Each subject then performed diagnostic EGDs per protocol (control arm). After the required sample size was reached, subjects performed diagnostic EGDs with assistance of the AI (study arm). The primary outcome was the rate of completeness of EGD. Secondary outcomes included overall inspection time, individual site inspection time, completeness of photodocumentation, and rate of positive pathologies. Results A total of 466 EGDs were performed, 233 in each group. Use of AI significantly improved completeness of EGD [mean (SD) 92.6% (6.2%) vs 71.2% (16.8%); P < 0.001; 95% confidence interval 19.2%-23.8%, SD 0.012]. There was no difference in overall mean (SD) inspection time [765.5 (338.4) seconds vs 740.4 (266.2) seconds; P = 0.374]. The mean (SD) number of photos for photodocumentation increased significantly in the AI group [26.9 (0.4) vs 10.3 (4.4); P < 0.001]. There was no difference in detection rates for pathologies between the two groups [8/233 (3.43%) vs 5/233 (2.16%); P = 0.399]. Conclusions Completeness of EGD examination and photodocumentation by endoscopists in early stages of training are improved by the AI-assisted software Cerebro.
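The primary outcome, completeness, reduces to simple set arithmetic: the fraction of the 27 protocol sites actually inspected in one exam. This is an illustrative sketch only; the site names below are hypothetical placeholders, not the protocol's actual anatomical landmarks.

```python
# 27 standardized examination sites (hypothetical identifiers).
PROTOCOL_SITES = {f"site_{i:02d}" for i in range(1, 28)}

def completeness(inspected_sites):
    """Fraction of protocol sites covered by one EGD; extras are ignored."""
    seen = PROTOCOL_SITES & set(inspected_sites)
    return len(seen) / len(PROTOCOL_SITES)

# e.g. an exam that documented only the first 20 of the 27 sites
exam = [f"site_{i:02d}" for i in range(1, 21)]
```

Averaging this per-exam fraction over each study arm gives the group-level completeness rates compared above.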
Affiliation(s)
- Shannon Melissa Chan
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Daniel Chan
- Surgery, UNSW St George & Sutherland, Kogarah, Australia
- Hon Chi Yip
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Markus Wolfgang Scheppach
- Internal Medicine III - Gastroenterology, University of Augsburg Faculty of Medicine, Augsburg, Germany
- Ray Lam
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Stephen KK Ng
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Enders Kwok Wai Ng
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Philip W Chiu
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, Hong Kong
5
Huhulea EN, Huang L, Eng S, Sumawi B, Huang A, Aifuwa E, Hirani R, Tiwari RK, Etienne M. Artificial Intelligence Advancements in Oncology: A Review of Current Trends and Future Directions. Biomedicines 2025; 13:951. [PMID: 40299653] [PMCID: PMC12025054] [DOI: 10.3390/biomedicines13040951] [Received: 03/10/2025] [Revised: 04/03/2025] [Accepted: 04/10/2025] [Indexed: 05/01/2025]
Abstract
Cancer remains one of the leading causes of mortality worldwide, driving the need for innovative approaches in research and treatment. Artificial intelligence (AI) has emerged as a powerful tool in oncology, with the potential to revolutionize cancer diagnosis, treatment, and management. This paper reviews recent advancements in AI applications within cancer research, focusing on early detection through computer-aided diagnosis, personalized treatment strategies, and drug discovery. We survey AI-enhanced diagnostic applications and explore AI techniques such as deep learning, as well as the integration of AI with nanomedicine and immunotherapy for cancer care. Comparative analyses of AI-based models versus traditional diagnostic methods are presented, highlighting AI's superior potential. Additionally, we discuss the importance of integrating social determinants of health to optimize cancer care. Despite these advancements, challenges such as data quality, algorithmic biases, and clinical validation remain, limiting widespread adoption. The review concludes with a discussion of the future directions of AI in oncology, emphasizing its potential to reshape cancer care by enhancing diagnosis, personalizing treatments and targeted therapies, and ultimately improving patient outcomes.
Affiliation(s)
- Ellen N. Huhulea
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Lillian Huang
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Shirley Eng
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Bushra Sumawi
- Barshop Institute, The University of Texas Health Science Center, San Antonio, TX 78229, USA
- Audrey Huang
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Esewi Aifuwa
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Rahim Hirani
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Raj K. Tiwari
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Mill Etienne
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Department of Neurology, New York Medical College, Valhalla, NY 10595, USA
6
Gao Y, Wen P, Liu Y, Sun Y, Qian H, Zhang X, Peng H, Gao Y, Li C, Gu Z, Zeng H, Hong Z, Wang W, Yan R, Hu Z, Fu H. Application of artificial intelligence in the diagnosis of malignant digestive tract tumors: focusing on opportunities and challenges in endoscopy and pathology. J Transl Med 2025; 23:412. [PMID: 40205603] [PMCID: PMC11983949] [DOI: 10.1186/s12967-025-06428-z] [Received: 11/20/2024] [Accepted: 03/25/2025] [Indexed: 04/11/2025]
Abstract
BACKGROUND Malignant digestive tract tumors are highly prevalent and fatal tumor types globally, often diagnosed at advanced stages due to atypical early symptoms, causing patients to miss optimal treatment opportunities. Traditional endoscopic and pathological diagnostic processes are highly dependent on expert experience, facing problems such as high misdiagnosis rates and significant inter-observer variations. With the development of artificial intelligence (AI) technologies such as deep learning, real-time lesion detection with endoscopic assistance and automated pathological image analysis have shown potential in improving diagnostic accuracy and efficiency. However, relevant applications still face challenges including insufficient data standardization, inadequate interpretability, and weak clinical validation. OBJECTIVE This study aims to systematically review the current applications of artificial intelligence in diagnosing malignant digestive tract tumors, focusing on the progress and bottlenecks in two key areas: endoscopic examination and pathological diagnosis, and to provide feasible ideas and suggestions for subsequent research and clinical translation. METHODS A systematic literature search strategy was adopted to screen relevant studies published between 2017 and 2024 from databases including PubMed, Web of Science, Scopus, and IEEE Xplore, supplemented with searches of early classical literature. Inclusion criteria included studies on malignant digestive tract tumors such as esophageal cancer, gastric cancer, or colorectal cancer, involving the application of artificial intelligence technology in endoscopic diagnosis or pathological analysis. The effects and main limitations of AI diagnosis were summarized through comprehensive analysis of research design, algorithmic methods, and experimental results from relevant literature. 
RESULTS In the field of endoscopy, multiple deep learning models have significantly improved detection rates in real-time polyp detection, early gastric cancer, and esophageal cancer screening, with some commercialized systems successfully entering clinical trials. However, the scale and quality of data vary widely across studies, and the generalizability of models to multi-center, multi-device environments remains to be verified. In pathological analysis, convolutional neural networks, multimodal pre-trained models, and related methods have enabled automatic tissue segmentation, tumor grading, and assisted diagnosis, and have shown good scalability in interactive question answering. Nevertheless, clinical implementation still faces obstacles such as non-uniform data standards, a lack of large-scale prospective validation, and insufficient model interpretability and continuous-learning mechanisms. CONCLUSION Artificial intelligence provides new technological opportunities for the endoscopic and pathological diagnosis of malignant digestive tract tumors, achieving positive results in early lesion identification and assisted decision-making. However, to move from research to widespread clinical application, data standardization, model reliability, and interpretability still need to be improved through multi-center collaborative research, and a complete regulatory and ethical framework needs to be established. In the future, artificial intelligence will play a more important role in the standardized and precise management of the diagnosis and treatment of digestive tract tumors.
Affiliation(s)
- Yinhu Gao
- Department of Gastroenterology, Shaanxi Province Rehabilitation Hospital, Xi'an, Shaanxi, China
- Peizhen Wen
- Department of General Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China.
- Yuan Liu
- Shanghai Lung Cancer Center, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yahuang Sun
- Division of Colorectal Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China
- Hui Qian
- Department of Gastroenterology, Changzheng Hospital, Naval Medical University, 415 Fengyang Road, Shanghai, 200003, China
- Xin Zhang
- Department of Gastrointestinal Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China
- Huan Peng
- Division of Colorectal Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China
- Yanli Gao
- Infection Control Office, Shaanxi Province Rehabilitation Hospital, Xi'an, Shaanxi, China
- Cuiyu Li
- Department of Radiology, The First Hospital of Nanchang, the Third Affiliated Hospital of Nanchang University, Nanchang, 330008, Jiangxi, China
- Zhangyuan Gu
- Tongji University School of Medicine, Tongji University, Shanghai, 200092, People's Republic of China
- Huajin Zeng
- Department of General Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China
- Zhijun Hong
- Tongji University School of Medicine, Tongji University, Shanghai, 200092, People's Republic of China
- Weijun Wang
- Department of Gastrointestinal Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China.
- Ronglin Yan
- Department of Gastroenterology, Changzheng Hospital, Naval Medical University, 415 Fengyang Road, Shanghai, 200003, China.
- Zunqi Hu
- Department of Gastrointestinal Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China.
- Hongbing Fu
- Department of Gastrointestinal Surgery, Changzheng Hospital, Navy Medical University, 415 Fengyang Road, Shanghai, 200003, China.
7
Osagiede O, Wallace MB. The Role of Artificial Intelligence for Advanced Endoscopy. Gastrointest Endosc Clin N Am 2025; 35:419-430. [PMID: 40021238] [DOI: 10.1016/j.giec.2024.10.006] [Indexed: 03/03/2025]
Abstract
Artificial intelligence (AI) application in gastroenterology has grown in the last decade and continues to evolve very rapidly. Early promising results have opened the door to explore its potential application to advanced endoscopy (AE). The aim of this review is to discuss the current state of the art and future directions of AI in AE. Current evidence suggests that AI-assisted endoscopic ultrasound models can be used in clinical practice to distinguish between benign and malignant pancreatic diseases with excellent results. AI-assisted endoscopic retrograde cholangiopancreatography models could also be useful in identifying the papilla, predicting difficult cannulation, and differentiating between benign and malignant strictures.
Affiliation(s)
- Osayande Osagiede
- Division of Gastroenterology and Hepatology, Mayo Clinic, 4500 San Pablo Road South, Jacksonville, FL 32224, USA.
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, 4500 San Pablo Road South, Jacksonville, FL 32224, USA
8
Sultan S, Shung DL, Kolb JM, Foroutan F, Hassan C, Kahi CJ, Liang PS, Levin TR, Siddique SM, Lebwohl B. AGA Living Clinical Practice Guideline on Computer-Aided Detection-Assisted Colonoscopy. Gastroenterology 2025; 168:691-700. [PMID: 40121061] [DOI: 10.1053/j.gastro.2025.01.002] [Indexed: 03/25/2025]
Abstract
BACKGROUND & AIMS This American Gastroenterological Association (AGA) guideline is intended to provide an overview of the evidence and to support endoscopists and patients on the use of computer-aided detection (CADe) systems for the detection of colorectal polyps during colonoscopy. METHODS A multidisciplinary panel of content experts and guideline methodologists used the Grading of Recommendations Assessment, Development and Evaluation framework and relied on the following sources of evidence: (1) a systematic review examining the desirable and undesirable effects (ie, benefits and harms) of CADe-assisted colonoscopy, (2) a microsimulation study estimating the effects of CADe on longer-term patient-important outcomes, (3) a systematic search of evidence evaluating the values and preferences of patients undergoing colonoscopy, and (4) a systematic review of studies evaluating health care providers' trust in artificial intelligence technology in gastroenterology. RESULTS The panel concluded that no recommendation could be made for or against the use of CADe-assisted colonoscopy, in light of the very low certainty of evidence for the critical outcomes, the small desirable effects (11 fewer colorectal cancers and 2 fewer colorectal cancer deaths per 10,000 individuals), the increased burden of more intensive surveillance colonoscopies (635 more per 10,000 individuals), and the cost and resource implications. The panel acknowledged the 8% (95% CI, 6%-10%) increase in adenoma detection rate and the 2% (95% CI, 0%-4%) increase in advanced adenoma and/or sessile serrated lesion detection rate. CONCLUSIONS This guideline highlights the close tradeoff between desirable and undesirable effects and the limitations of the current evidence in supporting a recommendation. The panel acknowledged the potential for CADe to improve continually as an iterative artificial intelligence application. Ongoing publications providing evidence for the critical outcomes will help inform a future recommendation.
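The "per 10,000 individuals" figures quoted above are absolute risk differences scaled to a fixed denominator, which lets benefits (cancers averted) and burdens (extra surveillance) be compared directly. The sketch below shows only that arithmetic; the baseline and intervention risks are made-up inputs, not guideline data.

```python
def events_per_10k(risk_control, risk_intervention):
    """Absolute risk difference expressed as events per 10,000 individuals."""
    return round((risk_intervention - risk_control) * 10_000)

# Hypothetical risks: CRC incidence 0.62% without CADe vs 0.51% with it,
# and intensive-surveillance colonoscopy rates of 10.00% vs 16.35%.
delta_crc = events_per_10k(0.0062, 0.0051)    # negative = fewer events
delta_surv = events_per_10k(0.1000, 0.1635)   # positive = more colonoscopies
```

With these invented inputs the helper reproduces the shape of the tradeoff the panel weighed: a small negative number for cancers against a much larger positive number for surveillance burden.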
Affiliation(s)
- Shahnaz Sultan
- Division of Gastroenterology, Hepatology, and Nutrition, University of Minnesota, Minneapolis, Minnesota; Minneapolis Veterans Affairs Healthcare System, Minneapolis, Minnesota
- Dennis L Shung
- Department of Medicine, Section of Digestive Diseases, Yale School of Medicine, New Haven, Connecticut
- Jennifer M Kolb
- Vatche and Tamar Manoukian Division of Digestive Diseases, David Geffen School of Medicine at University of California Los Angeles, Los Angeles, California; Division of Gastroenterology, Hepatology and Parenteral Nutrition, Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, California
- Farid Foroutan
- MAGIC Evidence Ecosystem Foundation, Oslo, Norway; Ted Rogers Centre for Heart Research, University Health Network, Toronto, Ontario, Canada
- Cesare Hassan
- IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy; Department of Biomedical Sciences, Humanitas University, Milan, Italy
- Charles J Kahi
- Department of Gastroenterology, Indiana University Medical Center, Indianapolis, Indiana
- Peter S Liang
- Department of Medicine, Division of Gastroenterology and Hepatology, NYU Langone Health, New York, New York; Department of Medicine, Veterans Affairs New York Harbor Health Care System, New York, New York
- Theodore R Levin
- Division of Research, Kaiser Permanente Northern California, Pleasanton, California; Department of Gastroenterology, Kaiser Permanente Walnut Creek, Walnut Creek, California
- Shazia Mehmood Siddique
- Division of Gastroenterology, University of Pennsylvania, Philadelphia, Pennsylvania; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania; Center for Healthcare Improvement and Patient Safety, University of Pennsylvania, Philadelphia, Pennsylvania
- Benjamin Lebwohl
- Department of Medicine, Columbia University Irving Medical Center, New York, New York; Department of Epidemiology, Mailman School of Public Health, Columbia University, New York, New York
9
Kumar A, Aravind N, Gillani T, Kumar D. Artificial intelligence breakthrough in diagnosis, treatment, and prevention of colorectal cancer – A comprehensive review. Biomed Signal Process Control 2025; 101:107205. [DOI: 10.1016/j.bspc.2024.107205] [Indexed: 12/08/2024]
10
Kerbage A, Souaid T, Singh K, Burke CA. Taking the Guess Work Out of Endoscopic Polyp Measurement: From Traditional Methods to AI. J Clin Gastroenterol 2025:00004836-990000000-00427. [PMID: 39998964] [DOI: 10.1097/mcg.0000000000002161] [Received: 08/14/2024] [Accepted: 02/07/2025] [Indexed: 02/27/2025]
Abstract
Colonoscopy is a crucial tool for evaluating lower gastrointestinal disease, monitoring high-risk patients for colorectal neoplasia, and screening for colorectal cancer. In the United States, over 14 million colonoscopies are performed annually, with a significant portion dedicated to post-polypectomy follow-up. Accurate measurement of colorectal polyp size during colonoscopy is essential, as it influences patient management, including the determination of surveillance intervals, resection strategies, and the assessment of malignancy risk. Despite its importance, many endoscopists rely on visual estimation alone, which is often imprecise due to technological and human biases, frequently leading to overestimation of polyp size and unnecessarily shortened surveillance intervals. To address these challenges, multiple tools and technologies have been developed to enhance the accuracy of polyp size estimation. This review examines the evolution of polyp measurement techniques, ranging from through-the-scope tools to computer-based and artificial intelligence-assisted technologies.
Affiliation(s)
- Tarek Souaid
- Department of Internal Medicine, Cleveland Clinic
- Carol A Burke
- Department of Gastroenterology, Hepatology and Nutrition, Cleveland Clinic
11
He C, Zhang J, Liang Y, Li H. A unified framework harnessing multi-scale feature ensemble and attention mechanism for gastric polyp and protrusion identification in gastroscope imaging. Sci Rep 2025; 15:5734. [PMID: 39962226] [PMCID: PMC11833082] [DOI: 10.1038/s41598-025-90034-y]
Abstract
This study addresses the diagnostic challenge of distinguishing gastric polyps from protrusions, emphasizing the need for accurate and cost-effective diagnostic strategies, and explores the application of Convolutional Neural Networks (CNNs) to improve diagnostic accuracy. The research introduces MultiAttentiveScopeNet, a deep learning model that incorporates multi-layer feature ensemble and attention mechanisms to enhance gastroscopy image analysis accuracy. A weakly supervised labeling strategy was employed to construct a large multi-class gastroscopy image dataset for training and validation. MultiAttentiveScopeNet demonstrates significant improvements in prediction accuracy and interpretability. The integrated attention mechanism effectively identifies critical areas in images to aid clinical decisions, and the multi-layer feature ensemble enables robust analysis of complex gastroscopy images. Comparative testing against human experts shows exceptional diagnostic performance, with accuracy, micro and macro precision, micro and macro recall, and micro and macro AUC reaching 0.9308, 0.9312, 0.9325, 0.9283, 0.9308, 0.9847 and 0.9853, respectively. This highlights its potential as an effective tool for primary healthcare settings. The study provides a comprehensive solution to the diagnostic challenge of differentiating gastric polyps and protrusions: MultiAttentiveScopeNet improves accuracy and interpretability, demonstrating the potential of deep learning for gastroscopy image analysis. The constructed dataset facilitates continued model optimization and validation, and the model shows promise in enhancing diagnostic outcomes in primary care.
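For readers unfamiliar with the micro/macro distinction used in the metrics above, a minimal sketch may help (the per-class counts below are made up for illustration; this is not the paper's data or code). Micro-averaging pools counts across classes before dividing, so common classes dominate; macro-averaging averages the per-class ratios, so rare classes weigh equally.

```python
# Illustrative sketch of micro- vs macro-averaged precision/recall for a
# multi-class classifier. The (tp, fp, fn) counts are hypothetical.

def micro_macro(per_class):
    """per_class: list of (tp, fp, fn) tuples, one per class."""
    # Micro: pool all counts, then compute the ratio once.
    tp = sum(c[0] for c in per_class)
    fp = sum(c[1] for c in per_class)
    fn = sum(c[2] for c in per_class)
    micro_p = tp / (tp + fp)
    micro_r = tp / (tp + fn)
    # Macro: compute the ratio per class, then take the unweighted mean.
    macro_p = sum(c[0] / (c[0] + c[1]) for c in per_class) / len(per_class)
    macro_r = sum(c[0] / (c[0] + c[2]) for c in per_class) / len(per_class)
    return micro_p, micro_r, macro_p, macro_r

# Hypothetical three-class example: (tp, fp, fn) per class.
counts = [(90, 10, 5), (40, 5, 10), (8, 2, 4)]
micro_p, micro_r, macro_p, macro_r = micro_macro(counts)
```

With these counts the micro and macro figures diverge (micro recall ≈ 0.879 vs macro recall ≈ 0.805), which is why papers such as this one report both.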
Affiliation(s)
- Chunyou He
- People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530016, China
- Jingda Zhang
- People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530016, China
- Yunxiao Liang
- People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530016, China
- Hao Li
- People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530016, China
12
Sukumar A, Zaman S, Mostafa OES, Patel J, Akingboye A, Waterland P. Disparity in endoscopic localisation of early distal colorectal cancers: a retrospective cohort analysis from a single institution. Langenbecks Arch Surg 2025; 410:68. [PMID: 39945868] [PMCID: PMC11825640] [DOI: 10.1007/s00423-025-03642-7]
Abstract
BACKGROUND Accurate staging of distal colorectal cancers is paramount in guiding neoadjuvant therapy, peri-operative planning, and ostomy planning. Early colonic lesions can be difficult to visualise on computed tomography (CT) scans, so tumour location is often deduced solely from endoscopy, with the potential for error. We aimed to address the paucity of literature in this area by assessing the accuracy of radiological and endoscopic localisation of distal colorectal cancers. METHODS We performed a retrospective analysis of an electronic database of patients diagnosed with distal colorectal cancer at a large District General Hospital (DGH) between January 2014 and January 2023. Patient demographics, investigations, endoscopic findings, and operative findings were analysed. Outcomes were assessed to determine disparities between pre-operative endoscopy and final tumour location. RESULTS A total of 212 patients were endoscopically diagnosed with a distal sigmoid tumour. Of these, 207 (97.6%) had a CT scan, on which 25.1% (52/207) of lesions were not identified; the remainder (74.9%; 155/207) were reported as visible. Of the visible tumours, 38.2% (79/207) were in the sigmoid colon, 17.4% (36/207) at the rectosigmoid junction, and 19.3% (40/207) in the rectum. Pre-operative magnetic resonance imaging (MRI) was performed in 42.5% (90/212) of cases and showed 84 tumours: 6.0% (5/84) in the sigmoid colon, 9.5% (8/84) rectosigmoid, and 83.3% (70/84) rectal cancers (upper: 34, mid-rectum: 26, low: 10), with one anal cancer. Among patients with lesions not visible on CT, 42.3% (22/52) had MRI scans: 68.2% (15/22) had rectal cancer (upper: 10, mid-rectum: 4, low: 1). Of the 30 in whom MRI was not performed, 46.7% (14) had sigmoid cancer, 16.7% (5) rectosigmoid, and 33.3% (10) rectal cancer intra-operatively. Overall, 30.7% (65/212) of patients reported endoscopically as having a distal sigmoid lesion in fact had rectal cancer intra-operatively (rectosigmoid lesions excluded).
CONCLUSION Endoscopic localisation of distal colorectal tumours can be unreliable for accurate staging and operative planning. A pre-operative MRI scan should be considered in such instances, and particularly for non-visible lesions on CT scan. This may improve peri-operative planning, staging accuracy and patient outcomes.
Affiliation(s)
- Aiswarya Sukumar
- Department of General and Colorectal Surgery, Royal Devon and Exeter Hospital, Royal Devon University Healthcare NHS Foundation Trust, Exeter, Devon, UK
- Shafquat Zaman
- Department of General and Colorectal Surgery, Queen's Hospital Burton, University Hospitals of Derby and Burton NHS Foundation Trust, Burton on Trent, Derby, UK
- Omar E S Mostafa
- Department of General and Colorectal Surgery, Russells Hall Hospital, The Dudley Group NHS Foundation Trust, Dudley, West Midlands, UK
- Jamie Patel
- Department of ENT, Basingstoke and North Hampshire Hospital, Hampshire Hospitals NHS Foundation Trust, Basingstoke, UK
- Akinfemi Akingboye
- Department of General and Colorectal Surgery, Russells Hall Hospital, The Dudley Group NHS Foundation Trust, Dudley, West Midlands, UK
- College of Medicine and Life Sciences, Aston University, Birmingham, UK
- Peter Waterland
- Department of General and Colorectal Surgery, Russells Hall Hospital, The Dudley Group NHS Foundation Trust, Dudley, West Midlands, UK
13
Chen HY, Tu MH, Chen MY. Using a Mobile Health App (ColonClean) to Enhance the Effectiveness of Bowel Preparation: Development and Usability Study. JMIR Hum Factors 2025; 12:e58479. [PMID: 39791869] [PMCID: PMC11735013] [DOI: 10.2196/58479]
Abstract
Background Colonoscopy is the standard diagnostic method for colorectal cancer. Patients usually receive written and verbal instructions for bowel preparation (BP) before the procedure. Failure to understand the importance of BP can lead to inadequate BP in 25%-30% of patients. The quality of BP affects the success of colonoscopy in terms of diagnostic yield and adenoma detection. We developed the "ColonClean" mobile health (mHealth) app for Android devices. It incorporates visual representations of dietary guidelines, steps for using bowel cleansing agents, and observations of the last bowel movement. We used the Technology Acceptance Model to investigate whether use of the ColonClean mHealth app can improve users' attitudes and behaviors toward BP. Objective This study aims to validate the effectiveness of the ColonClean app in enhancing user behavior and improving BP, providing safe and cost-effective outpatient colonoscopy guidance. Methods A structured questionnaire was used to assess perceived usefulness, perceived ease of use, and users' attitudes and behaviors toward BP with the ColonClean mHealth app. A total of 40 outpatients who were physically and mentally healthy and proficient in Chinese were randomly chosen for this study. The data were analyzed using SPSS 25.0, with Pearson product-moment correlation and simple regression analysis used to predict perceptions of ColonClean. Results The results showed that 75% (30/40) of participants achieved an "excellent" or "good" level of BP according to the Aronchick Bowel Preparation Scale. Perceived usefulness and perceived ease of use of the ColonClean mHealth app were positively correlated with users' attitudes and behaviors (P<.05). Conclusions The ColonClean mHealth app serves as an educational reference and enhances the effectiveness of BP. Users expressed their willingness to use the app again in the future and to recommend it to family and friends, highlighting its effectiveness as an educational guide for BP.
Affiliation(s)
- Hui-Yu Chen
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei City, Taiwan
- School of Nursing, National Taipei University of Nursing and Health Sciences, Room B631, No. 365, Ming-te Road, Peitou District, Taipei City, 11219, Taiwan, 886 2 28227101 ext 3186
- Ming-Hsiang Tu
- School of Nursing, National Taipei University of Nursing and Health Sciences, Room B631, No. 365, Ming-te Road, Peitou District, Taipei City, 11219, Taiwan, 886 2 28227101 ext 3186
- Miao-Yen Chen
- School of Nursing, National Taipei University of Nursing and Health Sciences, Room B631, No. 365, Ming-te Road, Peitou District, Taipei City, 11219, Taiwan, 886 2 28227101 ext 3186
14
Misawa M, Kudo SE. Current Status of Artificial Intelligence Use in Colonoscopy. Digestion 2024; 106:138-145. [PMID: 39724867] [DOI: 10.1159/000543345]
Abstract
BACKGROUND Artificial intelligence (AI) has significantly impacted medical imaging, particularly in gastrointestinal endoscopy. Computer-aided detection and diagnosis systems (CADe and CADx) are thought to enhance the quality of colonoscopy procedures. SUMMARY Colonoscopy is essential for colorectal cancer screening but often misses a significant percentage of adenomas. AI-assisted systems employing deep learning offer improved detection and differentiation of colorectal polyps, potentially increasing adenoma detection rates by 8%-10%. The main benefit of CADe is in detecting small adenomas, whereas it has a limited impact on advanced neoplasm detection. Recent advancements include real-time CADe systems and CADx for histopathological predictions, aiding in the differentiation of neoplastic and nonneoplastic lesions. Biases such as the Hawthorne effect and potential overdiagnosis necessitate large-scale clinical trials to validate the long-term benefits of AI. Additionally, novel concepts such as computer-aided quality improvement systems are emerging to address limitations facing current CADe systems. KEY MESSAGES Despite the potential of AI for enhancing colonoscopy outcomes, its effectiveness in reducing colorectal cancer incidence and mortality remains unproven. Further prospective studies are essential to establish the overall utility and clinical benefits of AI in colonoscopy.
Affiliation(s)
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Tsuzuki, Yokohama, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Tsuzuki, Yokohama, Japan
15
Saha S, Ghosh S, Ghosh S, Nandi S, Nayak A. Unraveling the complexities of colorectal cancer and its promising therapies - An updated review. Int Immunopharmacol 2024; 143:113325. [PMID: 39405944] [DOI: 10.1016/j.intimp.2024.113325]
Abstract
Colorectal cancer (CRC) continues to be a global health concern, necessitating further research into its complex biology and innovative treatment approaches. This thorough review summarizes the etiology, pathogenesis, diagnosis, and treatment of colorectal cancer, along with recent developments. The multifactorial nature of colorectal cancer is examined, including genetic predispositions, environmental factors, and lifestyle decisions. The focus is on deciphering the complex interactions among signaling pathways such as Wnt/β-catenin, MAPK, TGF-β, and PI3K/AKT that participate in the onset, growth, and metastasis of CRC. Diagnostic modalities spanning from traditional colonoscopy to sophisticated molecular techniques such as liquid biopsy and radiomics are discussed, emphasizing their roles in early identification, prognostication, and treatment stratification. The potential of artificial intelligence and machine learning algorithms to improve accuracy and efficiency in colorectal cancer diagnosis and management is also explored. Regarding therapy, the review provides a thorough overview of well-known treatments such as radiation, chemotherapy, and surgery, and delves into the newly emerging areas of targeted therapies and immunotherapies. Immune checkpoint inhibitors and other molecularly targeted treatments, such as anti-epidermal growth factor receptor (anti-EGFR) and anti-vascular endothelial growth factor (anti-VEGF) monoclonal antibodies, show promise in improving the prognosis of colorectal cancer patients, in particular those with metastatic disease. This review aims to give readers a thorough understanding of colorectal cancer by considering its complexities, the present status of treatment, and potential future paths for therapeutic interventions. By unraveling the intricate web of this disease, we can develop a more tailored and effective approach to treating CRC.
Affiliation(s)
- Sayan Saha
- Guru Nanak Institute of Pharmaceutical Science and Technology, 157/F, Nilgunj Rd, Sahid Colony, Panihati, Kolkata, West Bengal 700114, India
- Shreya Ghosh
- Guru Nanak Institute of Pharmaceutical Science and Technology, 157/F, Nilgunj Rd, Sahid Colony, Panihati, Kolkata, West Bengal 700114, India
- Suman Ghosh
- Guru Nanak Institute of Pharmaceutical Science and Technology, 157/F, Nilgunj Rd, Sahid Colony, Panihati, Kolkata, West Bengal 700114, India
- Sumit Nandi
- Department of Pharmacology, Gupta College of Technological Sciences, Asansol, West Bengal 713301, India
- Aditi Nayak
- Guru Nanak Institute of Pharmaceutical Science and Technology, 157/F, Nilgunj Rd, Sahid Colony, Panihati, Kolkata, West Bengal 700114, India
16
Koh GE, Ng B, Lagström RM, Foo FJ, Chin SE, Wan FT, Kam JH, Yeung B, Kwan C, Hassan C, Gögenur I, Koh FH. Real-World Assessment of the Efficacy of Computer-Assisted Diagnosis in Colonoscopy: A Single Institution Cohort Study in Singapore. Mayo Clinic Proceedings: Digital Health 2024; 2:647-655. [PMID: 40206538] [PMCID: PMC11976013] [DOI: 10.1016/j.mcpdig.2024.10.002]
Abstract
Objective To review the efficacy and accuracy of the GI Genius Intelligent Endoscopy Module Computer-Assisted Diagnosis (CADx) program in colonic adenoma detection and real-time polyp characterization. Patients and Methods Colonoscopy remains the gold standard in colonic screening and evaluation. The incorporation of artificial intelligence (AI) technology therefore allows for optimized endoscopic performance. However, validation of most CADx programs with real-world data remains scarce. This prospective cohort study was conducted within a single Singaporean institution between April 1, 2023 and December 31, 2023. Videos of all AI-enabled colonoscopies were reviewed with polyp-by-polyp analysis performed. Real-time polyp characterization predictions after sustained polyp detection were compared against final histology results to assess the accuracy of the CADx system at colonic adenoma identification. Results A total of 808 videos of CADx colonoscopies were reviewed. Out of the 781 polypectomies performed, 543 (69.5%) and 222 (28.4%) were adenomas and non-adenomas on final histology, respectively. Overall, GI Genius correctly characterized adenomas with 89.4% sensitivity, 61.7% specificity, a positive predictive value of 85.4%, a negative predictive value of 69.8%, and 81.5% accuracy. The negative predictive value for rectosigmoid lesions (80.3%) was notably higher than for colonic lesions (54.2%), attributed to the increased prevalence of hyperplastic rectosigmoid polyps (11.4%) vs other colonic regions (5.4%). Conclusion Computer-Assisted Diagnosis is therefore a promising adjunct in colonoscopy with substantial clinical implications. Accurate identification of low-risk non-adenomatous polyps encourages the adoption of "resect-and-discard" strategies. However, further calibration of AI systems is needed before the acceptance of such strategies as the new standard of care.
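The diagnostic figures above all follow from a single 2×2 confusion matrix. As a rough consistency check, here is a minimal Python sketch; the polyp counts are approximations reconstructed from the reported totals (543 adenomas, 222 non-adenomas) and the reported sensitivity and specificity, not the study's raw data, so the derived PPV/NPV/accuracy land close to, but not exactly on, the published values.

```python
# Sketch (assumed counts, not the study's raw data): standard diagnostic
# metrics from a 2x2 confusion matrix. TP/FN split the 543 adenomas using
# ~89.4% sensitivity; TN/FP split the 222 non-adenomas using ~61.7%
# specificity, rounded to whole polyps.
tp, fn = 485, 58    # adenomas called adenoma / missed       (485 + 58 = 543)
tn, fp = 137, 85    # non-adenomas called non-adenoma / over-called (137 + 85 = 222)

sensitivity = tp / (tp + fn)              # recall on adenomas
specificity = tn / (tn + fp)              # recall on non-adenomas
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sens={sensitivity:.1%} spec={specificity:.1%} "
      f"ppv={ppv:.1%} npv={npv:.1%} acc={accuracy:.1%}")
```

Note how NPV is sensitive to prevalence: with non-adenomas rarer in the colon proper than in the rectosigmoid, the same sensitivity/specificity yields the lower colonic NPV the abstract reports.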
Affiliation(s)
- Gabrielle E. Koh
- Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Brittany Ng
- Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Ronja M.B. Lagström
- Center for Surgical Science, Department of Surgery, Zealand University Hospital, Koege, Denmark
- Fung-Joon Foo
- Colorectal Service, Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Shuen-Ern Chin
- Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Fang-Ting Wan
- Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Juinn Huar Kam
- Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Baldwin Yeung
- Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Clarence Kwan
- Department of Gastroenterology, Sengkang General Hospital, Singapore, Singapore
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Rozzano, Milan, Italy
- Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Milan, Italy
- Ismail Gögenur
- Center for Surgical Science, Department of Surgery, Zealand University Hospital, Koege, Denmark
- Frederick H. Koh
- Colorectal Service, Department of Surgery, Sengkang General Hospital, Singapore, Singapore
- Department of Surgery, Duke-National University of Singapore (NUS) Medical School, National University of Singapore, Singapore, Singapore
- Department of Surgery, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
17
Li S, Xu M, Meng Y, Sun H, Zhang T, Yang H, Li Y, Ma X. The application of the combination between artificial intelligence and endoscopy in gastrointestinal tumors. MedComm – Oncology 2024; 3. [DOI: 10.1002/mog2.91]
Abstract
Gastrointestinal (GI) tumors have always been a major type of malignant tumor and a leading cause of tumor-related deaths worldwide. The main principles of modern medicine for GI tumors are early prevention, early diagnosis, and early treatment, with early diagnosis being the most effective measure. Endoscopy, due to its ability to visualize lesions, has been one of the primary modalities for screening, diagnosing, and treating GI tumors. However, a qualified endoscopist often requires long training and extensive experience, which to some extent limits the wider use of endoscopy. With advances in data science, artificial intelligence (AI) has brought a new development direction for the endoscopy of GI tumors. AI can quickly process large quantities of data and images and improve diagnostic accuracy with some training, greatly reducing the workload of endoscopists and assisting them in early diagnosis. Therefore, this review focuses on the combined application of endoscopy and AI in GI tumors in recent years, describing the latest research progress on the main types of tumors and their performance in clinical trials, the application of multimodal AI in endoscopy, the development of endoscopy, and the potential applications of AI within it, with the aim of providing a reference for subsequent research.
Affiliation(s)
- Shen Li
- Department of Biotherapy, Cancer Center, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Maosen Xu
- Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy, West China Hospital, National Clinical Research, Sichuan University, Chengdu, Sichuan, China
- Yuanling Meng
- West China School of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Haozhen Sun
- College of Life Sciences, Sichuan University, Chengdu, Sichuan, China
- Tao Zhang
- Department of Biotherapy, Cancer Center, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Hanle Yang
- Department of Biotherapy, Cancer Center, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Yueyi Li
- Department of Biotherapy, Cancer Center, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Xuelei Ma
- Department of Biotherapy, Cancer Center, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
18
Nie MY, An XW, Xing YC, Wang Z, Wang YQ, Lü JQ. Artificial intelligence algorithms for real-time detection of colorectal polyps during colonoscopy: a review. Am J Cancer Res 2024; 14:5456-5470. [PMID: 39659923] [PMCID: PMC11626263] [DOI: 10.62347/bziz6358]
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Early detection and removal of colorectal polyps during colonoscopy are crucial for preventing such cancers. With the development of artificial intelligence (AI) technology, it has become possible to detect and localize colorectal polyps in real time during colonoscopy using computer-aided diagnosis (CAD). This provides endoscopists with a reliable reference and leads to more accurate diagnosis and treatment. This paper reviews AI-based algorithms for real-time detection of colorectal polyps, with a particular focus on the development of deep learning algorithms aimed at optimizing both efficiency and accuracy. Furthermore, the challenges and prospects of AI-based colorectal polyp detection are discussed.
Affiliation(s)
- Meng-Yuan Nie
- Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
- Xin-Wei An
- Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
- Yun-Can Xing
- Department of Colorectal Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zheng Wang
- Department of Colorectal Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yan-Qiu Wang
- Langfang Traditional Chinese Medicine Hospital, Langfang, Hebei, China
- Jia-Qi Lü
- Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
19
Sinonquel P, Eelbode T, Pech O, De Wulf D, Dewint P, Neumann H, Antonelli G, Iacopini F, Tate D, Lemmers A, Pilonis ND, Kaminski MF, Roelandt P, Hassan C, Ingrid D, Maes F, Bisschops R. Clinical consequences of computer-aided colorectal polyp detection. Gut 2024; 73:1974-1983. [PMID: 38876773] [DOI: 10.1136/gutjnl-2024-331943]
Abstract
BACKGROUND AND AIM Randomised trials show improved polyp detection with computer-aided detection (CADe), mostly of small lesions. However, operator and selection bias may affect CADe's true benefit, and the clinical outcomes of increased detection have not yet been fully elucidated. METHODS In this multicentre trial, a CADe system combining convolutional and recurrent neural networks was used for polyp detection. Blinded endoscopists were monitored in real time by a second observer with CADe access; CADe detections prompted reinspection. Adenoma detection rates (ADR) and polyp detection rates were measured prestudy and poststudy. Histological assessments were done by independent histopathologists. The primary outcome compared polyp detection between endoscopists and CADe. RESULTS In 946 patients (51.9% male, mean age 64 years), a total of 2141 polyps were identified, including 989 adenomas. CADe was not superior to human polyp detection (sensitivity 94.6% vs 96.0%) but outperformed endoscopists when restricted to adenomas. Unblinding led to an additional yield of 86 true positive polyp detections (a 1.1% ADR increase per patient; 73.8% were <5 mm). CADe also increased non-neoplastic polyp detection in an additional 4.9% of cases (a 1.8% increase in the entire polyp load). Procedure time increased by 6.6±6.5 min (+42.6%). In 22/946 patients (2.3%), the additional detection of adenomas changed surveillance intervals, mostly by increasing the number of small adenomas beyond the cut-off. CONCLUSION Even if CADe appears to be slightly more sensitive than human endoscopists, the additional gain in ADR was minimal and follow-up intervals rarely changed. Inspection of non-neoplastic lesions increased, adding to the inspection and/or polypectomy workload.
Affiliation(s)
- Pieter Sinonquel
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Tom Eelbode
- Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Oliver Pech
- Gastroenterology and Hepatology, Krankenhaus Barmherzige Bruder Regensburg, Regensburg, Germany
- Dominiek De Wulf
- Gastroenterology and Hepatology, AZ Delta vzw, Roeselare, Belgium
- Pieter Dewint
- Gastroenterology and Hepatology, AZ Maria Middelares vzw, Gent, Belgium
- Helmut Neumann
- Gastroenterology and Hepatology, Gastrozentrum Lippe, Bad Salzuflen, Germany
- Giulio Antonelli
- Gastroenterology and Digestive Endoscopy Unit, Ospedale Nuovo Regina Margherita, Roma, Italy
- Federico Iacopini
- Gastroenterology and Digestive Endoscopy, Ospedale dei Castelli, Ariccia, Italy
- David Tate
- Gastroenterology and Hepatology, UZ Gent, Gent, Belgium
- Arnaud Lemmers
- Gastroenterology and Hepatology, ULB Erasme, Bruxelles, Belgium
- Michal Filip Kaminski
- Department of Gastroenterology, Hepatology and Oncology, Medical Centre for Postgraduate Education, Warsaw, Poland
- Department of Gastroenterological Oncology, The Maria Sklodowska-Curie Memorial Cancer Centre, Institute of Oncology, Warsaw, Poland
- Philip Roelandt
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Humanitas Research Hospital, Milan, Italy
- Demedts Ingrid
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Frederik Maes
- Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Raf Bisschops
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
20
Chen J, Xia K, Zhang Z, Ding Y, Wang G, Xu X. Establishing an AI model and application for automated capsule endoscopy recognition based on convolutional neural networks (with video). BMC Gastroenterol 2024; 24:394. [PMID: 39501161] [PMCID: PMC11539301] [DOI: 10.1186/s12876-024-03482-7]
Abstract
BACKGROUND Although capsule endoscopy (CE) is a crucial tool for diagnosing small bowel diseases, the need to process a vast number of images imposes a significant workload on physicians, leading to a high risk of missed diagnoses. This study aims to develop an artificial intelligence (AI) model and application based on convolutional neural networks that can automatically recognize various lesions in small bowel capsule endoscopy. METHODS Three small bowel capsule endoscopy datasets were used for AI model training, validation, and testing, encompassing 12 categories of images. The model's performance was evaluated using metrics such as AUC, sensitivity, specificity, precision, accuracy, and F1 score to select the best model. A human-machine comparison experiment was conducted using the best model and endoscopists with varying levels of experience. Model interpretability was analyzed using Grad-CAM and SHAP techniques. Finally, a clinical application was developed based on the best model using PyQt5 technology. RESULTS A total of 34,303 images were included in this study. The best model, MobileNetV3-Large, achieved a weighted average sensitivity of 87.17%, specificity of 98.77%, and an AUC of 0.9897 across all categories. The application developed based on this model performed exceptionally well in comparison with endoscopists, achieving an accuracy of 87.17% and a processing speed of 75.04 frames per second, surpassing endoscopists of varying experience levels. CONCLUSION The AI model and application developed based on convolutional neural networks can quickly and accurately identify 12 types of small bowel lesions. With its high sensitivity, this system can effectively assist physicians in interpreting small bowel capsule endoscopy images. Future studies will validate the AI system for video evaluations and real-world clinical integration.
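The identical 87.17% figures reported for accuracy and weighted sensitivity are no coincidence: for a single-label multi-class classifier, the support-weighted average of per-class recall reduces algebraically to overall accuracy. A small sketch with a made-up 3-class confusion matrix (the study used 12 categories) illustrates the identity:

```python
# Sketch: support-weighted per-class sensitivity (recall) equals overall
# accuracy for a single-label multi-class classifier. The confusion matrix
# below is hypothetical, not the study's data.
import numpy as np

cm = np.array([[50, 3, 2],    # rows: true class, cols: predicted class
               [4, 30, 1],
               [2, 2, 6]])

support = cm.sum(axis=1)                         # true count per class
recall = np.diag(cm) / support                   # per-class sensitivity
weighted_recall = (recall * support).sum() / support.sum()
accuracy = np.trace(cm) / cm.sum()

# (recall * support) is exactly the diagonal, so the two quantities match.
assert abs(weighted_recall - accuracy) < 1e-12
```

Weighted specificity carries no such identity, which is why it can sit much higher (98.77%) than accuracy.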
Affiliation(s)
- Jian Chen
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Changshu Key Laboratory of Medical Artificial Intelligence and Big Data, Changshu City, Suzhou, 215500, China
- Kaijian Xia
- Center of Intelligent Medical Technology Research, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Changshu Key Laboratory of Medical Artificial Intelligence and Big Data, Changshu City, Suzhou, 215500, China
- Zihao Zhang
- Shanghai Haoxiong Education Technology Co., Ltd., Shanghai, 200434, China
- Yu Ding
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Ganhong Wang
- Department of Gastroenterology, Changshu Hospital Affiliated to Nanjing University of Chinese Medicine, Suzhou, 215500, China
- Xiaodan Xu
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
21
Lin J, Zhu S, Gao X, Liu X, Xu C, Xu Z, Zhu J. Evaluation of super resolution technology for digestive endoscopic images. Heliyon 2024; 10:e38920. [PMID: 39430485 PMCID: PMC11489312 DOI: 10.1016/j.heliyon.2024.e38920] [Received: 03/17/2024] [Revised: 09/25/2024] [Accepted: 10/02/2024] [Indexed: 10/22/2024]
Abstract
OBJECTIVE This study aims to evaluate the value of super-resolution (SR) technology in augmenting the quality of digestive endoscopic images. METHODS In this retrospective study, we employed two advanced SR models, SwinIR and ESRGAN. Two discrete datasets were utilized: training was conducted with the dataset of the First Affiliated Hospital of Soochow University (12,212 high-resolution images) and evaluation with the HyperKvasir dataset (2,566 low-resolution images). Furthermore, endoscopists assessed the impact of enhancement on low-resolution images using a 5-point Likert scale. Finally, two endoscopic image classification tasks were employed to evaluate the effect of SR technology on computer vision (CV). RESULTS SwinIR demonstrated superior performance, achieving a PSNR of 32.60, an SSIM of 0.90, and a VIF of 0.47 on the test set. 90% of endoscopists agreed that SR preprocessing moderately improved the readability of endoscopic images. For CV, enhanced images bolstered the performance of convolutional neural networks, whether in the classification of Barrett's esophagus (improved F1-score: 0.04) or the Mayo Endoscopy Score (improved F1-score: 0.04). CONCLUSIONS SR technology can produce high-resolution endoscopic images. The approach enhanced both the clinical readability of low-resolution endoscopic images and the performance of CV models applied to them.
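The PSNR metric reported above has a standard closed form; the toy sketch below computes it for two tiny grayscale "images". The 2×2 arrays are illustrative values, not endoscopic data.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized grayscale images."""
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    # mean squared error over all pixels
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Every pixel is off by exactly 1, so MSE = 1 and PSNR = 10*log10(255^2)
val = psnr([[0, 255], [255, 0]], [[1, 254], [254, 1]])
```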
Affiliation(s)
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shiqi Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xin Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaolin Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chunfang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Zhonghua Xu
- Department of Orthopedics, Jintan Affiliated Hospital to Jiangsu University, Changzhou, China
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
22
Sun L, Zhang R, Gu Y, Huang L, Jin C. Application of Artificial Intelligence in the diagnosis and treatment of colorectal cancer: a bibliometric analysis, 2004-2023. Front Oncol 2024; 14:1424044. [PMID: 39464716 PMCID: PMC11502294 DOI: 10.3389/fonc.2024.1424044] [Received: 05/13/2024] [Accepted: 09/23/2024] [Indexed: 10/29/2024]
Abstract
BACKGROUND An increasing number of studies have turned their lens to the application of Artificial Intelligence (AI) in the diagnosis and treatment of colorectal cancer (CRC). OBJECTIVE To clarify and visualize the basic situation, research hotspots, and development trends of AI in the diagnosis and treatment of CRC, and to provide clues for future research. METHODS On January 31, 2024, the Web of Science Core Collection (WoSCC) database was searched to screen and export the relevant research published during 2004-2023, and CiteSpace, VOSviewer, and Bibliometrix were used to visualize the number of publications, countries (regions), institutions, journals, authors, citations, keywords, etc. RESULTS A total of 2715 publications were included. The number of publications grew slowly until the end of 2016, then rapidly after 2017, reaching a peak of 798 in 2023. A total of 92 countries, 3997 organizations, and 15,667 authors were involved in this research. Chinese scholars published the highest number of papers, and the U.S. contributed the highest number of total citations. Among authors, Mori Yuichi had the highest number of publications, and Wang Pu had the highest number of total citations. According to the analysis of citations and keywords, the current research hotspots mainly relate to "Colonoscopy", "Polyp Segmentation", "Digital Pathology", "Radiomics", and "prognosis". CONCLUSION Research on the application of AI in the diagnosis and treatment of CRC has made significant progress and is flourishing across the world. Current research hotspots include AI-assisted early screening and diagnosis, pathology and staging, and prognosis assessment; future research is predicted to focus on multimodal data fusion, personalized treatment, and drug development.
Affiliation(s)
- Lamei Sun
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Department of Traditional Chinese Medicine, Jiangyin Nanzha Community Health Service Center, Wuxi, China
- Rong Zhang
- Department of General Surgery, Jiangyin Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Yidan Gu
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Lei Huang
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Chunhui Jin
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
23
Wang L, Wan J, Meng X, Chen B, Shao W. MCH-PAN: gastrointestinal polyp detection model integrating multi-scale feature information. Sci Rep 2024; 14:23382. [PMID: 39379452 PMCID: PMC11461898 DOI: 10.1038/s41598-024-74609-9] [Received: 04/01/2024] [Accepted: 09/27/2024] [Indexed: 10/10/2024]
Abstract
The rise of object detection models has brought new breakthroughs to the development of clinical decision support systems. However, in the field of gastrointestinal polyp detection, there are still challenges such as uncertainty in polyp identification and inadequate coping with polyp scale variations. To address these challenges, this paper proposes a novel gastrointestinal polyp object detection model. The model can automatically identify polyp regions in gastrointestinal images and accurately label them. In terms of design, the model integrates multi-channel information to enhance the ability and robustness of channel feature expression, thus better coping with the complexity of polyp structures. At the same time, a hierarchical structure is constructed in the model to enhance the model's adaptability to multi-scale targets, effectively addressing the problem of large-scale variations in polyps. Furthermore, a channel attention mechanism is designed in the model to improve the accuracy of target positioning and reduce uncertainty in diagnosis. By integrating these strategies, the proposed gastrointestinal polyp object detection model can achieve accurate polyp detection, providing clinicians with reliable and valuable references. Experimental results show that the model exhibits superior performance in gastrointestinal polyp detection, which helps improve the diagnostic level of digestive system diseases and provides useful references for related research fields.
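The channel attention mechanism mentioned above is not specified in detail in the abstract; the sketch below shows a generic squeeze-and-excitation-style gate (global pooling, a two-layer bottleneck, and per-channel rescaling) as one common way such attention is built. All weights and feature maps here are toy values, and this is not the MCH-PAN module itself.

```python
import math

def channel_attention(feature_maps, w1, b1, w2, b2):
    """Squeeze-and-excitation-style gate: pool each channel, pass the pooled
    vector through a two-layer bottleneck, and rescale channels by the result."""
    # squeeze: global average pool per channel
    pooled = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
              for fm in feature_maps]
    # excitation: fully connected -> ReLU -> fully connected -> sigmoid
    hidden = [max(0.0, sum(w * p for w, p in zip(ws, pooled)) + b)
              for ws, b in zip(w1, b1)]
    gates = [1 / (1 + math.exp(-(sum(w * h for w, h in zip(ws, hidden)) + b)))
             for ws, b in zip(w2, b2)]
    # rescale: multiply every pixel of a channel by its gate
    return [[[v * g for v in row] for row in fm]
            for fm, g in zip(feature_maps, gates)]

# Two 2x2 channels; zero second-layer weights give sigmoid(0) = 0.5 gates
fmaps = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
out = channel_attention(fmaps, w1=[[1, 0], [0, 1]], b1=[0, 0],
                        w2=[[0, 0], [0, 0]], b2=[0, 0])
```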
Affiliation(s)
- Ling Wang
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Jingjing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223002, China
- Xianchun Meng
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bolun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Wei Shao
- Nanjing University of Aeronautics and Astronautics Shenzhen Research Institute, Shenzhen, 518038, China
24
Mushtaq S, Khan AB. Comment on: "Detection of high-risk polyps at screening colonoscopy indicates risk for liver and biliary cancer death". Dig Liver Dis 2024; 56:1799-1800. [PMID: 39079830 DOI: 10.1016/j.dld.2024.07.010] [Received: 06/30/2024] [Accepted: 07/04/2024] [Indexed: 09/29/2024]
Affiliation(s)
- Saba Mushtaq
- Ayub Medical College, New Westridge Colony Street# 8 House#6 Misrial Road Rawalpindi, Abbottabad, Punjab, Pakistan
- Abdul Basit Khan
- Ayub Medical College, Street#7, Phul Gulaab Road, Al Mansoor Town, Abbottabad, Pakistan
25
Tahir AM, Guo L, Ward RK, Yu X, Rideout A, Hore M, Wang ZJ. Explainable machine learning for assessing upper respiratory tract of racehorses from endoscopy videos. Comput Biol Med 2024; 181:109030. [PMID: 39173488 DOI: 10.1016/j.compbiomed.2024.109030] [Received: 12/02/2023] [Revised: 06/20/2024] [Accepted: 08/13/2024] [Indexed: 08/24/2024]
Abstract
Laryngeal hemiplegia (LH) is a major upper respiratory tract (URT) complication in racehorses. Endoscopic imaging of the horse's throat is the gold standard for URT assessment. However, current manual assessment faces several challenges, stemming from the poor quality of endoscopy videos and the subjectivity of manual grading. To overcome such limitations, we propose an explainable machine learning (ML)-based solution for efficient URT assessment. Specifically, a cascaded YOLOv8 architecture is utilized to segment the key semantic regions and landmarks per frame. Several spatiotemporal features are then extracted from key landmark points and fed to a decision tree (DT) model to classify LH as Grade 1, 2, 3, or 4, denoting absence of LH, mild, moderate, and severe LH, respectively. The proposed method, validated through 5-fold cross-validation on 107 videos, showed promising performance in classifying different LH grades, with 100%, 91.18%, 94.74%, and 100% sensitivity values for Grades 1 to 4, respectively. Further validation on an external dataset of 72 cases confirmed its generalization capability, with 90%, 80.95%, 100%, and 100% sensitivity values for Grades 1 to 4, respectively. We introduced several explainability-related assessment functions, including: (i) visualization of YOLOv8 output to detect landmark estimation errors, which can affect the final classification; (ii) time-series visualization to assess video quality; and (iii) backtracking of the DT output to identify borderline cases. We incorporated domain knowledge (e.g., veterinarian diagnostic procedures) into the proposed ML framework. This provides an assistive tool with clinical relevance and explainability that can ease and speed up URT assessment by veterinarians.
Affiliation(s)
- Anas Mohammed Tahir
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Li Guo
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Rabab K Ward
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Xinhui Yu
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Andrew Rideout
- Point To Point Research & Development, Vancouver, BC, Canada
- Michael Hore
- Hagyard Equine Medical Institute, Lexington, KY, USA
- Z Jane Wang
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
26
Oukdach Y, Garbaz A, Kerkaou Z, El Ansari M, Koutti L, El Ouafdi AF, Salihoun M. UViT-Seg: An Efficient ViT and U-Net-Based Framework for Accurate Colorectal Polyp Segmentation in Colonoscopy and WCE Images. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:2354-2374. [PMID: 38671336 PMCID: PMC11522253 DOI: 10.1007/s10278-024-01124-8] [Received: 01/29/2024] [Revised: 04/01/2024] [Accepted: 04/13/2024] [Indexed: 04/28/2024]
Abstract
Colorectal cancer (CRC) stands out as one of the most prevalent global cancers. The accurate localization of colorectal polyps in endoscopy images is pivotal for timely detection and removal, contributing significantly to CRC prevention. The manual analysis of images generated by gastrointestinal screening technologies poses a tedious task for doctors. Therefore, computer vision-assisted cancer detection could serve as an efficient tool for polyp segmentation. Numerous efforts have been dedicated to automating polyp localization, with the majority of studies relying on convolutional neural networks (CNNs) to learn features from polyp images. Despite their success in polyp segmentation tasks, CNNs exhibit significant limitations in precisely determining polyp location and shape due to their sole reliance on learning local features from images. Because gastrointestinal images manifest significant variation in both high- and low-level features, a framework able to learn both types of polyp features is desired. This paper introduces UViT-Seg, a framework designed for polyp segmentation in gastrointestinal images. Operating on an encoder-decoder architecture, UViT-Seg employs two distinct feature extraction methods. A vision transformer in the encoder section captures long-range semantic information, while a CNN module, integrating squeeze-excitation and dual attention mechanisms, captures low-level features, focusing on critical image regions. Experimental evaluations conducted on five public datasets, including CVC-ClinicDB, CVC-ColonDB, Kvasir-SEG, ETIS-LaribDB, and Kvasir Capsule-SEG, demonstrate UViT-Seg's effectiveness in polyp localization. To confirm its generalization performance, the model is tested on datasets not used in training. Benchmarking against common segmentation methods and state-of-the-art polyp segmentation approaches, the proposed model yields promising results. For instance, it achieves a mean Dice coefficient of 0.915 and a mean intersection over union of 0.902 on the CVC-ColonDB dataset. Furthermore, UViT-Seg is efficient, requiring fewer computational resources for both training and testing, which positions it as a strong choice for real-world deployment.
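The Dice coefficient and intersection over union quoted above are simple overlap measures on binary masks. The sketch below computes both on flattened toy masks; it is an illustration of the metric definitions, not code from the paper.

```python
def dice_and_iou(pred, truth):
    """pred/truth: equal-length binary (0/1) masks, flattened to 1-D."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    # Dice = 2|A∩B| / (|A|+|B|);  IoU = |A∩B| / |A∪B|
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
d, j = dice_and_iou(pred, truth)  # overlap of 2 pixels out of 3+3 predicted/true
```

Note that Dice ≥ IoU always holds for non-empty masks, which is why reported Dice scores run slightly higher than IoU on the same predictions.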
Affiliation(s)
- Yassine Oukdach
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Anass Garbaz
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Zakaria Kerkaou
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Mohamed El Ansari
- Informatics and Applications Laboratory, Department of Computer Sciences, Faculty of Science, Moulay Ismail University, B.P 11201, Meknès, 52000, Morocco
- Lahcen Koutti
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Ahmed Fouad El Ouafdi
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Mouna Salihoun
- Faculty of Medicine and Pharmacy of Rabat, Mohammed V University of Rabat, Rabat, 10000, Morocco
27
Wang S, Zhao X, Guo H, Qi F, Qiao Y, Wang C. Fusion model of gray level co-occurrence matrix and convolutional neural network faced for histopathological images. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2024; 95:105124. [PMID: 39451106 DOI: 10.1063/5.0216417] [Received: 04/29/2024] [Accepted: 10/08/2024] [Indexed: 10/26/2024]
Abstract
The image recognition of cancer cells plays an important role in diagnosing and treating cancer. Deep learning is suitable for classifying histopathological images and providing auxiliary technology for cancer diagnosis. Convolutional neural networks are employed in the classification of histopathological images; however, a model's accuracy may decrease as the number of network layers increases. Extracting appropriate image features is helpful for image classification. In this paper, different features of histopathological images are represented by extracting features of the gray-level co-occurrence matrix. These features are recombined into a 16 × 16 × 3 matrix to form an artificial image. The original image and the artificial image are fused by summing the softmax outputs. The histopathological images are divided into training, validation, and testing sets. Each training dataset consists of 1500 images, while the validation and test datasets each consist of 500 images. The results demonstrate the effectiveness of our fusion model through significant improvements in accuracy, precision, recall, and F1-score, with an average accuracy reaching 99.31%. This approach not only enhances the classification performance of tissue pathology images but also holds promise for advancing computer-aided diagnosis in cancer pathology.
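Score-level fusion by summing softmax outputs, as described above, can be sketched in a few lines. This is a generic reconstruction of the fusion step only (the GLCM branch and CNN are omitted), and the logits below are invented for illustration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_predict(logits_original, logits_artificial):
    """Sum the two branches' softmax probabilities, then take the argmax."""
    fused = [pa + pb for pa, pb in zip(softmax(logits_original),
                                       softmax(logits_artificial))]
    return fused.index(max(fused))

# Branch A weakly favors class 0; branch B strongly favors class 1,
# so the summed probabilities pick class 1.
label = fuse_predict([2.0, 1.0], [0.0, 3.0])
```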
Affiliation(s)
- Shanxiang Wang
- School of Mechanical and Electric Engineering, Soochow University, Suzhou 215123, China
- Xiaoxue Zhao
- School of Mechanical and Electric Engineering, Soochow University, Suzhou 215123, China
- Hao Guo
- School of Mechanical and Electric Engineering, Soochow University, Suzhou 215123, China
- Jiangsu Provincial Key Laboratory of Advanced Robotics, Soochow University, Suzhou 215123, China
- Fei Qi
- School of Mechanical and Electric Engineering, Soochow University, Suzhou 215123, China
- Yu Qiao
- Department of Oncology, Beijing Hospital, National Center of Gerontology, Beijing 100730, China
- Chunju Wang
- School of Mechanical and Electric Engineering, Soochow University, Suzhou 215123, China
- Jiangsu Provincial Key Laboratory of Advanced Robotics, Soochow University, Suzhou 215123, China
28
Shukla A, Chaudhary R, Nayyar N. Role of artificial intelligence in gastrointestinal surgery. Artif Intell Cancer 2024; 5. [DOI: 10.35713/aic.v5.i2.97317] [Received: 05/28/2024] [Revised: 07/11/2024] [Accepted: 07/17/2024] [Indexed: 09/05/2024]
Abstract
Artificial intelligence is rapidly evolving, and its application in the medical field is increasing day by day. Artificial intelligence is also valuable in gastrointestinal diseases: calculating various scoring systems, evaluating radiological images, assisting before and during surgery, processing pathological slides, estimating prognosis, and assessing treatment responses. This field has a promising future and can influence many management algorithms. In this minireview, we aimed to outline the basics of artificial intelligence, the role that artificial intelligence may play in gastrointestinal surgeries and malignancies, and the limitations thereof.
Affiliation(s)
- Ankit Shukla
- Department of Surgery, Dr Rajendra Prasad Government Medical College, Kangra 176001, Himachal Pradesh, India
- Rajesh Chaudhary
- Department of Renal Transplantation, Dr Rajendra Prasad Government Medical College, Kangra 176001, India
- Nishant Nayyar
- Department of Radiology, Dr Rajendra Prasad Government Medical College, Kangra 176001, Himachal Pradesh, India
29
Ba Q, Yuan X, Wang Y, Shen N, Xie H, Lu Y. Development and Validation of Machine Learning Algorithms for Prediction of Colorectal Polyps Based on Electronic Health Records. Biomedicines 2024; 12:1955. [PMID: 39335469 PMCID: PMC11429196 DOI: 10.3390/biomedicines12091955] [Received: 07/04/2024] [Revised: 08/02/2024] [Accepted: 08/22/2024] [Indexed: 09/30/2024]
Abstract
BACKGROUND Colorectal polyps are the main source of precancerous lesions in colorectal cancer. To increase the early diagnosis of tumors and improve their screening, we aimed to develop a simple and non-invasive diagnostic prediction model for colorectal polyps based on machine learning (ML) and using accessible health examination records. METHODS We conducted a single-center observational retrospective study in China. The derivation cohort, consisting of 5426 individuals who underwent colonoscopy screening from January 2021 to January 2024, was split into training (cohort 1) and validation (cohort 2) sets. The variables considered in this study included demographic data, vital signs, and laboratory results recorded in health examination records. With features selected by univariate analysis and Lasso regression analysis, nine machine learning methods were utilized to develop a colorectal polyp diagnostic model. Several evaluation indexes, including the area under the receiver-operating-characteristic curve (AUC), were used to compare the predictive performance. The SHapley Additive exPlanations method (SHAP) was used to rank feature importance and explain the final model. RESULTS Fourteen independent predictors were identified as the most valuable features for establishing the models. The adaptive boosting machine (AdaBoost) model exhibited the best performance among the 9 ML models in cohort 1, with accuracy, sensitivity, specificity, positive predictive value, negative predictive value, F1 score, and AUC (95% CI) of 0.632 (0.618-0.646), 0.635 (0.550-0.721), 0.674 (0.591-0.758), 0.593 (0.576-0.611), 0.673 (0.654-0.691), 0.608 (0.560-0.655), and 0.687 (0.626-0.749), respectively. The final model gave an AUC of 0.675 in cohort 2. Additionally, the precision-recall (PR) curve for the AdaBoost model reached the highest AUPR of 0.648, positioning it nearest to the upper right corner. SHAP analysis provided visualized explanations, reaffirming the critical factors associated with the risk of colorectal polyps in the asymptomatic population. CONCLUSIONS This study integrated clinical and laboratory indicators with machine learning techniques to establish a predictive model for colorectal polyps, providing non-invasive, cost-effective screening strategies for asymptomatic individuals and guiding decisions on further examination and treatment.
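The battery of binary metrics reported above (sensitivity, specificity, PPV, NPV, accuracy, F1) all derive from one confusion matrix. The sketch below shows those derivations; the counts are toy numbers chosen only to exercise the formulas, not the study's data.

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                 # sensitivity / recall
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value / precision
    npv = tn / (tn + fn)                  # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sens / (ppv + sens)    # harmonic mean of precision & recall
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1}

m = binary_metrics(tp=60, fp=40, fn=35, tn=65)
```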
Affiliation(s)
- Qinwen Ba
- Department of Laboratory Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xu Yuan
- Department of Laboratory Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Yun Wang
- Department of Laboratory Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Na Shen
- Department of Laboratory Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Huaping Xie
- Department of Gastroenterology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Yanjun Lu
- Department of Laboratory Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
30
Bou Jaoude J, Al Bacha R, Abboud B. Will artificial intelligence reach any limit in gastroenterology? Artif Intell Gastroenterol 2024; 5:91336. [DOI: 10.35712/aig.v5.i2.91336] [Received: 12/27/2023] [Revised: 04/25/2024] [Accepted: 06/07/2024] [Indexed: 08/08/2024]
Abstract
Endoscopy is the cornerstone of the management of digestive diseases. Over the last few decades, technology has played an important role in the development of this field, helping endoscopists better detect and characterize luminal lesions. However, despite ongoing advancements in endoscopic technology, the incidence of missed pre-neoplastic and neoplastic lesions remains high because of the operator-dependent nature of endoscopy and the challenging learning curve associated with new technologies. Artificial intelligence (AI), an operator-independent field, could be an invaluable solution. AI can serve as a "second observer", enhancing the performance of endoscopists in detecting and characterizing luminal lesions. By utilizing deep learning (DL), an innovation within machine learning, AI automatically extracts input features from targeted endoscopic images. DL encompasses both computer-aided detection and computer-aided diagnosis, assisting endoscopists in reducing missed detection rates and predicting the histology of luminal digestive lesions. AI applications in clinical gastrointestinal diseases are continuously expanding and now span the entire digestive tract. In published studies, real-time AI assistance improves the performance of non-expert gastroenterologists, bringing it to a level comparable to that of experts. The development of DL may be affected by selection biases, and published studies have utilized heterogeneous AI-assisted models. In the future, algorithms will need validation through large, randomized trials. Theoretically, there is no limit to how far AI can assist endoscopists in increasing the accuracy and quality of endoscopic exams. In practice, however, we still have a long way to go before our AI models are standardized, accepted, and applied by all gastroenterologists.
Affiliation(s)
- Joseph Bou Jaoude
- Department of Gastroenterology, Levant Hospital, Beirut 166830, Lebanon
- Rose Al Bacha
- Department of Gastroenterology, Levant Hospital, Beirut 166830, Lebanon
- Bassam Abboud
- Department of General Surgery, Geitaoui Hospital, Faculty of Medicine, Lebanese University, Beirut 166830, Lebanon
31
Chang Q, Ahmad D, Toth J, Bascom R, Higgins WE. ESFPNet: Efficient Stage-Wise Feature Pyramid on Mix Transformer for Deep Learning-Based Cancer Analysis in Endoscopic Video. J Imaging 2024; 10:191. [PMID: 39194980 DOI: 10.3390/jimaging10080191] [Received: 06/20/2024] [Revised: 07/19/2024] [Accepted: 08/01/2024] [Indexed: 08/29/2024]
Abstract
For patients at risk of developing either lung cancer or colorectal cancer, the identification of suspect lesions in endoscopic video is an important procedure. The physician performs an endoscopic exam by navigating an endoscope through the organ of interest, be it the lungs or intestinal tract, and performs a visual inspection of the endoscopic video stream to identify lesions. Unfortunately, this entails a tedious, error-prone search over a lengthy video sequence. We propose a deep learning architecture that enables the real-time detection and segmentation of lesion regions from endoscopic video, with our experiments focused on autofluorescence bronchoscopy (AFB) for the lungs and colonoscopy for the intestinal tract. Our architecture, dubbed ESFPNet, draws on a pretrained Mix Transformer (MiT) encoder and a decoder structure that incorporates a new Efficient Stage-Wise Feature Pyramid (ESFP) to promote accurate lesion segmentation. In comparison to existing deep learning models, the ESFPNet model gave superior lesion segmentation performance for an AFB dataset. It also produced superior segmentation results for three widely used public colonoscopy databases and nearly the best results for two other public colonoscopy databases. In addition, the lightweight ESFPNet architecture requires fewer model parameters and less computation than other competing models, enabling the real-time analysis of input video frames. Overall, these studies point to the combined superior analysis performance and architectural efficiency of the ESFPNet for endoscopic video analysis. Lastly, additional experiments with the public colonoscopy databases demonstrate the learning ability and generalizability of ESFPNet, implying that the model could be effective for region segmentation in other domains.
Affiliation(s)
- Qi Chang
- School of Electrical Engineering and Computer Science, Penn State University, University Park, PA 16802, USA
- Danish Ahmad
- Penn State Milton S. Hershey Medical Center, Hershey, PA 17033, USA
- Jennifer Toth
- Penn State Milton S. Hershey Medical Center, Hershey, PA 17033, USA
- Rebecca Bascom
- Penn State Milton S. Hershey Medical Center, Hershey, PA 17033, USA
- William E Higgins
- School of Electrical Engineering and Computer Science, Penn State University, University Park, PA 16802, USA
| |
Collapse
|
32
Zhang C, Yao L, Jiang R, Wang J, Wu H, Li X, Wu Z, Luo R, Luo C, Tan X, Wang W, Xiao B, Hu H, Yu H. Assessment of the role of false-positive alerts in computer-aided polyp detection for assistance capabilities. J Gastroenterol Hepatol 2024; 39:1623-1635. [PMID: 38744667 DOI: 10.1111/jgh.16615] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 04/24/2024] [Accepted: 05/02/2024] [Indexed: 05/16/2024]
Abstract
BACKGROUND AND AIM False positives (FPs) pose a significant challenge in the application of artificial intelligence (AI) for polyp detection during colonoscopy. This study aimed to quantitatively evaluate the impact of the FPs produced by computer-aided polyp detection (CADe) systems on endoscopists. METHODS The model's FPs were categorized into four gradients: 0-5, 5-10, 10-15, and 15-20 FPs per minute (FPPM). Fifty-six colonoscopy videos were collected for a crossover study involving 10 endoscopists, with the polyp miss rate (PMR) as the primary outcome. Subsequently, to further verify the impact of FPPM on the assistance capability of AI in clinical environments, a secondary analysis was conducted on a prospective randomized controlled trial (RCT) from Renmin Hospital of Wuhan University in China (July 1 to October 15, 2020), with the adenoma detection rate (ADR) as the primary outcome. RESULTS Compared with the routine group, CADe reduced the PMR when FPPM was less than 5; as FPPM increased further, the benefit of CADe gradually weakened. In the secondary analysis of the RCT, a total of 956 patients were enrolled. In the AI-assisted groups, ADR was higher when FPPM ≤ 5 than when FPPM > 5 (CADe group: 27.78% vs 11.90%; P = 0.014; odds ratio [OR], 0.351; 95% confidence interval [CI], 0.152-0.812; COMBO group: 38.40% vs 23.46%; P = 0.029; OR, 0.427; 95% CI, 0.199-0.916). After AI intervention, ADR increased when FPPM ≤ 5 (27.78% vs 14.76%; P = 0.001; OR, 0.399; 95% CI, 0.231-0.690), but no statistically significant difference was found when FPPM > 5 (11.90% vs 14.76%; P = 0.788; OR, 1.111; 95% CI, 0.514-2.403). CONCLUSION The FP level of a CADe system affects its effectiveness as an aid to endoscopists; it performs best when FPPM is less than 5.
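The ADR comparisons in this abstract are reported as odds ratios with 95% confidence intervals, which can be computed from 2×2 counts via the Wald method (log-OR ± 1.96·SE). A minimal sketch; the counts below are hypothetical, chosen only to illustrate the arithmetic, not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = events in group 1, b = non-events in group 1,
    c = events in group 2, d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration: 25/90 adenomas detected in one arm
# vs 10/84 in the other (roughly mirroring the 27.78% vs 11.90% contrast).
or_, lo, hi = odds_ratio_ci(25, 65, 10, 74)
print(f"OR = {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

An interval excluding 1 corresponds to a statistically significant difference at the 5% level, matching how the ORs are interpreted above.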
Affiliation(s)
- Chenxia Zhang, Liwen Yao, Ruiqing Jiang, Jing Wang, Huiling Wu, Xun Li, Zhifeng Wu, Renquan Luo, Chaijie Luo, Xia Tan, Wen Wang, Bing Xiao, Huiyan Hu, Honggang Yu
- All authors: Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
33
Angel MC, Rinehart JB, Cannesson MP, Baldi P. Clinical Knowledge and Reasoning Abilities of AI Large Language Models in Anesthesiology: A Comparative Study on the American Board of Anesthesiology Examination. Anesth Analg 2024; 139:349-356. [PMID: 38640076 PMCID: PMC11373264 DOI: 10.1213/ane.0000000000006892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/21/2024]
Abstract
BACKGROUND Over the past decade, artificial intelligence (AI) has expanded significantly, with increased adoption across various industries, including medicine. Recently, AI-based large language models such as Generative Pretrained Transformer-3 (GPT-3), Bard, and Generative Pretrained Transformer-4 (GPT-4) have demonstrated remarkable language capabilities. While previous studies have explored their potential in general medical knowledge tasks, here we assess their clinical knowledge and reasoning abilities in a specialized medical context. METHODS We studied and compared the performance of all 3 models on both the written and oral portions of the comprehensive and challenging American Board of Anesthesiology (ABA) examination, which evaluates candidates' knowledge and competence in anesthesia practice. RESULTS Our results reveal that only GPT-4 successfully passed the written examination, achieving an accuracy of 78% on the basic section and 80% on the advanced section. In comparison, the less recent or smaller GPT-3 and Bard models scored 58% and 47% on the basic examination, and 50% and 46% on the advanced examination, respectively. Consequently, only GPT-4 was evaluated in the oral examination, with examiners concluding that it had a reasonable possibility of passing the structured oral examination. Additionally, we observe that these models exhibit varying degrees of proficiency across distinct topics, which could serve as an indicator of the relative quality of information contained in the corresponding training datasets. This may also act as a predictor for determining which anesthesiology subspecialty is most likely to witness the earliest integration with AI.
CONCLUSIONS GPT-4 outperformed GPT-3 and Bard on both basic and advanced sections of the written ABA examination, and actual board examiners considered GPT-4 to have a reasonable possibility of passing the real oral examination; these models also exhibit varying degrees of proficiency across distinct topics.
Affiliation(s)
- Mirana C. Angel: Department of Computer Science, University of California Irvine, Irvine, California; Institute for Genomics and Bioinformatics, University of California Irvine, Irvine, California
- Joseph B. Rinehart: Department of Anesthesiology & Perioperative Care, University of California Irvine, Irvine, California
- Maxime P. Cannesson: Department of Anesthesiology & Perioperative Medicine, University of California Los Angeles, Los Angeles, California
- Pierre Baldi: Department of Computer Science, University of California Irvine, Irvine, California; Institute for Genomics and Bioinformatics, University of California Irvine, Irvine, California
34
Kikuchi R, Okamoto K, Ozawa T, Shibata J, Ishihara S, Tada T. Endoscopic Artificial Intelligence for Image Analysis in Gastrointestinal Neoplasms. Digestion 2024; 105:419-435. [PMID: 39068926 DOI: 10.1159/000540251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/15/2024] [Accepted: 07/02/2024] [Indexed: 07/30/2024]
Abstract
BACKGROUND Artificial intelligence (AI) using deep learning systems has recently been utilized in various medical fields. In gastroenterology, AI is primarily implemented in image recognition and applied in gastrointestinal (GI) endoscopy, where computer-aided detection/diagnosis (CAD) systems assist endoscopists in detecting GI neoplasms or differentiating cancerous from noncancerous lesions. Several AI systems for colorectal polyps have already been applied in clinical colonoscopy practice. In esophagogastroduodenoscopy, a few CAD systems for upper GI neoplasms have been launched in Asian countries. The usefulness of these CAD systems in GI endoscopy has been gradually elucidated. SUMMARY In this review, we outline recent articles on endoscopic AI systems for GI neoplasms, focusing on esophageal squamous cell carcinoma (ESCC), esophageal adenocarcinoma (EAC), gastric cancer (GC), and colorectal polyps. In ESCC and EAC, computer-aided detection (CADe) systems were mainly developed, and a recent meta-analysis reported sensitivities of 91.2% and 93.1% and specificities of 80% and 86.9%, respectively. In GC, a recent meta-analysis of CADe systems demonstrated sensitivity and specificity as high as 90%, and a randomized controlled trial (RCT) showed that use of a CADe system reduced the miss rate. Regarding computer-aided diagnosis (CADx) systems for GC, although RCTs have not yet been conducted, most studies have demonstrated expert-level performance. For colorectal polyps, multiple RCTs have shown that CADe systems improve the polyp detection rate, and several CADx systems have shown high accuracy in colorectal polyp differentiation. KEY MESSAGES Most analyses of endoscopic AI systems suggested that their performance was better than that of nonexpert endoscopists and equivalent to that of expert endoscopists. Thus, endoscopic AI systems may be useful for reducing the risk of overlooking lesions and improving the diagnostic ability of endoscopists.
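The pooled sensitivities and specificities cited in this review follow the standard confusion-matrix definitions. A minimal sketch in plain Python; the counts are hypothetical, chosen only to mirror the ESCC figures quoted above (91.2% sensitivity, 80% specificity):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: 912 of 1000 lesions flagged, 800 of 1000 normals cleared.
sens, spec = sens_spec(912, 88, 800, 200)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

Sensitivity answers "what fraction of true lesions does the system catch?", while specificity answers "what fraction of lesion-free frames does it correctly clear?"; CADe studies typically trade one against the other via the detection threshold.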
Affiliation(s)
- Ryosuke Kikuchi: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Kazuaki Okamoto: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tsuyoshi Ozawa: Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan; AI Medical Service Inc., Tokyo, Japan
- Junichi Shibata: Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan; AI Medical Service Inc., Tokyo, Japan
- Soichiro Ishihara: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tomohiro Tada: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan; AI Medical Service Inc., Tokyo, Japan
35
Spadaccini M, Troya J, Khalaf K, Facciorusso A, Maselli R, Hann A, Repici A. Artificial Intelligence-assisted colonoscopy and colorectal cancer screening: Where are we going? Dig Liver Dis 2024; 56:1148-1155. [PMID: 38458884 DOI: 10.1016/j.dld.2024.01.203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Revised: 01/22/2024] [Accepted: 01/23/2024] [Indexed: 03/10/2024]
Abstract
Colorectal cancer is a significant global health concern, necessitating effective screening strategies to reduce its incidence and mortality rates. Colonoscopy plays a crucial role in the detection and removal of colorectal neoplastic precursors. However, there are limitations and variations in the performance of endoscopists, leading to missed lesions and suboptimal outcomes. The emergence of artificial intelligence (AI) in endoscopy offers promising opportunities to improve the quality and efficacy of screening colonoscopies. In particular, AI applications, including computer-aided detection (CADe) and computer-aided characterization (CADx), have demonstrated the potential to enhance adenoma detection and optical diagnosis accuracy. Additionally, AI-assisted quality control systems aim to standardize the endoscopic examination process. This narrative review provides an overview of AI principles and discusses the current knowledge on AI-assisted endoscopy in the context of screening colonoscopies. It highlights the significant role of AI in improving lesion detection, characterization, and quality assurance during colonoscopy. However, further well-designed studies are needed to validate the clinical impact and cost-effectiveness of AI-assisted colonoscopy before its widespread implementation.
Affiliation(s)
- Marco Spadaccini: Department of Endoscopy, Humanitas Research Hospital, IRCCS, 20089 Rozzano, Italy; Department of Biomedical Sciences, Humanitas University, 20089 Rozzano, Italy
- Joel Troya: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany
- Kareem Khalaf: Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, Canada
- Antonio Facciorusso: Gastroenterology Unit, Department of Surgical and Medical Sciences, University of Foggia, Foggia, Italy
- Roberta Maselli: Department of Endoscopy, Humanitas Research Hospital, IRCCS, 20089 Rozzano, Italy; Department of Biomedical Sciences, Humanitas University, 20089 Rozzano, Italy
- Alexander Hann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany
- Alessandro Repici: Department of Endoscopy, Humanitas Research Hospital, IRCCS, 20089 Rozzano, Italy; Department of Biomedical Sciences, Humanitas University, 20089 Rozzano, Italy
36
Jha D, Tomar NK, Bhattacharya D, Biswas K, Bagci U. TransRUPNet for Improved Polyp Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-4. [PMID: 40038943 DOI: 10.1109/embc53108.2024.10781511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
Colorectal cancer is among the most common cancers worldwide. Removal of precancerous polyps through early detection is essential to prevent them from progressing to colon cancer. We develop an advanced deep learning-based architecture, the Transformer-based Residual Upsampling Network (TransRUPNet), for automatic, real-time polyp segmentation. TransRUPNet is an encoder-decoder network consisting of three encoder and decoder blocks with additional upsampling blocks at the end of the network. At an image size of 256×256, the proposed method achieves a real-time operation speed of 47.07 frames per second with an average mean Dice coefficient of 0.7786 and a mean Intersection over Union of 0.7210 on out-of-distribution polyp datasets. The results on the publicly available PolypGen dataset suggest that TransRUPNet can give real-time feedback while retaining high accuracy on in-distribution datasets. Furthermore, we demonstrate the generalizability of the proposed method by showing that it significantly improves performance on out-of-distribution datasets compared with existing methods. The source code of our network is available at https://github.com/DebeshJha/TransRUPNet.
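The mean Dice coefficient and Intersection over Union (IoU) reported above are the standard overlap metrics for segmentation masks; for binary masks they satisfy Dice = 2·IoU / (1 + IoU). A minimal sketch in plain Python with toy masks, not the paper's evaluation code:

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for two same-length binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy flattened masks: 3 overlapping pixels, 4 predicted, 4 ground-truth
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
d, i = dice_iou(pred, truth)
print(f"Dice = {d:.3f}, IoU = {i:.3f}")  # Dice = 0.750, IoU = 0.600
```

In practice both metrics are computed per image over the 2-D mask and then averaged across the test set, which is what "mean Dice" refers to above.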
37
Nijjar GS, Aulakh SK, Singh R, Chandi SK. Emerging Technologies in Endoscopy for Gastrointestinal Neoplasms: A Comprehensive Overview. Cureus 2024; 16:e62946. [PMID: 39044885 PMCID: PMC11265259 DOI: 10.7759/cureus.62946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/22/2024] [Indexed: 07/25/2024] Open
Abstract
Gastrointestinal neoplasms are a growing global health concern, requiring prompt identification and treatment. Endoscopic procedures have revolutionized the detection and treatment of gastrointestinal tumors by providing accurate, minimally invasive methods. Early-stage malignancies can be treated with endoscopic excision, leading to improved outcomes and increased survival rates. Precancerous lesions, like adenomatous polyps, can be removed before they progress, reducing cancer incidence and death rates. Advanced techniques like chromoendoscopy, narrow-band imaging, and confocal laser endomicroscopy improve visualization of the mucosal surface and diagnostic accuracy. Artificial intelligence (AI) applications in endoscopy can enhance diagnostic accuracy and predict histology outcomes. However, challenges remain in accurately delineating lesions and ensuring precise diagnosis and treatment selection. Molecular imaging approaches and therapeutic modalities like photodynamic therapy and endoscopic ultrasonography-guided therapies hold potential but require further study and clinical confirmation. This study examines the future prospects and obstacles in endoscopic procedures for the timely identification and treatment of gastrointestinal cancers, focusing on emerging technologies, their limitations, and their prospective effects on clinical practice.
Affiliation(s)
- Smriti Kaur Aulakh: Internal Medicine, Sri Guru Ram Das University of Health Science and Research, Amritsar, IND
38
Wang Z, Liu Z, Yu J, Gao Y, Liu M. Multi-scale nested UNet with transformer for colorectal polyp segmentation. J Appl Clin Med Phys 2024; 25:e14351. [PMID: 38551396 PMCID: PMC11163511 DOI: 10.1002/acm2.14351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Revised: 02/13/2024] [Accepted: 02/19/2024] [Indexed: 06/11/2024] Open
Abstract
BACKGROUND Polyp detection and localization are essential tasks in colonoscopy. U-shaped convolutional neural networks have achieved remarkable segmentation performance on biomedical images, but their limited ability to model long-range dependencies restricts their receptive fields. PURPOSE Our goal was to develop and test a novel architecture for polyp segmentation that learns local information while modeling long-range dependencies. METHODS A novel architecture combining a multi-scale nested UNet structure with an integrated transformer was developed for polyp segmentation. The proposed network takes advantage of both CNNs and transformers to extract distinct feature information. The transformer layer is embedded between the encoder and decoder of a U-shaped network to learn explicit global context and long-range semantic information. To address the challenge of variable polyp sizes, an MSFF unit was proposed to fuse features at multiple resolutions. RESULTS Four public datasets and one in-house dataset were used to train and test the model, and an ablation study was conducted to verify each component. On the Kvasir-SEG and CVC-ClinicDB datasets, the proposed model achieved mean Dice scores of 0.942 and 0.950, respectively, more accurate than the other methods. To assess the generalization of different methods, we performed two cross-dataset validations, in which the proposed model achieved the highest mean Dice score. The results demonstrate that the proposed network has powerful learning and generalization capability, significantly improving segmentation accuracy and outperforming state-of-the-art methods. CONCLUSIONS The proposed model produced more accurate polyp segmentation than current methods on four public datasets and one in-house dataset. Its ability to segment polyps of different sizes shows its potential for clinical application.
Affiliation(s)
- Zenan Wang: Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Zhen Liu: Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Jianfeng Yu: Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Yingxin Gao: Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Ming Liu: Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha, China
39
Yao L, Li S, Tao Q, Mao Y, Dong J, Lu C, Han C, Qiu B, Huang Y, Huang X, Liang Y, Lin H, Guo Y, Liang Y, Chen Y, Lin J, Chen E, Jia Y, Chen Z, Zheng B, Ling T, Liu S, Tong T, Cao W, Zhang R, Chen X, Liu Z. Deep learning for colorectal cancer detection in contrast-enhanced CT without bowel preparation: a retrospective, multicentre study. EBioMedicine 2024; 104:105183. [PMID: 38848616 PMCID: PMC11192791 DOI: 10.1016/j.ebiom.2024.105183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Revised: 04/30/2024] [Accepted: 05/21/2024] [Indexed: 06/09/2024] Open
Abstract
BACKGROUND Contrast-enhanced CT scans provide a means to detect unsuspected colorectal cancer. However, colorectal cancers in contrast-enhanced CT without bowel preparation may elude detection by radiologists. We aimed to develop a deep learning (DL) model for accurate detection of colorectal cancer, and to evaluate whether it could improve the detection performance of radiologists. METHODS We developed a DL model using a manually annotated dataset (1196 cancer vs 1034 normal). The DL model was tested using an internal test set (98 vs 115), two external test sets (202 vs 265 in set 1, and 252 vs 481 in set 2), and a real-world test set (53 vs 1524). We compared the detection performance of the DL model with that of radiologists, and evaluated its capacity to enhance radiologists' detection performance. FINDINGS In the four test sets, the DL model achieved areas under the receiver operating characteristic curve (AUCs) ranging between 0.957 and 0.994. In both the internal test set and external test set 1, the DL model yielded higher accuracy than radiologists (97.2% vs 86.0%, p < 0.0001; 94.9% vs 85.3%, p < 0.0001), and significantly improved the accuracy of radiologists (93.4% vs 86.0%, p < 0.0001; 93.6% vs 85.3%, p < 0.0001). In the real-world test set, the DL model delivered sensitivity comparable to that of radiologists who had been informed about clinical indications for most cancer cases (94.3% vs 96.2%, p > 0.99), and it detected 2 cases that had been missed by radiologists. INTERPRETATION The developed DL model can accurately detect colorectal cancer and improve radiologists' detection performance, showing its potential as an effective computer-aided detection tool. FUNDING This study was supported by the National Science Fund for Distinguished Young Scholars of China (No. 81925023); the Regional Innovation and Development Joint Fund of the National Natural Science Foundation of China (No. U22A20345); the National Natural Science Foundation of China (No. 82072090 and No. 82371954); the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); and the High-level Hospital Construction Project (No. DFJHBF202105).
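The AUCs reported in this abstract summarize ranking performance: an AUC equals the probability that a randomly chosen cancer case receives a higher model score than a randomly chosen normal case (the Mann-Whitney formulation). A minimal sketch with made-up scores, not the study's model outputs:

```python
def auc(scores, labels):
    """AUC as Mann-Whitney U / (n_pos * n_neg): the fraction of
    positive-negative pairs where the positive outranks the negative,
    with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores (1 = cancer, 0 = normal)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0]
print(f"AUC = {auc(scores, labels):.3f}")
```

This pairwise form is O(n_pos · n_neg); production libraries compute the same quantity from sorted ranks in O(n log n), but the value is identical.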
Collapse
Affiliation(s)
- Lisha Yao
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; School of Medicine, South China University of Technology, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China
| | - Suyun Li
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China; School of Medicine, South Medical University, Guangzhou, China
| | - Quan Tao
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yun Mao
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jie Dong
- Department of Radiology, Shanxi Bethune Hospital (Shanxi Academy of Medical Sciences), The Third Affiliated Hospital of Shanxi Medical University, Taiyuan, China
- Cheng Lu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Chu Han
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Bingjiang Qiu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Sciences), Guangzhou, China
- Yanqi Huang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China
- Xin Huang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China; School of Medicine, Shantou University Medical College, Shantou, China
- Yanting Liang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China; School of Medicine, South Medical University, Guangzhou, China
- Huan Lin
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; School of Medicine, South China University of Technology, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China
- Yongmei Guo
- Department of Radiology, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China
- Yingying Liang
- Department of Radiology, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China
- Yizhou Chen
- Department of Radiology, Puning People's Hospital, Southern Medical University, Jieyang, China
- Jie Lin
- Department of Radiology, Puning People's Hospital, Southern Medical University, Jieyang, China
- Enyan Chen
- Department of Radiology, Puning People's Hospital, Southern Medical University, Jieyang, China
- Yanlian Jia
- Department of Radiology, Liaobu Hospital of Guangdong, Dongguan, China
- Zhihong Chen
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China
- Bochi Zheng
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China
- Tong Ling
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China
- Shunli Liu
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Tong Tong
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Wuteng Cao
- Department of Radiology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Ruiping Zhang
- Department of Radiology, Shanxi Bethune Hospital (Shanxi Academy of Medical Sciences), The Third Affiliated Hospital of Shanxi Medical University, Taiyuan, China
- Xin Chen
- Department of Radiology, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; School of Medicine, South China University of Technology, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China
40
Wang X, Yang YQ, Cai S, Li JC, Wang HY. Deep-learning-based sampling position selection on color Doppler sonography images during renal artery ultrasound scanning. Sci Rep 2024; 14:11768. [PMID: 38782971 PMCID: PMC11116437 DOI: 10.1038/s41598-024-60355-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2024] [Accepted: 04/22/2024] [Indexed: 05/25/2024] Open
Abstract
Accurate selection of sampling positions is critical in renal artery ultrasound examinations, and the potential of utilizing deep learning (DL) for assisting in this selection has not been previously evaluated. This study aimed to evaluate the effectiveness of DL object detection technology applied to color Doppler sonography (CDS) images in assisting sampling position selection. A total of 2004 patients who underwent renal artery ultrasound examinations were included in the study. CDS images from these patients were categorized into four groups based on the scanning position: abdominal aorta (AO), normal renal artery (NRA), renal artery stenosis (RAS), and intrarenal interlobular artery (IRA). Seven object detection models, including three two-stage models (Faster R-CNN, Cascade R-CNN, and Double Head R-CNN) and four one-stage models (RetinaNet, YOLOv3, FoveaBox, and Deformable DETR), were trained to predict the sampling position, and their predictive accuracies were compared. The Double Head R-CNN model exhibited significantly higher average accuracies on both parameter optimization and validation datasets (89.3 ± 0.6% and 88.5 ± 0.3%, respectively) compared to other methods. On clinical validation data, the predictive accuracies of the Double Head R-CNN model for all four types of images were significantly higher than those of the other methods. The DL object detection model shows promise in assisting inexperienced physicians in improving the accuracy of sampling position selection during renal artery ultrasound examinations.
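The selection step the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes the detector (e.g., a Double Head R-CNN) emits (box, score, label) candidates per CDS frame, with field names, the score threshold, and the "pick the top detection's centre" rule all being assumptions for the sketch.

```python
# Hypothetical post-processing for a renal-artery CDS frame: keep detections
# above a confidence threshold, then take the centre of the highest-scoring
# box as the suggested Doppler sampling position.

def best_sampling_position(detections, min_score=0.5):
    """Return (label, centre_xy) of the top-scoring detection, or None.

    detections: iterable of dicts with keys 'box' (x1, y1, x2, y2),
    'score' (float), and 'label' (e.g. 'AO', 'NRA', 'RAS', 'IRA').
    """
    kept = [d for d in detections if d["score"] >= min_score]
    if not kept:
        return None
    top = max(kept, key=lambda d: d["score"])
    x1, y1, x2, y2 = top["box"]
    centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return top["label"], centre

detections = [
    {"box": (10, 10, 50, 50), "score": 0.91, "label": "NRA"},
    {"box": (60, 20, 90, 60), "score": 0.42, "label": "RAS"},
]
print(best_sampling_position(detections))  # ('NRA', (30.0, 30.0))
```

In a real pipeline the per-class choice and the threshold would come from validation-set tuning rather than the fixed 0.5 used here.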
Affiliation(s)
- Xin Wang
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Yu-Qing Yang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Sheng Cai
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Jian-Chu Li
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Hong-Yan Wang
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
41
Zhao L, Wang N, Zhu X, Wu Z, Shen A, Zhang L, Wang R, Wang D, Zhang S. Establishment and validation of an artificial intelligence-based model for real-time detection and classification of colorectal adenoma. Sci Rep 2024; 14:10750. [PMID: 38729988 PMCID: PMC11087479 DOI: 10.1038/s41598-024-61342-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Accepted: 05/05/2024] [Indexed: 05/12/2024] Open
Abstract
Colorectal cancer (CRC) prevention requires early detection and removal of adenomas. We aimed to develop a computational model for real-time detection and classification of colorectal adenomas. Because real-time detection is computationally constrained, we propose an improved adaptive lightweight ensemble model for real-time detection and classification of adenomas and other polyps. First, we devised an adaptive lightweight network modification and an effective training strategy to reduce the computational requirements of real-time detection. Second, by integrating the adaptive lightweight YOLOv4 with the single-shot multibox detector (SSD) network, we established the adaptive small object detection ensemble (ASODE) model, which enhances the precision of detecting target polyps without significantly increasing the model's memory footprint. We conducted simulated training using clinical colonoscopy images and videos to validate the method's performance, extracting features from 1148 polyps and employing a confidence threshold of 0.5 to filter out low-confidence predictions. Finally, compared to state-of-the-art models, our ASODE model demonstrated superior performance: in the test set, sensitivity reached 87.96% on images and 92.31% on videos, and the model achieved an accuracy of 92.70% for adenoma detection with a false-positive rate of 8.18%. Training results indicate the effectiveness of our method in classifying small polyps. Our model exhibits remarkable performance in real-time detection of colorectal adenomas, serving as a reliable tool for assisting endoscopists.
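The evaluation protocol above (discard predictions below a 0.5 confidence threshold, then report sensitivity) can be made concrete with a small sketch. The record format is an assumption for illustration, not the paper's actual data structure: each prediction carries a confidence score and a flag saying whether it matched a ground-truth polyp.

```python
# Hedged sketch of threshold-based evaluation: predictions under the
# confidence threshold are dropped; sensitivity = TP / (TP + FN) over
# ground-truth polyps. Missed polyps appear as low-score true records.

def evaluate(records, threshold=0.5):
    """records: list of (score, is_true_lesion) pairs, one per candidate."""
    tp = sum(1 for s, truth in records if truth and s >= threshold)
    fn = sum(1 for s, truth in records if truth and s < threshold)
    fp = sum(1 for s, truth in records if not truth and s >= threshold)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    return {"tp": tp, "fn": fn, "fp": fp, "sensitivity": sens}

records = [(0.9, True), (0.8, True), (0.3, True), (0.7, False), (0.2, False)]
print(evaluate(records))
```

Sweeping `threshold` over a grid is the usual way such a confidence cutoff is tuned before the final figures are reported.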
Affiliation(s)
- Luqing Zhao
- Digestive Disease Center, Beijing Hospital of Traditional Chinese Medicine, Capital Medical University, No. 23, Back Street of Art Museum, Dongcheng District, Beijing, 100010, China
- Nan Wang
- School of Mathematics and Statistics, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Haidian District, Beijing, 100081, China
- Xihan Zhu
- Digestive Disease Center, Beijing Hospital of Traditional Chinese Medicine, Capital Medical University, No. 23, Back Street of Art Museum, Dongcheng District, Beijing, 100010, China
- Zhenyu Wu
- Digestive Disease Center, Beijing Hospital of Traditional Chinese Medicine, Capital Medical University, No. 23, Back Street of Art Museum, Dongcheng District, Beijing, 100010, China
- Aihua Shen
- Digestive Disease Center, Beijing Hospital of Traditional Chinese Medicine, Capital Medical University, No. 23, Back Street of Art Museum, Dongcheng District, Beijing, 100010, China
- Lihong Zhang
- Shunyi Hospital, Beijing Traditional Chinese Medicine Hospital, Beijing, China
- Ruixin Wang
- Digestive Disease Center, Beijing Hospital of Traditional Chinese Medicine, Capital Medical University, No. 23, Back Street of Art Museum, Dongcheng District, Beijing, 100010, China
- Dianpeng Wang
- School of Mathematics and Statistics, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Haidian District, Beijing, 100081, China
- Shengsheng Zhang
- Digestive Disease Center, Beijing Hospital of Traditional Chinese Medicine, Capital Medical University, No. 23, Back Street of Art Museum, Dongcheng District, Beijing, 100010, China
42
Daneshpajooh V, Ahmad D, Toth J, Bascom R, Higgins WE. Automatic lesion detection for narrow-band imaging bronchoscopy. J Med Imaging (Bellingham) 2024; 11:036002. [PMID: 38827776 PMCID: PMC11138083 DOI: 10.1117/1.jmi.11.3.036002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 04/04/2024] [Accepted: 05/14/2024] [Indexed: 06/05/2024] Open
Abstract
Purpose Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway exam video collected at our institution. Approach Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions. Results Tests drawing on 23 patient NBI airway exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed a 93% sensitivity and 86% specificity for lesion detection; this compares favorably to a sensitivity and specificity of 80% and 84% achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach. Conclusion The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.
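The second method stage pairs a Siamese appearance matcher with a Kalman filter to carry candidate lesions across frames. The appearance matching is beyond a short sketch, but the Kalman smoothing half can be illustrated. This is not the authors' implementation: it assumes a simple constant-position model with one independent scalar filter per axis, and all noise parameters are invented defaults.

```python
# Minimal Kalman smoothing of a candidate lesion's centre across NBI frames.
# Constant-position model: predict inflates variance by q, correct blends in
# the new measurement with gain k.

class ScalarKalman:
    def __init__(self, x0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r  # state, var, noises

    def update(self, z):
        self.p += self.q                      # predict step
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)            # correct toward measurement z
        self.p *= (1.0 - k)
        return self.x

class LesionTrack:
    """Tracks one candidate lesion's (x, y) centre over successive frames."""
    def __init__(self, cx, cy):
        self.kx, self.ky = ScalarKalman(cx), ScalarKalman(cy)

    def update(self, cx, cy):
        return self.kx.update(cx), self.ky.update(cy)

track = LesionTrack(100.0, 50.0)
for cx, cy in [(102.0, 51.0), (104.0, 49.0), (103.0, 50.0)]:
    smoothed = track.update(cx, cy)
print(smoothed)  # smoothed centre lies between the raw, jittery detections
```

A production tracker would use a constant-velocity state and gate new detections by appearance similarity before the update, as the paper's Siamese stage does.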
Affiliation(s)
- Vahid Daneshpajooh
- The Pennsylvania State University, School of Electrical Engineering and Computer Science, University Park, Pennsylvania, United States
- Danish Ahmad
- The Pennsylvania State University, College of Medicine, Hershey, Pennsylvania, United States
- Jennifer Toth
- The Pennsylvania State University, College of Medicine, Hershey, Pennsylvania, United States
- Rebecca Bascom
- The Pennsylvania State University, College of Medicine, Hershey, Pennsylvania, United States
- William E. Higgins
- The Pennsylvania State University, School of Electrical Engineering and Computer Science, University Park, Pennsylvania, United States
43
Okumura T, Imai K, Misawa M, Kudo SE, Hotta K, Ito S, Kishida Y, Takada K, Kawata N, Maeda Y, Yoshida M, Yamamoto Y, Minamide T, Ishiwatari H, Sato J, Matsubayashi H, Ono H. Evaluating false-positive detection in a computer-aided detection system for colonoscopy. J Gastroenterol Hepatol 2024; 39:927-934. [PMID: 38273460 DOI: 10.1111/jgh.16491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 12/21/2023] [Accepted: 01/03/2024] [Indexed: 01/27/2024]
Abstract
BACKGROUND AND AIM Computer-aided detection (CADe) systems can efficiently detect polyps during colonoscopy. However, false-positive (FP) activation is a major limitation of CADe. We aimed to compare the rate and causes of FP using CADe before and after an update designed to reduce FP. METHODS We analyzed CADe-assisted colonoscopy videos recorded between July 2022 and October 2022. The number and causes of FPs and excessive time spent by the endoscopist on FP (ET) were compared pre- and post-update using 1:1 propensity score matching. RESULTS During the study period, 191 colonoscopy videos (94 and 97 in the pre- and post-update groups, respectively) were recorded. Propensity score matching resulted in 146 videos (73 in each group). The mean number of FPs and median ET per colonoscopy were significantly lower in the post-update group than those in the pre-update group (4.2 ± 3.7 vs 18.1 ± 11.1; P < 0.001 and 0 vs 16 s; P < 0.001, respectively). Mucosal tags, bubbles, and folds had the strongest association with decreased FP post-update (pre-update vs post-update: 4.3 ± 3.6 vs 0.4 ± 0.8, 0.32 ± 0.70 vs 0.04 ± 0.20, and 8.6 ± 6.7 vs 1.6 ± 1.7, respectively). There was no significant decrease in the true positive rate (post-update vs pre-update: 95.0% vs 99.2%; P = 0.09) or the adenoma detection rate (post-update vs pre-update: 52.1% vs 49.3%; P = 0.87). CONCLUSIONS The updated CADe can reduce FP without impairing polyp detection. A reduction in FP may help relieve the burden on endoscopists.
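The 1:1 propensity score matching used to build comparable pre- and post-update groups can be sketched as follows. The paper does not specify its matching algorithm; this sketch assumes greedy nearest-neighbour matching without replacement within a caliper, a common choice, and the scores and IDs are invented.

```python
# Illustrative 1:1 propensity-score matching: for each pre-update case (in
# score order), take the closest unmatched post-update case if it falls
# within the caliper; unmatched cases are dropped from the comparison.

def greedy_match(pre, post, caliper=0.05):
    """pre, post: lists of (case_id, propensity_score); returns matched pairs."""
    available = list(post)
    pairs = []
    for pid, ps in sorted(pre, key=lambda t: t[1]):
        if not available:
            break
        best = min(available, key=lambda t: abs(t[1] - ps))
        if abs(best[1] - ps) <= caliper:
            pairs.append((pid, best[0]))
            available.remove(best)   # match without replacement
    return pairs

pre = [("a", 0.30), ("b", 0.52), ("c", 0.90)]
post = [("x", 0.31), ("y", 0.50), ("z", 0.10)]
print(greedy_match(pre, post))  # [('a', 'x'), ('b', 'y')]; 'c' has no match
```

Here case "c" is discarded because no post-update score lies within the caliper, mirroring how propensity matching shrank 191 videos to 146 in the study.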
Affiliation(s)
- Taishi Okumura
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Kenichiro Imai
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Kinichi Hotta
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Sayo Ito
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Kazunori Takada
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Noboru Kawata
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Yuki Maeda
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Masao Yoshida
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Yoichi Yamamoto
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Junya Sato
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Hiroyuki Ono
- Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
44
Guo F, Meng H. Application of artificial intelligence in gastrointestinal endoscopy. Arab J Gastroenterol 2024; 25:93-96. [PMID: 38228443 DOI: 10.1016/j.ajg.2023.12.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 09/06/2023] [Accepted: 12/30/2023] [Indexed: 01/18/2024]
Abstract
Endoscopy is an important method for diagnosing gastrointestinal (GI) diseases. In this study, we provide an overview of recent advances in artificial intelligence (AI) technology in the field of GI endoscopy, covering endoscopy of the esophagus, stomach, and large intestine, as well as capsule endoscopy of the small intestine. AI-assisted endoscopy shows high accuracy, sensitivity, and specificity in the detection and diagnosis of GI diseases at all levels. Hence, AI is poised to make a breakthrough in the field of GI endoscopy in the near future. However, AI technology currently has some limitations and is still largely in the preclinical stage.
Affiliation(s)
- Fujia Guo
- The First Affiliated Hospital, Dalian Medical University, Dalian 116044, China
- Hua Meng
- The First Affiliated Hospital, Dalian Medical University, Dalian 116044, China
45
Alsubai S. Transfer learning based approach for lung and colon cancer detection using local binary pattern features and explainable artificial intelligence (AI) techniques. PeerJ Comput Sci 2024; 10:e1996. [PMID: 38660170 PMCID: PMC11042027 DOI: 10.7717/peerj-cs.1996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 03/27/2024] [Indexed: 04/26/2024]
Abstract
Cancer, a life-threatening disorder caused by genetic abnormalities and metabolic irregularities, is a substantial health danger, with lung and colon cancer being major contributors to death. Histopathological identification is critical in directing effective treatment regimens for these cancers. The earlier these disorders are identified, the lesser the risk of death. The use of machine learning and deep learning approaches has the potential to speed up cancer diagnosis processes by allowing researchers to analyse large patient databases quickly and affordably. This study introduces the Inception-ResNetV2 model with strategically incorporated local binary patterns (LBP) features to improve diagnostic accuracy for lung and colon cancer identification. The model is trained on histopathological images, and the integration of deep learning and texture-based features has demonstrated its exceptional performance with 99.98% accuracy. Importantly, the study employs explainable artificial intelligence (AI) through SHapley Additive exPlanations (SHAP) to unravel the complex inner workings of deep learning models, providing transparency in decision-making processes. This study highlights the potential to revolutionize cancer diagnosis in an era of more accurate and reliable medical assessments.
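The local binary pattern (LBP) features the study folds into Inception-ResNetV2 reduce, at their core, to a per-pixel 8-bit code. The sketch below shows the basic 3x3 LBP computation; the neighbour ordering and the >= comparison are one common convention, and production code would use an optimized library routine over whole histopathology images rather than this single-patch illustration.

```python
# Basic 3x3 local binary pattern: each of the 8 neighbours contributes one
# bit, set when the neighbour's intensity is >= the centre pixel's. The
# resulting 0-255 codes are histogrammed to form a texture descriptor.

def lbp_code(patch):
    """patch: 3x3 list of lists of intensities; returns the 8-bit LBP code.

    Neighbours are read clockwise starting at the top-left corner."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i][j] >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))
```

Variants such as uniform or rotation-invariant LBP remap these raw codes before histogramming; which variant the study used is not stated in the abstract.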
Affiliation(s)
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
46
Rogers MP, Janjua HM, Walczak S, Baker M, Read M, Cios K, Velanovich V, Pietrobon R, Kuo PC. Artificial Intelligence in Surgical Research: Accomplishments and Future Directions. Am J Surg 2024; 230:82-90. [PMID: 37981516 DOI: 10.1016/j.amjsurg.2023.10.045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2023] [Accepted: 10/22/2023] [Indexed: 11/21/2023]
Abstract
MINI-ABSTRACT The study introduces various methods of performing conventional ML, their implementation in surgical areas, and the need to move beyond these traditional approaches given the advent of big data. OBJECTIVE Investigate current understanding and future directions of machine learning applications, such as risk stratification, clinical data analytics, and decision support, in surgical practice. SUMMARY BACKGROUND DATA The advent of the electronic health record, near-unlimited computing, and open-source computational packages have created an environment for applying artificial intelligence, machine learning, and predictive analytic techniques to healthcare. The "hype" phase has passed, and algorithmic approaches are being developed for surgery patients through all stages of care, involving preoperative, intraoperative, and postoperative components. Surgeons must understand and critically evaluate the strengths and weaknesses of these methodologies. METHODS The current body of AI literature was reviewed, emphasizing contemporary approaches important in the surgical realm. RESULTS AND CONCLUSIONS The unrealized impacts of AI on clinical surgery and its subspecialties are immense. As this technology continues to pervade surgical literature and clinical applications, knowledge of its inner workings and shortcomings is paramount in determining its appropriate implementation.
Affiliation(s)
- Michael P Rogers
- Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Haroon M Janjua
- Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Steven Walczak
- School of Information & Florida Center for Cybersecurity, University of South Florida, Tampa, FL, USA
- Marshall Baker
- Department of Surgery, Loyola University Medical Center, Maywood, IL, USA
- Meagan Read
- Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Konrad Cios
- Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Vic Velanovich
- Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Paul C Kuo
- Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
47
Yao P, Witte D, German A, Periyakoil P, Kim YE, Gimonet H, Sulica L, Born H, Elemento O, Barnes J, Rameau A. A deep learning pipeline for automated classification of vocal fold polyps in flexible laryngoscopy. Eur Arch Otorhinolaryngol 2024; 281:2055-2062. [PMID: 37695363 DOI: 10.1007/s00405-023-08190-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 08/12/2023] [Indexed: 09/12/2023]
Abstract
PURPOSE To develop and validate a deep learning model for distinguishing healthy vocal folds (HVF) and vocal fold polyps (VFP) on laryngoscopy videos, while demonstrating the ability of a previously developed informative frame classifier in facilitating deep learning development. METHODS Following retrospective extraction of image frames from 52 HVF and 77 unilateral VFP videos, two researchers manually labeled each frame as informative or uninformative. A previously developed informative frame classifier was used to extract informative frames from the same video set. Both sets of videos were independently divided into training (60%), validation (20%), and test (20%) by patient. Machine-labeled frames were independently verified by two researchers to assess the precision of the informative frame classifier. Two models, pre-trained on ResNet18, were trained to classify frames as containing HVF or VFP. The accuracy of the polyp classifier trained on machine-labeled frames was compared to that of the classifier trained on human-labeled frames. The performance was measured by accuracy and area under the receiver operating characteristic curve (AUROC). RESULTS When evaluated on a hold-out test set, the polyp classifier trained on machine-labeled frames achieved an accuracy of 85% and AUROC of 0.84, whereas the classifier trained on human-labeled frames achieved an accuracy of 69% and AUROC of 0.66. CONCLUSION An accurate deep learning classifier for vocal fold polyp identification was developed and validated with the assistance of a peer-reviewed informative frame classifier for dataset assembly. The classifier trained on machine-labeled frames demonstrates improved performance compared to the classifier trained on human-labeled frames. LEVEL OF EVIDENCE: 4
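The AUROC figures reported above have a simple probabilistic reading: the chance that a randomly chosen positive frame outscores a randomly chosen negative one (ties counting half), which is equivalent to the normalized Mann-Whitney U statistic. A self-contained sketch of that computation (not tied to this paper's code, and O(n*m) rather than the sorted implementation a library would use):

```python
# AUROC as the probability that a random positive outscores a random
# negative; ties contribute 0.5. Equivalent to sklearn's roc_auc_score
# for binary labels, up to floating-point error.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3]
labels = [1, 1, 0, 1, 0]
print(round(auroc(scores, labels), 3))
```

An AUROC of 0.84 versus 0.66, as reported for the machine- versus human-labeled classifiers, thus means the machine-labeled model ranks a positive frame above a negative one 84% of the time rather than 66%.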
Affiliation(s)
- Peter Yao
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Dan Witte
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Alexander German
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Preethi Periyakoil
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Yeo Eun Kim
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Hortense Gimonet
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Lucian Sulica
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Hayley Born
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Olivier Elemento
- Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, NY, USA
- Josue Barnes
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, 240 East 59th St, New York, NY, 10022, USA
48
Sheikh TS, Cho M. Segmentation of Variants of Nuclei on Whole Slide Images by Using Radiomic Features. Bioengineering (Basel) 2024; 11:252. [PMID: 38534526 DOI: 10.3390/bioengineering11030252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2024] [Revised: 02/10/2024] [Accepted: 02/26/2024] [Indexed: 03/28/2024] Open
Abstract
The histopathological segmentation of nuclear types is a challenging task because nuclei exhibit distinct morphologies, textures, and staining characteristics. Accurate segmentation is critical because it affects the diagnostic workflow for patient assessment. In this study, a framework was proposed for segmenting various types of nuclei from different organs of the body. The proposed framework improved the segmentation performance for each nuclear type using radiomics. First, we used distinct radiomic features to extract and analyze quantitative information about each type of nucleus and subsequently trained various classifiers based on the best input sub-features of each radiomic feature selected by a LASSO operator. Second, we inputted the outputs of the best classifier to various segmentation models to learn the variants of nuclei. Using the MoNuSAC2020 dataset, we achieved state-of-the-art segmentation performance for each category of nuclei type despite the complexity, overlapping, and obscure regions. The generalized adaptability of the proposed framework was verified by the consistent performance obtained in whole slide images of different organs of the body and radiomic features.
Affiliation(s)
- Taimoor Shakeel Sheikh
- AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
- Migyung Cho
- AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
49
Sierra-Jerez F, Martinez F. A non-aligned translation with a neoplastic classifier regularization to include vascular NBI patterns in standard colonoscopies. Comput Biol Med 2024; 170:108008. [PMID: 38277922 DOI: 10.1016/j.compbiomed.2024.108008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Revised: 12/21/2023] [Accepted: 01/13/2024] [Indexed: 01/28/2024]
Abstract
Polyp vascular patterns are key to categorizing colorectal cancer malignancy. These patterns are typically observed in situ from specialized narrow-band images (NBI). Nonetheless, such vascular characterization is lost in standard colonoscopies (the primary observation mechanism). Moreover, even for NBI observations, the categorization remains biased by expert interpretation, with classification errors reported from 59.5% to 84.2%. This work introduces an end-to-end computational strategy to enhance in situ standard colonoscopy observations with the vascular patterns typically observed through NBI mechanisms. These synthetic enhanced images are achieved by adjusting a deep representation under a non-aligned translation task from optical colonoscopy (OC) to NBI. The introduced scheme includes an architecture to discriminate enhanced neoplastic patterns, achieving a remarkable separation in the embedding representation. The proposed approach was validated on a public dataset with a total of 76 sequences, including standard optical sequences and the respective NBI observations. The enhanced optical sequences were automatically classified into adenoma and hyperplastic samples, achieving an F1-score of 0.86. To measure the sensitivity of the proposed approach, serrated samples were projected onto the trained architecture. In this experiment, statistically significant differences among the three classes (p-value < 0.05, Mann-Whitney U test) were reported. This work showed remarkable polyp discrimination in enhancing OC sequences with typical NBI patterns. The method also learns polyp class distributions under an unpaired criterion (close to real practice), with the capability to separate serrated samples from adenomas and hyperplastic ones.
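The F1-score used above to summarize adenoma-versus-hyperplastic classification is the harmonic mean of precision and recall. A minimal sketch, with invented confusion-matrix counts chosen only so the arithmetic is easy to follow:

```python
# F1 = 2 * precision * recall / (precision + recall), computed from a
# binary confusion matrix (tp, fp, fn). Counts below are illustrative,
# not the study's actual numbers.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=43, fp=7, fn=7), 2))  # precision = recall = 0.86
```

Because F1 ignores true negatives, it is a reasonable summary when, as here, the classes of interest (adenomas) matter more than the background class.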
Affiliation(s)
- Franklin Sierra-Jerez, Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
- Fabio Martinez, Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
|
50
|
Soo JMP, Koh FHX. Detection of sessile serrated adenoma using artificial intelligence-enhanced endoscopy: an Asian perspective. ANZ J Surg 2024; 94:362-365. [PMID: 38149749 DOI: 10.1111/ans.18785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Accepted: 11/04/2023] [Indexed: 12/28/2023]
Abstract
BACKGROUND: As the serrated pathway has gained prominence as an alternative colorectal carcinogenesis pathway, sessile serrated adenomas or polyps (SSA/P) have been highlighted as lesions to rule out during colonoscopy. These lesions are, however, morphologically difficult to detect on endoscopy and can be mistaken for hyperplastic polyps owing to similar endoscopic features. Given their rapid progression and malignant transformation, interval cancer is a likely consequence of undetected or overlooked SSA/P. Real-time artificial intelligence (AI)-assisted colonoscopy using a computer-aided detection system (CADe) is an increasingly useful tool for improving the adenoma detection rate by providing a second eye during the procedure. In this article, we present a video-based guide illustrating the detection of SSA/P during AI-assisted colonoscopy. METHODS: Consultant-grade endoscopists used a real-time AI-assisted colonoscopy device, as part of a larger prospective study, to detect suspicious lesions that were later histopathologically confirmed as SSA/P. RESULTS: All lesions were picked up by the CADe, which highlighted suspicious polyps to the clinician with a real-time green box. Three SSA/P of varying morphology are described with reference to classical SSA/P features and compared with the features of the hyperplastic polyp found in our study. All three SSA/P observed are in keeping with the JNET classification (Type 1). CONCLUSION: CADe is a highly useful aid to clinicians in detecting SSA/P during endoscopy, but it must be complemented by good endoscopy skill and bowel preparation for effective detection, and by biopsy with subsequent accurate histological diagnosis.
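The adenoma detection rate (ADR) mentioned in this abstract is simply the fraction of colonoscopies in which at least one histologically confirmed adenoma is found. A minimal sketch with made-up counts (illustrative only, not data from the cited study):

```python
def adenoma_detection_rate(adenomas_per_procedure):
    """ADR: share of colonoscopies with at least one confirmed adenoma.

    Input is a list of per-procedure adenoma counts (one entry per
    colonoscopy); this data shape is an assumption for illustration."""
    if not adenomas_per_procedure:
        raise ValueError("no procedures recorded")
    detected = sum(1 for n in adenomas_per_procedure if n >= 1)
    return detected / len(adenomas_per_procedure)
```

For example, five procedures in which two found adenomas give an ADR of 0.4; reported ADRs are usually quoted as percentages.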
Affiliation(s)
- Joycelyn Mun-Peng Soo, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Frederick Hong-Xiang Koh, Colorectal Service, Department of General Surgery, Sengkang General Hospital, SingHealth Services, Singapore, Singapore
|