1. Bangolo A, Wadhwani N, Nagesh VK, Dey S, Tran HHV, Aguilar IK, Auda A, Sidiqui A, Menon A, Daoud D, Liu J, Pulipaka SP, George B, Furman F, Khan N, Plumptre A, Sekhon I, Lo A, Weissman S. Impact of artificial intelligence in the management of esophageal, gastric and colorectal malignancies. Artif Intell Gastrointest Endosc 2024; 5:90704. [DOI: 10.37126/aige.v5.i2.90704]
Abstract
The incidence of gastrointestinal malignancies has increased at an alarming rate over the past decade. Colorectal and gastric cancers are the third and fifth most commonly diagnosed cancers worldwide but rank as the second and third leading causes of cancer mortality. Timely diagnosis and early institution of appropriate therapy can optimize patient outcomes. Artificial intelligence (AI)-assisted diagnostic, prognostic, and therapeutic tools can assist in expeditious diagnosis, treatment planning/response prediction, and post-surgical prognostication. AI can intercept neoplastic lesions in their primordial stages, flag suspicious and/or inconspicuous lesions with greater accuracy on radiologic, histopathological, and/or endoscopic analyses, and reduce over-dependence on individual clinicians. AI-based models have been shown to perform on par with, and sometimes even outperform, experienced gastroenterologists and radiologists. Convolutional neural networks (state-of-the-art deep learning models) are powerful computational models, invaluable to the field of precision oncology. These models not only reliably classify images but also accurately predict response to chemotherapy, tumor recurrence, metastasis, and post-treatment survival. In this systematic review, we analyze the available evidence on the diagnostic, prognostic, and therapeutic utility of artificial intelligence in gastrointestinal oncology.
Affiliation(s)
- Ayrton Bangolo: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nikita Wadhwani: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Vignesh K Nagesh: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Shraboni Dey: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Hadrian Hoang-Vu Tran: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Izage Kianifar Aguilar: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Auda Auda: Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aman Sidiqui: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aiswarya Menon: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Deborah Daoud: Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- James Liu: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Sai Priyanka Pulipaka: Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Blessy George: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Flor Furman: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nareeman Khan: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Adewale Plumptre: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Imranjot Sekhon: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Abraham Lo: Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Simcha Weissman: Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
2. Nguyen HT, Pietraszek N, Shelton SE, Arthur K, Kamm RD. Utilizing convolutional neural networks for discriminating cancer and stromal cells in three-dimensional cell culture images with nuclei counterstain. J Biomed Opt 2024; 29:S22710. [PMID: 39184400] [PMCID: PMC11344342] [DOI: 10.1117/1.jbo.29.s2.s22710]
Abstract
Significance Accurate cell segmentation and classification in three-dimensional (3D) images are vital for studying live cell behavior and drug responses in 3D tissue culture. Evaluating diverse cell populations in 3D cell culture over time necessitates non-toxic staining methods, as specific fluorescent tags may not be suitable, and immunofluorescence staining can be cytotoxic for prolonged live cell cultures. Aim We aim to perform machine-learning-based cell classification within a live heterogeneous cell culture population grown in a 3D tissue culture, relying only on reflectance, transmittance, and nuclei-counterstained images obtained by confocal microscopy. Approach In this study, we employed a supervised convolutional neural network (CNN) to classify tumor cells and fibroblasts within 3D-grown spheroids. These cells are first segmented using the marker-controlled watershed image processing method. Training data included nuclei counterstaining, reflectance, and transmitted light images, with stained fibroblast and tumor cells as ground-truth labels. Results Our results demonstrate the successful marker-controlled watershed segmentation of 84% of spheroid cells into single cells. We achieved a median accuracy of 67% (95% confidence interval of the median is 65-71%) in identifying cell types. We also reconstruct the original 3D images from the CNN-classified cells to visualize the cell distribution of the original stained 3D image. Conclusion This study introduces a non-invasive, toxicity-free approach to 3D cell culture evaluation, combining machine learning with confocal microscopy and opening avenues for advanced cell studies.
Affiliation(s)
- Huu Tuan Nguyen: Massachusetts Institute of Technology (MIT), Department of Mechanical Engineering and Department of Biological Engineering, Cambridge, Massachusetts, United States
- Nicholas Pietraszek: Massachusetts Institute of Technology (MIT), Department of Mechanical Engineering and Department of Biological Engineering, Cambridge, Massachusetts, United States
- Sarah E. Shelton: Massachusetts Institute of Technology (MIT), Department of Mechanical Engineering and Department of Biological Engineering, Cambridge, Massachusetts, United States
- Kwabena Arthur: Massachusetts Institute of Technology (MIT), Department of Mechanical Engineering and Department of Biological Engineering, Cambridge, Massachusetts, United States
- Roger D. Kamm: Massachusetts Institute of Technology (MIT), Department of Mechanical Engineering and Department of Biological Engineering, Cambridge, Massachusetts, United States
3. Rogasch JMM, Shi K, Kersting D, Seifert R. Methodological evaluation of original articles on radiomics and machine learning for outcome prediction based on positron emission tomography (PET). Nuklearmedizin 2023; 62:361-369. [PMID: 37995708] [PMCID: PMC10667066] [DOI: 10.1055/a-2198-0545]
Abstract
AIM Despite a vast number of articles on radiomics and machine learning in positron emission tomography (PET) imaging, clinical applicability remains limited, partly owing to poor methodological quality. We therefore systematically investigated the methodology described in publications on radiomics and machine learning for PET-based outcome prediction. METHODS A systematic search for original articles was run on PubMed. All articles were rated according to 17 criteria proposed by the authors. Criteria with >2 rating categories were binarized into "adequate" or "inadequate". The association between the number of "adequate" criteria per article and the date of publication was examined. RESULTS One hundred articles were identified (published between 07/2017 and 09/2023). The median proportion of articles per criterion that were rated "adequate" was 65% (range: 23-98%). Nineteen articles (19%) mentioned neither a test cohort nor cross-validation to separate training from testing. The median number of criteria with an "adequate" rating per article was 12.5 out of 17 (range, 4-17), and this did not increase with later dates of publication (Spearman's rho, 0.094; p = 0.35). In 22 articles (22%), less than half of the items were rated "adequate". Only 8% of articles published the source code, and 10% made the dataset openly available. CONCLUSION Among the articles investigated, methodological weaknesses were identified, and the degree of compliance with recommendations on methodological quality and reporting shows potential for improvement. Better adherence to established guidelines could increase the clinical significance of radiomics and machine learning for PET-based outcome prediction and ultimately lead to widespread use in routine clinical practice.
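The trend test reported here (number of "adequate" criteria per article vs. publication date) can be reproduced in outline with SciPy. The ratings matrix below is simulated, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Simulated ratings: 100 articles x 17 criteria, True = "adequate",
# with a 65% chance of an adequate rating (illustrative assumption).
ratings = rng.random((100, 17)) < 0.65
adequate_per_article = ratings.sum(axis=1)

# Publication order stands in for the publication date.
pub_order = np.arange(100)
rho, p_value = spearmanr(pub_order, adequate_per_article)
```

With ratings drawn independently of publication order, rho should land near zero with a non-significant p-value, mirroring the paper's finding of no improvement over time.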
Affiliation(s)
- Julian Manuel Michael Rogasch: Department of Nuclear Medicine, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Berlin
- Kuangyu Shi: Department of Nuclear Medicine, Inselspital University Hospital Bern, Bern, Switzerland
- David Kersting: Department of Nuclear Medicine, University Hospital Essen, Essen, Germany
- Robert Seifert: Department of Nuclear Medicine, University Hospital Essen, Essen, Germany
4. Yang KY, Mukundan A, Tsao YM, Shi XH, Huang CW, Wang HC. Assessment of hyperspectral imaging and CycleGAN-simulated narrowband techniques to detect early esophageal cancer. Sci Rep 2023; 13:20502. [PMID: 37993660] [PMCID: PMC10665456] [DOI: 10.1038/s41598-023-47833-y]
Abstract
The clinical signs and symptoms of esophageal cancer (EC) are often not discernible until the intermediate or advanced phases, and detection of EC in advanced stages decreases the survival rate to below 20%. This study conducts a comparative analysis of the efficacy of several imaging techniques in the early detection of EC: white light imaging (WLI), narrowband imaging (NBI), cycle-consistent adversarial network (CycleGAN)-simulated narrowband imaging (CNBI), and hyperspectral imaging (HSI)-simulated narrowband imaging (HNBI). In conjunction with Kaohsiung Armed Forces General Hospital, a dataset of 1000 EC images was used, comprising 500 images captured using WLI and 500 images captured using NBI. The CycleGAN model was used to generate the CNBI dataset, and a novel HSI-based method was developed to generate HNBI images. The efficacy of these four image types in the early detection of EC was evaluated using three indicators: CIEDE2000, entropy, and the structural similarity index measure (SSIM). The CIEDE2000, entropy, and SSIM analyses suggest that generating CNBI images with CycleGAN and HNBI images with the HSI model is superior for detecting early esophageal cancer compared with conventional WLI and NBI techniques.
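Two of the three indicators used here, entropy and SSIM, can be computed directly. The sketch below is NumPy-only and uses a simplified single-window (global) SSIM rather than the sliding-window variant; the standard C1/C2 stabilizing constants are assumed:

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Entropy of the grayscale histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # 0*log(0) is defined as 0, so drop empty bins
    return float(-(p * np.log2(p)).sum())

def global_ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM over the whole image (no sliding window)."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

A flat image has zero entropy, a two-level 50/50 image has 1 bit, and any image compared with itself has SSIM 1, which is how such indicators separate image types by information content and structural fidelity.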
Affiliation(s)
- Kai-Yao Yang: Department of Gastroenterology, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st Rd., Lingya District, Kaohsiung, 80284, Taiwan
- Arvind Mukundan: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, 62102, Chiayi, Taiwan
- Yu-Ming Tsao: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, 62102, Chiayi, Taiwan
- Xian-Hong Shi: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, 62102, Chiayi, Taiwan
- Chien-Wei Huang: Department of Gastroenterology, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st Rd., Lingya District, Kaohsiung, 80284, Taiwan; Department of Nursing, Tajen University, 20, Weixin Rd., Yanpu, 90741, Pingtung, Taiwan
- Hsiang-Chen Wang: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, 62102, Chiayi, Taiwan; Hitspectra Intelligent Technology Co., Ltd., 4F., No. 2, Fuxing 4th Rd., Qianzhen District, Kaohsiung, 80661, Taiwan; Department of Medical Research, Dalin Tzu Chi General Hospital, 2, Min-Sheng Rd., Dalin, 62247, Chiayi, Taiwan
5. Liao WC, Mukundan A, Sadiaza C, Tsao YM, Huang CW, Wang HC. Systematic meta-analysis of computer-aided detection to detect early esophageal cancer using hyperspectral imaging. Biomed Opt Express 2023; 14:4383-4405. [PMID: 37799695] [PMCID: PMC10549751] [DOI: 10.1364/boe.492635]
Abstract
Esophageal cancer (EC) is one of the leading causes of cancer deaths because identifying it at an early stage is challenging. Computer-aided diagnosis (CAD) systems that can detect the early stages of EC have been developed in recent years. Therefore, in this study, a complete meta-analysis of selected studies that use only hyperspectral imaging to detect EC is evaluated in terms of diagnostic test accuracy (DTA). Eight studies were chosen based on the QUADAS-2 tool results for systematic DTA analysis, and the method developed in each of these studies was classified based on the nationality of the data, the artificial intelligence used, the type of image, the type of cancer detected, and the year of publication. Deeks' funnel plot, a forest plot, and accuracy charts were made. The methods studied in these articles show that automatic diagnosis of EC has high accuracy, but external validation, which is a prerequisite for real-time clinical application, is lacking.
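A minimal sketch of the kind of DTA pooling such a meta-analysis performs is shown below (fixed-effect pooled sensitivity, weighted by diseased cases). The 2x2 counts are invented for illustration, not drawn from the eight included studies:

```python
import numpy as np

# Hypothetical per-study (true positive, false negative) counts.
studies = [(45, 5), (30, 10), (60, 4)]

tp = np.array([s[0] for s in studies], dtype=float)
fn = np.array([s[1] for s in studies], dtype=float)
diseased = tp + fn

sens_per_study = tp / diseased
# Fixed-effect pooling: weight each study by its diseased count,
# which reduces to total TP over total diseased.
pooled_sens = (sens_per_study * diseased).sum() / diseased.sum()
```

Real DTA meta-analyses typically pool on the logit scale with random effects (e.g. a bivariate model); the simple weighted mean above only illustrates the idea.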
Affiliation(s)
- Wei-Chih Liao: Department of Internal Medicine, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan; Graduate Institute of Epidemiology and Preventive Medicine, National Taiwan University, Taipei, Taiwan
- Arvind Mukundan: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
- Cleorita Sadiaza: Department of Mechanical Engineering, Far Eastern University, P. Paredes St., Sampaloc, Manila, 1015, Philippines
- Yu-Ming Tsao: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
- Chien-Wei Huang: Department of Gastroenterology, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st Rd., Lingya District, Kaohsiung City 80284, Taiwan; Department of Nursing, Tajen University, 20, Weixin Rd., Yanpu Township, Pingtung County 90741, Taiwan
- Hsiang-Chen Wang: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan; Department of Medical Research, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 2, Minsheng Road, Dalin, Chiayi, 62247, Taiwan; Director of Technology Development, Hitspectra Intelligent Technology Co., Ltd., 4F., No. 2, Fuxing 4th Rd., Qianzhen Dist., Kaohsiung City 80661, Taiwan
6. Hellström H, Liedes J, Rainio O, Malaspina S, Kemppainen J, Klén R. Classification of head and neck cancer from PET images using convolutional neural networks. Sci Rep 2023; 13:10528. [PMID: 37386289] [PMCID: PMC10310830] [DOI: 10.1038/s41598-023-37603-1]
Abstract
The aim of this study was to develop a convolutional neural network (CNN) for classifying positron emission tomography (PET) images of patients with and without head and neck squamous cell carcinoma (HNSCC) and other types of head and neck cancer. A PET/magnetic resonance imaging scan with 18F-fluorodeoxyglucose (18F-FDG) was performed for 200 head and neck cancer patients, 182 of whom were diagnosed with HNSCC, and the location of cancer tumors was marked on the images with a binary mask by a medical doctor. The models were trained and tested with five-fold cross-validation on the primary data set of 1,990 2D images, obtained by dividing the original 3D images of 178 HNSCC patients into transaxial slices, and on an additional test set of 238 images from patients with head and neck cancers other than HNSCC. A shallow and a deep CNN were built using the U-Net architecture to classify the data into two groups based on whether an image contains cancer or not. The impact of data augmentation on the performance of the two CNNs was also considered. According to our results, the best model for this task in terms of area under the receiver operating characteristic curve (AUC) is a deep augmented model with a median AUC of 85.1%. The four models had the highest sensitivity for HNSCC tumors on the root of the tongue (median sensitivities of 83.3-97.7%), in fossa piriformis (80.2-93.3%), and in the oral cavity (70.4-81.7%). Although the models were trained with only HNSCC data, they also had very good sensitivity for detecting follicular and papillary carcinoma of the thyroid gland and mucoepidermoid carcinoma of the parotid gland (91.7-100%).
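The AUC figure of merit used here reduces to a rank statistic: the probability that a randomly chosen positive slice scores above a randomly chosen negative one. A NumPy-only sketch with toy scores (the labels and scores are illustrative, not the study's data):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via pairwise comparisons.

    Equals the probability that a randomly chosen positive
    outscores a randomly chosen negative; ties count as 0.5.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` gives 0.75: three of the four positive/negative pairs are ranked correctly. In the study, this statistic is computed per cross-validation fold and summarized as a median.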
Affiliation(s)
- Henri Hellström: Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Joonas Liedes: Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Oona Rainio: Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Simona Malaspina: Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland; Department of Clinical Physiology and Nuclear Medicine, Turku University Hospital, Turku, Finland
- Jukka Kemppainen: Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland; Department of Clinical Physiology and Nuclear Medicine, Turku University Hospital, Turku, Finland
- Riku Klén: Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
7. Zhang ST, Wang SY, Zhang J, Dong D, Mu W, Xia XE, Fu FF, Lu YN, Wang S, Tang ZC, Li P, Qu JR, Wang MY, Tian J, Liu JH. Artificial intelligence-based computer-aided diagnosis system supports diagnosis of lymph node metastasis in esophageal squamous cell carcinoma: A multicenter study. Heliyon 2023; 9:e14030. [PMID: 36923854] [PMCID: PMC10009687] [DOI: 10.1016/j.heliyon.2023.e14030]
Abstract
Background This study aimed to develop an artificial intelligence-based computer-aided diagnosis system (AI-CAD) emulating the diagnostic logic of radiologists for lymph node metastasis (LNM) in esophageal squamous cell carcinoma (ESCC) patients, thereby contributing to clinical treatment decision-making. Methods A total of 689 ESCC patients with PET/CT images were enrolled from three hospitals and divided into a training cohort and two external validation cohorts. 452 CT images from three publicly available datasets were also included for pretraining the model. Anatomic information from CT images was first obtained automatically using a U-Net-based multi-organ segmentation model, and metabolic information from PET images was subsequently extracted using a gradient-based approach. AI-CAD was developed in the training cohort and externally validated in two validation cohorts. Results The AI-CAD achieved an accuracy of 0.744 for predicting pathological LNM in the external cohort and good agreement with a human expert in two external validation cohorts (kappa = 0.674 and 0.587, p < 0.001). With the aid of AI-CAD, the human expert's diagnostic performance for LNM was significantly improved (accuracy [95% confidence interval]: 0.712 [0.669-0.758] vs. 0.833 [0.797-0.865]; specificity [95% confidence interval]: 0.697 [0.636-0.753] vs. 0.891 [0.851-0.928]; p < 0.001) among patients who underwent lymphadenectomy in the external validation cohorts. Conclusions The AI-CAD could aid in preoperative diagnosis of LNM in ESCC patients and thereby support clinical treatment decision-making.
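The agreement statistic reported here (Cohen's kappa) measures agreement between two raters beyond what chance alone would produce. A self-contained sketch with invented AI-vs-expert calls (not the study's data):

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_observed = (a == b).mean()
    # Chance agreement from each rater's marginal frequencies.
    p_expected = sum((a == c).mean() * (b == c).mean() for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)
```

Kappa is 1 for perfect agreement and 0 for chance-level agreement; values around 0.6-0.7, as reported for the AI-CAD vs. the human expert, are conventionally read (per the Landis-Koch scale) as moderate to substantial agreement.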
Key Words
- 18F-FDG PET/CT, 18-fluorine-fluorodeoxyglucose positron-emission tomography/computed tomography
- AI, Artificial intelligence
- AI-CAD, Artificial intelligence-based computer-aided diagnosis
- Artificial intelligence
- CI, Confidence interval
- CT, Computed tomography
- ESCC, Esophageal squamous cell carcinoma
- Esophageal squamous cell carcinoma
- LNM, Lymph node metastasis
- Lymph node metastasis
- OS, Overall survival
- PET/CT
- PFS, Progression-free survival
- SD, Standard deviation
- SLR, Ratio of the SUV value to liver uptake
- SUV, Standardized uptake value
- cN, Clinical N stage
- nCRT, Neoadjuvant chemoradiotherapy
- pN, Pathological N stage
Affiliation(s)
- Shuai-Tong Zhang: School of Medical Technology, Beijing Institute of Technology, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Si-Yun Wang: Department of PET Center, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Jie Zhang: Department of Radiology, Zhuhai City People's Hospital/Zhuhai Hospital Affiliated to Jinan University, Zhuhai, Guangdong, China
- Di Dong: CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Wei Mu: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Xue-Er Xia: Department of Gastrointestinal Surgery, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Fang-Fang Fu: Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, Henan, China
- Ya-Nan Lu: Department of Radiology, Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, Henan, China
- Shuo Wang: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Zhen-Chao Tang: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Peng Li: Department of PET Center, Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, Henan, China
- Jin-Rong Qu: Department of Radiology, Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, Henan, China
- Mei-Yun Wang: Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, Henan, China
- Jie Tian: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Jian-Hua Liu: Department of Oncology, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, Guangdong, China
8. Thavanesan N, Vigneswaran G, Bodala I, Underwood TJ. The Oesophageal Cancer Multidisciplinary Team: Can Machine Learning Assist Decision-Making? J Gastrointest Surg 2023; 27:807-822. [PMID: 36689150] [PMCID: PMC10073064] [DOI: 10.1007/s11605-022-05575-8]
Abstract
BACKGROUND The complexity of the upper gastrointestinal (UGI) multidisciplinary team (MDT) is continually growing, leading to rising clinician workload, time pressures, and demands. This increases heterogeneity or 'noise' within decision-making for patients with oesophageal cancer (OC) and may lead to inconsistent treatment decisions. In recent decades, the application of artificial intelligence (AI), and more specifically the branch of machine learning (ML), has led to a paradigm shift in the perceived utility of statistical modelling within healthcare. Within OC care, ML techniques have already been applied with early success to the analysis of histological samples and radiology imaging; however, they have not yet been applied to the MDT itself, where such models are likely to benefit from incorporating information-rich, diverse datasets to increase predictive accuracy. METHODS This review discusses the current role the MDT plays in modern UGI cancer care, as well as the utilisation of ML techniques to date using histological and radiological data to predict treatment response, prognostication, nodal disease evaluation, and even resectability within OC. RESULTS The review finds a growing body of evidence in support of ML tools within multiple domains relevant to decision-making within OC, including automated histological analysis and radiomics. However, to date, no specific application has been directed to the MDT itself, which routinely assimilates this information. CONCLUSIONS The authors feel the UGI MDT offers an information-rich, diverse array of data from which ML offers the potential to standardise, automate, and produce more consistent, data-driven MDT decisions.
Affiliation(s)
- Navamayooran Thavanesan: School of Cancer Sciences, Faculty of Medicine, University of Southampton, University Hospitals Southampton, Southampton, UK
- Ganesh Vigneswaran: School of Cancer Sciences, Faculty of Medicine, University of Southampton, University Hospitals Southampton, Southampton, UK
- Indu Bodala: School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Timothy J Underwood: School of Cancer Sciences, Faculty of Medicine, University of Southampton, University Hospitals Southampton, Southampton, UK
9. Wong PK, Chan IN, Yan HM, Gao S, Wong CH, Yan T, Yao L, Hu Y, Wang ZR, Yu HH. Deep learning based radiomics for gastrointestinal cancer diagnosis and treatment: A minireview. World J Gastroenterol 2022; 28:6363-6379. [PMID: 36533112] [PMCID: PMC9753055] [DOI: 10.3748/wjg.v28.i45.6363]
Abstract
Gastrointestinal (GI) cancers are a major cause of cancer-related mortality globally. Medical imaging is an important auxiliary means for the diagnosis, assessment, and prognostic prediction of GI cancers. Radiomics is an emerging and effective technology for deciphering the information encoded within medical images, and traditional machine learning is the most commonly used tool. Recent advances in deep learning technology have further promoted the development of radiomics. In the field of GI cancer, although there are several surveys on radiomics, there is no specific review on the application of deep-learning-based radiomics (DLR). In this review, a search was conducted on Web of Science, PubMed, and Google Scholar with an emphasis on the application of DLR for GI cancers, including esophageal, gastric, liver, pancreatic, and colorectal cancers. In addition, challenges and recommendations based on the findings of the review are comprehensively analyzed to advance DLR.
Affiliation(s)
- Pak Kin Wong: Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
- In Neng Chan: Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
- Hao-Ming Yan: School of Clinical Medicine, China Medical University, Shenyang 110013, Liaoning Province, China
- Shan Gao: Department of Gastroenterology, Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang 441021, Hubei Province, China
- Chi Hong Wong: Faculty of Medicine, Macau University of Science and Technology, Taipa 999078, Macau, China
- Tao Yan: School of Mechanical Engineering, Hubei University of Arts and Science, Xiangyang 441053, Hubei Province, China
- Liang Yao: Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong Province, China
- Ying Hu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong Province, China
- Zhong-Ren Wang: School of Mechanical Engineering, Hubei University of Arts and Science, Xiangyang 441053, Hubei Province, China
- Hon Ho Yu: Department of Gastroenterology, Kiang Wu Hospital, Macau 999078, China
10. Xie C, Hu Y, Han L, Fu J, Vardhanabhuti V, Yang H. Prediction of Individual Lymph Node Metastatic Status in Esophageal Squamous Cell Carcinoma Using Routine Computed Tomography Imaging: Comparison of Size-Based Measurements and Radiomics-Based Models. Ann Surg Oncol 2022; 29:8117-8126. [PMID: 36018524] [DOI: 10.1245/s10434-022-12207-7]
Abstract
BACKGROUND Lymph node status is vital for prognosis and treatment decisions for esophageal squamous cell carcinoma (ESCC). This study aimed to construct and evaluate an optimal radiomics-based method for a more accurate evaluation of individual regional lymph node status in ESCC and to compare it with traditional size-based measurements. METHODS The study consecutively collected 3225 regional lymph nodes from 530 ESCC patients receiving upfront surgery from January 2011 to October 2015. Computed tomography (CT) scans for individual lymph nodes were analyzed. The study evaluated the predictive performance of machine-learning models trained on features extracted from two-dimensional (2D) and three-dimensional (3D) radiomics by different contouring methods. Robust and important radiomics features were selected, and classification models were further established and validated. RESULTS The lymph node metastasis rate was 13.2% (427/3225). The average short-axis diameter was 6.4 mm for benign lymph nodes and 7.9 mm for metastatic lymph nodes. The division of lymph node stations into five regions according to anatomic lymph node drainage (cervical, upper mediastinal, middle mediastinal, lower mediastinal, and abdominal regions) improved the predictive performance. The 2D radiomics method showed optimal diagnostic results, with more efficient segmentation of nodal lesions. In the test set, this optimal model achieved an area under the receiver operating characteristic curve of 0.841-0.891, an accuracy of 84.2-94.7%, a sensitivity of 65.7-83.3%, and a specificity of 84.4-96.7%. CONCLUSIONS The 2D radiomics-based models noninvasively predicted the metastatic status of an individual lymph node in ESCC and outperformed the conventional size-based measurement. The 2D radiomics-based model could be incorporated into the current clinical workflow to enable better decision-making for treatment strategies.
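The abstract's comparison of a size-based cut-off against hand-crafted 2D features can be sketched in a few lines. This is an illustrative reconstruction, not the study's pipeline: the feature names, patch values, and the 10 mm cut-off are assumptions for demonstration only.

```python
# Minimal sketch (not the paper's actual pipeline): simple first-order
# "radiomics-style" features from a 2D lymph-node patch, alongside the
# conventional short-axis size rule. All values are illustrative.
import math

def first_order_features(patch):
    """First-order intensity features from a 2D patch (list of rows)."""
    pixels = [v for row in patch for v in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((v - mean) ** 2 for v in pixels) / n
    return {"mean": mean, "std": math.sqrt(var), "min": min(pixels), "max": max(pixels)}

def size_based_call(short_axis_mm, cutoff_mm=10.0):
    """Conventional rule: flag a node as suspicious if its short axis exceeds a cut-off."""
    return short_axis_mm > cutoff_mm

# A homogeneous "benign-like" patch versus a heterogeneous one of the same size:
benign = [[50, 52, 51], [49, 50, 51], [50, 50, 52]]
hetero = [[10, 90, 20], [85, 15, 95], [20, 88, 12]]

print(first_order_features(benign)["std"] < first_order_features(hetero)["std"])
print(size_based_call(6.4), size_based_call(12.0))  # size rule misses same-size differences
```

The point of the sketch: two nodes with identical short-axis diameters can still differ in texture features, which is the information a size-only measurement discards.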
Affiliation(s)
- Chenyi Xie: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Pok Fu Lam, Hong Kong SAR, China
- Yihuai Hu: Department of Thoracic Surgery, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Esophageal Cancer Institute, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Lujun Han: Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianhua Fu: Department of Thoracic Surgery, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Esophageal Cancer Institute, Sun Yat-sen University Cancer Center, Guangzhou, China
- Varut Vardhanabhuti: Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Pok Fu Lam, Hong Kong SAR, China
- Hong Yang: Department of Thoracic Surgery, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Esophageal Cancer Institute, Sun Yat-sen University Cancer Center, Guangzhou, China
11. Wang R, Guo J, Zhou Z, Wang K, Gou S, Xu R, Sher D, Wang J. Locoregional recurrence prediction in head and neck cancer based on multi-modality and multi-view feature expansion. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac72f0]
Abstract
Objective. Locoregional recurrence (LRR) is one of the leading causes of treatment failure in head and neck (H&N) cancer. Accurately predicting LRR after radiotherapy is essential to achieving better treatment outcomes for patients with H&N cancer through developing personalized treatment strategies. We aim to develop an end-to-end multi-modality and multi-view feature extension method (MMFE) to predict LRR in H&N cancer. Approach. Deep learning (DL) has been widely used for building prediction models and has achieved great success. Nevertheless, 2D-based DL models inherently fail to utilize the contextual information from adjacent slices, while complicated 3D models have a substantially larger number of parameters, which require more training samples, memory and computing resources. In the proposed MMFE scheme, through the multi-view feature expansion and projection dimension reduction operations, we are able to reduce the model complexity while preserving volumetric information. Additionally, we designed a multi-modality convolutional neural network that can be trained in an end-to-end manner and can jointly optimize the use of deep features of CT, PET and clinical data to improve the model’s prediction ability. Main results. The dataset included 206 eligible patients, of which, 49 had LRR while 157 did not. The proposed MMFE method obtained a higher AUC value than the other four methods. The best prediction result was achieved when using all three modalities, which yielded an AUC value of 0.81. Significance. Comparison experiments demonstrated the superior performance of the MMFE as compared to other 2D/3D-DL-based methods. By combining CT, PET and clinical features, the MMFE could potentially identify H&N cancer patients at high risk for LRR such that personalized treatment strategy can be developed accordingly.
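The multi-view expansion idea above, reducing a 3D volume to several 2D views so a cheaper 2D model can still see volumetric context, can be sketched with maximum-intensity projections. This is an illustrative analogue of the general technique, not the authors' MMFE code; the projection choice and toy volume are assumptions.

```python
# Illustrative sketch of multi-view dimension reduction: collapse a 3D
# volume into three orthogonal 2D maximum-intensity projections, so 2D
# inputs still carry information from adjacent slices.
def mip_views(volume):
    """volume: nested list indexed [z][y][x]. Returns axial, coronal, sagittal projections."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    axial    = [[max(volume[z][y][x] for z in range(Z)) for x in range(X)] for y in range(Y)]
    coronal  = [[max(volume[z][y][x] for y in range(Y)) for x in range(X)] for z in range(Z)]
    sagittal = [[max(volume[z][y][x] for x in range(X)) for y in range(Y)] for z in range(Z)]
    return axial, coronal, sagittal

vol = [[[0, 1], [2, 3]],
       [[7, 0], [1, 5]]]          # 2x2x2 toy volume
ax, co, sa = mip_views(vol)
print(ax)  # [[7, 1], [2, 5]]
```

Each projected view is a 2D image, so three such views feed a 2D network with far fewer parameters than a full 3D model, which is the complexity trade-off the abstract describes.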
12. Xu J, Wang L, Sun H, Liu S. Evaluation of the Effect of Comprehensive Nursing Interventions on Plaque Control in Patients with Periodontal Disease in the Context of Artificial Intelligence. J Healthc Eng 2022; 2022:6505672. [PMID: 35368922] [PMCID: PMC8967516] [DOI: 10.1155/2022/6505672]
Abstract
Plaque is a bacterial biofilm that adheres to the tooth surface, and new plaque continuously re-forms after it is removed. The pathogenesis of periodontal disease involves the bacteria, the host, and the environment, with the bacteria and bacterial products in plaque being the main initiators. Effective plaque control is therefore an effective method for the treatment and prevention of periodontal disease, yet it is often underappreciated in clinical practice. Traditional diagnosis based on experience and visual observation can lead to misdiagnosis and underdiagnosis. To diagnose plaque disease more accurately, this study designed a convolutional neural network-based oral dental disease diagnosis system for oral care interventions, aiming to improve oral health awareness, motivate patients to implement proper oral health care measures, and encourage continuous, lifelong adherence to thorough daily plaque removal, thereby improving the physical health and quality of life of patients with periodontal disease.
Affiliation(s)
- Juan Xu: Department of Stomatology, First People's Hospital of Yongkang City, Yongkang City, Zhejiang Province, China
- Lingling Wang: Department of Internal Medicine-Cardiovascular Department, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang 441000, China
- Hongxia Sun: Qingdao Jimo District Tongji Health Center and Medical Nursing, Qingdao, Shandong 266228, China
- Shanshan Liu: Department of Stomatology, Fourth Affiliated Hospital, Hebei Medical University, Shijiazhuang 050017, Hebei, China
13. Xie CY, Pang CL, Chan B, Wong EYY, Dou Q, Vardhanabhuti V. Machine Learning and Radiomics Applications in Esophageal Cancers Using Non-Invasive Imaging Methods-A Critical Review of Literature. Cancers (Basel) 2021; 13:2469. [PMID: 34069367] [PMCID: PMC8158761] [DOI: 10.3390/cancers13102469]
Abstract
Esophageal cancer (EC) is of public health significance as one of the leading causes of cancer death worldwide. Accurate staging, treatment planning and prognostication in EC patients are of vital importance. Recent advances in machine learning (ML) techniques demonstrate their potential to provide novel quantitative imaging markers in medical imaging. Radiomics approaches that could quantify medical images into high-dimensional data have been shown to improve the imaging-based classification system in characterizing the heterogeneity of primary tumors and lymph nodes in EC patients. In this review, we aim to provide a comprehensive summary of the evidence of the most recent developments in ML application in imaging pertinent to EC patient care. According to the published results, ML models evaluating treatment response and lymph node metastasis achieve reliable predictions, ranging from acceptable to outstanding in their validation groups. Patients stratified by ML models in different risk groups have a significant or borderline significant difference in survival outcomes. Prospective large multi-center studies are suggested to improve the generalizability of ML techniques with standardized imaging protocols and harmonization between different centers.
Affiliation(s)
- Chen-Yi Xie: Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Chun-Lap Pang: Department of Radiology, The Christie Hospital, Manchester M20 4BX, UK; Division of Dentistry, School of Medical Sciences, University of Manchester, Manchester M15 6FH, UK
- Benjamin Chan: Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Emily Yuen-Yuen Wong: Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Varut Vardhanabhuti: Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
14. Deep learning in Nuclear Medicine - focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6]
15. Ding S, Huang H, Li Z, Liu X, Yang S. SCNET: A Novel UGI Cancer Screening Framework Based on Semantic-Level Multimodal Data Fusion. IEEE J Biomed Health Inform 2021; 25:143-151. [PMID: 32224471] [DOI: 10.1109/jbhi.2020.2983126]
Abstract
Upper gastrointestinal (UGI) cancer has been identified as one of the ten most common causes of cancer deaths globally. UGI cancer screening is critical to improving the survival rate of UGI cancer patients. While many approaches to UGI cancer screening rely on single-modality data such as gastroscope imaging, limited studies have been dedicated to UGI cancer screening exploiting multisource and multimodal medical data, which could potentially lead to improved screening results. In this paper, we propose the semantic-level cancer-screening network (SCNET), a framework for UGI cancer screening based on semantic-level multimodal upper gastrointestinal data fusion. Specifically, the proposed SCNET consists of a gastrointestinal image recognition flow and a textual medical record processing flow. High-level features of upper gastrointestinal data are extracted by identifying effective feature channels according to the correlation between the textual features and the spatial structure of the image features. The final screening results are obtained after the data fusion step. The experimental results show that our approach improves on state-of-the-art methods by 4.01% on average. The source code of SCNET is available at https://github.com/netflymachine/SCNET.
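The channel-selection mechanism the abstract describes, weighting image feature channels by their correlation with textual features, can be sketched roughly as a gating step. This is a hedged reconstruction of the general idea only; the released SCNET code should be consulted for the actual implementation, and every name and value below is illustrative.

```python
# Hedged sketch of semantic-level gating: re-weight image feature channels
# by how strongly each channel correlates with a text-derived feature
# vector, then keep the scaled channels for fusion.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gate_channels(image_channels, text_feat):
    """image_channels: list of per-channel feature vectors (each the same
    length as text_feat). Returns channels scaled by normalized correlation."""
    scores = [max(dot(ch, text_feat), 0.0) for ch in image_channels]  # clamp negatives
    total = sum(scores) or 1.0
    weights = [s / total for s in scores]
    return [[w * v for v in ch] for w, ch in zip(weights, image_channels)]

img = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 channels, 2-dim features each
txt = [1.0, 0.0]                              # text vector aligned with channel 0
gated = gate_channels(img, txt)
print(gated[0][0] > gated[1][1])  # the text-correlated channel survives stronger
```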
16. Yeh JCY, Yu WH, Yang CK, Chien LI, Lin KH, Huang WS, Hsu PK. Predicting aggressive histopathological features in esophageal cancer with positron emission tomography using a deep convolutional neural network. Ann Transl Med 2021; 9:37. [PMID: 33553330] [PMCID: PMC7859760] [DOI: 10.21037/atm-20-1419]
Abstract
Background The presence of lymphovascular invasion (LVI) and perineural invasion (PNI) are of great prognostic importance in esophageal squamous cell carcinoma. Currently, positron emission tomography (PET) scans are the only means of functional assessment prior to treatment. We aimed to predict the presence of LVI and PNI in esophageal squamous cell carcinoma using PET imaging data by training a three-dimensional convolutional neural network (3D-CNN). Methods Seven hundred and ninety-eight PET scans of patients with esophageal squamous cell carcinoma and 309 PET scans of patients with stage I lung cancer were collected. In the first part of this study, we built a 3D-CNN based on a residual network, ResNet, for a task to classify the scans into esophageal cancer or lung cancer. In the second stage, we collected the PET scans of 278 patients undergoing esophagectomy for a task to classify and predict the presence of LVI/PNI. Results In the first part, the model performance attained an area under the receiver operating characteristic curve (AUC) of 0.860. In the second part, we randomly split 80%, 10%, and 10% of our dataset into training, validation, and testing sets, respectively, for a task to classify the scans by the presence of LVI/PNI, and evaluated the model performance on the testing set. Our 3D-CNN model attained an AUC of 0.668 in the testing set, which shows a better discriminative ability than random guessing. Conclusions A 3D-CNN can be trained, using PET imaging datasets, to predict LVI/PNI in esophageal cancer with acceptable accuracy.
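The AUC figures quoted in this and the surrounding abstracts can be computed directly from raw scores, since AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting half). A minimal sketch, with illustrative labels and scores:

```python
# AUC via the rank (Mann-Whitney) formulation: fraction of positive/negative
# pairs in which the positive case receives the higher score.
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.4, 0.3, 0.6, 0.8, 0.1]
print(auc(y, s))  # 8 of 9 pairs ranked correctly -> 0.888...
```

An AUC of 0.5 corresponds to random guessing, which is the baseline against which the reported 0.668 shows "better discriminative ability".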
Affiliation(s)
- Ling-I Chien: Department of Nursing, Taipei Veterans General Hospital, Taipei
- Ko-Han Lin: Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei
- Wen-Sheng Huang: Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei
- Po-Kuei Hsu: Division of Thoracic Surgery, Department of Surgery, Taipei Veterans General Hospital and School of Medicine, National Yang-Ming University, Taipei
17. Cherezov D, Paul R, Fetisov N, Gillies RJ, Schabath MB, Goldgof DB, Hall LO. Lung Nodule Sizes Are Encoded When Scaling CT Image for CNN's. Tomography 2020; 6:209-215. [PMID: 32548298] [PMCID: PMC7289250] [DOI: 10.18383/j.tom.2019.00024]
Abstract
Noninvasive diagnosis of lung cancer in early stages is one task where radiomics can help. Clinical practice shows that the size of a nodule has high predictive power for malignancy. In the literature, convolutional neural networks (CNNs) have become widely used in medical image analysis. We study the ability of a CNN to capture nodule size in computed tomography images after images are resized for CNN input. For our experiments, we used the National Lung Screening Trial data set. Nodules were labeled into 2 categories (small/large) based on the original size of a nodule. After all extracted patches were re-sampled into 100-by-100-pixel images, a CNN was able to successfully classify test nodules into small- and large-size groups with high accuracy. To show the generality of our discovery, we repeated the size classification experiments using the Common Objects in Context (COCO) data set. From the data set, we selected 3 categories of images, namely, bears, cats, and dogs. For all 3 categories, a 5x2-fold cross-validation was performed to classify them into small and large classes. The average area under the receiver operating characteristic curve is 0.954, 0.952, and 0.979 for the bear, cat, and dog categories, respectively. Thus, camera image rescaling also enables a CNN to discover the size of an object. The source code for experiments with the COCO data set is publicly available in Github (https://github.com/VisionAI-USF/COCO_Size_Decoding/).
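One simple way size information can survive a fixed-size rescaling can be demonstrated directly. This is an illustrative reconstruction of the effect, not the study's code: when patches are extracted with proportionally different margins around small and large objects, the foreground fraction after resizing still differs, so a classifier has a size cue even though every input is the same number of pixels.

```python
# Demonstration: nearest-neighbor rescaling to a common grid does not
# erase object-size information when patch margins differ.
def make_patch(patch, obj):
    """Square patch of zeros with a centered obj x obj block of ones."""
    off = (patch - obj) // 2
    return [[1 if off <= r < off + obj and off <= c < off + obj else 0
             for c in range(patch)] for r in range(patch)]

def resize_nn(img, size):
    """Nearest-neighbor resize of a square image to size x size."""
    n = len(img)
    return [[img[r * n // size][c * n // size] for c in range(size)] for r in range(size)]

def fg_fraction(img):
    return sum(map(sum, img)) / (len(img) * len(img[0]))

small = resize_nn(make_patch(20, 4), 10)    # small nodule in a small patch
large = resize_nn(make_patch(60, 30), 10)   # large nodule in a large patch
print(fg_fraction(small), fg_fraction(large))  # prints 0.04 0.25
```

Both inputs are 10x10, yet their foreground fractions differ by design of the extraction, which is the kind of residual encoding the paper reports CNNs exploiting.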
Affiliation(s)
- Dmitry Cherezov: Department of Computer Sciences and Engineering, University of South Florida, Tampa, FL
- Rahul Paul: Department of Computer Sciences and Engineering, University of South Florida, Tampa, FL
- Nikolai Fetisov: Department of Computer Sciences and Engineering, University of South Florida, Tampa, FL
- Matthew B Schabath: Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Dmitry B Goldgof: Department of Computer Sciences and Engineering, University of South Florida, Tampa, FL
- Lawrence O Hall: Department of Computer Sciences and Engineering, University of South Florida, Tampa, FL
18. Liu G, Hua J, Wu Z, Meng T, Sun M, Huang P, He X, Sun W, Li X, Chen Y. Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network. Ann Transl Med 2020; 8:486. [PMID: 32395530] [PMCID: PMC7210177] [DOI: 10.21037/atm.2020.03.24]
Abstract
Background Using deep learning techniques in image analysis is a dynamically emerging field. This study aims to use a convolutional neural network (CNN), a deep learning approach, to automatically classify esophageal cancer (EC) and distinguish it from premalignant lesions. Methods A total of 1,272 white-light images were adopted from 748 subjects, including normal cases, premalignant lesions, and cancerous lesions; 1,017 images were used to train the CNN, and another 255 images were examined to evaluate the CNN architecture. Our proposed CNN structure consists of two subnetworks (O-stream and P-stream). The original images were used as the inputs of the O-stream to extract the color and global features, and the pre-processed esophageal images were used as the inputs of the P-stream to extract the texture and detail features. Results The CNN system we developed achieved an accuracy of 85.83%, a sensitivity of 94.23%, and a specificity of 94.67% after the two streams were fused. The classification accuracies for normal esophagus, premalignant lesion, and EC were 94.23%, 82.5%, and 77.14%, respectively, showing better performance than the Local Binary Patterns (LBP) + Support Vector Machine (SVM) and Histogram of Gradient (HOG) + SVM methods. A total of 8 of the 35 (22.85%) EC lesions were categorized as premalignant lesions because they appeared slightly reddish and flat. Conclusions The CNN system, with 2 streams, demonstrated high sensitivity and specificity with the endoscopic images. It obtained better detection performance than the currently used methods based on the same datasets and has great application prospects in assisting endoscopists to distinguish esophageal lesion subclasses.
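The two-stream design with late fusion can be sketched schematically: one stream summarizes global statistics of the original image, the other extracts texture cues from a pre-processed version, and the two feature vectors are concatenated before classification. This is an illustrative analogue only, not the authors' network; the hand-crafted features stand in for the learned CNN features.

```python
# Schematic of a two-stream pipeline with late fusion by concatenation.
def o_stream(img):
    """Global intensity statistics from the original image (list of rows)."""
    pixels = [v for row in img for v in row]
    mean = sum(pixels) / len(pixels)
    return [mean, max(pixels) - min(pixels)]

def p_stream(img):
    """Crude texture feature: mean absolute horizontal gradient."""
    grads = [abs(row[c + 1] - row[c]) for row in img for c in range(len(row) - 1)]
    return [sum(grads) / len(grads)]

def fuse(img):
    return o_stream(img) + p_stream(img)   # late fusion: concatenate the streams

flat  = [[100, 100], [100, 100]]
bumpy = [[0, 200], [200, 0]]
print(fuse(flat), fuse(bumpy))  # same global mean, very different texture
```

The fused vector separates the two test images even though their global means are identical, which is the motivation for keeping both streams rather than either one alone.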
Affiliation(s)
- Gaoshuang Liu: Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Jie Hua: Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Zhan Wu: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 211102, China; The Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211102, China
- Tianfang Meng: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 211102, China; The Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211102, China
- Mengxue Sun: Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Peiyun Huang: Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Xiaopu He: Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Weihao Sun: Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Xueliang Li: Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Yang Chen: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 211102, China; The Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211102, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France