1. Huang J, Saw SN, He T, Yang R, Qin Y, Chen Y, Kiong LC. DCNNLFS: A Dilated Convolutional Neural Network With Late Fusion Strategy for Intelligent Classification of Gastric Histopathology Images. IEEE J Biomed Health Inform 2024; 28:4534-4543. [PMID: 37983160] [DOI: 10.1109/jbhi.2023.3334709]
Abstract
Gastric cancer has a high incidence rate and significantly threatens patients' health. Gastric histopathology images allow reliable diagnosis of related diseases, but their data volume is so large that misdiagnosis or missed diagnosis occurs easily. Classification models based on deep learning have made some progress on gastric histopathology images. However, traditional convolutional neural networks (CNNs) generally use pooling operations, which reduce the spatial resolution of the image and lead to poor predictions; the image features in previous CNNs also perceive fine details poorly. Therefore, we design a dilated CNN with a late fusion strategy (DCNNLFS) for gastric histopathology image classification. The DCNNLFS model utilizes dilated convolutions to expand the receptive field, and the dilated convolutions can learn different contextual information by adjusting the dilation rate. The model further uses a late fusion strategy to enhance its classification ability. We run experiments on a gastric histopathology image dataset to verify the merits of the DCNNLFS model, achieving Precision, Accuracy, and F1-Score of 0.938, 0.935, and 0.959, respectively.
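The receptive-field effect of dilation described in this abstract can be made concrete with a small arithmetic sketch (illustrative only, not the authors' implementation; the layer configurations below are assumptions):

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers) -> int:
    """Receptive field of a stack of stride-1 convolutions.

    `layers` is a list of (kernel_size, dilation) pairs; each stride-1
    layer grows the receptive field by (effective_kernel - 1).
    """
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three 3x3 convolutions: no dilation vs. dilation rates 1, 2, 4.
plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # 7
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # 15
```

With the same parameter count, the dilated stack covers more than twice the context, which is the trade-off the abstract appeals to in place of resolution-destroying pooling.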
2. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516] [DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data, among others. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of how these methods are applied in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods, summarized by application scenario: classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by anatomical area, such as the lung, brain, digital pathology, skin, kidney, breast, neuromyelitis, vertebrae, and musculoskeletal system. Finally, open challenges and directions for future research are critically discussed; in particular, excellent algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li: School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang: School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An: School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang: School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong: School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
3. Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:2335. [PMID: 37190264] [DOI: 10.3390/cancers15082335]
Abstract
Histopathological classification in prostate cancer remains a challenge, with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into 14,509 tiles, which are curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context appropriate for histopathological discrimination with small samples. In our study, the best DL network discriminates cancer grade (GS3/4) from benign with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 on a baseline test set (52 patients), while discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).
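The tile-partitioning step this abstract relies on (a whole-slide image cut into fixed-size tiles) can be sketched roughly as follows; the tile size, stride, and border handling are assumptions for illustration, not the authors' settings:

```python
def tile_coords(width, height, tile=512, stride=512):
    """Top-left coordinates of tiles covering a width x height slide.

    A tile that would overrun the right or bottom border is shifted back
    so it stays fully inside the image (one common convention; the
    paper's exact tiling scheme may differ). Images smaller than `tile`
    yield a single (0, 0) tile.
    """
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    if width > tile and xs[-1] != width - tile:
        xs.append(width - tile)   # flush-right tile
    if height > tile and ys[-1] != height - tile:
        ys.append(height - tile)  # flush-bottom tile
    return [(x, y) for y in ys for x in xs]

coords = tile_coords(1200, 1000, tile=512, stride=512)
```

Each coordinate pair then indexes a crop that is fed to the network independently, which is what makes small-sample fine-tuning on gland-level labels feasible.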
Affiliation(s)
- Ryan Fogarty: Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA; Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof: Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall: Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Alex Lopez: Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Joseph Johnson: Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Manoj Gadara: Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA; Quest Diagnostics, Tampa, FL 33612, USA
- Radka Stoyanova: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Sanoj Punnen: Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Alan Pollack: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Julio Pow-Sang: Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
4. Deng Y, Qin HY, Zhou YY, Liu HH, Jiang Y, Liu JP, Bao J. Artificial intelligence applications in pathological diagnosis of gastric cancer. Heliyon 2022; 8:e12431. [PMID: 36619448] [PMCID: PMC9816967] [DOI: 10.1016/j.heliyon.2022.e12431]
Abstract
Globally, gastric cancer is the third leading cause of death from tumors. Prevention and individualized treatment are considered the best options for reducing its mortality rate. Artificial intelligence (AI) technology has been widely used in the field of gastric cancer, including diagnosis, prognosis, and image analysis. Eligible papers were identified from PubMed and IEEE up to April 13, 2022. By comparing these articles, the application status of AI technology in the diagnosis of gastric cancer was summarized, covering application types, application scenarios, advantages, and limitations. This review presents the current state and role of AI in the diagnosis of gastric cancer from four aspects: 1) accurate sampling for early diagnosis (endoscopy), 2) digital pathological diagnosis, 3) molecules and genes, and 4) clinical big data analysis and prognosis prediction. AI plays a very important role in facilitating the diagnosis of gastric cancer; however, it also has shortcomings, such as limited interpretability. The purpose of this review is to assist researchers working in this domain.
Affiliation(s)
- Yang Deng: Institute of Clinical Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Hang-Yu Qin: Institute of Clinical Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Yan-Yan Zhou: Institute of Clinical Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Hong-Hong Liu: Institute of Clinical Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Yong Jiang: Department of Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Jian-Ping Liu: Department of Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Ji Bao (corresponding author): Institute of Clinical Pathology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
5. Herbsthofer L, Tomberger M, Smolle MA, Prietl B, Pieber TR, López-García P. Cell2Grid: an efficient, spatial, and convolutional neural network-ready representation of cell segmentation data. J Med Imaging (Bellingham) 2022; 9:067501. [PMID: 36466076] [PMCID: PMC9709305] [DOI: 10.1117/1.jmi.9.6.067501]
Abstract
Purpose: Cell segmentation algorithms are commonly used to analyze large histologic images as they facilitate interpretation, but on the other hand they complicate hypothesis-free spatial analysis. Therefore, many applications instead train convolutional neural networks (CNNs) on high-resolution images that resolve individual cells, but their practical application is severely limited by computational resources. In this work, we propose and investigate an alternative spatial data representation based on cell segmentation data for direct training of CNNs. Approach: We introduce and analyze the properties of Cell2Grid, an algorithm that generates compact images from cell segmentation data by placing individual cells into a low-resolution grid and resolving possible cell conflicts. For evaluation, we present a case study on colorectal cancer relapse prediction using fluorescent multiplex immunohistochemistry images. Results: We could generate Cell2Grid images at 5-μm resolution that were 100 times smaller than the original ones. Cell features, such as phenotype counts and nearest-neighbor cell distances, remain similar to those of the original cell segmentation tables (p < 0.0001). These images could be fed directly to a CNN for predicting colon cancer relapse. Our experiments showed that the test set error rate was reduced by 25% compared with CNNs trained on images rescaled to 5 μm with bilinear interpolation. Compared with images at 1-μm resolution (bilinear rescaling), our method reduced CNN training time by 85%. Conclusions: Cell2Grid is an efficient spatial data representation algorithm that enables the use of conventional CNNs on cell segmentation data. Its cell-based representation additionally opens a door for simplified model interpretation and synthetic image generation.
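A toy version of the grid-placement idea behind Cell2Grid might look like the following sketch; the nearest-free-pixel conflict rule here is a naive stand-in for the published algorithm's actual conflict-resolution strategy, and all names and parameters are illustrative:

```python
def cells_to_grid(cells, cell_size=5.0, shape=(4, 4)):
    """Place segmented cells into a coarse grid (toy Cell2Grid sketch).

    `cells` is a list of (x_um, y_um, phenotype) tuples. Each cell maps
    to the grid pixel covering its centroid; if that pixel is already
    occupied, the cell is moved to a nearby free pixel by scanning
    squares of growing radius. Cells with no free pixel in range are
    silently dropped (toy behavior only).
    """
    grid = {}
    for x, y, label in cells:
        r, c = int(y // cell_size), int(x // cell_size)
        placed = False
        for radius in range(max(shape)):
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < shape[0] and 0 <= cc < shape[1]
                            and (rr, cc) not in grid):
                        grid[(rr, cc)] = label
                        placed = True
                        break
                if placed:
                    break
            if placed:
                break
    return grid

# Two cells whose centroids fall in the same 5-um grid pixel:
g = cells_to_grid([(1.0, 1.0, "T"), (2.0, 2.0, "B")], cell_size=5.0)
```

The second cell collides at pixel (0, 0) and is displaced to a neighboring pixel, so both phenotypes survive in the compact image, which is the property the abstract's phenotype-count comparison depends on.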
Affiliation(s)
- Laurin Herbsthofer: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria; BioTechMed, Graz, Austria
- Martina Tomberger: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- Maria A. Smolle: Department of Orthopaedics and Trauma, Medical University of Graz, Graz, Austria
- Barbara Prietl: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria; BioTechMed, Graz, Austria; Division of Endocrinology and Diabetology, Medical University of Graz, Graz, Austria
- Thomas R. Pieber: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria; BioTechMed, Graz, Austria; Division of Endocrinology and Diabetology, Medical University of Graz, Graz, Austria; Health Institute for Biomedicine and Health Sciences, Joanneum Research Forschungsgesellschaft mbH, Graz, Austria
6. Primary Investigation of Deep Learning Models for Japanese "Group Classification" of Whole-Slide Images of Gastric Endoscopic Biopsy. Comput Math Methods Med 2022; 2022:6899448. [PMID: 36199768] [PMCID: PMC9529421] [DOI: 10.1155/2022/6899448]
Abstract
Background: Accurate pathological diagnosis of gastric endoscopic biopsy could greatly improve the chance of early diagnosis and treatment of gastric cancer. The Japanese "Group classification" of gastric biopsy corresponds well with the endoscopic diagnostic system and can guide clinical treatment. However, a severe shortage of pathologists and their heavy workload limit diagnostic accuracy. This study presents the first attempt to investigate the applicability and effectiveness of an AI-aided system for automated Japanese "Group classification" of gastric endoscopic biopsy. Methods: In total, 260 whole-slide images of gastric endoscopic biopsy were collected from Dalian Municipal Central Hospital from January 2015 to January 2021. These images were annotated by experienced pathologists according to the Japanese "Group classification." Five popular convolutional neural networks, i.e., VGG16, VGG19, ResNet50, Xception, and InceptionV3, were trained and tested. The performance of the models was compared in terms of widely used metrics, namely AUC (area under the receiver operating characteristic, or ROC, curve), accuracy, recall, precision, and F1 score. Results: ResNet50 achieved the best performance, with an accuracy of 93.16% and an AUC of 0.994. Conclusion: Our results demonstrate the applicability and effectiveness of a deep-learning-based system for automated Japanese "Group classification" of gastric endoscopic biopsy.
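The accuracy, recall, precision, and F1 metrics used to compare the five networks above follow directly from a binary confusion matrix; a minimal sketch with arbitrary example counts (not this study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = classification_metrics(tp=80, fp=10, fn=10, tn=100)
```

For multi-class tasks such as the Group classification, these quantities are typically computed one-versus-rest per class and then macro- or micro-averaged; AUC additionally requires the ranked scores, not just the thresholded counts.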
7. Shi X, Wang L, Li Y, Wu J, Huang H. GCLDNet: Gastric cancer lesion detection network combining level feature aggregation and attention feature fusion. Front Oncol 2022; 12:901475. [PMID: 36106104] [PMCID: PMC9464831] [DOI: 10.3389/fonc.2022.901475]
Abstract
Background: Analysis of histopathological slices is the gold standard for diagnosing gastric cancer, but manual identification is time-consuming and relies heavily on the experience of pathologists. Artificial intelligence methods, particularly deep learning, can assist pathologists in finding cancerous tissues and realizing automated detection. However, because gastric cancer lesions vary in shape and size and many interfering factors are present, gastric cancer histopathological images (GCHIs) are highly complex, and accurately finding the lesion region is difficult. Traditional deep learning methods cannot effectively extract discriminative features because of their simple decoding, so they cannot detect lesions accurately, and little research has been dedicated to detecting gastric cancer lesions. Methods: We propose a gastric cancer lesion detection network (GCLDNet). First, GCLDNet designs a level feature aggregation structure in the decoder, which can effectively fuse deep and shallow features of GCHIs. Second, an attention feature fusion module is introduced to accurately locate the lesion area; it merges attention features of different scales and obtains rich discriminative information focused on the lesion. Finally, focal Tversky loss (FTL) is employed as the loss function to suppress false-negative predictions and mine difficult samples. Results: Experimental results on the SEED and BOT GCHI datasets show that the DSCs of GCLDNet are 0.8265 and 0.8991, ACCs are 0.8827 and 0.8949, JIs are 0.7092 and 0.8182, and PREs are 0.7820 and 0.8763, respectively. Conclusions: The experimental results demonstrate the effectiveness of GCLDNet in the detection of gastric cancer lesions. Compared with other state-of-the-art (SOTA) detection methods, GCLDNet obtains more satisfactory performance. This research can provide good auxiliary support for pathologists in clinical diagnosis.
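The focal Tversky loss (FTL) named in this abstract has a simple closed form: a Tversky index TI = TP / (TP + α·FN + β·FP) raised through (1 − TI)^γ. Below is a minimal sketch on flat lists; exponent conventions vary between papers, and the α/β/γ defaults here are common choices, not necessarily this paper's:

```python
def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3,
                       gamma=0.75, eps=1e-7):
    """Focal Tversky loss on flat lists of probabilities and 0/1 labels.

    TI = TP / (TP + alpha*FN + beta*FP); loss = (1 - TI) ** gamma.
    alpha > beta penalises false negatives more, which is how the loss
    "depresses false-negative predictions"; gamma < 1 up-weights hard
    examples near TI = 1. (Some papers use exponent 1/gamma instead.)
    """
    tp = sum(p * t for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - ti) ** gamma

loss = focal_tversky_loss([0.9, 0.1, 0.8], [1, 0, 1])
```

In a real training loop the same expression is written over framework tensors so it stays differentiable; the soft (probability-weighted) TP/FN/FP counts are what make that possible.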
Affiliation(s)
- Xu Shi: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Long Wang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Yu Li: Department of Pathology, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Jian Wu (co-corresponding author): Head and Neck Cancer Center, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Hong Huang (co-corresponding author): Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
8. Zaalouk AM, Ebrahim GA, Mohamed HK, Hassan HM, Zaalouk MMA. A Deep Learning Computer-Aided Diagnosis Approach for Breast Cancer. Bioengineering (Basel) 2022; 9:391. [PMID: 36004916] [PMCID: PMC9405040] [DOI: 10.3390/bioengineering9080391]
Abstract
Breast cancer is an enormous burden on humanity, costing vast numbers of lives and large amounts of money. It is the world's leading type of cancer among women and a leading cause of mortality and morbidity. The histopathological examination of breast tissue biopsies is the gold standard for diagnosis. In this paper, a computer-aided diagnosis (CAD) system based on deep learning is developed to ease the pathologist's task. To this end, five pre-trained convolutional neural network (CNN) models are analyzed and tested (Xception, DenseNet201, InceptionResNetV2, VGG19, and ResNet152) with the help of data augmentation techniques, and a new approach to transfer learning is introduced. These models are trained and tested on histopathological images from the BreakHis dataset. Multiple experiments analyze the performance of these models through magnification-dependent and magnification-independent binary and eight-class classifications. The Xception model shows promising performance, achieving the highest classification accuracies in all experiments: from 93.32% to 98.99% for magnification-independent experiments and from 90.22% to 100% for magnification-dependent experiments.
Affiliation(s)
- Ahmed M. Zaalouk (co-corresponding author): Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt; School of Computing, Coventry University (Egypt Branch), hosted at the Knowledge Hub Universities, Cairo, Egypt
- Gamal A. Ebrahim (co-corresponding author): Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt
- Hoda K. Mohamed: Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt
- Hoda Mamdouh Hassan: Department of Information Sciences and Technology, College of Engineering and Computing, George Mason University, Fairfax, VA 22030, USA
9. A multi-view deep learning model for pathology image diagnosis. Appl Intell 2022. [DOI: 10.1007/s10489-022-03918-1]
10. Li X, Cen M, Xu J, Zhang H, Xu XS. Improving feature extraction from histopathological images through a fine-tuning ImageNet model. J Pathol Inform 2022; 13:100115. [PMID: 36268072] [PMCID: PMC9577036] [DOI: 10.1016/j.jpi.2022.100115]
Abstract
Background: Due to a lack of annotated pathological images, transfer learning has been the predominant approach in digital pathology. Pre-trained neural networks based on the ImageNet database are often used to extract "off-the-shelf" features, achieving great success in predicting tissue types, molecular features, clinical outcomes, and more. We hypothesize that fine-tuning the pre-trained models using histopathological images could further improve feature extraction and downstream prediction performance. Methods: We used 100,000 annotated H&E image patches for colorectal cancer (CRC) to fine-tune a pre-trained Xception model via a 2-step approach. The features extracted from the fine-tuned Xception (FTX-2048) model and the ImageNet-pretrained (IMGNET-2048) model were compared through: (1) tissue classification for H&E images from CRC, the same image type used for fine-tuning; (2) prediction of immune-related gene expression; and (3) prediction of gene mutations for lung adenocarcinoma (LUAD). Five-fold cross validation was used for model performance evaluation, and each experiment was repeated 50 times. Findings: The features from the fine-tuned FTX-2048 model yielded significantly higher accuracy (98.4%) for predicting CRC tissue types than the "off-the-shelf" features from the ImageNet-based Xception model (96.4%) (P = 2.2 × 10⁻⁶). In particular, FTX-2048 markedly improved the accuracy for stroma from 87% to 94%. Similarly, FTX-2048 features boosted the prediction of transcriptomic expression of immune-related genes in LUAD: for the genes whose expression had significant relationships with image features (P < 0.05, n = 171), the fine-tuned features improved prediction for the majority (139; 81%). In addition, FTX-2048 features improved mutation prediction for 5 of the 9 most frequently mutated genes (STK11, TP53, LRP1B, NF1, and FAT1) in LUAD.
Conclusions: We demonstrated proof of concept that fine-tuning pre-trained ImageNet neural networks with histopathology images produces higher-quality features and better prediction performance, not only for same-cancer tissue classification, where similar images from the same cancer are used for fine-tuning, but also for cross-cancer prediction of gene expression and mutation at the patient level.
Affiliation(s)
- Xingyu Li: Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Min Cen: Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Jinfeng Xu: Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong
- Hong Zhang: Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xu Steven Xu: Clinical Pharmacology and Quantitative Science, Genmab Inc., Princeton, New Jersey, USA
11. Tung CL, Chang HC, Yang BZ, Hou KJ, Tsai HH, Tsai CY, Yu PT. Identifying pathological slices of gastric cancer via deep learning. J Formos Med Assoc 2022; 121:2457-2464. [PMID: 35667953] [DOI: 10.1016/j.jfma.2022.05.004]
Abstract
BACKGROUND: The accuracy of histopathology diagnosis depends largely on the pathologist's experience. It usually takes over 10 years to train a senior pathologist, and their small numbers lead to a high workload for those available. Meanwhile, inconsistent diagnostic results may arise among different pathologists, especially in complex cases, because diagnosis based on morphology is subjective. Computerized analysis based on deep learning has shown potential benefits as a diagnostic strategy. METHODS: This research aims to automatically determine the location of gastric cancer (GC) in images of GC slices through artificial intelligence. We use image data from a regional teaching hospital in Taiwan for training, collecting images of patients diagnosed with GC from January 1, 2019 to December 31, 2020. Hematoxylin and eosin (H&E)-stained GC sections from 50 patients were digitized with a whole-slide scanner and dissected into 13,600 images. The slides are split by patient into 80% for training and 20% for testing: 2,200 images from 40 patients form the training set, and 550 images from the remaining 10 patients form the test set. RESULTS: The validation results show that 91% of images are correctly interpreted as GC images through deep learning. The sensitivity, specificity, PPV, and NPV were 84.9%, 94%, 87.7%, and 92.5%, respectively. After creating a 3D model through the grayscale values, the position of the GC is completely marked by the 3D model. CONCLUSION: For AI to assist pathologists in daily practice, helping a pathologist reach a definite diagnosis is not the prime purpose at present. The benefits could come from cancer screening and double-check quality control under a heavy workload, which can distract a pathologist's attention during the time-constrained examination process. We propose a two-step method to identify cancerous areas in endoscopic gastric biopsy slices via deep learning; a 3D model is then used to mark all the positions of GC in the image, overcoming the problem that deep learning alone cannot catch all GC.
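The sensitivity, specificity, PPV, and NPV figures reported in this abstract all derive from one confusion matrix; a minimal sketch with arbitrary illustrative counts (the paper's actual confusion matrix is not given):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of cancer images caught
        "specificity": tn / (tn + fp),  # fraction of benign images cleared
        "ppv": tp / (tp + fp),          # trust in a positive call
        "npv": tn / (tn + fn),          # trust in a negative call
    }

m = screening_metrics(tp=45, fp=6, fn=8, tn=94)
```

For a screening/quality-control use case like the one described, NPV is the figure of merit: it bounds how often a slide the model clears as benign actually harbors cancer.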
Affiliation(s)
- Chun-Liang Tung: Department of Pathology, Ditmanson Medical Foundation Chiayi Christian Hospital, Chiayi, Taiwan; Department of Health and Nutrition Biotechnology, Asia University, Taichung, Taiwan
- Han-Cheng Chang: Department of Computer Science & Information Engineering, National Chung Cheng University, Chiayi, Taiwan; Information Technology Department, Ditmanson Medical Foundation Chiayi Christian Hospital, Chiayi, Taiwan
- Bo-Zhi Yang: Department of Computer Science & Information Engineering, National Chung Cheng University, Chiayi, Taiwan; Information Technology Department, Ditmanson Medical Foundation Chiayi Christian Hospital, Chiayi, Taiwan
- Keng-Jen Hou: Information Technology Department, Ditmanson Medical Foundation Chiayi Christian Hospital, Chiayi, Taiwan
- Hung-Hsu Tsai: Department of Applied Mathematics, Institute of Data Science and Information Computing, National Chung Hsing University
- Cheng-Yu Tsai: Department of Computer Science & Information Engineering, National Chung Cheng University, Chiayi, Taiwan
- Pao-Ta Yu: Department of Computer Science & Information Engineering, National Chung Cheng University, Chiayi, Taiwan
12. Fu B, Zhang M, He J, Cao Y, Guo Y, Wang R. StoHisNet: A hybrid multi-classification model with CNN and Transformer for gastric pathology images. Comput Methods Programs Biomed 2022; 221:106924. [PMID: 35671603] [DOI: 10.1016/j.cmpb.2022.106924]
Abstract
BACKGROUND AND OBJECTIVES Gastric cancer has high morbidity and mortality compared to other cancers. Accurate histopathological diagnosis has great significance for the treatment of gastric cancer. With the development of artificial intelligence, many researchers have applied deep learning to the classification of gastric cancer pathological images. However, most studies have used binary classification on pathological images of gastric cancer, which is insufficient for clinical requirements. Therefore, we proposed a multi-classification method based on deep learning with more practical clinical value. METHODS In this study, we developed a novel multi-scale model called StoHisNet based on the Transformer and the convolutional neural network (CNN) for the multi-classification task. StoHisNet adopts the Transformer to learn global features, alleviating the inherent limitations of the convolution operation. The proposed StoHisNet can classify the publicly available pathological images of a gastric dataset into four categories: normal tissue, tubular adenocarcinoma, mucinous adenocarcinoma, and papillary adenocarcinoma. RESULTS The accuracy, F1-score, recall, and precision of the proposed model on the public gastric pathological image dataset were 94.69%, 94.96%, 94.95%, and 94.97%, respectively. We conducted additional experiments using two other public datasets to verify the generalization ability of the model. On the BreakHis dataset, our model performed better than other classification models, with an accuracy of 91.64%. Similarly, on the four-class task on the Endometrium dataset, our model showed better classification ability than the others, with an accuracy of 81.74%. These experiments showed that the proposed model has excellent classification and generalization ability.
CONCLUSION The StoHisNet model achieved high performance in multi-class classification of gastric histopathological images and showed strong generalization ability on other pathological datasets. This model may be a potential tool to assist pathologists in the analysis of gastric histopathological images.
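The hybrid design described above pairs a convolution branch (local texture) with a Transformer branch (global context). The NumPy sketch below is a toy illustration of that contrast, not the published StoHisNet architecture; all weights are random stand-ins, and the patch size, embedding width, and pooling choices are assumptions made for the example:

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def self_attention(tokens, Wq, Wk, Wv):
    # Transformer branch: every patch token attends to every other token,
    # so each output row mixes information from the whole image (global).
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))
    return scores @ v


def conv2d_valid(img, kern):
    # CNN branch: naive 'valid' convolution; each output pixel only sees
    # a small local neighborhood (the kernel's receptive field).
    h, w = kern.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * kern).sum()
    return out


def hybrid_features(img, patch=4, d=8, seed=0):
    rng = np.random.default_rng(seed)  # random stand-in weights
    # Local branch: conv response pooled into two summary statistics.
    local = conv2d_valid(img, rng.standard_normal((3, 3)))
    cnn_feat = np.array([local.mean(), local.std()])
    # Global branch: patches -> linear embedding -> self-attention -> pool.
    H, W = img.shape
    toks = np.stack([img[i:i + patch, j:j + patch].ravel()
                     for i in range(0, H, patch)
                     for j in range(0, W, patch)])
    We = rng.standard_normal((patch * patch, d))
    Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
    attended = self_attention(toks @ We, Wq, Wk, Wv)
    trans_feat = attended.mean(axis=0)
    return np.concatenate([cnn_feat, trans_feat])  # fused descriptor
```

The point of the contrast: the 3x3 convolution only ever sees a local neighborhood, while the attended patch tokens mix information across the entire image in one step, which is the limitation of convolution the abstract says the Transformer branch alleviates.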
Affiliation(s)
- Bangkang Fu
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Mudan Zhang
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Junjie He
- College of Computer Science and Technology, Guizhou University, Guizhou 550025, China
- Ying Cao
- Medical College, Guizhou University, Guizhou 550000, China
- Yuchen Guo
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100192, China
- Rongpin Wang
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China.
|
13
|
Zhao Y, Hu B, Wang Y, Yin X, Jiang Y, Zhu X. Identification of gastric cancer with convolutional neural networks: a systematic review. Multimed Tools Appl 2022; 81:11717-11736. [PMID: 35221775 PMCID: PMC8856868 DOI: 10.1007/s11042-022-12258-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 06/20/2021] [Accepted: 01/14/2022] [Indexed: 06/14/2023]
Abstract
The identification of diseases is increasingly aided by artificial intelligence. As an important branch of artificial intelligence, convolutional neural networks play an important role in the identification of gastric cancer. We conducted a systematic review to summarize the current applications of convolutional neural networks in gastric cancer identification. Original articles published in the Embase, Cochrane Library, PubMed, and Web of Science databases were systematically retrieved according to relevant keywords, and data were extracted from the published papers. A total of 27 articles on the identification of gastric cancer using medical images were retrieved: 19 applied to endoscopic images and 8 to pathological images. Sixteen studies explored the performance of gastric cancer detection, 7 gastric cancer classification, 2 gastric cancer segmentation, and 2 delineation of gastric cancer margins. The convolutional neural network architectures involved included AlexNet, ResNet, VGG, Inception, DenseNet, Deeplab, etc. Reported accuracies ranged from 77.3% to 98.7%. Systems based on convolutional neural networks have shown good performance in the identification of gastric cancer. Artificial intelligence is expected to provide more accurate information and more efficient judgments for doctors diagnosing diseases in clinical work.
Affiliation(s)
- Yuxue Zhao
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Bo Hu
- Department of Thoracic Surgery, Qingdao Municipal Hospital, Qingdao, China
- Ying Wang
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Xiaomeng Yin
- Pediatrics Intensive Care Unit, Qingdao Municipal Hospital, Qingdao, China
- Yuanyuan Jiang
- International Medical Services, Qilu Hospital of Shandong University, Jinan, China
- Xiuli Zhu
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
|
14
|
Ashraf M, Robles WRQ, Kim M, Ko YS, Yi MY. A loss-based patch label denoising method for improving whole-slide image analysis using a convolutional neural network. Sci Rep 2022; 12:1392. [PMID: 35082315 PMCID: PMC8791954 DOI: 10.1038/s41598-022-05001-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 01/05/2022] [Indexed: 12/24/2022] Open
Abstract
This paper proposes a deep learning-based patch label denoising method (LossDiff) for improving the classification of whole-slide images of cancer using a convolutional neural network (CNN). Automated whole-slide image classification is often challenging, requiring a large amount of labeled data. Pathologists annotate the region of interest by marking malignant areas, which poses a high risk of introducing patch-based label noise, because benign regions that are typically small in size are included within the malignant annotations, resulting in low classification accuracy with many Type-II errors. To overcome this critical problem, this paper presents a simple yet effective method for noisy patch classification. The proposed method, validated using stomach cancer images, provides a significant improvement over other existing methods in patch-based cancer classification, with accuracies of 98.81%, 97.30%, and 89.47% for binary, ternary, and quaternary classes, respectively. Moreover, we conduct several experiments at different noise levels using a publicly available dataset to further demonstrate the robustness of the proposed method. Given the high cost of producing explicit annotations for whole-slide images and the unavoidable error-prone nature of the human annotation of medical images, the proposed method has practical implications for whole-slide image annotation and automated cancer diagnosis.
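The exact LossDiff criterion is given in the full text; as a hedged illustration of loss-based patch denoising in general, the sketch below applies the common small-loss assumption: benign patches swept into a malignant annotation tend to incur unusually high loss under that label, so the highest-loss patches are treated as probable label noise and excluded from the next training epoch. The `keep_ratio` parameter is an assumption for the example:

```python
import numpy as np


def filter_noisy_patches(losses, keep_ratio=0.8):
    """Return indices of patches trusted for the next training epoch.

    Small-loss heuristic (an illustrative assumption, not the exact
    LossDiff rule): patches with the highest loss under their assigned
    label are treated as probable label noise and dropped.
    """
    losses = np.asarray(losses, dtype=float)
    n_keep = max(1, int(round(keep_ratio * losses.size)))
    kept = np.argsort(losses)[:n_keep]  # smallest-loss patches
    return np.sort(kept)
```

For example, with per-patch losses [0.10, 0.20, 5.00, 0.15, 0.30] and `keep_ratio=0.8`, the outlier patch 2 is dropped and patches 0, 1, 3, 4 are retained.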
|
15
|
Alpsoy A, Yavuz A, Elpek GO. Artificial intelligence in pathological evaluation of gastrointestinal cancers. Artif Intell Gastroenterol 2021; 2:141-156. [DOI: 10.35712/aig.v2.i6.141] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 12/19/2021] [Accepted: 12/27/2021] [Indexed: 02/06/2023] Open
Abstract
The integration of artificial intelligence (AI) has shown promising benefits in many fields of diagnostic histopathology, including for gastrointestinal cancers (GCs), such as tumor identification, classification, and prognosis prediction. In parallel, recent evidence suggests that AI may help reduce the workload in gastrointestinal pathology by automatically detecting tumor tissues and evaluating prognostic parameters. In addition, AI seems to be an attractive tool for biomarker/genetic alteration prediction in GC, as it can extract a massive amount of information from visual data that is complex and only partially interpretable by pathologists. From this point of view, advances in AI could lead to revolutionary changes in many fields of pathology. Unfortunately, many hurdles, spanning a broad spectrum of challenges from needs identification to cost-effectiveness, must still be overcome before AI applications can be safely and effectively applied in actual pathology practice. Therefore, unlike in other disciplines of medicine, no histopathology-based AI application, including in GC, has yet been approved by a regulatory authority or for public reimbursement. The purpose of this review is to present data related to the applications of AI in pathology practice in GC and the challenges that need to be overcome for their implementation.
Affiliation(s)
- Anil Alpsoy
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Aysen Yavuz
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Gulsum Ozlem Elpek
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
|
16
|
Menegotto AB, Becker CDL, Cazella SC. Computer-aided diagnosis of hepatocellular carcinoma fusing imaging and structured health data. Health Inf Sci Syst 2021; 9:20. [PMID: 33968399 PMCID: PMC8096870 DOI: 10.1007/s13755-021-00151-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 04/20/2021] [Indexed: 12/21/2022] Open
Abstract
INTRODUCTION Hepatocellular carcinoma is the most prevalent primary liver cancer, a silent disease that killed 782,000 people worldwide in 2018. Multimodal deep learning is the application of deep learning techniques fusing more than one data modality as the model's input. PURPOSE A computer-aided diagnosis system for hepatocellular carcinoma developed with multimodal deep learning approaches could use multiple data modalities as recommended by clinical guidelines and enhance the robustness and value of the second opinion given to physicians. This article describes the process of creation and evaluation of an algorithm for computer-aided diagnosis of hepatocellular carcinoma developed with multimodal deep learning techniques, fusing preprocessed computed tomography images with structured data from patient Electronic Health Records. RESULTS The classification performance achieved by the proposed algorithm on the test dataset was: accuracy = 86.9%, precision = 89.6%, recall = 86.9%, and F-score = 86.7%. These classification performance metrics are close to the state of the art in this area and were achieved with data modalities that are cheaper than traditional magnetic resonance imaging approaches, enabling the use of the proposed algorithm by small and mid-sized healthcare institutions. CONCLUSION The classification performance achieved with the multimodal deep learning algorithm is higher than the diagnostic performance of human specialists using only CT. Even though the results are promising, the multimodal deep learning architecture used for hepatocellular carcinoma prediction needs more training and testing with different datasets before the proposed algorithm can be used by physicians in real healthcare routines. The additional training aims to confirm the classification performance achieved and to enhance the model's robustness.
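Feature-level fusion of imaging and structured EHR data can be sketched as concatenating one embedding per modality before a shared classifier head. The snippet below is a minimal illustration under that assumption, not the article's actual architecture; `img_feat`, `ehr_feat`, and the head weights `W`, `b` are hypothetical placeholders:

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fuse_and_score(img_feat, ehr_feat, W, b):
    """Feature-level fusion: concatenate one embedding per modality
    (an image feature vector and a structured-EHR feature vector),
    then apply a single linear classifier head."""
    x = np.concatenate([img_feat, ehr_feat])  # fused representation
    return sigmoid(W @ x + b)                 # score in (0, 1)
```

In a real system `img_feat` would come from a CNN over the CT study and `ehr_feat` from encoded health-record fields, with `W` and `b` learned jointly; the design choice fusion buys is that the head can weigh evidence from both modalities in one decision.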
Affiliation(s)
- Alan Baronio Menegotto
- Universidade Federal de Ciências da Saúde de Porto Alegre, Rua Sarmento Leite, 245-Porto Alegre, Rio Grande do Sul, Brazil
- Carla Diniz Lopes Becker
- Universidade Federal de Ciências da Saúde de Porto Alegre, Rua Sarmento Leite, 245-Porto Alegre, Rio Grande do Sul, Brazil
- Silvio Cesar Cazella
- Universidade Federal de Ciências da Saúde de Porto Alegre, Rua Sarmento Leite, 245-Porto Alegre, Rio Grande do Sul, Brazil
|
17
|
Goyal H, Sherazi SAA, Mann R, Gandhi Z, Perisetti A, Aziz M, Chandan S, Kopel J, Tharian B, Sharma N, Thosani N. Scope of Artificial Intelligence in Gastrointestinal Oncology. Cancers (Basel) 2021; 13:5494. [PMID: 34771658 PMCID: PMC8582733 DOI: 10.3390/cancers13215494] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 10/27/2021] [Indexed: 12/12/2022] Open
Abstract
Gastrointestinal cancers are among the leading causes of death worldwide, with over 2.8 million deaths annually. Over the last few decades, advancements in artificial intelligence technologies have led to their application in medicine. The use of artificial intelligence in endoscopic procedures is a significant breakthrough in modern medicine. Currently, the diagnosis of various gastrointestinal cancers relies on the manual interpretation of radiographic images by radiologists and of various endoscopic images by endoscopists. This can lead to diagnostic variability, as it requires concentration and clinical experience in the field. Artificial intelligence using machine or deep learning algorithms can provide automatic and accurate image analysis and thus assist in diagnosis. In the field of gastroenterology, the applications of artificial intelligence are vast, ranging from diagnosis, prediction of tumor histology, polyp characterization, metastatic potential, and prognosis to treatment response. It can also provide accurate prediction models to determine the need for intervention with computer-aided diagnosis. The number of research studies on artificial intelligence in gastrointestinal cancer has been increasing rapidly over the last decade due to immense interest in the field. This review examines the impact, limitations, and future potential of artificial intelligence in screening, diagnosis, tumor staging, treatment modalities, and prediction models for the prognosis of various gastrointestinal cancers.
Affiliation(s)
- Hemant Goyal
- Department of Internal Medicine, The Wright Center for Graduate Medical Education, 501 S. Washington Avenue, Scranton, PA 18505, USA
- Syed A. A. Sherazi
- Department of Medicine, John H Stroger Jr Hospital of Cook County, 1950 W Polk St, Chicago, IL 60612, USA
- Rupinder Mann
- Department of Medicine, Saint Agnes Medical Center, 1303 E. Herndon Ave, Fresno, CA 93720, USA
- Zainab Gandhi
- Department of Medicine, Geisinger Wyoming Valley Medical Center, 1000 E Mountain Dr, Wilkes-Barre, PA 18711, USA
- Abhilash Perisetti
- Division of Interventional Oncology & Surgical Endoscopy (IOSE), Parkview Cancer Institute, 11050 Parkview Circle, Fort Wayne, IN 46845, USA
- Muhammad Aziz
- Department of Gastroenterology and Hepatology, University of Toledo Medical Center, 3000 Arlington Avenue, Toledo, OH 43614, USA
- Saurabh Chandan
- Division of Gastroenterology and Hepatology, CHI Health Creighton University Medical Center, 7500 Mercy Rd, Omaha, NE 68124, USA
- Jonathan Kopel
- Department of Medicine, Texas Tech University Health Sciences Center, 3601 4th St, Lubbock, TX 79430, USA
- Benjamin Tharian
- Department of Gastroenterology and Hepatology, The University of Arkansas for Medical Sciences, 4301 W Markham St, Little Rock, AR 72205, USA
- Neil Sharma
- Division of Interventional Oncology & Surgical Endoscopy (IOSE), Parkview Cancer Institute, 11050 Parkview Circle, Fort Wayne, IN 46845, USA
- Nirav Thosani
- Division of Gastroenterology, Hepatology & Nutrition, McGovern Medical School, UTHealth, 6410 Fannin, St #1014, Houston, TX 77030, USA
|
18
|
A State-of-the-Art Review for Gastric Histopathology Image Analysis Approaches and Future Development. Biomed Res Int 2021; 2021:6671417. [PMID: 34258279 PMCID: PMC8257332 DOI: 10.1155/2021/6671417] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 05/09/2021] [Accepted: 05/25/2021] [Indexed: 02/08/2023]
Abstract
Gastric cancer is a common and deadly cancer worldwide. The gold standard for the detection of gastric cancer is histological examination by pathologists, where Gastric Histopathological Image Analysis (GHIA) contributes significant diagnostic information. The histopathological images of gastric cancer contain sufficient characterization information, which plays a crucial role in the diagnosis and treatment of gastric cancer. In order to improve the accuracy and objectivity of GHIA, Computer-Aided Diagnosis (CAD) has been widely used in the histological image analysis of gastric cancer. In this review, CAD techniques for pathological images of gastric cancer are summarized. The paper first summarizes image preprocessing methods, then introduces feature extraction methods, and finally reviews the existing segmentation and classification techniques. These techniques are systematically introduced and analyzed for the convenience of future researchers.
|
19
|
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549 PMCID: PMC8173384 DOI: 10.3748/wjg.v27.i21.2681] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/29/2021] [Accepted: 04/22/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and used in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be achieved together by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize ANNs' clinical potential. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
|
20
|
Yoshida H, Kiyuna T. Requirements for implementation of artificial intelligence in the practice of gastrointestinal pathology. World J Gastroenterol 2021; 27:2818-2833. [PMID: 34135556 PMCID: PMC8173389 DOI: 10.3748/wjg.v27.i21.2818] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Revised: 03/16/2021] [Accepted: 04/28/2021] [Indexed: 02/06/2023] Open
Abstract
Tremendous advances in artificial intelligence (AI) in medical image analysis have been achieved in recent years. The integration of AI is expected to cause a revolution in various areas of medicine, including gastrointestinal (GI) pathology. Currently, deep learning algorithms have shown promising benefits in areas of diagnostic histopathology, such as tumor identification, classification, prognosis prediction, and biomarker/genetic alteration prediction. While AI cannot substitute for pathologists, carefully constructed AI applications may increase workforce productivity and diagnostic accuracy in pathology practice. Regardless of these promising advances, unlike in the areas of radiology or cardiology imaging, no histopathology-based AI application has been approved by a regulatory authority or for public reimbursement, implying that there are still obstacles to be overcome before AI applications can be safely and effectively implemented in real-life pathology practice. The challenges have been identified at different stages of the development process, such as needs identification, data curation, model development, validation, regulation, modification of daily workflow, and cost-effectiveness balance. The aim of this review is to present challenges in the process of AI development, validation, and regulation that should be overcome for its implementation in real-life GI pathology practice.
Affiliation(s)
- Hiroshi Yoshida
- Department of Diagnostic Pathology, National Cancer Center Hospital, Tokyo 104-0045, Japan
- Tomoharu Kiyuna
- Digital Healthcare Business Development Office, NEC Corporation, Tokyo 108-8556, Japan
|
21
|
Cao JS, Lu ZY, Chen MY, Zhang B, Juengpanich S, Hu JH, Li SJ, Topatana W, Zhou XY, Feng X, Shen JL, Liu Y, Cai XJ. Artificial intelligence in gastroenterology and hepatology: Status and challenges. World J Gastroenterol 2021; 27:1664-1690. [PMID: 33967550 PMCID: PMC8072192 DOI: 10.3748/wjg.v27.i16.1664] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 02/11/2021] [Accepted: 03/17/2021] [Indexed: 02/06/2023] Open
Abstract
Originally proposed by John McCarthy in 1955, artificial intelligence (AI) has achieved a breakthrough and revolutionized the processing methods of clinical medicine with the increasing workloads of medical records and digital images. Doctors are paying attention to AI technologies for various diseases in the fields of gastroenterology and hepatology. This review will illustrate AI technology procedures for medical image analysis, including data processing, model establishment, and model validation. Furthermore, we will summarize AI applications in endoscopy, radiology, and pathology, such as detecting and evaluating lesions, facilitating treatment, and predicting treatment response and prognosis with excellent model performance. The current challenges for AI in clinical application include potential inherent bias in retrospective studies that requires larger samples for validation, ethics and legal concerns, and the incomprehensibility of the output results. Therefore, doctors and researchers should cooperate to address the current challenges and carry out further investigations to develop more accurate AI tools for improved clinical applications.
Affiliation(s)
- Jia-Sheng Cao
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Zi-Yi Lu
- Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Ming-Yu Chen
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Bin Zhang
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Sarun Juengpanich
- Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Jia-Hao Hu
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Shi-Jie Li
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Win Topatana
- Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Xue-Yin Zhou
- School of Medicine, Wenzhou Medical University, Wenzhou 325035, Zhejiang Province, China
- Xu Feng
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Ji-Liang Shen
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Yu Liu
- College of Life Sciences, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Xiu-Jun Cai
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
|
22
|
Franklin MM, Schultz FA, Tafoya MA, Kerwin AA, Broehm CJ, Fischer EG, Gullapalli RR, Clark DP, Hanson JA, Martin DR. A Deep Learning Convolutional Neural Network Can Differentiate Between Helicobacter Pylori Gastritis and Autoimmune Gastritis With Results Comparable to Gastrointestinal Pathologists. Arch Pathol Lab Med 2021; 146:117-122. [PMID: 33861314 DOI: 10.5858/arpa.2020-0520-oa] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/20/2021] [Indexed: 12/16/2022]
Abstract
CONTEXT.— Pathology studies using convolutional neural networks (CNNs) have focused on neoplasms, while studies in inflammatory pathology are rare. We previously demonstrated that a CNN differentiates reactive gastropathy, Helicobacter pylori gastritis (HPG), and normal gastric mucosa. OBJECTIVE.— To determine whether a CNN can differentiate the following 2 gastric inflammatory patterns: autoimmune gastritis (AG) and HPG. DESIGN.— Gold standard diagnoses were blindly established by 2 gastrointestinal (GI) pathologists. One hundred eighty-seven cases were scanned for analysis by HALO-AI. All levels and tissue fragments per slide were included for analysis. The cases were randomized, 112 (60%; 60 HPG, 52 AG) in the training set and 75 (40%; 40 HPG, 35 AG) in the test set. A HALO-AI correct area distribution (AD) cutoff of 50% or more was required to credit the CNN with the correct diagnosis. The test set was blindly reviewed by pathologists with different levels of GI pathology expertise as follows: 2 GI pathologists, 2 general surgical pathologists, and 2 residents. Each pathologist rendered their preferred diagnosis, HPG or AG. RESULTS.— At the HALO-AI AD percentage cutoff of 50% or more, the CNN results were 100% concordant with the gold standard diagnoses. On average, AG cases had 84.7% HALO-AI AG AD and HPG cases had 87.3% HALO-AI HPG AD. The GI pathologists, general surgical pathologists, and residents were, on average, 100%, 86%, and 57% concordant with the gold standard diagnoses, respectively. CONCLUSIONS.— A CNN can distinguish between cases of HPG and AG with accuracy equal to GI pathologists.
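The >=50% correct-area-distribution rule described above can be expressed as a simple decision function: sum the classified tissue area per diagnosis and credit the network with a call only when the leading class covers at least the cutoff fraction. The sketch below is an illustration of that crediting rule, not HALO-AI code:

```python
def slide_diagnosis(area_by_class, cutoff=0.5):
    """Credit a whole-slide call only when the leading class's share of
    the classified tissue area (its 'area distribution') meets the cutoff.

    area_by_class: mapping diagnosis -> classified area (e.g. pixels).
    Returns the credited diagnosis, or None if no class reaches the cutoff.
    """
    total = sum(area_by_class.values())
    if total == 0:
        return None  # no classified tissue, no credited call
    best = max(area_by_class, key=area_by_class.get)
    return best if area_by_class[best] / total >= cutoff else None
```

With the averages reported above (84.7% AG area distribution on AG cases, 87.3% HPG on HPG cases), every call comfortably clears the 50% bar.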
Affiliation(s)
- Michael M Franklin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
- Fred A Schultz
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Marissa A Tafoya
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Audra A Kerwin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Cory J Broehm
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Edgar G Fischer
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Rama R Gullapalli
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Douglas P Clark
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- Joshua A Hanson
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
- David R Martin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque
|
23
|
Ahmad Z, Rahim S, Zubair M, Abdul-Ghafar J. Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review. Diagn Pathol 2021; 16:24. [PMID: 33731170 PMCID: PMC7971952 DOI: 10.1186/s13000-021-01085-4] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 03/04/2021] [Indexed: 02/08/2023] Open
Abstract
BACKGROUND The role of artificial intelligence (AI), which is defined as the ability of computers to perform tasks that normally require human intelligence, is constantly expanding. Medicine was slow to embrace AI. However, the role of AI in medicine is rapidly growing and promises to revolutionize patient care in the coming years. In addition, it has the ability to democratize high-level medical care and make it accessible to all parts of the world. MAIN TEXT Among the specialties of medicine, some, like radiology, were relatively quick to adopt AI, whereas others, especially pathology (and surgical pathology in particular), are only just beginning to utilize AI. AI promises to play a major role in the accurate diagnosis, prognosis, and treatment of cancers. In this paper, the general principles of AI are defined first, followed by a detailed discussion of its current role in medicine. In the second half of this comprehensive review, the current and future role of AI in surgical pathology is discussed in detail, including an account of the practical difficulties involved and pathologists' fear of being replaced by computer algorithms. A number of recent studies that demonstrate the usefulness of AI in the practice of surgical pathology are highlighted. CONCLUSION AI has the potential to transform the practice of surgical pathology by ensuring rapid and accurate results and by enabling pathologists to focus on higher-level diagnostic and consultative tasks, such as integrating molecular, morphologic, and clinical information to make accurate diagnoses in difficult cases, determining prognosis objectively, and in this way contributing to personalized care.
Affiliation(s)
- Zubair Ahmad
- Department of Pathology and Laboratory Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Shabina Rahim
- Department of Pathology and Laboratory Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Maha Zubair
- Department of Pathology and Laboratory Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Jamshid Abdul-Ghafar
- Department of Pathology and Clinical Laboratory, French Medical Institute for Mothers and Children (FMIC), Kabul, Afghanistan.
|
24
|
Yu C, Helwig EJ. Artificial intelligence in gastric cancer: a translational narrative review. Ann Transl Med 2021; 9:269. [PMID: 33708896 PMCID: PMC7940908 DOI: 10.21037/atm-20-6337] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Over the last decade, artificial intelligence (AI) has made increasing clinical contributions and yielded novel techniques. The role of AI is increasingly recognized in cancer research and clinical application. Cancers like gastric cancer, or stomach cancer, are ideal testing grounds for whether early efforts to apply AI to medicine can yield valuable results. There are numerous concepts derived from AI, including machine learning (ML) and deep learning (DL). ML is defined as the ability to learn data features without being explicitly programmed. It arises at the intersection of data science and computer science and aims at the efficiency of computing algorithms. In cancer research, ML has been increasingly used in predictive prognostic models. DL is defined as a subset of ML targeting multilayer computation processes. DL is less dependent on the understanding of data features than ML. Consequently, the algorithms of DL are much more difficult to interpret than those of ML, and potentially impossible to interpret at all. This review discusses the role of AI in the diagnostic, therapeutic and prognostic advances of gastric cancer. Models such as convolutional neural networks (CNNs) and artificial neural networks (ANNs) have earned significant praise in their applications. Much of the clinical management of gastric cancer remains to be fully covered. Despite growing efforts, adapting AI to improve the diagnosis of gastric cancer is a worthwhile venture: the information it yields can revolutionize how we approach gastric cancer problems. Though integration might be slow and labored, AI can enhance diagnosis through visual modalities, augment treatment strategies, and grow to become an invaluable tool for physicians. AI not only benefits diagnostic and therapeutic outcomes, but also reshapes perspectives on the future medical trajectory.
Collapse
Affiliation(s)
- Chaoran Yu
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Fudan University Shanghai Cancer Center, Shanghai, China
| | - Ernest Johann Helwig
- Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
25
|
Alinsaif S, Lang J. Texture features in the Shearlet domain for histopathological image classification. BMC Med Inform Decis Mak 2020; 20:312. [PMID: 33323118 PMCID: PMC7739509 DOI: 10.1186/s12911-020-01327-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Background A variety of imaging modalities is available (e.g., magnetic resonance, x-ray, ultrasound, and biopsy), and each modality can reveal different structural aspects of tissues. However, the analysis of histological slide images captured from a biopsy is considered the gold standard for determining whether cancer exists; furthermore, it can reveal the stage of the cancer. Therefore, supervised machine learning can be used to classify histopathological tissues. Several computational techniques have been proposed to study histopathological images with varying levels of success. Often, handcrafted techniques based on texture analysis are proposed to classify histopathological tissues, and these can be used with supervised machine learning.
Methods In this paper, we construct a novel feature space to automate the classification of tissues in histology images. Our feature representation integrates various feature sets into a new texture feature representation. All of our descriptors are computed in the complex Shearlet domain. With complex coefficients, we investigate not only the use of magnitude coefficients, but also the effectiveness of incorporating the relative phase (RP) coefficients to create the input feature vector. In our study, four texture-based descriptors are extracted from the Shearlet coefficients: co-occurrence texture features, Local Binary Patterns, Local Oriented Statistic Information Booster, and segmentation-based Fractal Texture Analysis. Each set of these attributes captures significant local and global statistics. Therefore, we study them individually, but additionally integrate them to boost the accuracy of classifying the histopathology tissues when fed to classical classifiers. To tackle the problem of high dimensionality, our proposed feature space is reduced using principal component analysis. In our study, we use two classifiers to demonstrate the effectiveness of our proposed feature representation: Support Vector Machine (SVM) and Decision Tree Bagger (DTB). Results Our feature representation delivered high performance when used on four public datasets, with best achieved accuracies of 92.56% on multi-class Kather, 91.73% on BreakHis, 98.04% on Epistroma, and 96.29% on Warwick-QU. Conclusions Our proposed method in the Shearlet domain for the classification of histopathological images proved to be effective when investigated on four different datasets that exhibit different levels of complexity.
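The handcrafted-texture pipeline this abstract describes (texture descriptors, PCA dimensionality reduction, a classical classifier) can be sketched in miniature. The sketch below is illustrative only, not the paper's code: it substitutes a plain-domain Local Binary Pattern histogram for the Shearlet-domain descriptors, an SVD-based PCA, and a nearest-centroid classifier in place of SVM/DTB; all function names are hypothetical.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 8-neighbour Local Binary Pattern histogram of a 2-D grey image."""
    c = img[1:-1, 1:-1]                      # interior pixels (the LBP centres)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit   # one bit per neighbour
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()                 # normalised texture descriptor

def pca_reduce(X, k):
    """Project feature rows onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_centroid_fit(X, y):
    """Per-class mean feature vector (a stand-in for the paper's SVM/DTB)."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def nearest_centroid_predict(model, X):
    labels = list(model)
    dists = np.stack([np.linalg.norm(X - model[l], axis=1) for l in labels])
    return np.array(labels)[dists.argmin(axis=0)]
```

Usage follows the abstract's order: descriptor extraction per image, PCA on the stacked feature matrix, then classification in the reduced space.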
Collapse
|
26
|
Ikeda A, Nosato H, Kochi Y, Negoro H, Kojima T, Sakanashi H, Murakawa M, Nishiyama H. Cystoscopic Imaging for Bladder Cancer Detection Based on Stepwise Organic Transfer Learning with a Pretrained Convolutional Neural Network. J Endourol 2020; 35:1030-1035. [PMID: 33148020 DOI: 10.1089/end.2020.0919] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Background: Nonmuscle-invasive bladder cancer is diagnosed, treated, and monitored using cystoscopy. Artificial intelligence (AI) is increasingly used to augment tumor detection, but its performance is hindered by the limited availability of cystoscopic images required to form a large training data set. This study aimed to determine whether stepwise transfer learning with general images followed by gastroscopic images can improve the accuracy of bladder tumor detection on cystoscopic imaging. Materials and Methods: We trained a convolutional neural network with 1.2 million general images, followed by 8728 gastroscopic images. In the final step of the transfer learning process, the model was additionally trained with 2102 cystoscopic images of normal bladder tissue and bladder tumors collected at the University of Tsukuba Hospital. The diagnostic accuracy was evaluated using a receiver operating characteristic curve. The diagnostic performance of the models trained with cystoscopic images with or without stepwise organic transfer learning was compared with that of medical students and urologists with varying levels of experience. Results: The model developed by stepwise organic transfer learning had 95.4% sensitivity and 97.6% specificity. This performance was better than that of the other models and comparable with that of expert urologists. Notably, it showed superior diagnostic accuracy when tumors occupied >10% of the image. Conclusions: Our findings demonstrate the value of stepwise organic transfer learning in applications with limited data sets for training and further confirm the value of AI in medical diagnostics. Here, we applied deep learning to develop a tool to detect bladder tumors with an accuracy comparable with that of a urologist. To address the limitation that few bladder tumor images are available to train the model, we demonstrate that pretraining with general and gastroscopic images yields superior results.
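The evaluation reported above rests on standard ROC analysis: a per-image tumor score, an area under the ROC curve, and sensitivity/specificity at a chosen operating point. As a minimal, self-contained sketch (not the study's actual code; function names are hypothetical), AUC can be computed via the rank identity and the operating-point metrics read off at a fixed threshold:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive scores higher than a randomly chosen negative,
    with ties counted as one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec_at(labels, scores, threshold):
    """Sensitivity and specificity at a fixed operating threshold."""
    labels = np.asarray(labels, dtype=bool)
    pred = np.asarray(scores, dtype=float) >= threshold
    sensitivity = (pred & labels).sum() / labels.sum()
    specificity = (~pred & ~labels).sum() / (~labels).sum()
    return sensitivity, specificity
```

Sweeping the threshold over all observed scores and plotting sensitivity against (1 - specificity) would trace the full ROC curve the study uses.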
Collapse
Affiliation(s)
- Atsushi Ikeda
- Department of Urology, University of Tsukuba Hospital, Tsukuba, Japan
| | - Hirokazu Nosato
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
| | - Yuta Kochi
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Department of Intelligent Interaction Technologies, Graduate School of System and Information Engineering, University of Tsukuba, Tsukuba, Japan
| | - Hiromitsu Negoro
- Department of Urology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
| | - Takahiro Kojima
- Department of Urology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
| | - Hidenori Sakanashi
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Department of Intelligent Interaction Technologies, Graduate School of System and Information Engineering, University of Tsukuba, Tsukuba, Japan
| | - Masahiro Murakawa
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Department of Intelligent Interaction Technologies, Graduate School of System and Information Engineering, University of Tsukuba, Tsukuba, Japan
| | - Hiroyuki Nishiyama
- Department of Urology, University of Tsukuba Hospital, Tsukuba, Japan
- Department of Urology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
| |
Collapse
|
27
|
Kudou M, Kosuga T, Otsuji E. Artificial intelligence in gastrointestinal cancer: Recent advances and future perspectives. Artif Intell Gastroenterol 2020; 1:71-85. [DOI: 10.35712/aig.v1.i4.71] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Revised: 10/28/2020] [Accepted: 11/12/2020] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) using machine or deep learning algorithms is attracting increasing attention because its image recognition ability and prediction performance can exceed those of human-aided analyses. The application of AI models to gastrointestinal (GI) clinical oncology has been investigated for the past decade. AI has the capacity to automatically detect and diagnose GI tumors with diagnostic accuracy similar to that of expert clinicians. AI may also predict malignant potential, such as tumor histology, metastasis, patient survival, resistance to cancer treatments and the molecular biology of tumors, through analyses of radiological or pathological imaging data using complex deep learning models beyond human cognition. The introduction of AI-assisted diagnostic systems into clinical settings is expected in the near future. However, limitations associated with the evaluation of GI tumors by AI models have yet to be resolved. Recent studies on AI-assisted diagnostic models of gastric and colorectal cancers in the endoscopic, pathological, and radiological fields are reviewed herein. The limitations of and future perspectives for the application of AI systems in clinical settings are also discussed. With the establishment of multidisciplinary teams containing AI experts in each medical institution, together with prospective studies, AI-assisted medical systems will become a promising tool for GI cancer.
Collapse
Affiliation(s)
- Michihiro Kudou
- Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan
- Department of Surgery, Kyoto Okamoto Memorial Hospital, Kyoto 613-0034, Japan
| | - Toshiyuki Kosuga
- Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan
- Department of Surgery, Saiseikai Shiga Hospital, Ritto 520-3046, Japan
| | - Eigo Otsuji
- Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan
| |
Collapse
|
28
|
Sun C, Li C, Zhang J, Rahaman MM, Ai S, Chen H, Kulwa F, Li Y, Li X, Jiang T. Gastric histopathology image segmentation using a hierarchical conditional random field. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.09.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
29
|
Niu PH, Zhao LL, Wu HL, Zhao DB, Chen YT. Artificial intelligence in gastric cancer: Application and future perspectives. World J Gastroenterol 2020; 26:5408-5419. [PMID: 33024393 PMCID: PMC7520602 DOI: 10.3748/wjg.v26.i36.5408] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2020] [Revised: 08/02/2020] [Accepted: 08/29/2020] [Indexed: 02/06/2023] Open
Abstract
Gastric cancer is the fourth leading cause of cancer-related mortality across the globe, with a 5-year survival rate of less than 40%. In recent years, several applications of artificial intelligence (AI) have emerged in the gastric cancer field based on its efficient computational power and learning capacities, such as image-based diagnosis and prognosis prediction. AI-assisted diagnosis includes pathology, endoscopy, and computed tomography, while researchers in the prognosis circle focus on recurrence, metastasis, and survival prediction. In this review, a comprehensive literature search was performed on articles published up to April 2020 in the PubMed, Embase, Web of Science, and Cochrane Library databases. Thereby, the current status of AI applications in gastric cancer was systematically summarized. Moreover, future directions for this field were analyzed to overcome the risk of overfitting AI models and to enhance their accuracy as well as their applicability in clinical practice.
Collapse
Affiliation(s)
- Peng-Hui Niu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Lu-Lu Zhao
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Hong-Liang Wu
- Department of Anesthesiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Dong-Bing Zhao
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ying-Tai Chen
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| |
Collapse
|
31
|
Qie YY, Xue XF, Wang XG, Dang SC. Application of artificial intelligence in the diagnosis and prediction of gastric cancer. Artif Intell Gastroenterol 2020; 1:12-18. [DOI: 10.35712/aig.v1.i1.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Revised: 07/13/2020] [Accepted: 07/16/2020] [Indexed: 02/06/2023] Open
Abstract
Gastric cancer is the second leading cause of cancer deaths worldwide. Despite the great progress in the diagnosis and treatment of gastric cancer, the incidence and mortality rate of the disease in China are still relatively high. The high mortality rate of gastric cancer may be related to its low early diagnosis rate and poor prognosis. Much research has been focused on improving the sensitivity and specificity of diagnostic tools for gastric cancer, in order to more accurately predict the survival times of gastric cancer patients. Taking appropriate treatment measures is the key to reducing the mortality rate of gastric cancer. In the past decade, artificial intelligence technology has been applied to various fields of medicine as a branch of computer science. This article discusses the application and research status of artificial intelligence in gastric cancer diagnosis and survival prediction.
Collapse
Affiliation(s)
- Yin-Yin Qie
- Department of General Surgery, The Affiliated Hospital, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
| | - Xiao-Fei Xue
- Department of General Surgery, Pucheng Hospital, Weinan 715500, Shaanxi Province, China
| | - Xiao-Gang Wang
- Department of General Surgery, Pucheng Hospital, Weinan 715500, Shaanxi Province, China
| | - Sheng-Chun Dang
- Department of General Surgery, the Affiliated Hospital, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Department of General Surgery, Pucheng Hospital, Weinan 715500, Shaanxi Province, China
| |
Collapse
|
32
|
Jin P, Ji X, Kang W, Li Y, Liu H, Ma F, Ma S, Hu H, Li W, Tian Y. Artificial intelligence in gastric cancer: a systematic review. J Cancer Res Clin Oncol 2020; 146:2339-2350. [PMID: 32613386 DOI: 10.1007/s00432-020-03304-9] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Accepted: 06/26/2020] [Indexed: 02/08/2023]
Abstract
OBJECTIVE This study aims to systematically review the application of artificial intelligence (AI) techniques in gastric cancer and to discuss the potential limitations and future directions of AI in gastric cancer. METHODS A systematic review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed, EMBASE, the Web of Science, and the Cochrane Library were used to search for gastric cancer publications with an emphasis on AI that were published up to June 2020. The terms "artificial intelligence" and "gastric cancer" were used to search for the publications. RESULTS A total of 64 articles were included in this review. In gastric cancer, AI is mainly used for molecular bio-information analysis, endoscopic detection of Helicobacter pylori infection, chronic atrophic gastritis, early gastric cancer, invasion depth, and pathology recognition. AI may also be used to establish predictive models for evaluating lymph node metastasis, response to drug treatments, and prognosis. In addition, AI can be used for surgical training, skill assessment, and surgery guidance. CONCLUSIONS In the foreseeable future, AI applications can play an important role in gastric cancer management in the era of precision medicine.
Collapse
Affiliation(s)
- Peng Jin
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Xiaoyan Ji
- Department of Emergency Ward, First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin, 300193, China
| | - Wenzhe Kang
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yang Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Hao Liu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Fuhai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Shuai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Haitao Hu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Weikun Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yantao Tian
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
| |
Collapse
|
33
|
Gonçalves WGE, dos Santos MHDP, Lobato FMF, Ribeiro-dos-Santos Â, de Araújo GS. Deep learning in gastric tissue diseases: a systematic review. BMJ Open Gastroenterol 2020; 7:e000371. [PMID: 32337060 PMCID: PMC7170401 DOI: 10.1136/bmjgast-2019-000371] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 02/14/2020] [Accepted: 02/24/2020] [Indexed: 12/24/2022] Open
Abstract
Background In recent years, deep learning has gained remarkable attention in medical image analysis due to its capacity to provide results comparable to those of specialists and, in some cases, surpass them. Despite the emergence of deep learning research on gastric tissue diseases, few in-depth reviews address this topic. Method We performed a systematic review of applications of deep learning to gastric tissue disease analysis in digital histology, endoscopy and radiology images. Conclusions This review highlighted both the high potential of and the shortcomings in deep learning research applied to gastric cancer, ulcer, gastritis and non-malignant diseases. Our results demonstrate the effectiveness of gastric tissue analysis by deep learning applications. Moreover, we identified gaps in evaluation metrics and in image collection availability, which impact experimental reproducibility.
Collapse
Affiliation(s)
- Wanderson Gonçalves e Gonçalves
- Laboratório de Genética Humana e Médica - Instituto de Ciências Biológicas, Universidade Federal do Pará, Belém, Pará, Brazil
- Núcleo de Pesquisas em Oncologia, Universidade Federal do Pará, Belém, Pará, Brazil
| | | | | | - Ândrea Ribeiro-dos-Santos
- Laboratório de Genética Humana e Médica - Instituto de Ciências Biológicas, Universidade Federal do Pará, Belém, Pará, Brazil
- Núcleo de Pesquisas em Oncologia, Universidade Federal do Pará, Belém, Pará, Brazil
| | - Gilderlanio Santana de Araújo
- Laboratório de Genética Humana e Médica - Instituto de Ciências Biológicas, Universidade Federal do Pará, Belém, Pará, Brazil
| |
Collapse
|
34
|
Rączkowska A, Możejko M, Zambonelli J, Szczurek E. ARA: accurate, reliable and active histopathological image classification framework with Bayesian deep learning. Sci Rep 2019; 9:14347. [PMID: 31586139 PMCID: PMC6778075 DOI: 10.1038/s41598-019-50587-1] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Accepted: 09/16/2019] [Indexed: 02/07/2023] Open
Abstract
Machine learning algorithms hold the promise to effectively automate the analysis of histopathological images that are routinely generated in clinical practice. Any machine learning method used in the clinical diagnostic process has to be extremely accurate and, ideally, provide a measure of uncertainty for its predictions. Such accurate and reliable classifiers need enough labelled data for training, which requires time-consuming and costly manual annotation by pathologists. Thus, it is critical to minimise the amount of data needed to reach the desired accuracy by maximising the efficiency of training. We propose an accurate, reliable and active (ARA) image classification framework and introduce a new Bayesian Convolutional Neural Network (ARA-CNN) for classifying histopathological images of colorectal cancer. The model achieves exceptional classification accuracy, outperforming other models trained on the same dataset. The network outputs an uncertainty measurement for each tested image. We show that uncertainty measures can be used to detect mislabelled training samples and can be employed in an efficient active learning workflow. Using a variational dropout-based entropy measure of uncertainty in the workflow speeds up the learning process by roughly 45%. Finally, we utilise our model to segment whole-slide images of colorectal tissue and compute segmentation-based spatial statistics.
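The per-image uncertainty measure central to the ARA framework can be illustrated with a minimal sketch. Assuming T stochastic (dropout-enabled) forward passes per image, each yielding a softmax vector, the predictive entropy of their mean distribution serves as the uncertainty score, and unlabelled images can be ranked by it to prioritise annotation in an active learning loop. This is a simplification of the paper's variational-dropout entropy measure, and the function names are hypothetical:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution over T stochastic
    forward passes. mc_probs: array of shape (T, n_classes), each row
    a softmax output from one dropout-enabled pass."""
    p = np.asarray(mc_probs, dtype=float).mean(axis=0)   # average the passes
    p = np.clip(p, 1e-12, 1.0)                           # guard log(0)
    return float(-(p * np.log(p)).sum())

def rank_for_labelling(batch_mc_probs):
    """Order unlabelled images by descending uncertainty, so the most
    uncertain images are sent to the pathologist for annotation first."""
    entropies = [predictive_entropy(m) for m in batch_mc_probs]
    return np.argsort(entropies)[::-1]
```

An image on which the T passes agree confidently gets low entropy; one on which they disagree (or which they all find ambiguous) gets entropy near log(n_classes), flagging it for review or as a possible mislabel.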
Collapse
Affiliation(s)
- Alicja Rączkowska
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland
| | - Marcin Możejko
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland
| | - Joanna Zambonelli
- Department of Pathology, Medical University of Warsaw, Warsaw, Poland
| | - Ewa Szczurek
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland.
| |
Collapse
|
35
|
Jones AD, Graff JP, Darrow M, Borowsky A, Olson KA, Gandour-Edwards R, Datta Mitra A, Wei D, Gao G, Durbin-Johnson B, Rashidi HH. Impact of pre-analytical variables on deep learning accuracy in histopathology. Histopathology 2019; 75:39-53. [PMID: 30801768 DOI: 10.1111/his.13844] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 02/20/2019] [Indexed: 02/06/2023]
Abstract
AIMS Machine learning (ML) binary classification in diagnostic histopathology is an area of intense investigation. Several assumptions, including training image quality/format and the number of training images required, appear to be similar in many studies irrespective of the paucity of supporting evidence. We empirically compared training image file type, training set size, and two common convolutional neural networks (CNNs) using transfer learning (ResNet50 and SqueezeNet). METHODS AND RESULTS Thirty haematoxylin and eosin (H&E)-stained slides with carcinoma or normal tissue from three tissue types (breast, colon, and prostate) were photographed, generating 3000 partially overlapping images (1000 per tissue type). These lossless Portable Network Graphics (PNG) images were converted to lossy Joint Photographic Experts Group (JPG) images. Tissue type-specific binary classification ML models were developed by the use of all PNG or JPG images, and repeated with a subset of 500, 200, 100, 50, 30 and 10 images. Eleven models were generated for each tissue type, at each quantity of training images, for each file type, and for each CNN, resulting in 924 models. Internal accuracies and generalisation accuracies were compared. There was no meaningful significant difference in accuracies between PNG and JPG models. Models trained with more images did not invariably perform better. ResNet50 typically outperformed SqueezeNet. Models were generalisable within a tissue type but not across tissue types. CONCLUSIONS Lossy JPG images were not inferior to lossless PNG images in our models. Large numbers of unique H&E-stained slides were not required for training optimal ML models. This reinforces the need for an evidence-based approach to best practices for histopathological ML.
Collapse
Affiliation(s)
- Andrew D Jones
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - John Paul Graff
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Morgan Darrow
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Alexander Borowsky
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Kristin A Olson
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Regina Gandour-Edwards
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Ananya Datta Mitra
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Dongguang Wei
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | - Guofeng Gao
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| | | | - Hooman H Rashidi
- Department of Pathology and Laboratory Medicine, University of California Davis Health, Sacramento, CA, USA
| |
Collapse
|