201
Li S, Deng YQ, Hua HL, Li SL, Chen XX, Xie BJ, Zhu Z, Liu R, Huang J, Tao ZZ. Deep learning for locally advanced nasopharyngeal carcinoma prognostication based on pre- and post-treatment MRI. Comput Methods Programs Biomed 2022; 219:106785. [PMID: 35397409] [DOI: 10.1016/j.cmpb.2022.106785]
Abstract
PURPOSE We aimed to predict the prognosis of advanced nasopharyngeal carcinoma (stage III-IVa) from pre- and post-treatment MR images using deep learning (DL). METHODS A total of 206 patients with primary nasopharyngeal carcinoma who were diagnosed and treated at the Renmin Hospital of Wuhan University between June 2012 and January 2018 were retrospectively selected. A rectangular region of interest (ROI), which included the tumor area and surrounding tissues and organs, was delineated on each pre- and post-treatment MR image. Two Inception-ResNet-V2-based transfer learning models, named the Pre-model and the Post-model, were trained on the pre-treatment and post-treatment images, respectively. In addition, an ensemble learning model combining the Pre- and Post-models was established. The three models were evaluated by receiver operating characteristic (ROC) curves, confusion matrices, and Harrell's concordance index (C-index). High-risk-related gradient-weighted class activation mapping (Grad-CAM) images were generated from the DL models. RESULTS For the test cohort, the Pre-model, Post-model, and ensemble model yielded C-indices of 0.717 (95% CI: 0.639-0.795), 0.811 (95% CI: 0.745-0.877), and 0.830 (95% CI: 0.767-0.893), and AUCs of 0.741 (95% CI: 0.584-0.900), 0.806 (95% CI: 0.670-0.942), and 0.842 (95% CI: 0.718-0.967), respectively. The Post-model outperformed the Pre-model, indicating the importance of post-treatment images for prognosis prediction. All three DL models performed better than the TNM staging system (0.723, 95% CI: 0.567-0.879). The features highlighted on Grad-CAM images suggested that the areas around the tumor and lymph nodes were related to tumor prognosis. CONCLUSIONS The three established DL models based on pre- and post-treatment MR images outperform TNM staging. Post-treatment MR images are of great significance for prognosis prediction and could contribute to clinical decision-making.
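The models in this entry are compared via Harrell's concordance index (C-index). As a reference point, here is a minimal pure-Python sketch of how the C-index is computed for right-censored survival data (the function and the toy data are illustrative, not from the paper):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    times:  observed follow-up time per patient
    events: 1 if the event (e.g. progression) was observed, 0 if censored
    risks:  model-predicted risk scores (higher = worse prognosis)
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only if the shorter follow-up ended in an event
            if times[i] < times[j] and events[i] == 1:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1      # higher risk progressed earlier: concordant
                elif risks[i] == risks[j]:
                    concordant += 0.5    # tied risks count half
    return concordant / permissible

# toy cohort where predicted risk perfectly tracks outcome order
print(c_index([1, 2, 3, 4], [1, 1, 1, 0], [4, 3, 2, 1]))  # → 1.0
```

A C-index of 0.5 corresponds to random ordering, 1.0 to perfect concordance, which is why the ensemble's 0.830 reads as a meaningful gain over TNM staging's 0.723.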
Affiliation(s)
- Song Li
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
- Yu-Qin Deng
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
- Hong-Li Hua
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
- Sheng-Lan Li
- Department of Radiology, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
- Xi-Xiang Chen
- Department of Radiology, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
- Bao-Jun Xie
- Department of Radiology, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
- Zhiling Zhu
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital Affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430030, PR China
- Ruoyun Liu
- College of Mathematics and Computer Science, Wuhan Textile University, No.1 Fangzhi Road, Wuhan, Hubei 430200, PR China
- Jin Huang
- College of Mathematics and Computer Science, Wuhan Textile University, No.1 Fangzhi Road, Wuhan, Hubei 430200, PR China
- Ze-Zhang Tao
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China; Department of Otolaryngology-Head and Neck Surgery, Central Laboratory, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei 430060, PR China
202
Cheng N, Ren Y, Zhou J, Zhang Y, Wang D, Zhang X, Chen B, Liu F, Lv J, Cao Q, Chen S, Du H, Hui D, Weng Z, Liang Q, Su B, Tang L, Han L, Chen J, Shao C. Deep Learning-Based Classification of Hepatocellular Nodular Lesions on Whole-Slide Histopathologic Images. Gastroenterology 2022; 162:1948-1961.e7. [PMID: 35202643] [DOI: 10.1053/j.gastro.2022.02.025]
Abstract
BACKGROUND & AIMS Hepatocellular nodular lesions (HNLs) constitute a heterogeneous group of disorders. Differential diagnosis among these lesions, especially between high-grade dysplastic nodules (HGDNs) and well-differentiated hepatocellular carcinoma (WD-HCC), can be challenging, particularly in biopsy specimens. We aimed to develop a deep learning system to address this challenge and improve the histopathologic diagnosis of HNLs (WD-HCC, HGDN, low-grade DN, focal nodular hyperplasia, hepatocellular adenoma) and background tissues (nodular cirrhosis, normal liver tissue). METHODS Samples, consisting of surgical and biopsy specimens, were collected from 6 hospitals. Each specimen was reviewed by 2 to 3 subspecialists. Four deep neural networks (ResNet50, InceptionV3, Xception, and an Ensemble) were used. Their performance was evaluated by confusion matrix, receiver operating characteristic curve, classification map, and heat map. The predictive efficiency of the optimal model was further verified by comparison with that of 9 pathologists. RESULTS We obtained 213,280 patches from 1115 whole-slide images of 738 patients. An optimal model, named the hepatocellular-nodular artificial intelligence model (HnAIM), was chosen based on F1 score and area under the curve, with an overall 7-category area under the curve of 0.935 in the independent external validation cohort. For biopsy specimens, the agreement rate with the subspecialists' majority opinion was higher for HnAIM than for the 9 pathologists at both the patch and whole-slide image levels. CONCLUSIONS We are the first to develop a deep learning diagnostic model for HNLs; it performed well and contributed to enhancing the diagnosis rate of early HCC and the risk stratification of patients with HNLs. Furthermore, HnAIM had significant advantages in patch-level recognition, with important diagnostic implications for fragmentary or scarce biopsy specimens.
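The Ensemble in this entry combines the three base networks' outputs. One common way to do this, shown as an illustrative sketch (the abstract does not specify the exact fusion rule), is soft voting: average each model's class probabilities and take the argmax:

```python
def ensemble_predict(prob_lists):
    """Soft-voting ensemble: average each base model's class probabilities
    over the same class order, then pick the class with the highest mean.

    prob_lists: one probability vector per base model.
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean_probs = [sum(p[c] for p in prob_lists) / n_models
                  for c in range(n_classes)]
    # argmax over the averaged distribution
    return max(range(n_classes), key=mean_probs.__getitem__), mean_probs

# three hypothetical base models voting over 3 lesion classes
label, probs = ensemble_predict([[0.6, 0.3, 0.1],
                                 [0.2, 0.5, 0.3],
                                 [0.5, 0.4, 0.1]])
print(label)  # → 0
```

Soft voting tends to beat any single base model when the models' errors are not perfectly correlated, which is consistent with the Ensemble being selected as optimal here.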
Affiliation(s)
- Na Cheng
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Yong Ren
- Digestive Diseases Center, The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, China; Center for Artificial Intelligence in Medicine, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China
- Jing Zhou
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Yiwang Zhang
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Deyu Wang
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xiaofang Zhang
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bing Chen
- Department of Pathology, The Third Affiliated Hospital of Sun Yat-sen University Yuedong Hospital, Meizhou, China
- Fang Liu
- Department of Pathology, FoShan First People's Hospital, Foshan, China
- Jin Lv
- Department of Pathology, FoShan First People's Hospital, Foshan, China
- Qinghua Cao
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Sijin Chen
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Hong Du
- Department of Pathology, GuangZhou First People's Hospital, Guangzhou, China
- Dayang Hui
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Zijin Weng
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Qiong Liang
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bojin Su
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Luying Tang
- Department of Pathology, The Third Affiliated Hospital of Sun Yat-sen University Lingnan Hospital, Guangzhou, China
- Lanqing Han
- Center for Artificial Intelligence in Medicine, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China
- Jianning Chen
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Chunkui Shao
- Department of Pathology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
203
Liu Z, Liu Y, Zhang W, Hong Y, Meng J, Wang J, Zheng S, Xu X. Deep learning for prediction of hepatocellular carcinoma recurrence after resection or liver transplantation: a discovery and validation study. Hepatol Int 2022; 16:577-589. [PMID: 35352293] [PMCID: PMC9174321] [DOI: 10.1007/s12072-022-10321-y]
Abstract
BACKGROUND There is a growing need for new, improved classifiers of prognosis in hepatocellular carcinoma (HCC) patients to stratify them effectively. METHODS A deep learning model was developed on a total of 1118 patients from 4 independent cohorts. A nucleus map set (n = 120) was used to train a U-net to capture nuclear architecture. The training set (n = 552) included HCC patients who had been treated by resection. The liver transplantation (LT) set (n = 144) contained patients with HCC who had been treated by LT. The training set and its nuclear architectural information extracted by the U-net were used to train a MobileNet V2-based classifier (MobileNetV2_HCC_class). The classifier was then independently tested on the LT set and externally validated on the TCGA set (n = 302). The primary outcome was recurrence-free survival (RFS). RESULTS MobileNetV2_HCC_class was a strong predictor of RFS in both the LT and TCGA sets. The classifier provided a hazard ratio of 3.44 (95% CI 2.01-5.87, p < 0.001) for high versus low risk in the LT set, and 2.55 (95% CI 1.64-3.99, p < 0.001) after adjustment for known prognostic factors that were significant in univariable analyses of the same cohort. MobileNetV2_HCC_class maintained relatively high discriminatory power (time-dependent accuracy and area under the curve) compared with other factors after LT or resection in the independent validation sets (LT and TCGA). Net reclassification improvement (NRI) analysis indicated that MobileNetV2_HCC_class provided better net benefit beyond AJCC stage and the other independent factors. A pathological review demonstrated that the tumoral areas with the highest recurrence predictability exhibited the presence of stroma, a high degree of cytological atypia, nuclear hyperchromasia, and a lack of immune cell infiltration. CONCLUSION A prognostic classifier for clinical purposes has been proposed based on deep learning applied to histological slides from HCC patients. This classifier helps refine the prognostic prediction of HCC patients and identifies those who may benefit from more intensive management.
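Slide-level prognosis models of this kind typically score many tiles per slide, pool the tile scores into one slide-level risk, and dichotomise patients for survival comparison. A hedged sketch of such a pipeline (mean pooling and a median cutoff are common conventions, not details confirmed by this abstract):

```python
from statistics import mean, median

def slide_risk(tile_scores):
    """Aggregate per-tile classifier outputs into one slide-level risk
    score; mean pooling is used here as a simple illustrative choice."""
    return mean(tile_scores)

def stratify(slide_scores):
    """Dichotomise slides into high-/low-risk groups at the cohort
    median, a common convention before Kaplan-Meier comparison."""
    cut = median(slide_scores)
    return ["high" if s > cut else "low" for s in slide_scores]

# three hypothetical slides, each with per-tile risk scores
scores = [slide_risk(t) for t in ([0.9, 0.8], [0.2, 0.1], [0.6, 0.7])]
print(stratify(scores))  # → ['high', 'low', 'low']
```

The resulting high/low labels are what feed a Cox model or log-rank test to produce hazard ratios like those reported above.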
Affiliation(s)
- Zhikun Liu
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Yuanpeng Liu
- Department of Electrical Engineering and Computer Science, Syracuse University, 4-206 Center for Science and Technology, Syracuse, NY, 13244-4100, USA
- Wenhui Zhang
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Yuan Hong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, 310058, China
- Jinwen Meng
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Jianguo Wang
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Shusen Zheng
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Zhejiang University School of Medicine, 79 Qingchun Road, Hangzhou, 310003, China
- NHC Key Laboratory of Combined Multi-organ Transplantation, Hangzhou, 310003, China
- Xiao Xu
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- NHC Key Laboratory of Combined Multi-organ Transplantation, Hangzhou, 310003, China
204
Interpretable tumor differentiation grade and microsatellite instability recognition in gastric cancer using deep learning. Lab Invest 2022; 102:641-649. [PMID: 35177797] [DOI: 10.1038/s41374-022-00742-6]
Abstract
Gastric cancer possesses great histological and molecular diversity, which creates obstacles to rapid and efficient diagnosis. Classic diagnoses either depend on the pathologist's judgment, which relies heavily on subjective experience, or on time-consuming molecular assays for subtype diagnosis. Here, we present a deep learning (DL) system that achieves interpretable tumor differentiation grade and microsatellite instability (MSI) recognition in gastric cancer directly from hematoxylin-eosin (HE)-stained whole-slide images (WSIs). WSIs from 467 patients were divided into three cohorts: a training cohort with 348 annotated WSIs, a testing cohort with 88 annotated WSIs, and an integration testing cohort with 31 original WSIs without tumor contour annotation. First, the DL models achieved tumor differentiation recognition with F1 scores of 0.8615 and 0.8977 for the poorly differentiated adenocarcinoma (PDA) and well-differentiated adenocarcinoma (WDA) classes, respectively. Their ability to extract pathological features related to glandular structure formation, the key to distinguishing between PDA and WDA, increased the interpretability of the DL models. Second, the DL models achieved MSI status recognition with a patient-level accuracy of 86.36% directly from HE-stained WSIs in the testing cohort. Finally, the integrated end-to-end system achieved patient-level MSI recognition from original HE-stained WSIs with an accuracy of 83.87% in the integration testing cohort with no tumor contour annotation. The proposed system therefore demonstrated high accuracy and interpretability, which can potentially promote the implementation of artificial intelligence in healthcare.
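The per-class F1 scores quoted in this entry combine precision and recall for one class treated as positive. A minimal sketch of the computation on toy labels (class names reused from the abstract; the data are invented):

```python
def f1_score(y_true, y_pred, positive):
    """Per-class F1 = 2*precision*recall / (precision+recall),
    treating one label (e.g. 'PDA' or 'WDA') as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# invented toy predictions over the two differentiation classes
y_true = ["PDA", "PDA", "WDA", "WDA", "PDA"]
y_pred = ["PDA", "WDA", "WDA", "WDA", "PDA"]
print(round(f1_score(y_true, y_pred, "PDA"), 3))  # → 0.8
```

Reporting F1 per class, as the paper does, guards against one frequent class dominating a single accuracy number.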
205
Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Med Image Anal 2022; 80:102487. [PMID: 35671591] [DOI: 10.1016/j.media.2022.102487]
Abstract
Tissue-level semantic segmentation is a vital step in computational pathology. Fully supervised models have already achieved outstanding performance with dense pixel-level annotations. However, drawing such labels on giga-pixel whole-slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, thereby reducing annotation effort. We propose a two-step model comprising a classification phase and a segmentation phase. In the classification phase, a CAM-based model generates pseudo masks from patch-level labels. In the segmentation phase, we achieve tissue semantic segmentation with our proposed multi-layer pseudo-supervision. Several technical novelties are introduced to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we introduce a new weakly-supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conduct several experiments to evaluate our proposed model on two datasets. Our proposed model outperforms five state-of-the-art WSSS approaches and achieves quantitative and qualitative results comparable to those of the fully supervised model, with only around a 2% gap in MIoU and FwIoU. Compared with manual labeling on a randomly sampled set of 100 patches, patch-level labeling greatly reduces the annotation time, from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
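The MIoU and FwIoU gaps quoted in this entry are standard segmentation metrics derived from a class confusion matrix. An illustrative pure-Python computation (the confusion matrix here is invented):

```python
def miou_fwiou(confusion):
    """Mean IoU and frequency-weighted IoU from a class confusion matrix
    (rows = ground truth, columns = prediction)."""
    n = len(confusion)
    total = sum(map(sum, confusion))
    ious, weights = [], []
    for c in range(n):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp                       # missed pixels of class c
        fp = sum(confusion[r][c] for r in range(n)) - tp  # pixels wrongly called c
        ious.append(tp / (tp + fp + fn))
        weights.append(sum(confusion[c]) / total)         # class frequency in GT
    miou = sum(ious) / n
    fwiou = sum(w * i for w, i in zip(weights, ious))
    return miou, fwiou

# toy 2-class confusion matrix
m, f = miou_fwiou([[8, 2],
                   [1, 9]])
print(round(m, 3), round(f, 3))  # → 0.739 0.739
```

FwIoU weights each class IoU by how common the class is, so rare tissue types count less than in the unweighted MIoU.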
206
Federated Learning with Dynamic Model Exchange. Electronics 2022. [DOI: 10.3390/electronics11101530]
Abstract
Large amounts of data are needed to train accurate, robust machine learning models, but acquiring these data is complicated by strict regulations. While many business sectors often have unused data silos, researchers face the problem of not being able to obtain large amounts of real-world data. This is especially true in the healthcare sector, where transferring data is often associated with bureaucratic overhead due to, for example, increased security requirements and privacy laws. Federated Learning is intended to circumvent this problem by allowing training to take place directly on the data owner's side, without sending data to a central location such as a server. Several frameworks currently exist for this purpose, such as TensorFlow Federated, Flower, and PySyft/PyGrid. These frameworks define models for both the server and the clients, since the coordination of the training is performed by a server. Here, we present a practical method that supports dynamic exchange of the model, so that the model is not statically stored in source code. In this process, the model architecture and training configuration are defined by the researchers and sent to the server, which passes the settings on to the clients. In addition, the model is transformed by the data owner to incorporate Differential Privacy. To compare centralised learning with Federated Learning and assess the impact of Differential Privacy, performance and security evaluation experiments were conducted. Federated Learning was found to achieve results on par with centralised learning, and the use of Differential Privacy improved the robustness of the model against Membership Inference Attacks in an honest-but-curious setting.
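Frameworks like those named in this entry coordinate training through a server that merges client updates. The canonical merge rule is federated averaging (FedAvg, McMahan et al.), sketched here on flattened weight vectors (toy data, not this paper's implementation):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server combines client model weights,
    weighting each client by its local dataset size (the FedAvg rule)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# two hypothetical clients with flattened 3-parameter models;
# the second client has 3x more local data, so it pulls the average
merged = fed_avg([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]], [10, 30])
print(merged)  # → [2.5, 3.5, 4.5]
```

In a Differential-Privacy variant, each client would clip and add noise to its update before this averaging step, which is what the paper's "transformed by the data owner" step refers to.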
207
Lin A, Qi C, Li M, Guan R, Imyanitov EN, Mitiushkina NV, Cheng Q, Liu Z, Wang X, Lyu Q, Zhang J, Luo P. Deep Learning Analysis of the Adipose Tissue and the Prediction of Prognosis in Colorectal Cancer. Front Nutr 2022; 9:869263. [PMID: 35634419] [PMCID: PMC9131178] [DOI: 10.3389/fnut.2022.869263]
Abstract
Research has shown that the lipid microenvironment surrounding colorectal cancer (CRC) is closely associated with the occurrence, development, and metastasis of CRC. Using pathological images from the National Center for Tumor Diseases (NCT) and University Medical Center Mannheim (UMM) database and the ImageNet data set, a VGG19 model was pre-trained, and a deep convolutional neural network (CNN), VGG19CRC, was then trained by transfer learning. With the VGG19CRC model, adipose tissue scores were calculated for TCGA-CRC hematoxylin and eosin (H&E) images and for images from patients at Zhujiang Hospital of Southern Medical University and the First People's Hospital of Chenzhou. Kaplan-Meier (KM) analysis was used to compare patients' overall survival (OS). The XCell and MCP-Counter algorithms were used to evaluate patients' immune cell scores. Gene set enrichment analysis (GSEA) and single-sample GSEA (ssGSEA) were used to analyze upregulated and downregulated pathways. In TCGA-CRC, patients with high adipose scores (high-ADI) had significantly shorter OS than those with low-ADI CRC. In a validation cohort from Zhujiang Hospital of Southern Medical University (Local-CRC1), patients with high-ADI CRC had worse OS than those with low-ADI CRC. In another validation cohort from the First People's Hospital of Chenzhou (Local-CRC2), patients with low-ADI CRC had significantly longer OS than patients with high-ADI CRC. We developed a deep convolutional network to segment various tissues from pathological H&E images of CRC and automatically quantify ADI. This allowed us to further analyze and predict the survival of CRC patients from information in their segmented pathological tissue images, such as tissue components and the tumor microenvironment.
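The Kaplan-Meier comparisons in this entry rest on the product-limit estimator. A minimal pure-Python sketch for right-censored data (the toy cohort is invented, illustrative only):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate S(t).

    times:  follow-up time per patient
    events: 1 = death observed, 0 = censored
    Returns (time, S(t)) pairs at each distinct event time.
    """
    s, curve = 1.0, []
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(ti >= t for ti in times)           # still being followed at t
        deaths = sum(ti == t and e == 1 for ti, e in zip(times, events))
        s *= 1 - deaths / at_risk                        # multiply conditional survival
        curve.append((t, s))
    return curve

# 4 toy patients: deaths at t=1 and t=3, censoring at t=2 and t=4
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))  # → [(1, 0.75), (3, 0.375)]
```

Plotting such curves for the high-ADI and low-ADI groups, and testing the difference (e.g. with a log-rank test), is the standard way OS comparisons like those above are made.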
Affiliation(s)
- Anqi Lin
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Chang Qi
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Mujiao Li
- College of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Rui Guan
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Evgeny N. Imyanitov
- Department of Tumor Growth Biology, N.N. Petrov Institute of Oncology, St. Petersburg, Russia
- Natalia V. Mitiushkina
- Department of Tumor Growth Biology, N.N. Petrov Institute of Oncology, St. Petersburg, Russia
- Quan Cheng
- Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China
- Zaoqu Liu
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Xiaojun Wang
- First People's Hospital of Chenzhou City, Chenzhou, China
- Qingwen Lyu
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Correspondence: Qingwen Lyu
- Jian Zhang
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Peng Luo
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
208
Chen S, Zhang M, Wang J, Xu M, Hu W, Wee L, Dekker A, Sheng W, Zhang Z. Automatic Tumor Grading on Colorectal Cancer Whole-Slide Images: Semi-Quantitative Gland Formation Percentage and New Indicator Exploration. Front Oncol 2022; 12:833978. [PMID: 35646672] [PMCID: PMC9130480] [DOI: 10.3389/fonc.2022.833978]
Abstract
Tumor grading is an essential factor for cancer staging and survival prognostication. The widely used WHO grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole-slide images (WSIs). We developed a fully automated approach for stratifying colorectal cancer (CRC) patients' risk of mortality directly from histology WSIs, based on gland formation. A tissue classifier was trained to categorize regions on a WSI as glands, stroma, immune cells, background, or other tissues. A gland formation classifier was trained on expert annotations to categorize regions as different degrees of tumor gland formation versus normal tissue. Glandular formation density could thus be estimated from this tissue categorization and gland formation information; the estimate, called the semi-quantitative gland formation ratio (SGFR), was used as a prognostic factor in survival analysis. We evaluated the gland formation percentage and validated it against the WHO cutoff point. Survival data and gland formation maps were then used to train a spatial pyramid pooling survival network (SPPSN) as a deep survival model. Comparing the survival prediction performance of the estimated gland formation percentage and the SPPSN deep survival grade, the deep survival grade showed improved discrimination. A univariable Cox model yielded moderate discrimination with both SGFR (c-index 0.62) and the deep survival grade (c-index 0.64) in an independent institutional test set. The deep survival grade also showed better discrimination in multivariable Cox regression, significantly increasing the c-index of the baseline Cox model in both the validation set and the external test set, whereas the inclusion of SGFR improved the Cox model only marginally in the external test set and not at all in the validation set.
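The SGFR in this entry is, in essence, the share of tumor tissue that forms glands. A hedged sketch of how such a ratio could be computed from per-tile classifier labels (the label names and the exact denominator are assumptions for illustration, not taken from the paper):

```python
from collections import Counter

def sgfr(tile_labels):
    """Semi-quantitative gland formation ratio: fraction of tumour tissue
    classified as gland-forming. Label names here are illustrative; the
    paper's tissue classifier distinguishes more categories."""
    counts = Counter(tile_labels)
    gland = counts["gland"]
    tumor = gland + counts["non_gland_tumor"]  # assumed denominator
    return gland / tumor if tumor else 0.0

# hypothetical tile labels for one slide; stroma is excluded from the ratio
labels = ["gland"] * 6 + ["non_gland_tumor"] * 2 + ["stroma"] * 4
print(sgfr(labels))  # → 0.75
```

A continuous ratio like this can then be thresholded at a WHO-style cutoff or fed directly into a Cox model, as the paper does when comparing SGFR against the deep survival grade.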
Affiliation(s)
- Shenlun Chen
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- MAASTRO (Department of Radiotherapy), GROW School of Oncology and Developmental Biology, Maastricht University and Maastricht University Medical Centre+, Maastricht, Netherlands
- Meng Zhang
- Department of Pathology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Midie Xu
- Department of Pathology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weigang Hu
- Department of Pathology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Leonard Wee
- MAASTRO (Department of Radiotherapy), GROW School of Oncology and Developmental Biology, Maastricht University and Maastricht University Medical Centre+, Maastricht, Netherlands
- Andre Dekker
- MAASTRO (Department of Radiotherapy), GROW School of Oncology and Developmental Biology, Maastricht University and Maastricht University Medical Centre+, Maastricht, Netherlands
- Weiqi Sheng
- Department of Pathology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
209
Few-Shot Learning with Collateral Location Coding and Single-Key Global Spatial Attention for Medical Image Classification. Electronics 2022. [DOI: 10.3390/electronics11091510]
Abstract
Humans are born with the ability to learn quickly: discerning objects from a few samples, acquiring new skills in a short period of time, and making decisions based on limited prior experience and knowledge. Existing deep learning models for medical image classification, by contrast, often rely on large numbers of labeled training samples, and a comparably fast learning ability has yet to emerge in deep neural networks. In addition, retraining a deep model when it encounters classes it has never seen before requires a large amount of time and computing resources. For healthcare applications, however, enabling a model to generalize to new clinical scenarios is of great importance. Existing image classification methods cannot explicitly use the location information of each pixel, making them insensitive to cues related only to location; they also rely on local convolution and so cannot properly utilize global information, which is essential for image classification. To alleviate these problems, we propose collateral location coding, which helps the network explicitly exploit the location information of each pixel so that location-only cues become easier to recognize, and single-key global spatial attention, which lets the pixels at each location perceive global spatial information in a low-cost way. Experimental results on three medical image benchmark datasets demonstrate that our proposed algorithm outperforms state-of-the-art approaches in both effectiveness and generalization ability.
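Single-key global spatial attention can be read as attention pooling with one shared key: every spatial position is scored against the same key vector, and the softmax weights pool the features globally. An illustrative pure-Python sketch under that reading (not necessarily the paper's exact formulation):

```python
import math

def single_key_attention(features, key):
    """Attention pooling with one shared key: each spatial position is
    scored by a dot product with the same key vector, and the features
    are pooled with the resulting softmax weights, so every position
    contributes to one global summary.

    features: list of per-position feature vectors; key: one vector.
    """
    scores = [sum(k * f for k, f in zip(key, feat)) for feat in features]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    pooled = [sum(w * feat[d] for w, feat in zip(weights, features))
              for d in range(dim)]
    return weights, pooled

# a key strongly aligned with the first position's feature dominates the pooling
weights, pooled = single_key_attention([[1.0, 0.0], [0.0, 1.0]], [10.0, 0.0])
print([round(w, 3) for w in weights])  # → [1.0, 0.0]
```

Using a single shared key instead of per-position keys is what keeps the cost low: scoring is one dot product per position rather than a full pairwise attention matrix.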
210
Abbet C, Studer L, Fischer A, Dawson H, Zlobec I, Bozorgtabar B, Thiran JP. Self-Rule to Multi-Adapt: Generalized Multi-source Feature Learning Using Unsupervised Domain Adaptation for Colorectal Cancer Tissue Detection. Med Image Anal 2022; 79:102473. [DOI: 10.1016/j.media.2022.102473]
211
Xiang F, Xu X. CircRNA F-circEA-2a Suppresses the Role of miR-3613-3p in Colorectal Cancer by Direct Sponging and Predicts Poor Survival. Cancer Manag Res 2022; 14:1825-1833. [PMID: 35652063] [PMCID: PMC9148923] [DOI: 10.2147/cmar.s351518]
Abstract
Purpose CircRNA F-circEA-2a and miR-3613-3p are two recently identified cancer-related RNAs. To date, their participation in colorectal cancer (CRC) is unknown; this research was therefore conducted to analyze their roles in CRC. Patients and Methods Plasma and paired CRC and non-tumor tissues from CRC patients (n=64) and plasma samples from healthy controls (HCs, n=64) were collected. F-circEA-2a and miR-3613-3p levels in these samples were analyzed using RT-qPCR. The 64 CRC patients were followed up for five years to analyze the prognostic value of plasma F-circEA-2a in CRC. The direct interaction between wild-type F-circEA-2a (F-circEA-2a-wt) or mutant F-circEA-2a (F-circEA-2a-mut) and miR-3613-3p was analyzed by RNA-RNA pulldown assay. The role of F-circEA-2a and miR-3613-3p in regulating each other's expression was analyzed by overexpression assay, and their roles in cell proliferation were analyzed using a BrdU assay. The role of F-circEA-2a in regulating EZH2 expression was analyzed by RT-qPCR and Western blot. Results F-circEA-2a was overexpressed in CRC, while miR-3613-3p was under-expressed. Most patients who died during follow-up had high F-circEA-2a levels. F-circEA-2a-wt, but not F-circEA-2a-mut, directly interacted with miR-3613-3p. F-circEA-2a and miR-3613-3p showed no role in regulating each other's expression. F-circEA-2a reduced the inhibitory effects of miR-3613-3p on cell proliferation and upregulated EZH2 at both the mRNA and protein levels. Conclusion F-circEA-2a may suppress the role of miR-3613-3p in CRC by direct sponging and predicts poor survival.
Affiliation(s)
- Fu Xiang
- Department of General Surgery, The First Affiliated Hospital of Dalian Medical University, Dalian City, Liaoning Province, People’s Republic of China
- Xuedong Xu
- Department of General Surgery, The First Affiliated Hospital of Dalian Medical University, Dalian City, Liaoning Province, People’s Republic of China
- Correspondence: Xuedong Xu, Department of General Surgery, The First Affiliated Hospital of Dalian Medical University, No. 5 Longbin Road, Dalian City, Liaoning Province, 116000, People’s Republic of China, Tel +86-83635963-7098, Email
212
Hassan T, Javed S, Mahmood A, Qaiser T, Werghi N, Rajpoot N. Nucleus Classification in Histology Images Using Message Passing Network. Med Image Anal 2022; 79:102480. [DOI: 10.1016/j.media.2022.102480]
213
Predicting Colorectal Cancer Using Residual Deep Learning with Nursing Care. Contrast Media Mol Imaging 2022; 2022:7996195. [PMID: 35291423] [PMCID: PMC8898865] [DOI: 10.1155/2022/7996195]
Abstract
Presently, colorectal cancer is the second most dangerous cancer, affecting around 13% of people, and it requires an effective image analysis and earlier cancer prediction (IAECP) system to reduce the mortality rate. Here, the IAECP system uses MRI radio imaging to predict colorectal cancer. During this process, high- and low-level features are required to examine cancer at an earlier stage. Due to the limitations of the conventional feature extraction process, both kinds of features are difficult to extract from cancer-affected locations. Hence, a deep learning system (DLS) is used to examine the entire bowel MRI image to identify the cancer-affected location and to perform feature extraction and feature training. Furthermore, the DLS-based IAECP system helps improve the overall colorectal cancer identification accuracy for further processing. The derived bowel features are trained by applying a residual convolution network, which minimizes the error between predicted and actual values. Finally, the test query images are compared with the trained images by applying a sum of absolute cross-correlation (SACC) template feature-matching algorithm. The experimental process is performed using a publicly available set of 100,000 histological images. Moreover, the introduced method does not use generic features; instead, the deep learning features help improve the overall IAECP prediction rate (99.8%) in lab-scale analysis.
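The abstract only names the matching step; the following is a minimal, dependency-free sketch of template feature matching by summed absolute cross-correlation, assuming 1-D feature vectors and assuming that SACC denotes summing the absolute cross-correlation responses. The function names and the exact matching criterion are illustrative, not taken from the paper.

```python
def cross_correlation(template, signal):
    """Sliding dot product of a template over a 1-D feature vector."""
    n, m = len(signal), len(template)
    return [sum(template[j] * signal[i + j] for j in range(m))
            for i in range(n - m + 1)]

def sacc_score(template, signal):
    """Sum of absolute cross-correlation responses (assumed SACC criterion)."""
    return sum(abs(v) for v in cross_correlation(template, signal))

def best_match(query_features, trained_templates):
    """Return the index of the trained template with the highest SACC score."""
    scores = [sacc_score(t, query_features) for t in trained_templates]
    return max(range(len(scores)), key=scores.__getitem__)
```

A query image's feature vector would be compared against each trained class template, and the class of the highest-scoring template returned.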
214
Sandeman K, Blom S, Koponen V, Manninen A, Juhila J, Rannikko A, Ropponen T, Mirtti T. AI Model for Prostate Biopsies Predicts Cancer Survival. Diagnostics (Basel) 2022; 12:1031. [PMID: 35626187] [PMCID: PMC9139241] [DOI: 10.3390/diagnostics12051031]
Abstract
An artificial intelligence (AI) algorithm for prostate cancer detection and grading was developed for clinical diagnostics on biopsies. The study cohort included 4221 scanned slides from 872 biopsy sessions at the HUS Helsinki University Hospital during 2016–2017 and a subcohort of 126 patients treated by robot-assisted radical prostatectomy (RALP) during 2016–2019. In the validation cohort (n = 391), the model detected cancer with a sensitivity of 98% and specificity of 98% (weighted kappa 0.96 compared with the pathologist’s diagnosis). Algorithm-based detection of the grade area recapitulated the pathologist’s grade group. The area of AI-detected cancer was associated with extra-prostatic extension (G5 OR: 48.52; 95% CI 1.11–8.33), seminal vesicle invasion (cribriform G4 OR: 2.46; 95% CI 0.15–1.7; G5 OR: 5.58; 95% CI 0.45–3.42), and lymph node involvement (cribriform G4 OR: 2.66; 95% CI 0.2–1.8; G5 OR: 4.09; 95% CI 0.22–3). Algorithm-detected grade group 3–5 prostate cancer indicated an increased risk of biochemical recurrence compared with grade groups 1–2 (HR: 5.91; 95% CI 1.96–17.83). This study showed that a deep learning model can not only find and grade prostate cancer on biopsies comparably to pathologists but also predict adverse staging and the probability of recurrence after surgical treatment.
Affiliation(s)
- Kevin Sandeman
- Medicum and Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, P.O. Box 63, 00014 Helsinki, Finland
- Department of Pathology, Division of Laboratory Medicine, Skåne University Hospital, Jan Waldenström Gata 59, 20502 Malmö, Sweden
- Correspondence:
- Sami Blom
- Aiforia Technologies Plc., Tukholmankatu 8, 00290 Helsinki, Finland
- Ville Koponen
- Aiforia Technologies Plc., Tukholmankatu 8, 00290 Helsinki, Finland
- Anniina Manninen
- Aiforia Technologies Plc., Tukholmankatu 8, 00290 Helsinki, Finland
- Juuso Juhila
- Aiforia Technologies Plc., Tukholmankatu 8, 00290 Helsinki, Finland
- Antti Rannikko
- Medicum and Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, P.O. Box 63, 00014 Helsinki, Finland
- Department of Urology, Helsinki University Hospital, P.O. Box 340, 00029 Helsinki, Finland
- Tuomas Ropponen
- Aiforia Technologies Plc., Tukholmankatu 8, 00290 Helsinki, Finland
- Tuomas Mirtti
- Medicum and Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, P.O. Box 63, 00014 Helsinki, Finland
- Department of Pathology, HUSLAB Laboratory Services, Helsinki University Hospital, P.O. Box 720, 00029 Helsinki, Finland
215
Communicator-Driven Data Preprocessing Improves Deep Transfer Learning of Histopathological Prediction of Pancreatic Ductal Adenocarcinoma. Cancers (Basel) 2022; 14:1964. [PMID: 35454869] [PMCID: PMC9031738] [DOI: 10.3390/cancers14081964]
Abstract
Simple Summary Pancreatic cancer has a dismal prognosis and its diagnosis can be challenging. Histopathological slides can be digitalized and their analysis can then be supported by computer algorithms. For this purpose, computer algorithms (neural networks) need to be trained to detect the desired tissue type (e.g., pancreatic cancer). However, raw training data often contain many different tissue types. Here we show a preprocessing step using two communicators that sort unfitting tissue tiles into a new dataset class. Using the improved dataset, neural networks distinguished pancreatic cancer from other tissue types on digitalized histopathological slides, including lymph node metastases. Abstract Pancreatic cancer is a fatal malignancy with poor prognosis and limited treatment options. Early detection in primary and secondary locations is critical, but fraught with challenges. While digital pathology can assist with the classification of histopathological images, the training of such networks always relies on a ground truth, which is frequently compromised as tissue sections contain several types of tissue entities. Here we show that pancreatic cancer can be detected on hematoxylin and eosin (H&E) sections by convolutional neural networks using deep transfer learning. To improve the ground truth, we describe a preprocessing data clean-up process using two communicators that were generated through existing and new datasets. Specifically, the communicators moved image tiles containing adipose tissue and background to a new data class. Hence, the original dataset exhibited improved labeling and, consequently, a higher ground-truth accuracy. Deep transfer learning of a ResNet18 network resulted in a five-class accuracy of about 94% on test data images. The network was validated with independent tissue sections composed of healthy pancreatic tissue, pancreatic ductal adenocarcinoma, and pancreatic cancer lymph node metastases. The screening of different models and hyperparameter fine-tuning were performed to optimize the performance on the independent tissue sections. Taken together, we introduce a step of data preprocessing via communicators as a means of improving the ground truth during deep transfer learning and hyperparameter tuning to identify pancreatic ductal adenocarcinoma primary tumors and metastases in histological tissue sections.
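The communicator-based clean-up described above amounts to a relabeling pass over the tile dataset. A minimal sketch, with the trained communicator networks replaced by arbitrary predicate functions; all names, including the reject class label, are illustrative assumptions rather than the authors' implementation:

```python
def clean_dataset(tiles, communicators, reject_class="background_adipose"):
    """Relabel tiles that any communicator flags as unfitting.

    tiles: list of (tile, label) pairs.
    communicators: predicate functions returning True when a tile should
    be moved out of its original class (e.g. adipose tissue, background).
    """
    cleaned = []
    for tile, label in tiles:
        if any(flags(tile) for flags in communicators):
            cleaned.append((tile, reject_class))  # moved to the new class
        else:
            cleaned.append((tile, label))         # original ground truth kept
    return cleaned
```

The main network is then trained on the cleaned labels, so tiles that would otherwise corrupt the ground truth of a tissue class are isolated in their own class instead.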
216
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles that focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
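The "feature extractor" strategy the review recommends keeps the pretrained backbone frozen and trains only a lightweight classifier on its outputs. Below is a dependency-free toy sketch of that pattern: a hand-written fixed function stands in for the frozen backbone, and a nearest-centroid classifier stands in for the trained head. In practice the backbone would be a pretrained ResNet or Inception with its classification layer removed; everything here is illustrative, not the review's code.

```python
def backbone(image):
    """Stand-in for a frozen pretrained CNN: maps an image (flat list of
    floats) to a 2-D feature vector. Its 'weights' are never updated."""
    return [sum(image) / len(image), max(image) - min(image)]

def fit_head(samples):
    """Train only the head: here, per-class centroids of frozen features."""
    feats = {}
    for image, label in samples:
        feats.setdefault(label, []).append(backbone(image))
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in feats.items()}

def predict(centroids, image):
    """Classify by nearest centroid in the frozen feature space."""
    f = backbone(image)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

Because only the head is fitted, training touches far fewer parameters than fine-tuning the whole network, which is the computational saving the review's conclusion refers to.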
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
217
Heinz CN, Echle A, Foersch S, Bychkov A, Kather JN. The future of artificial intelligence in digital pathology - results of a survey across stakeholder groups. Histopathology 2022; 80:1121-1127. [PMID: 35373378] [DOI: 10.1111/his.14659]
Abstract
AIMS Artificial intelligence (AI) provides a powerful tool to extract information from digitized histopathology whole slide images. In the last five years, academic and commercial actors have developed new technical solutions for a diverse set of tasks, including tissue segmentation, cell detection, mutation prediction, prognostication and prediction of treatment response. In the light of limited overall resources, it is presently unclear for researchers, practitioners and policymakers which of these topics are stable enough for clinical use in the near future and which topics are still experimental, but worth investing time and effort into. METHODS To identify potentially promising applications of AI in pathology, we performed an anonymous online survey of 75 computational pathology domain experts from academia and industry. Participants enrolled in 2021 were queried about their subjective opinion on promising and appealing sub-fields of computational pathology with a focus on solid tumors. RESULTS The results of this survey indicate that the prediction of treatment response directly from routine pathology slides is regarded as the most promising future application. This item was ranked highest in the overall analysis and in sub-groups by age and professional background. Furthermore, prediction of genetic alterations, gene expression and survival directly from routine pathology images scored consistently high across subgroups. CONCLUSIONS Together, these data demonstrate a possible direction for the development of computational pathology systems in clinical, academic and industrial research in the near future.
Affiliation(s)
- Céline N Heinz
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Amelie Echle
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Sebastian Foersch
- Department of Pathology, University Medical Center Mainz, Mainz, Germany
- Andrey Bychkov
- Department of Pathology, Kameda Medical Center, Kamogawa, Chiba, Japan
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany; Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
218
Javed S, Mahmood A, Dias J, Werghi N. Multi-level feature fusion for nucleus detection in histology images using correlation filters. Comput Biol Med 2022; 143:105281. [PMID: 35139456] [DOI: 10.1016/j.compbiomed.2022.105281]
Abstract
Nucleus detection is an important step in the analysis of histology images in the field of computational pathology. Pathologists use quantitative nuclear morphology for better cancer grading and prognostication. Nucleus detection is very challenging because of the large morphological variations across different types of nuclei, nuclei clutter, and heterogeneity. To address these challenges, we aim to improve nucleus detection using multi-level feature fusion based on discriminative correlation filters. The proposed algorithm employs multiple feature pools based on varying feature combinations. Early fusion is employed to integrate multi-feature information within a pool, and inter-pool fusion is proposed to fuse information across multiple pools. Inter-pool consistency is proposed to find the pools that are consistent with and complement each other to improve performance. For this purpose, the relative standard deviation is used as an inter-pool consistency measure. Pool robustness to noise is also estimated using the relative standard deviation as a robustness measure. High-level pool fusion is proposed using inter-pool consistency and pool-robustness scores. The proposed algorithm facilitates a robust and reliable appearance model for nucleus detection. The proposed algorithm is evaluated on three publicly available datasets and compared with several existing state-of-the-art methods, and it has consistently outperformed existing methods across a wide range of experiments.
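The relative standard deviation (coefficient of variation) that the abstract uses as a consistency and robustness measure is straightforward to compute. A minimal sketch, where the selection threshold and the keep/drop rule are illustrative assumptions, not the paper's actual fusion procedure:

```python
import math

def relative_std(scores):
    """Relative standard deviation (std / mean) of a pool's response scores.

    Low values mean the pool responds consistently; assumes mean != 0.
    """
    mean = sum(scores) / len(scores)
    variance = sum((s - mean) ** 2 for s in scores) / len(scores)
    return math.sqrt(variance) / mean

def consistent_pools(pool_scores, threshold=0.5):
    """Keep pools whose responses are consistent (low RSD).

    pool_scores: dict mapping pool name -> list of detection scores.
    The 0.5 cut-off is an arbitrary illustration.
    """
    return [name for name, scores in pool_scores.items()
            if relative_std(scores) < threshold]
```

In the paper's scheme, such consistency scores would then weight each pool's contribution during high-level fusion rather than hard-select pools as shown here.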
Affiliation(s)
- Sajid Javed
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates; Khalifa University Centre for Autonomous Robotics Systems (KUCARS), Abu Dhabi, United Arab Emirates
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan
- Jorge Dias
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates; Khalifa University Centre for Autonomous Robotics Systems (KUCARS), Abu Dhabi, United Arab Emirates
- Naoufel Werghi
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates; Khalifa University Centre for Autonomous Robotics Systems (KUCARS), Abu Dhabi, United Arab Emirates
219
Chen X, Li Y, Yao L, Adeli E, Zhang Y, Wang X. Generative Adversarial U-Net for Domain-free Few-shot Medical Diagnosis. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.03.022]
220
Chen H, Li C, Li X, Rahaman MM, Hu W, Li Y, Liu W, Sun C, Sun H, Huang X, Grzegorzek M. IL-MCAM: An interactive learning and multi-channel attention mechanism-based weakly supervised colorectal histopathology image classification approach. Comput Biol Med 2022; 143:105265. [PMID: 35123138] [DOI: 10.1016/j.compbiomed.2022.105265]
Abstract
In recent years, colorectal cancer has become one of the most significant diseases endangering human health, and deep learning methods are increasingly important for the classification of colorectal histopathology images. However, existing approaches focus more on end-to-end automatic classification by computers than on human-computer interaction. In this paper, we propose an IL-MCAM framework based on attention mechanisms and interactive learning. The proposed IL-MCAM framework includes two stages: automatic learning (AL) and interactivity learning (IL). In the AL stage, a multi-channel attention mechanism model containing three different attention mechanism channels and convolutional neural networks is used to extract multi-channel features for classification. In the IL stage, the proposed IL-MCAM framework continuously adds misclassified images to the training set in an interactive approach, which improves the classification ability of the MCAM model. We carried out a comparison experiment on our dataset and an extended experiment on the HE-NCT-CRC-100K dataset to verify the performance of the proposed IL-MCAM framework, achieving classification accuracies of 98.98% and 99.77%, respectively. In addition, we conducted an ablation experiment and an interchangeability experiment to verify the ability and interchangeability of the three channels. The experimental results show that the proposed IL-MCAM framework performs excellently on colorectal histopathological image classification tasks.
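The IL stage's loop of folding misclassified images back into the training set can be sketched as follows, with a trivial threshold classifier standing in for the MCAM model; all names, the toy classifier, and the stopping rule are illustrative assumptions, not the authors' implementation.

```python
def train_threshold(samples):
    """Toy stand-in for the MCAM model: classify by mean intensity, with
    the decision threshold set to the midpoint of the two class means."""
    means = {0: [], 1: []}
    for x, y in samples:
        means[y].append(sum(x) / len(x))
    return (sum(means[0]) / len(means[0]) + sum(means[1]) / len(means[1])) / 2

def classify(threshold, x):
    return 1 if sum(x) / len(x) > threshold else 0

def interactive_learning(train, review_set, rounds=3):
    """Repeatedly fold misclassified review images back into training.

    In the paper this step is interactive: a human inspects the
    misclassified images before they are added.
    """
    for _ in range(rounds):
        t = train_threshold(train)
        missed = [(x, y) for x, y in review_set if classify(t, x) != y]
        if not missed:
            break
        train = train + missed   # the interactive step: add hard examples
    return train_threshold(train)
```

Each round shifts the decision boundary toward the hard examples, which is the intuition behind adding misclassified images to the training set.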
Affiliation(s)
- Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Xiaoyan Li
- Department of Pathology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, China
- Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Yixin Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Wanli Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Changhao Sun
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Shenyang Institute of Automation, Chinese Academy of Sciences, China
- Hongzan Sun
- Department of Radiology, Shengjing Hospital of China Medical University, China
- Xinyu Huang
- Institute of Medical Informatics, University of Luebeck, Germany
221
Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022; 12:837. [PMID: 35453885] [PMCID: PMC9028395] [DOI: 10.3390/diagnostics12040837]
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step towards personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods that can be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess the specific components of the tumor microenvironment.
222
A retrospective analysis using deep-learning models for prediction of survival outcome and benefit of adjuvant chemotherapy in stage II/III colorectal cancer. J Cancer Res Clin Oncol 2022; 148:1955-1963. [PMID: 35332389] [DOI: 10.1007/s00432-022-03976-5]
Abstract
PURPOSE Most Stage II/III colorectal cancer (CRC) patients can be cured by surgery alone, and only certain CRC patients benefit from adjuvant chemotherapy. Risk stratification based on deep learning from haematoxylin and eosin (H&E) images has been postulated as a potential predictive biomarker for benefit from adjuvant chemotherapy. However, very limited success has been achieved in using biomarkers, including deep-learning-based markers, to facilitate the decision for adjuvant chemotherapy, despite recent advances in artificial intelligence. METHODS We trained and internally validated CRCNet using 780 Stage II/III CRC patients from the Molecular and Cellular Oncology (MCO) study. Independent external validation of the model was performed using 337 Stage II/III CRC patients from The Cancer Genome Atlas (TCGA). RESULTS CRCNet stratified the patients into high-, medium-, and low-risk subgroups. Multivariate Cox regression analyses confirmed that the CRCNet risk groups remain statistically significant after adjusting for existing risk factors. The high-risk subgroup significantly benefits from adjuvant chemotherapy: hazard ratios (chemotherapy-treated vs untreated) of 0.2 (95% Confidence Interval (CI) 0.05-0.65; P = 0.009) and 0.6 (95% CI 0.42-0.98; P = 0.038) are observed in the TCGA and MCO fluorouracil-treated patients, respectively. Conversely, no significant benefit from chemotherapy is observed in the low- and medium-risk groups (P = 0.2-1). CONCLUSION This retrospective analysis provides further evidence that H&E image-based biomarkers may be of great use in guiding treatment following surgery for Stage II/III CRC, improving patient survival and avoiding unnecessary treatment and associated toxicity, and warrants further validation on other datasets and prospective confirmation in clinical trials.
223
Xu Y, Jiang L, Huang S, Liu Z, Zhang J. Dual resolution deep learning network with self-attention mechanism for classification and localisation of colorectal cancer in histopathological images. J Clin Pathol 2022. [PMID: 35273120] [DOI: 10.1136/jclinpath-2021-208042]
Abstract
AIMS Microscopic examination is a basic diagnostic technology for colorectal cancer (CRC), but it is very laborious. We developed a dual resolution deep learning network with self-attention mechanism (DRSANet) which combines context and details for CRC binary classification and localisation in whole slide images (WSIs), serving as a computer-aided diagnosis (CAD) tool to improve the sensitivity and specificity of doctors' diagnoses. METHODS Representative regions of interest (ROI) of each tissue type were manually delineated in WSIs by pathologists. Based on the same centre-position coordinates, patches were extracted at different magnification levels from the ROI. Specifically, patches from the low magnification level contain contextual information, while those from the high magnification level provide important details. A dual-input network was designed to learn context and details simultaneously, and a self-attention mechanism was used to selectively attend to different positions in the images to enhance performance. RESULTS In the classification task, DRSANet outperformed the benchmark networks, which depended only on the high magnification patches, on two test sets. Furthermore, in the localisation task, DRSANet demonstrated better localisation of the tumour area in WSIs, with fewer areas of misidentification. CONCLUSIONS We compared DRSANet with benchmark networks that use only the patches from the high magnification level. Experimental results reveal that the performance of DRSANet is better than that of the benchmark networks. Both context and details should be considered in deep learning methods.
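Extracting context and detail patches around the same centre coordinate, as described above, can be sketched on a plain 2-D grid; averaging 2x2 blocks stands in for reading a lower magnification level, and the function names and patch geometry are illustrative, not taken from the paper.

```python
def crop(image, cy, cx, half):
    """Square crop of side 2*half centred at (cy, cx) from a 2-D list."""
    return [row[cx - half:cx + half] for row in image[cy - half:cy + half]]

def downsample2(patch):
    """Average 2x2 blocks: a stand-in for reading a lower magnification."""
    return [[(patch[i][j] + patch[i][j + 1] +
              patch[i + 1][j] + patch[i + 1][j + 1]) / 4
             for j in range(0, len(patch[0]), 2)]
            for i in range(0, len(patch), 2)]

def dual_resolution_pair(image, cy, cx, half):
    """Detail patch at full resolution plus a context patch covering twice
    the field of view, downsampled to the same pixel dimensions."""
    detail = crop(image, cy, cx, half)
    context = downsample2(crop(image, cy, cx, 2 * half))
    return detail, context
```

The two patches share a centre and a pixel size but differ in field of view, which is what lets the dual-input network consume context and detail branches of equal shape.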
Affiliation(s)
- Yan Xu
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Liwen Jiang
- Department of Pathology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
- Shuting Huang
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Zhenyu Liu
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jiangyu Zhang
- Department of Pathology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
224
Qiu H, Ding S, Liu J, Wang L, Wang X. Applications of Artificial Intelligence in Screening, Diagnosis, Treatment, and Prognosis of Colorectal Cancer. Curr Oncol 2022; 29:1773-1795. [PMID: 35323346] [PMCID: PMC8947571] [DOI: 10.3390/curroncol29030146]
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Accurate early detection and diagnosis, comprehensive assessment of treatment response, and precise prediction of prognosis are essential to improve patients' survival rates. In recent years, owing to the explosion of clinical and omics data and groundbreaking research in machine learning, artificial intelligence (AI) has shown great application potential in the clinical field of CRC, providing new auxiliary approaches for clinicians to identify high-risk patients, select precise and personalized treatment plans, and predict prognoses. This review comprehensively analyzes and summarizes the research progress and clinical application value of AI technologies in CRC screening, diagnosis, treatment, and prognosis, demonstrating the current status of AI in the main clinical stages. The limitations, challenges, and future perspectives of clinical implementation of AI are also discussed.
Affiliation(s)
- Hang Qiu
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China;
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Correspondence: (H.Q.); (X.W.)
- Shuhan Ding
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA;
- Jianbo Liu
- West China School of Medicine, Sichuan University, Chengdu 610041, China;
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Liya Wang
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China;
- Xiaodong Wang
- West China School of Medicine, Sichuan University, Chengdu 610041, China;
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Correspondence: (H.Q.); (X.W.)
|
225
|
Attentive Octave Convolutional Capsule Network for Medical Image Classification. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12052634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Medical image classification plays an essential role in disease diagnosis and clinical treatment, and growing research effort has been dedicated to designing effective methods for it. As an effective framework, the capsule network (CapsNet) can realize translation equivariance, and much current research applies capsule networks to medical image analysis. In this paper, we propose an attentive octave convolutional capsule network (AOC-Caps) for medical image classification. In AOC-Caps, an AOC module replaces the traditional convolution operation; its purpose is to process and fuse the high- and low-frequency information in the input image simultaneously and to automatically weight the important parts. Following the AOC module, a matrix capsule is used and the expectation-maximization (EM) algorithm is applied to update the routing weights. The proposed AOC-Caps and comparative methods are tested on seven datasets from MedMNIST: PathMNIST, DermaMNIST, OCTMNIST, PneumoniaMNIST, OrganMNIST_Axial, OrganMNIST_Coronal, and OrganMNIST_Sagittal. Baselines include traditional CNN models, automated machine learning (AutoML) methods, and related capsule network methods. The experimental results demonstrate that AOC-Caps achieves better performance on most of the seven medical image datasets.
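As context for the octave-convolution idea mentioned in this abstract: an octave layer keeps a high-frequency tensor at full resolution and a low-frequency tensor at half resolution, and mixes them along four paths. The sketch below is not the AOC module itself; it uses 1x1 kernels (plain channel mixing) and invented tensor sizes purely to show the four-path structure.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, c, s = 0.5, 8, 16                      # invented sizes; alpha = low-frequency channel share
c_l = int(alpha * c); c_h = c - c_l

x_h = rng.normal(size=(c_h, s, s))            # high-frequency branch, full resolution
x_l = rng.normal(size=(c_l, s // 2, s // 2))  # low-frequency branch, half resolution

# One 1x1 weight matrix per path: high->high, high->low, low->high, low->low
W = {k: rng.normal(size=(c_h if k[0] == 'h' else c_l,
                         c_h if k[1] == 'h' else c_l)) * 0.1
     for k in ('hh', 'hl', 'lh', 'll')}

def pool2(x):   # 2x2 average pooling (full -> half resolution)
    ch, h, w = x.shape
    return x.reshape(ch, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up2(x):     # nearest-neighbour upsampling (half -> full resolution)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def mix(x, w):  # 1x1 convolution expressed as channel mixing
    return np.einsum('chw,co->ohw', x, w)

# Fuse: each output branch sums an intra-frequency path and a cross-frequency path
y_h = mix(x_h, W['hh']) + up2(mix(x_l, W['lh']))
y_l = mix(pool2(x_h), W['hl']) + mix(x_l, W['ll'])

assert y_h.shape == (c_h, s, s) and y_l.shape == (c_l, s // 2, s // 2)
```

Exchanging information in both directions at every layer is what lets the low-frequency branch run at half resolution without starving the high-frequency branch of context.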
|
226
|
Fan BE, Wang SSY, Natalie Aw MY, Chia MF, Yi Chen DT, Ramanathan K, Wong MS, Ponnudurai K, Winkler S. Artificial intelligence in peripheral blood films: an evolving landscape. THE LANCET HAEMATOLOGY 2022; 9:e174. [DOI: 10.1016/s2352-3026(22)00029-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Accepted: 01/18/2022] [Indexed: 11/29/2022]
|
227
|
Huang W, Randhawa R, Jain P, Hubbard S, Eickhoff J, Kummar S, Wilding G, Basu H, Roy R. A Novel Artificial Intelligence-Powered Method for Prediction of Early Recurrence of Prostate Cancer After Prostatectomy and Cancer Drivers. JCO Clin Cancer Inform 2022; 6:e2100131. [PMID: 35192404 DOI: 10.1200/cci.21.00131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
PURPOSE To develop a novel artificial intelligence (AI)-powered method for predicting early recurrence of prostate cancer (PCa) and identifying driver regions in PCa of all Gleason Grade Groups (GGG). MATERIALS AND METHODS Deep convolutional neural networks were used to develop the AI model. The AI model was trained on The Cancer Genome Atlas Prostatic Adenocarcinoma (TCGA-PRAD) whole slide images (WSI) and data set (n = 243) to predict 3-year biochemical recurrence after radical prostatectomy (RP) and was subsequently validated on WSI from patients with PCa (n = 173) from the University of Wisconsin-Madison. RESULTS Our AI-powered platform can extract visual and subvisual morphologic features from WSI to identify driver regions (regions of interest [ROIs]) predictive of early recurrence of PCa after RP. The ROIs were ranked with AI-morphometric scores, which were prognostic for 3-year biochemical recurrence (area under the curve [AUC], 0.78), significantly better than the GGG overall (AUC, 0.62). The AI-morphometric scores also showed high accuracy in predicting recurrence for low- or intermediate-risk PCa (AUC, 0.76, 0.84, and 0.81 for GGG1, GGG2, and GGG3, respectively). These patients could benefit the most from timely adjuvant therapy after RP. The predictive value of the high-scored ROIs was validated against known PCa biomarkers. With this focused biomarker analysis, a potentially new STING pathway-related PCa biomarker, TMEM173, was identified. CONCLUSION Our study introduces a novel approach for identifying patients with PCa at risk for early recurrence regardless of their GGG status and for identifying cancer drivers for focused, evolution-aware novel biomarker discovery.
Affiliation(s)
- Wei Huang
- Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison School of Medicine and Public Health, Madison, WI; PathomIQ, Inc, Cupertino, CA
- Ramandeep Randhawa
- PathomIQ, Inc, Cupertino, CA; University of Southern California Marshall School of Business, Los Angeles, CA
- Samuel Hubbard
- Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison School of Medicine and Public Health, Madison, WI
- Jens Eickhoff
- Department of Biostatistics and Informatics, University of Wisconsin-Madison, Madison, WI
- Shivaani Kummar
- PathomIQ, Inc, Cupertino, CA; Division of Hematology & Medical Oncology, Center for Experimental Therapeutics, Knight Cancer Institute, Oregon Health & Science University, Portland, OR
- Hirak Basu
- Department of Genitourinary Medical Oncology, MD Anderson Cancer Center, Houston, TX
|
228
|
A promising deep learning-assistive algorithm for histopathological screening of colorectal cancer. Sci Rep 2022; 12:2222. [PMID: 35140318 PMCID: PMC8828883 DOI: 10.1038/s41598-022-06264-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 01/24/2022] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer is one of the most common cancers worldwide, accounting for an estimated 1.8 million incident cases annually. With the increasing number of colonoscopies being performed, colorectal biopsies make up a large proportion of any histopathology laboratory's workload. We trained and validated a unique artificial intelligence (AI) deep learning model as an assistive tool to screen for colonic malignancies in colorectal specimens, in order to improve cancer detection and classification and enable busy pathologists to focus on higher-order decision-making tasks. The study cohort consists of whole slide images (WSI) obtained from 294 colorectal specimens. Qritive’s composite algorithm comprises a deep learning model for glandular instance segmentation, based on a Faster Region-Based Convolutional Neural Network (Faster-RCNN) architecture with a ResNet-101 feature extraction backbone, and a classical machine learning classifier. The initial training used pathologists’ annotations on a cohort of 66,191 image tiles extracted from 39 WSIs. A subsequent classical machine learning-based slide classifier sorted the WSIs into ‘low risk’ (benign, inflammation) and ‘high risk’ (dysplasia, malignancy) categories. We further trained the composite AI model on a larger cohort of 105 resection WSIs and then validated our findings on a cohort of 150 biopsy WSIs against the classifications of two independently blinded pathologists. We evaluated the area under the receiver operating characteristic curve (AUC) and other performance metrics. The AI model achieved an AUC of 0.917 in the validation cohort, with excellent sensitivity (97.4%) in detecting the high-risk features of dysplasia and malignancy.
We demonstrate a unique composite AI model incorporating both a glandular segmentation deep learning model and a classical machine learning classifier, with excellent sensitivity in picking up high-risk colorectal features. As such, AI has a role as a potential screening tool, assisting busy pathologists by outlining dysplastic and malignant glands.
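A brief note on the AUC metric reported by studies like this one: it can be computed directly from per-slide risk scores without drawing the ROC curve, since it equals the probability that a randomly chosen high-risk slide scores above a randomly chosen low-risk one (the normalized Mann-Whitney U statistic). A minimal sketch with made-up scores, not data from the paper:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg), with ties counting 0.5 (Mann-Whitney U / (m*n))."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical slide-level malignancy scores (illustrative only)
high_risk = [0.9, 0.8, 0.75, 0.6]      # dysplasia/malignancy slides
low_risk = [0.7, 0.4, 0.3, 0.2, 0.1]   # benign/inflammation slides

print(auc(high_risk, low_risk))  # 0.95
```

The quadratic loop is fine for illustration; production code would use a rank-based O(n log n) formulation such as `sklearn.metrics.roc_auc_score`.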
|
229
|
Tang H, Li G, Liu C, Huang D, Zhang X, Qiu Y, Liu Y. Diagnosis of lymph node metastasis in head and neck squamous cell carcinoma using deep learning. Laryngoscope Investig Otolaryngol 2022; 7:161-169. [PMID: 35155794 PMCID: PMC8823170 DOI: 10.1002/lio2.742] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Accepted: 01/04/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND To build an automatic pathological diagnosis model to assess the lymph node metastasis status of head and neck squamous cell carcinoma (HNSCC) based on deep learning algorithms. STUDY DESIGN A retrospective study. METHODS A diagnostic model integrating two-step deep learning networks was trained to analyze the metastasis status in 85 images of HNSCC lymph nodes. The diagnostic model was tested in a test set of 21 images with metastasis and 29 images without metastasis. All images were scanned from HNSCC lymph node sections stained with hematoxylin-eosin (HE). RESULTS In the test set, the overall accuracy, sensitivity, and specificity of the diagnostic model reached 86%, 100%, and 75.9%, respectively. CONCLUSIONS Our two-step diagnostic model can be used to automatically assess the status of HNSCC lymph node metastasis with high sensitivity. LEVEL OF EVIDENCE NA.
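The reported metrics are internally consistent: with 21 metastatic and 29 non-metastatic test images, 100% sensitivity implies 21 true positives, and 75.9% specificity implies 22 true negatives (7 false positives), giving (21 + 22)/50 = 86% accuracy. A quick arithmetic check:

```python
# Counts inferred from the abstract's test set (21 metastatic, 29 non-metastatic)
tp, fn = 21, 0   # all metastatic images detected (sensitivity 100%)
tn, fp = 22, 7   # 22 of 29 non-metastatic images correctly cleared

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"{accuracy:.0%} {sensitivity:.0%} {specificity:.1%}")  # 86% 100% 75.9%
```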
Affiliation(s)
- Haosheng Tang
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Guo Li
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders (Xiangya Hospital), Changsha, Hunan, China
- Chao Liu
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Donghai Huang
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Xin Zhang
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Yuanzheng Qiu
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders (Xiangya Hospital), Changsha, Hunan, China
- Yong Liu
- Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China
- Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders (Xiangya Hospital), Changsha, Hunan, China
|
230
|
Alqudah AM, Alqudah A. Deep learning for single-lead ECG beat arrhythmia-type detection using novel iris spectrogram representation. Soft comput 2022. [DOI: 10.1007/s00500-021-06555-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
231
|
Benning L, Peintner A, Peintner L. Advances in and the Applicability of Machine Learning-Based Screening and Early Detection Approaches for Cancer: A Primer. Cancers (Basel) 2022; 14:cancers14030623. [PMID: 35158890 PMCID: PMC8833439 DOI: 10.3390/cancers14030623] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 01/22/2022] [Accepted: 01/25/2022] [Indexed: 02/07/2023] Open
Abstract
Simple Summary: Non-communicable diseases in general, and cancer in particular, contribute greatly to the global burden of disease. Although significant advances have been made to address this burden, cancer is still among the top drivers of mortality, second only to cardiovascular diseases. Consensus has been established that a key factor in reducing the burden of disease from cancer is to improve screening for and early detection of such conditions. To date, however, most approaches in this field have relied on established screening methods, such as clinical examination, radiographic imaging, tissue staining or biochemical markers. Yet, with the advances of information technology, new data-driven screening and diagnostic tools have been developed. This article provides a brief overview of the theoretical foundations of these data-driven approaches, highlights promising use cases and underscores the challenges and limitations that come with their introduction to the clinical field.
Abstract: Despite the efforts of the past decades, cancer is still among the key drivers of global mortality. To increase detection rates, screening programs and other efforts to improve early detection were initiated to cover the populations at particular risk of developing a specific malignant condition. These diagnostic approaches have, so far, mostly relied on conventional diagnostic methods and have made little use of the vast amounts of clinical and diagnostic data that are routinely being collected along the diagnostic pathway. Practitioners have lacked the tools to handle this ever-increasing flood of data. Only recently has the clinical field opened up more to the opportunities that come with the systematic utilisation of high-dimensional computational data analysis.
We aim to introduce the reader to the theoretical background of machine learning (ML) and elaborate on the established and potential use cases of ML algorithms in screening and early detection. Furthermore, we assess and comment on the relevant challenges and misconceptions of the applicability of ML-based diagnostic approaches. Lastly, we emphasise the need for a clear regulatory framework to responsibly introduce ML-based diagnostics in clinical practice and routine care.
Affiliation(s)
- Leo Benning
- Health Care Supply Research and Data Mining Working Group, Emergency Department, University Medical Center Freiburg, 79106 Freiburg, Germany;
- Andreas Peintner
- Databases and Information Systems, Department of Computer Science, Leopold-Franzens University of Innsbruck, 6020 Innsbruck, Austria;
- Lukas Peintner
- Institute of Molecular Medicine and Cell Research, Albert Ludwigs University of Freiburg, 79085 Freiburg, Germany
- Correspondence: ; Tel.: +49-761-203-9618
|
232
|
Di D, Zhang J, Lei F, Tian Q, Gao Y. Big-Hypergraph Factorization Neural Network for Survival Prediction From Whole Slide Image. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1149-1160. [PMID: 34982683 DOI: 10.1109/tip.2021.3139229] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Survival prediction for patients based on histopathological whole-slide images (WSIs) has attracted increasing attention in recent years. Due to the massive pixel data in a single WSI, fully exploiting cell-level structural information (e.g., the stromal/tumor microenvironment) from a gigapixel WSI is challenging. Most current studies address the problem by sampling a limited number of image patches to construct a graph-based model (e.g., a hypergraph). However, the sampling scale is a critical bottleneck, since it is a fundamental obstacle to broadening samples for transductive learning. To overcome this limitation when constructing a big hypergraph model, we propose a factorization neural network that embeds the correlations among large-scale vertices and hyperedges into two low-dimensional latent semantic spaces separately, enabling dense sampling. Thanks to the compressed low-dimensional correlation embedding, the hypergraph convolutional layers generate a high-order global representation for each WSI. To minimize the effect of uncertain data and to achieve metric-driven learning, we also propose a multi-level ranking supervision that enables the network to learn from a queue of patients on a global horizon. Extensive experiments are conducted on three public carcinoma datasets (i.e., LUSC, GBM, and NLST), and the quantitative results demonstrate that the proposed method outperforms state-of-the-art methods across the board.
|
233
|
Fast and scalable search of whole-slide images via self-supervised deep learning. Nat Biomed Eng 2022; 6:1420-1434. [PMID: 36217022 PMCID: PMC9792371 DOI: 10.1038/s41551-022-00929-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 07/15/2022] [Indexed: 01/14/2023]
Abstract
The adoption of digital pathology has enabled the curation of large repositories of gigapixel whole-slide images (WSIs). Computationally identifying WSIs with similar morphologic features within large repositories without requiring supervised training can have significant applications. However, the retrieval speeds of algorithms for searching similar WSIs often scale with the repository size, which limits their clinical and research potential. Here we show that self-supervised deep learning can be leveraged to search for and retrieve WSIs at speeds that are independent of repository size. The algorithm, which we named SISH (for self-supervised image search for histology) and provide as an open-source package, requires only slide-level annotations for training, encodes WSIs into meaningful discrete latent representations and leverages a tree data structure for fast searching followed by an uncertainty-based ranking algorithm for WSI retrieval. We evaluated SISH on multiple tasks (including retrieval tasks based on tissue-patch queries) and on datasets spanning over 22,000 patient cases and 56 disease subtypes. SISH can also be used to aid the diagnosis of rare cancer types for which the number of available WSIs is often insufficient to train supervised deep-learning models.
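The speed claim above (retrieval cost independent of repository size) rests on encoding each slide into discrete codes and probing an index rather than comparing against every stored slide. The toy sketch below is not the SISH pipeline; it uses hypothetical integer codes and a plain hash index with a fixed probe window purely to illustrate why the lookup cost does not grow with the number of stored slides:

```python
from collections import defaultdict

index = defaultdict(list)  # discrete code -> slide ids

def add_slide(code: int, slide_id: str) -> None:
    index[code].append(slide_id)

def search(code: int, radius: int = 2):
    """Probe a fixed window of neighboring codes; the cost depends on the
    probe window size, not on how many slides the index holds."""
    hits = []
    for c in range(code - radius, code + radius + 1):
        hits.extend(index.get(c, []))
    return hits

add_slide(1004, "slide_A")   # hypothetical codes, for illustration only
add_slide(1006, "slide_B")
add_slide(2500, "slide_C")

print(search(1005))  # ['slide_A', 'slide_B']
```

SISH itself combines such discrete codes with a tree structure and an uncertainty-based ranking step, but the constant-time flavor of the lookup is the same.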
|
234
|
McGenity C, Wright A, Treanor D. AIM in Surgical Pathology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
235
|
Minciuna CE, Tanase M, Manuc TE, Tudor S, Herlea V, Dragomir MP, Calin GA, Vasilescu C. The seen and the unseen: Molecular classification and image based-analysis of gastrointestinal cancers. Comput Struct Biotechnol J 2022; 20:5065-5075. [PMID: 36187924 PMCID: PMC9489806 DOI: 10.1016/j.csbj.2022.09.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 09/07/2022] [Accepted: 09/07/2022] [Indexed: 11/13/2022] Open
Abstract
Gastrointestinal cancers account for 22.5% of cancer-related deaths worldwide and represent circa 20% of all cancers. In recent decades, we have witnessed a shift from histology-based to molecular-based classifications using genomic, epigenomic, and transcriptomic data. Molecular-based classification has revealed new prognostic markers and may aid therapy selection. Because performing a molecular classification is costly, immunohistochemistry-based surrogate classifications that permit the stratification of patients were developed in recent years, and in parallel multiple groups developed hematoxylin and eosin whole-slide image analyses for sub-classifying these entities. Hence, we are witnessing a return to an image-based classification, with the purpose of inferring hidden information from routine histology images that would permit detecting the patients who respond to specific therapies and predicting their outcome. In this review paper, we discuss the current histological, molecular, and immunohistochemical classifications of the most common gastrointestinal cancers, gastric adenocarcinoma and colorectal adenocarcinoma, and present key aspects for developing a new artificial intelligence-aided image-based classification of these malignancies.
|
236
|
Musa IH, Afolabi LO, Zamit I, Musa TH, Musa HH, Tassang A, Akintunde TY, Li W. Artificial Intelligence and Machine Learning in Cancer Research: A Systematic and Thematic Analysis of the Top 100 Cited Articles Indexed in Scopus Database. Cancer Control 2022; 29:10732748221095946. [PMID: 35688650 PMCID: PMC9189515 DOI: 10.1177/10732748221095946] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
INTRODUCTION Cancer is a major public health problem and a global leading cause of death, where screening, diagnosis, prediction, survival estimation, and treatment of cancer and control measures remain major challenges. The rise of Artificial Intelligence (AI) and Machine Learning (ML) techniques and their applications in various fields have brought immense value in providing insights in support of cancer control. METHODS A systematic and thematic analysis was performed on the Scopus database to identify the top 100 cited articles in cancer research. Data were analyzed using RStudio and VOSviewer (version 1.6.6). RESULTS The top 100 articles in AI and ML in cancer received 33,920 citations in total, with individual counts ranging from 108 to 5,758. Doi Kunio from the USA was the most cited author (total number of citations, TNC = 663). Of the 43 contributing countries, 30% of the top 100 cited articles originated from the USA and 10% from China. Among the 57 peer-reviewed journals, "Expert Systems with Applications" published 8% of the total articles. The results highlight technological advancement through AI and ML via the widespread use of Artificial Neural Networks (ANNs), deep learning or machine learning techniques, mammography-based models, convolutional neural networks (SC-CNN), and text mining techniques in the prediction, diagnosis, and prevention of various types of cancer towards cancer control. CONCLUSIONS This bibliometric study provides a detailed overview of the most cited empirical evidence on AI and ML adoption in cancer research, which could efficiently help in designing future research. The innovations promise greater speed in the detection and control of cancer and an improved patient experience.
Affiliation(s)
- Ibrahim H. Musa
- Department of Software Engineering, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing, China
- Lukman O. Afolabi
- Guangdong Immune Cell Therapy Engineering and Technology Research Center, Center for Protein and Cell-Based Drugs, Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Ibrahim Zamit
- University of Chinese Academy of Sciences, Beijing, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Taha H. Musa
- Biomedical Research Institute, Darfur University College, Nyala, South Darfur, Sudan
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology and Health Statistics, School of Public Health, Southeast University, Nanjing, Jiangsu Province, China
- Hassan H. Musa
- Faculty of Medical Laboratory Sciences, University of Khartoum, Khartoum, Sudan
- Andrew Tassang
- Faculty of Health Sciences, University of Buea, Cameroon
- Buea Regional Hospital, Annex, Cameroon
- Tosin Y. Akintunde
- Department of Sociology, School of Public Administration, Hohai University, Nanjing, China
- Wei Li
- Department of Quality Management, Children’s Hospital of Nanjing Medical University, Nanjing, China
|
237
|
Zhang D, Duan Y, Guo J, Wang Y, Yang Y, Li Z, Wang K, Wu L, Yu M. Using Multi-Scale Convolutional Neural Network Based on Multi-Instance Learning to Predict the Efficacy of Neoadjuvant Chemoradiotherapy for Rectal Cancer. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:4300108. [PMID: 35317416 PMCID: PMC8932521 DOI: 10.1109/jtehm.2022.3156851] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Revised: 12/06/2021] [Accepted: 02/28/2022] [Indexed: 02/06/2023]
Abstract
Background: At present, radical total mesorectal excision after neoadjuvant chemoradiotherapy is crucial for locally advanced rectal cancer. Therefore, using histopathological image analysis technology to predict the efficacy of neoadjuvant chemoradiotherapy for rectal cancer is of great significance for the subsequent treatment of patients. Methods: In this study, we propose a new pathological image analysis method based on multi-instance learning to predict the efficacy of neoadjuvant chemoradiotherapy for rectal cancer. Specifically, we propose a gated attention normalization mechanism based on the multilayer perceptron, which accelerates the convergence of stochastic gradient descent optimization and speeds up the training process. We also propose a bilinear attention multi-scale feature fusion mechanism, which organically fuses the global features of larger receptive fields with the detailed features of smaller receptive fields and alleviates the loss of pathological image context information caused by block sampling. At the same time, we designed a weighted loss function to alleviate the imbalance between cancerous instances and normal instances. Results: We evaluated our method on a locally advanced rectal cancer dataset containing 150 whole slide images. In addition, to verify our method’s generalization performance, we also tested on two publicly available datasets, Camelyon16 and MSKCC. The results show that the AUC values of our method on the Camelyon16 and MSKCC datasets reach 0.9337 and 0.9091, respectively. Conclusion: Our method has outstanding performance and advantages in predicting the efficacy of neoadjuvant chemoradiotherapy for rectal cancer. Clinical and Translational Impact Statement: This study aims to predict the efficacy of neoadjuvant chemoradiotherapy for rectal cancer to assist clinicians in quickly diagnosing and formulating personalized treatment plans for patients.
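For readers unfamiliar with the gated-attention building block this abstract extends: in multi-instance learning a slide is a bag of patch embeddings, and a tanh feature branch gated by a sigmoid branch decides how much each patch contributes to the slide-level representation. The sketch below shows only the standard gated-attention MIL pooling (in the style of Ilse et al., 2018), with invented dimensions and random weights; it does not reproduce the paper's normalization or fusion mechanisms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_instances, d, d_attn = 6, 8, 4   # invented sizes: 6 patches, 8-dim embeddings

H = rng.normal(size=(n_instances, d))   # instance (patch) embeddings
V = rng.normal(size=(d, d_attn))        # tanh branch weights
U = rng.normal(size=(d, d_attn))        # sigmoid gate weights
w = rng.normal(size=(d_attn, 1))        # attention projection

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Gated attention: element-wise product of a tanh feature branch and a
# sigmoid gate, projected to one score per patch and normalized over the bag.
gates = np.tanh(H @ V) * (1.0 / (1.0 + np.exp(-(H @ U))))
a = softmax(gates @ w, axis=0)          # (n_instances, 1) attention weights, sum to 1
slide_embedding = (a * H).sum(axis=0)   # weighted bag representation, shape (d,)
```

A slide-level classifier then operates on `slide_embedding`, and the weights `a` double as a per-patch saliency map.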
Affiliation(s)
- Dehai Zhang
- School of Software, Yunnan University, Kunming 650106, China
- Yongchun Duan
- School of Software, Yunnan University, Kunming 650106, China
- Jing Guo
- School of Information Science and Engineering, Yunnan University, Kunming 650106, China
- Yaowei Wang
- School of Software, Yunnan University, Kunming 650106, China
- Yun Yang
- Key Laboratory in Software Engineering of Yunnan Province, School of Software, Yunnan University, Kunming 650504, China
- Zhenhui Li
- Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming 650118, China
- Kelong Wang
- School of Software, Yunnan University, Kunming 650106, China
- Lin Wu
- Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming 650118, China
- Minghao Yu
- School of Software, Yunnan University, Kunming 650106, China
|
238
|
Li X, Cen M, Xu J, Zhang H, Xu XS. Improving feature extraction from histopathological images through a fine-tuning ImageNet model. J Pathol Inform 2022; 13:100115. [PMID: 36268072 PMCID: PMC9577036 DOI: 10.1016/j.jpi.2022.100115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 06/05/2022] [Accepted: 06/24/2022] [Indexed: 11/04/2022] Open
Abstract
Background Due to lack of annotated pathological images, transfer learning has been the predominant approach in the field of digital pathology. Pre-trained neural networks based on ImageNet database are often used to extract “off-the-shelf” features, achieving great success in predicting tissue types, molecular features, and clinical outcomes, etc. We hypothesize that fine-tuning the pre-trained models using histopathological images could further improve feature extraction, and downstream prediction performance. Methods We used 100 000 annotated H&E image patches for colorectal cancer (CRC) to fine-tune a pre-trained Xception model via a 2-step approach. The features extracted from fine-tuned Xception (FTX-2048) model and Image-pretrained (IMGNET-2048) model were compared through: (1) tissue classification for H&E images from CRC, same image type that was used for fine-tuning; (2) prediction of immune-related gene expression, and (3) gene mutations for lung adenocarcinoma (LUAD). Five-fold cross validation was used for model performance evaluation. Each experiment was repeated 50 times. Findings The extracted features from the fine-tuned FTX-2048 exhibited significantly higher accuracy (98.4%) for predicting tissue types of CRC compared to the “off-the-shelf” features directly from Xception based on ImageNet database (96.4%) (P value = 2.2 × 10−6). Particularly, FTX-2048 markedly improved the accuracy for stroma from 87% to 94%. Similarly, features from FTX-2048 boosted the prediction of transcriptomic expression of immune-related genes in LUAD. For the genes that had significant relationships with image features (P < 0.05, n = 171), the features from the fine-tuned model improved the prediction for the majority of the genes (139; 81%). In addition, features from FTX-2048 improved prediction of mutation for 5 out of 9 most frequently mutated genes (STK11, TP53, LRP1B, NF1, and FAT1) in LUAD. 
Conclusions We proved the concept that fine-tuning the pretrained ImageNet neural networks with histopathology images can produce higher quality features and better prediction performance for not only the same-cancer tissue classification where similar images from the same cancer are used for fine-tuning, but also cross-cancer prediction for gene expression and mutation at patient level.
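The 2-step fine-tuning strategy described above (first train a fresh classification head on frozen pretrained features, then unfreeze and fine-tune everything at a lower learning rate) can be illustrated with a toy NumPy network. Everything below — the data, the tiny tanh "backbone", and the hyperparameters — is invented for illustration and has nothing to do with Xception or the CRC dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 8 features, binary labels from a random linear rule.
X = rng.normal(size=(200, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y, eps=1e-9):
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

# "Pretrained backbone" (stands in for the pretrained network) + fresh head.
W_backbone = 0.5 * rng.normal(size=(8, 16))
w_head = np.zeros(16)

def forward(X):
    h = np.tanh(X @ W_backbone)     # backbone features
    return h, sigmoid(h @ w_head)   # head prediction

# Step 1: freeze the backbone, train only the head.
for _ in range(200):
    h, p = forward(X)
    w_head -= 0.5 * h.T @ (p - y) / len(y)
loss_step1 = bce(forward(X)[1], y)

# Step 2: unfreeze and fine-tune backbone + head at a lower learning rate.
for _ in range(200):
    h, p = forward(X)
    d_out = (p - y) / len(y)                      # dL/d(head logits)
    d_h = np.outer(d_out, w_head) * (1 - h ** 2)  # back through tanh
    w_head -= 0.1 * h.T @ d_out
    W_backbone -= 0.1 * X.T @ d_h
loss_step2 = bce(forward(X)[1], y)
```

After step 2 the training loss drops below the frozen-backbone value, mirroring the paper's observation that adapting the pretrained weights to the target image domain improves the extracted features.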
|
239
|
Alpsoy A, Yavuz A, Elpek GO. Artificial intelligence in pathological evaluation of gastrointestinal cancers. Artif Intell Gastroenterol 2021; 2:141-156. [DOI: 10.35712/aig.v2.i6.141]
Abstract
The integration of artificial intelligence (AI) has shown promising benefits in many fields of diagnostic histopathology, including for gastrointestinal cancers (GCs), such as tumor identification, classification, and prognosis prediction. In parallel, recent evidence suggests that AI may help reduce the workload in gastrointestinal pathology by automatically detecting tumor tissues and evaluating prognostic parameters. In addition, AI seems to be an attractive tool for biomarker/genetic alteration prediction in GC, as it can exploit the massive amount of information contained in visual data that is complex and only partially interpretable by pathologists. From this point of view, it is suggested that advances in AI could lead to revolutionary changes in many fields of pathology. Unfortunately, these findings do not exclude the possibility that there are still many hurdles to overcome before AI applications can be safely and effectively applied in actual pathology practice. These include a broad spectrum of challenges, from needs identification to cost-effectiveness. Indeed, unlike in other disciplines of medicine, no histopathology-based AI application, including in GC, has yet been approved by a regulatory authority or for public reimbursement. The purpose of this review is to present data related to the applications of AI in pathology practice in GC and to present the challenges that need to be overcome for their implementation.
Affiliation(s)
- Anil Alpsoy, Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Aysen Yavuz, Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Gulsum Ozlem Elpek, Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
|
240
|
Development and validation of a radiopathomics model to predict pathological complete response to neoadjuvant chemoradiotherapy in locally advanced rectal cancer: a multicentre observational study. Lancet Digit Health 2021; 4:e8-e17. [PMID: 34952679] [DOI: 10.1016/s2589-7500(21)00215-6]
Abstract
BACKGROUND Accurate prediction of tumour response to neoadjuvant chemoradiotherapy enables personalised perioperative therapy for locally advanced rectal cancer. We aimed to develop and validate an artificial intelligence radiopathomics integrated model to predict pathological complete response in patients with locally advanced rectal cancer using pretreatment MRI and haematoxylin and eosin (H&E)-stained biopsy slides. METHODS In this multicentre observational study, eligible participants who had undergone neoadjuvant chemoradiotherapy followed by radical surgery were recruited, with their pretreatment pelvic MRI (T2-weighted imaging, contrast-enhanced T1-weighted imaging, and diffusion-weighted imaging) and whole slide images of H&E-stained biopsy sections collected for annotation and feature extraction. The RAdioPathomics Integrated preDiction System (RAPIDS) was constructed by machine learning on the basis of three feature sets associated with pathological complete response: radiomics MRI features, pathomics nucleus features, and pathomics microenvironment features from a retrospective training cohort. The accuracy of RAPIDS for the prediction of pathological complete response in locally advanced rectal cancer was verified in two retrospective external validation cohorts and further validated in a multicentre, prospective observational study (ClinicalTrials.gov, NCT04271657). Model performances were evaluated using area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). FINDINGS Between Sept 25, 2009, and Nov 3, 2017, 303 patients were retrospectively recruited in the training cohort, 480 in validation cohort 1, and 150 in validation cohort 2; 100 eligible patients were enrolled in the prospective study between Jan 10 and June 10, 2020. 
RAPIDS had favourable accuracy for the prediction of pathological complete response in the training cohort (AUC 0.868 [95% CI 0.825-0.912]), and in validation cohort 1 (0.860 [0.828-0.892]) and validation cohort 2 (0.872 [0.810-0.934]). In the prospective validation study, RAPIDS had an AUC of 0.812 (95% CI 0.717-0.907), sensitivity of 0.888 (0.728-0.999), specificity of 0.740 (0.593-0.886), NPV of 0.929 (0.862-0.995), and PPV of 0.512 (0.313-0.710). RAPIDS also significantly outperformed single-modality prediction models (AUC 0.630 [0.507-0.754] for the pathomics microenvironment model, 0.716 [0.580-0.852] for the radiomics MRI model, and 0.733 [0.620-0.845] for the pathomics nucleus model; all p<0.0001). INTERPRETATION RAPIDS was able to predict pathological complete response to neoadjuvant chemoradiotherapy based on pretreatment radiopathomics images with high accuracy and robustness and could therefore provide a novel tool to assist in individualised management of locally advanced rectal cancer. FUNDING National Natural Science Foundation of China; Youth Innovation Promotion Association of the Chinese Academy of Sciences.
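The threshold metrics reported above — sensitivity, specificity, PPV, and NPV — all derive from the same 2×2 confusion matrix. A minimal plain-Python illustration (the example counts are invented, not taken from the study):

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, and NPV from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical cohort: 6 responders, 14 non-responders (numbers invented).
y_true = [1] * 6 + [0] * 14
y_pred = [1] * 5 + [0] * 1 + [1] * 4 + [0] * 10
metrics = diagnostic_metrics(y_true, y_pred)
```

Note how, as in the study, PPV can be much lower than sensitivity when the positive class (pathological complete response) is rare: false positives accumulate against a small pool of true positives.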
|
241
|
Domain generalization on medical imaging classification using episodic training with task augmentation. Comput Biol Med 2021; 141:105144. [PMID: 34971982] [DOI: 10.1016/j.compbiomed.2021.105144]
Abstract
Medical imaging datasets usually exhibit domain shift due to variations in scanner vendors, imaging protocols, etc. This raises concerns about the generalization capacity of machine learning models. Domain generalization (DG), which aims to learn a model from multiple source domains such that it can be directly generalized to unseen test domains, seems particularly promising for the medical imaging community. To address DG, model-agnostic meta-learning (MAML) has recently been introduced; it transfers knowledge from previous training tasks to facilitate the learning of novel testing tasks. However, in clinical practice there are usually only a few annotated source domains available, which limits the capacity for training-task generation and thus increases the risk of overfitting to the training tasks. In this paper, we propose a novel DG scheme of episodic training with task augmentation for medical imaging classification. Based on meta-learning, we develop the episodic training paradigm to transfer knowledge from simulated training tasks to the real testing task of DG. Motivated by the limited number of source domains in real-world medical deployment, we address this task-level overfitting by proposing task augmentation, which increases the variety of generated training tasks. With the established learning framework, we further exploit a novel meta-objective to regularize the deep embedding of the training domains. To validate the effectiveness of the proposed method, we perform experiments on histopathological images and abdominal CT images.
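The episodic meta-learning idea the abstract describes — adapt on a simulated training task (support set), then update the meta-parameters on a held-out query set, with sampled tasks perturbed as a stand-in for task augmentation — can be sketched with a first-order MAML-style loop on toy 1-D regression "domains". Everything here is illustrative, not the authors' method:

```python
import random

random.seed(1)

def make_task(slope, n=20):
    """One simulated 'domain': 1-D regression y = slope * x."""
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return xs, [slope * x for x in xs]

def grad_mse(w, xs, ys):
    """Gradient of mean squared error for the model y_hat = w * x."""
    return 2.0 * sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w_meta = 0.0                     # meta-initialisation to be learned
inner_lr, outer_lr = 0.1, 0.05

for episode in range(500):
    # Task augmentation stand-in: jitter the slope of a sampled source task.
    slope = random.choice([1.0, 2.0, 3.0]) + random.gauss(0.0, 0.2)
    x_sup, y_sup = make_task(slope)   # support set -> inner adaptation
    x_qry, y_qry = make_task(slope)   # query set  -> outer meta-update
    w_task = w_meta - inner_lr * grad_mse(w_meta, x_sup, y_sup)
    # First-order MAML: apply the query gradient, taken at the adapted
    # weights, directly to the meta-parameters.
    w_meta -= outer_lr * grad_mse(w_task, x_qry, y_qry)

# w_meta settles near the centre of the task distribution (slope ~2),
# a point from which one inner step adapts quickly to any sampled task.
```

The slope jitter plays the role of task augmentation: with only three "source domains", the perturbation broadens the set of simulated training tasks and reduces overfitting to them.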
|
242
|
Lovejoy CA, Arora A, Buch V, Dayan I. Key considerations for the use of artificial intelligence in healthcare and clinical research. Future Healthc J 2021; 9:75-78. [DOI: 10.7861/fhj.2021-0128]
|
243
|
Zhang C, Gu J, Zhu Y, Meng Z, Tong T, Li D, Liu Z, Du Y, Wang K, Tian J. AI in spotting high-risk characteristics of medical imaging and molecular pathology. Precis Clin Med 2021; 4:271-286. [PMID: 35692858] [PMCID: PMC8982528] [DOI: 10.1093/pcmedi/pbab026]
Abstract
Medical imaging provides a comprehensive perspective and rich information for disease diagnosis. Combined with artificial intelligence technology, medical imaging can be further mined for detailed pathological information. Many studies have shown that the macroscopic imaging characteristics of tumors are closely related to microscopic gene, protein, and molecular changes. To explore the role of artificial intelligence algorithms in the in-depth analysis of medical imaging information, this paper reviews articles published in recent years from three perspectives: medical imaging analysis methods, clinical applications, and the development of medical imaging towards pathological and molecular prediction. We believe that AI-aided medical imaging analysis will contribute extensively to precise and efficient clinical decision-making.
Affiliation(s)
- Chong Zhang, Department of Big Data Management and Application, School of International Economics and Management, Beijing Technology and Business University, Beijing 100048, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Jionghui Gu, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yangyang Zhu, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zheling Meng, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Tong, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Dongyang Li, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zhenyu Liu, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Du, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Kun Wang, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jie Tian, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing 100191, China
|
244
|
Machine learning approaches for classification of colorectal cancer with and without feature selection method on microarray data. Gene Rep 2021. [DOI: 10.1016/j.genrep.2021.101419]
|
245
|
Großerueschkamp F, Jütte H, Gerwert K, Tannapfel A. Advances in Digital Pathology: From Artificial Intelligence to Label-Free Imaging. Visc Med 2021; 37:482-490. [PMID: 35087898] [DOI: 10.1159/000518494]
Abstract
BACKGROUND Digital pathology, in its primary meaning, describes the utilization of computer screens to view scanned histology slides. Digitized tissue sections can be easily shared for a second opinion. In addition, digitization allows tissue image analysis using specialized software to identify and measure events previously observed by a human observer. These tissue-based readouts are highly reproducible and precise. Digital pathology has developed over the years through new technologies. Currently, the most discussed development is the application of artificial intelligence to automatically analyze tissue images. In parallel, new label-free imaging technologies are being developed that image tissues by means of their molecular composition. SUMMARY This review provides a summary of the current state of the art and the future of digital pathology. Developments of the last few years are presented and discussed. In particular, the review provides an outlook on interesting new technologies (e.g., infrared imaging), which allow for deeper understanding and analysis of tissue thin sections beyond conventional histopathology. KEY MESSAGES In digital pathology, mathematical methods are used to analyze images and draw conclusions about diseases and their progression. New innovative methods and techniques (e.g., label-free infrared imaging) will bring significant changes to the field in the coming years.
Affiliation(s)
- Frederik Großerueschkamp, Center for Protein Diagnostics (PRODI), Biospectroscopy, Ruhr University Bochum, Bochum, Germany; Department of Biophysics, Faculty of Biology and Biotechnology, Ruhr University Bochum, Bochum, Germany
- Hendrik Jütte, Center for Protein Diagnostics (PRODI), Biospectroscopy, Ruhr University Bochum, Bochum, Germany; Institute of Pathology, Ruhr University Bochum, Bochum, Germany
- Klaus Gerwert, Center for Protein Diagnostics (PRODI), Biospectroscopy, Ruhr University Bochum, Bochum, Germany; Department of Biophysics, Faculty of Biology and Biotechnology, Ruhr University Bochum, Bochum, Germany
- Andrea Tannapfel, Center for Protein Diagnostics (PRODI), Biospectroscopy, Ruhr University Bochum, Bochum, Germany; Institute of Pathology, Ruhr University Bochum, Bochum, Germany
|
246
|
Zhang B, Yao K, Xu M, Wu J, Cheng C. Deep Learning Predicts EBV Status in Gastric Cancer Based on Spatial Patterns of Lymphocyte Infiltration. Cancers (Basel) 2021; 13:6002. [PMID: 34885112] [PMCID: PMC8656870] [DOI: 10.3390/cancers13236002]
Abstract
EBV infection occurs in around 10% of gastric cancer cases and represents a distinct subtype, characterized by a unique mutation profile, hypermethylation, and overexpression of PD-L1. Moreover, EBV-positive gastric cancer tends to have higher immune infiltration and a better prognosis. EBV infection status in gastric cancer is most commonly determined using PCR and in situ hybridization, but such methods require good nucleic acid preservation. Detection of EBV status from histopathology images may complement PCR and in situ hybridization as a first step of EBV infection assessment. Here, we developed a deep learning-based algorithm to directly predict EBV infection in gastric cancer from H&E-stained histopathology slides. Our model can predict EBV infection not only from tumor regions but also from normal regions with potential changes induced by adjacent EBV+ regions within each H&E slide. Furthermore, in cohorts with zero EBV abundance, a significant difference in immune infiltration between high and low EBV score samples was observed, consistent with the immune infiltration difference observed between EBV-positive and EBV-negative samples. We therefore hypothesized that our model's prediction of EBV infection is partially driven by the spatial information of immune cell composition, which was supported by mostly positive local correlations between the EBV score and immune infiltration in both tumor and normal regions across all H&E slides. Finally, EBV scores calculated from our model were found to be significantly associated with prognosis. This framework can be readily applied to develop interpretable models for the prediction of virus infection across cancers.
Affiliation(s)
- Baoyi Zhang, Department of Chemical and Biomolecular Engineering, Rice University, Houston, TX 77030, USA
- Kevin Yao, Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
- Min Xu, Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Computer Vision Department, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi 144534, United Arab Emirates
- Jia Wu, Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Chao Cheng, Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA; Dan L. Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA; Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX 77030, USA
|
247
|
Deep Learning Approaches to Colorectal Cancer Diagnosis: A Review. Appl Sci (Basel) 2021. [DOI: 10.3390/app112210982]
Abstract
Unprecedented breakthroughs in the development of graphical processing systems have led to great potential for deep learning (DL) algorithms in analyzing visual anatomy from high-resolution medical images. Recently, in digital pathology, the use of DL technologies has drawn a substantial amount of attention for use in the effective diagnosis of various cancer types, especially colorectal cancer (CRC), which is regarded as one of the dominant causes of cancer-related deaths worldwide. This review provides an in-depth perspective on recently published research articles on DL-based CRC diagnosis and prognosis. Overall, we provide a retrospective synopsis of simple image-processing-based and machine learning (ML)-based computer-aided diagnosis (CAD) systems, followed by a comprehensive appraisal of use cases with different types of state-of-the-art DL algorithms for detecting malignancies. We first list multiple standardized and publicly available CRC datasets from two imaging types: colonoscopy and histopathology. Secondly, we categorize the studies based on the different types of CRC detected (tumor tissue, microsatellite instability, and polyps), and we assess the data preprocessing steps and the adopted DL architectures before presenting the optimum diagnostic results. CRC diagnosis with DL algorithms is still in the preclinical phase, and therefore, we point out some open issues and provide some insights into the practicability and development of robust diagnostic systems in future health care and oncology.
|
248
|
Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. Comput Math Methods Med 2021; 2021:9025470. [PMID: 34754327] [PMCID: PMC8572604] [DOI: 10.1155/2021/9025470]
Abstract
Deep learning (DL) is a branch of machine learning and artificial intelligence that has been applied to many areas in different domains, such as health care and drug design. Cancer prognosis estimates the likely course of the disease and provides survival estimates for subjects. An accurate and timely diagnostic and prognostic decision will greatly benefit cancer subjects. DL has emerged as a technology of choice due to the availability of high computational resources. The main components in a standard computer-aided diagnosis (CAD) system are preprocessing; feature recognition, extraction, and selection; categorization; and performance assessment. Reduction of the costs associated with sequencing systems offers a myriad of opportunities for building precise models for cancer diagnosis and prognosis prediction. In this survey, we provide a summary of current works in which DL has helped to determine the best models for cancer diagnosis and prognosis prediction tasks. DL is a generic approach that requires minimal data manipulation and achieves better results while working with enormous volumes of data. Our aims are to scrutinize the influence of DL systems using histopathology images, present a summary of state-of-the-art DL methods, and give directions to future researchers to refine the existing methods.
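The standard CAD components listed above (preprocessing, feature selection, categorization, performance assessment) can be strung together in a few lines. The synthetic data and the nearest-centroid classifier below are arbitrary placeholders chosen only to make each stage concrete:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "image feature" matrix: 100 subjects x 50 features; only the
# first 5 features carry class signal (all sizes here are illustrative).
n, d, k = 100, 50, 5
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :k] += 2.0 * y[:, None]

# 1) Preprocessing: z-score every feature.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 2) Feature selection: keep the k features with the largest class-mean gap.
gap = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
selected = np.argsort(gap)[-k:]

# 3) Categorization: nearest-centroid classifier on a 70/30 split.
train, test = np.arange(70), np.arange(70, n)
Xs = X[:, selected]
c0 = Xs[train][y[train] == 0].mean(axis=0)
c1 = Xs[train][y[train] == 1].mean(axis=0)
d0 = np.linalg.norm(Xs[test] - c0, axis=1)
d1 = np.linalg.norm(Xs[test] - c1, axis=1)
pred = (d1 < d0).astype(int)

# 4) Performance assessment: plain accuracy on the held-out split.
accuracy = float((pred == y[test]).mean())
```

In a DL-based CAD system the hand-crafted selection and centroid steps are replaced by learned representations, but the surrounding pipeline — preprocess, extract/select, classify, evaluate — stays the same.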
|
249
|
Brockmoeller S, Echle A, Ghaffari Laleh N, Eiholm S, Malmstrøm ML, Plato Kuhlmann T, Levic K, Grabsch HI, West NP, Saldanha OL, Kouvidi K, Bono A, Heij LR, Brinker TJ, Gögenür I, Quirke P, Kather JN. Deep Learning identifies inflamed fat as a risk factor for lymph node metastasis in early colorectal cancer. J Pathol 2021; 256:269-281. [PMID: 34738636] [DOI: 10.1002/path.5831]
Abstract
The spread of early-stage (T1 and T2) adenocarcinomas to loco-regional lymph nodes is a key event in disease progression of colorectal cancer (CRC). The cellular mechanisms behind this event are not completely understood and existing predictive biomarkers are imperfect. Here, we used an end-to-end Deep Learning algorithm to identify risk factors for lymph node metastasis (LNM) status in digitized histopathology slides of the primary CRC and its surrounding tissue. In two large population-based cohorts, we show that this system can predict the presence of more than one LNM in pT2 CRC patients with an area under the receiver operating characteristic curve (AUROC) of 0.733 (0.67-0.758) and patients with any LNM with an AUROC of 0.711 (0.597-0.797). Similarly, in pT1 CRC patients, the presence of more than one LNM or any LNM was predictable with an AUROC of 0.733 (0.644-0.778) and 0.567 (0.542-0.597), respectively. Based on these findings, we used the Deep Learning system to guide human pathology experts towards highly predictive regions for LNM in the whole slide images. This hybrid human observer and Deep Learning approach identified inflamed adipose tissue as the feature most predictive of LNM presence. Our study is a first proof of concept that artificial intelligence (AI) systems may be able to discover potentially new biological mechanisms in cancer progression. Our Deep Learning algorithm is publicly available and can be used for biomarker discovery in any disease setting.
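AUROC, the metric used throughout the study above, equals the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal implementation (the example labels and scores are invented):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs ranked correctly (ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: 3 node-positive and 4 node-negative cases.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
value = auroc(labels, scores)
```

This rank-based view explains why an AUROC of 0.567 (the pT1 any-LNM result above) is only slightly better than coin-flipping: barely more than half of positive/negative pairs are ordered correctly.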
Affiliation(s)
- Scarlet Brockmoeller, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Amelie Echle, Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Susanne Eiholm, Department of Pathology, Zealand University Hospital, University of Copenhagen, Roskilde, Denmark
- Katarina Levic, Department of Surgery, Herlev University Hospital, Copenhagen, Denmark
- Heike Irmgard Grabsch, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Department of Pathology, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht, The Netherlands
- Nicholas P West, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Katerina Kouvidi, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Aurora Bono, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Lara R Heij, Department of Pathology, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht, The Netherlands; Institute of Pathology, University Hospital RWTH Aachen, Aachen, Germany; Department of Surgery and Transplantation, University Hospital RWTH Aachen, Aachen, Germany
- Titus J Brinker, Digital Biomarkers for Oncology Group, National Center for Tumour Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Ismayil Gögenür, Department of Surgery, Zealand University Hospital, University of Copenhagen, Køge, Denmark; Gastrounit - Surgical Division, Center for Surgical Research, Copenhagen University Hospital Hvidovre, Copenhagen, Denmark
- Philip Quirke, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Jakob Nikolas Kather, Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center of Tumour Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
|
250
|
Yu G, Sun K, Xu C, Shi XH, Wu C, Xie T, Meng RQ, Meng XH, Wang KS, Xiao HM, Deng HW. Accurate recognition of colorectal cancer with semi-supervised deep learning on pathological images. Nat Commun 2021; 12:6311. [PMID: 34728629] [PMCID: PMC8563931] [DOI: 10.1038/s41467-021-26643-8]
Abstract
Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean teacher architecture using 13,111 whole slide images of colorectal cancer from 8803 subjects from 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) at patch-level diagnosis (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnosis (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to that of human pathologists (average AUC: 0.969). Evaluation on 15,000 lung and 294,912 lymph node images also confirms that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation burden and has great potential for effectively building expert-level pathological artificial intelligence platforms in practice.
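The mean teacher architecture this study builds on keeps two copies of the model: a student updated by gradient descent on a supervised loss plus a consistency loss against the teacher's predictions on (perturbed) unlabeled data, and a teacher whose weights are an exponential moving average (EMA) of the student's. A toy logistic-regression sketch in NumPy — all data, model sizes, and hyperparameters are invented, far from the paper's CNN:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linearly separable problem: 20 labeled points, 500 unlabeled points.
w_true = np.array([2.0, -1.0])
X_lab = rng.normal(size=(20, 2))
y_lab = (X_lab @ w_true > 0).astype(float)
X_unl = rng.normal(size=(500, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_student = np.zeros(2)
w_teacher = np.zeros(2)
lr, ema_decay, lam = 0.5, 0.99, 1.0

for step in range(500):
    # Supervised gradient (logistic loss) on the small labeled set.
    p = sigmoid(X_lab @ w_student)
    g_sup = X_lab.T @ (p - y_lab) / len(y_lab)

    # Consistency gradient: the student, fed perturbed inputs, should match
    # the teacher's predictions on the unlabeled pool (MSE consistency cost).
    X_noisy = X_unl + 0.1 * rng.normal(size=X_unl.shape)
    p_s = sigmoid(X_noisy @ w_student)
    p_t = sigmoid(X_unl @ w_teacher)
    g_con = X_noisy.T @ ((p_s - p_t) * p_s * (1 - p_s)) / len(X_unl)

    w_student -= lr * (g_sup + lam * g_con)
    # The teacher is an exponential moving average of the student weights.
    w_teacher = ema_decay * w_teacher + (1 - ema_decay) * w_student

# The smoothed teacher should classify the unlabeled pool well.
acc = float(((sigmoid(X_unl @ w_teacher) > 0.5) == (X_unl @ w_true > 0)).mean())
```

The EMA teacher changes more slowly than the student, so the consistency term acts as a self-ensembling regularizer that lets the many unlabeled examples shape the decision boundary, which is how the paper substitutes unlabeled patches for most of the annotation effort.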
Affiliation(s)
- Gang Yu, Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Kai Sun, Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Chao Xu, Department of Biostatistics and Epidemiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- Xing-Hua Shi, Department of Computer & Information Sciences, College of Science and Technology, Temple University, Philadelphia, PA, 19122, USA
- Chong Wu, Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
- Ting Xie, Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Run-Qi Meng, Electronic Information Science and Technology, School of Physics and Electronics, Central South University, 410083, Changsha, Hunan, China
- Xiang-He Meng, Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Kuan-Song Wang, Department of Pathology, Xiangya Hospital, School of Basic Medical Science, Central South University, 410078, Changsha, Hunan, China
- Hong-Mei Xiao, Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Hong-Wen Deng, Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China; Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, New Orleans, LA, 70112, USA
|