1. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments the development of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey review paper and access to the original model cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2. McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. [PMID: 38704465] [PMCID: PMC11069583] [DOI: 10.1038/s41746-024-01106-8]
Abstract
Ensuring diagnostic performance of artificial intelligence (AI) before introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported over recent years. The aim of this work is to examine the diagnostic accuracy of AI in digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. Studies were from a range of countries, including over 152,000 whole slide images (WSIs), representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design and 99% of studies identified for inclusion had at least one area at high or unclear risk of bias or applicability concerns. Details on selection of cases, division of model development and validation data and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the reported areas but requires more rigorous evaluation of its performance.
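The pooled sensitivity and specificity above come from a bivariate random-effects meta-analysis. As a rough, simplified illustration of random-effects pooling of study-level accuracy estimates, the Python sketch below applies univariate DerSimonian-Laird pooling to logit-transformed sensitivities and specificities; the `pool_logit` helper and the per-study counts are hypothetical, and the bivariate correlation between sensitivity and specificity used in the review is deliberately not modelled here.

```python
import numpy as np
from scipy.special import expit, logit

def pool_logit(events, totals):
    """Univariate DerSimonian-Laird random-effects pooling of logit proportions.

    events, totals: per-study counts, e.g. TP and TP + FN for sensitivity.
    Returns the pooled proportion on the 0-1 scale.
    """
    events = np.asarray(events, dtype=float) + 0.5      # continuity correction
    totals = np.asarray(totals, dtype=float) + 1.0
    y = logit(events / totals)                          # per-study logit proportion
    v = 1.0 / events + 1.0 / (totals - events)          # approximate variance of the logit
    w = 1.0 / v                                         # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                  # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                           # random-effects weights
    return float(expit(np.sum(w_star * y) / np.sum(w_star)))

# Hypothetical per-study 2x2 counts (not taken from the review)
tp, fn = np.array([90, 45, 120]), np.array([5, 3, 10])  # sensitivity = TP / (TP + FN)
tn, fp = np.array([80, 60, 150]), np.array([6, 4, 12])  # specificity = TN / (TN + FP)

print("pooled sensitivity:", round(pool_logit(tp, tp + fn), 3))
print("pooled specificity:", round(pool_logit(tn, tn + fp), 3))
```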
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
3. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516] [DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval, and mining. Medical image data mainly include electronic health record data, gene information data, and similar sources. Although intelligent imaging offers advantages over traditional methods that rely on handcrafted features, medical image analysis remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts underlying the relevant methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of how these methods are applied to various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods, summarized by application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by anatomical and clinical area, such as pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebral, and musculoskeletal applications. A critical discussion of open challenges and directions for future research is finally provided; in particular, strong algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
4. Song J, Im S, Lee SH, Jang HJ. Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images. Diagnostics (Basel) 2022; 12:2623. [PMID: 36359467] [PMCID: PMC9689570] [DOI: 10.3390/diagnostics12112623]
Abstract
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes; therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, the endometrial versus endocervical origin of an adenocarcinoma should also be distinguished. Although various immunohistochemical markers can aid this discrimination, there is no definitive marker. We therefore tested the feasibility of deep learning (DL)-based classification of the subtypes of cervical and endometrial cancers, and of the site of origin of adenocarcinomas, from whole slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification, and the average of the patch classification results was used for the final slide-level classification. The areas under the receiver operating characteristic curves (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrate the feasibility of DL-based classifiers for discriminating cancers of the cervix and uterus. We expect the performance of the classifiers to improve further as WSI data accumulate; the information from the classifiers can then be integrated with other data for more precise discrimination of cervical and endometrial cancers.
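As described above, slide-level calls are made by averaging patch-level classification results. The sketch below illustrates that aggregation step and the AUROC evaluation with scikit-learn; the slide IDs, probabilities, and labels are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_level_scores(patch_probs_per_slide):
    """Average patch-level subtype probabilities into one score per slide.

    patch_probs_per_slide: dict mapping slide_id -> 1-D array of patch probabilities
    (output of a patch classifier run over all 360x360 patches of that slide).
    """
    return {sid: float(np.mean(p)) for sid, p in patch_probs_per_slide.items()}

# Hypothetical predictions for three slides (values are made up)
patch_probs = {
    "slide_A": np.array([0.91, 0.87, 0.95, 0.80]),
    "slide_B": np.array([0.12, 0.25, 0.08]),
    "slide_C": np.array([0.55, 0.61, 0.48, 0.70, 0.66]),
}
labels = {"slide_A": 1, "slide_B": 0, "slide_C": 1}   # ground-truth subtype per slide

scores = slide_level_scores(patch_probs)
y_true = [labels[s] for s in scores]
y_score = [scores[s] for s in scores]
print("slide-level AUROC:", roc_auc_score(y_true, y_score))
```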
Affiliation(s)
- JaeYen Song
- Department of Obstetrics and Gynecology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Soyoung Im
- Department of Hospital Pathology, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 16247, Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
5. Fu B, Zhang M, He J, Cao Y, Guo Y, Wang R. StoHisNet: A hybrid multi-classification model with CNN and Transformer for gastric pathology images. Comput Methods Programs Biomed 2022; 221:106924. [PMID: 35671603] [DOI: 10.1016/j.cmpb.2022.106924]
Abstract
BACKGROUND AND OBJECTIVES Gastric cancer has high morbidity and mortality compared to other cancers. Accurate histopathological diagnosis has great significance for the treatment of gastric cancer. With the development of artificial intelligence, many researchers have applied deep learning to the classification of gastric cancer pathological images. However, most studies have used binary classification on pathological images of gastric cancer, which is insufficient with respect to clinical requirements. Therefore, we proposed a multi-classification method based on deep learning with more practical clinical value. METHODS In this study, we developed a novel multi-scale model called StoHisNet based on the Transformer and the convolutional neural network (CNN) for the multi-classification task. StoHisNet adopts the Transformer to learn global features, alleviating the inherent limitations of the convolution operation. The proposed StoHisNet can classify publicly available pathological images of a gastric dataset into four categories: normal tissue, tubular adenocarcinoma, mucinous adenocarcinoma, and papillary adenocarcinoma. RESULTS The accuracy, F1-score, recall, and precision of the proposed model on the public gastric pathological image dataset were 94.69%, 94.96%, 94.95%, and 94.97%, respectively. We conducted additional experiments using two other public datasets to verify the generalization ability of the model. On the BreakHis dataset, our model performed better than other classification models, with an accuracy of 91.64%. Similarly, on the four-class classification task on the Endometrium dataset, our model showed better classification ability than the others, with an accuracy of 81.74%. These experiments showed that the proposed model has excellent classification and generalization ability. CONCLUSION The StoHisNet model achieved high performance in the multi-classification of gastric histopathological images and showed strong generalization ability on other pathological datasets. This model may be a potential tool to assist pathologists in the analysis of gastric histopathological images.
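The model described above combines a CNN with a Transformer so that convolutional features gain global context before four-way classification. Below is a minimal PyTorch sketch of that general CNN-plus-Transformer pattern; it is not the published StoHisNet architecture, and the layer sizes, token grid, and input resolution are placeholder assumptions.

```python
import torch
import torch.nn as nn

class HybridCnnTransformer(nn.Module):
    """Toy CNN + Transformer classifier in the spirit of the hybrid design described above."""

    def __init__(self, num_classes: int = 4, dim: int = 128, nhead: int = 4, depth: int = 2):
        super().__init__()
        # Convolutional stem: local texture features, downsampled to a 14x14 feature grid
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(14),
        )
        # Transformer encoder: global self-attention over the grid of feature tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.stem(x)                         # (B, dim, 14, 14)
        tokens = feats.flatten(2).transpose(1, 2)    # (B, 196, dim) sequence of spatial tokens
        tokens = self.encoder(tokens)                # global context across spatial positions
        return self.head(tokens.mean(dim=1))         # mean-pool tokens, then 4-way logits

# Quick shape check with a dummy batch of RGB tiles
model = HybridCnnTransformer()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 4])
```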
Affiliation(s)
- Bangkang Fu
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Mudan Zhang
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Junjie He
- College of Computer Science and Technology, Guizhou University, Guizhou 550025, China
- Ying Cao
- Medical College, Guizhou University, Guizhou 550000, China
- Yuchen Guo
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100192, China
- Rongpin Wang
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
6. Zhao Y, Hu B, Wang Y, Yin X, Jiang Y, Zhu X. Identification of gastric cancer with convolutional neural networks: a systematic review. Multimed Tools Appl 2022; 81:11717-11736. [PMID: 35221775] [PMCID: PMC8856868] [DOI: 10.1007/s11042-022-12258-8]
Abstract
The identification of diseases is inseparable from artificial intelligence. As an important branch of artificial intelligence, convolutional neural networks play an important role in the identification of gastric cancer. We conducted a systematic review to summarize the current applications of convolutional neural networks in gastric cancer identification. Original articles published in the Embase, Cochrane Library, PubMed, and Web of Science databases were systematically retrieved according to relevant keywords, and data were extracted from the published papers. A total of 27 articles on the identification of gastric cancer using medical images were retrieved; 19 applied convolutional neural networks to endoscopic images and 8 to pathological images. Sixteen studies explored the performance of gastric cancer detection, 7 explored classification, 2 reported segmentation, and 2 analyzed the delineation of gastric cancer margins. The network architectures involved included AlexNet, ResNet, VGG, Inception, DenseNet, and DeepLab, among others. Reported accuracies ranged from 77.3% to 98.7%. Systems based on convolutional neural networks have shown good performance in the identification of gastric cancer, and artificial intelligence is expected to provide more accurate information and efficient judgments for doctors diagnosing diseases in clinical work.
Affiliation(s)
- Yuxue Zhao
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Bo Hu
- Department of Thoracic Surgery, Qingdao Municipal Hospital, Qingdao, China
- Ying Wang
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Xiaomeng Yin
- Pediatrics Intensive Care Unit, Qingdao Municipal Hospital, Qingdao, China
- Yuanyuan Jiang
- International Medical Services, Qilu Hospital of Shandong University, Jinan, China
- Xiuli Zhu
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
7. Jang HJ, Lee A, Kang J, Song IH, Lee SH. Prediction of genetic alterations from gastric cancer histopathology images using a fully automated deep learning approach. World J Gastroenterol 2021; 27:7687-7704. [PMID: 34908807] [PMCID: PMC8641056] [DOI: 10.3748/wjg.v27.i44.7687]
Abstract
BACKGROUND Studies correlating specific genetic mutations and treatment response are ongoing to establish an effective treatment strategy for gastric cancer (GC). To facilitate this research, a cost- and time-effective method to analyze mutational status is necessary. Deep learning (DL) has been successfully applied to analyze hematoxylin and eosin (H and E)-stained tissue slide images. AIM To test the feasibility of DL-based classifiers for frequently occurring mutations using H and E-stained GC tissue whole slide images (WSIs). METHODS From the GC dataset of The Cancer Genome Atlas (TCGA-STAD), wild-type/mutation classifiers for the CDH1, ERBB2, KRAS, PIK3CA, and TP53 genes were trained on 360 × 360-pixel patches of tissue images. RESULTS The area under the curve (AUC) for the receiver operating characteristic (ROC) curves ranged from 0.727 to 0.862 for the TCGA frozen WSIs and from 0.661 to 0.858 for the TCGA formalin-fixed paraffin-embedded (FFPE) WSIs. The performance of the classifiers could be improved by adding a new FFPE WSI training dataset from our institute. Classifiers trained for mutation prediction in colorectal cancer completely failed to predict mutational status in GC, indicating that DL-based mutation classifiers do not transfer between different cancers. CONCLUSION This study concluded that DL can predict genetic mutations in H and E-stained tissue slides when classifiers are trained with appropriate tissue data.
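The classifiers above operate on 360 × 360-pixel patches tiled from whole-slide images. A minimal patch-extraction sketch using the OpenSlide library is shown below; the slide filename, the use of pyramid level 0, the non-overlapping stride, and the crude white-background filter are assumptions for illustration rather than the authors' preprocessing pipeline.

```python
import numpy as np
import openslide

def iter_tissue_patches(slide_path, patch_size=360, level=0, tissue_frac=0.5):
    """Yield (x, y, RGB array) tiles whose non-background fraction exceeds tissue_frac.

    Background is crudely detected as near-white pixels; a real pipeline would add
    a proper tissue mask, stain normalization, and magnification matching (e.g. 20x).
    """
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.level_dimensions[level]
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            # read_region expects level-0 coordinates; with level=0 they coincide
            tile = slide.read_region((x, y), level, (patch_size, patch_size)).convert("RGB")
            arr = np.asarray(tile)
            if np.mean(arr.mean(axis=2) < 220) >= tissue_frac:   # keep mostly-tissue tiles
                yield x, y, arr

# Usage (path and classifier are hypothetical):
# for x, y, patch in iter_tissue_patches("TCGA-STAD-example.svs"):
#     prob_mutant = mutation_classifier(patch)
```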
Affiliation(s)
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Ahwon Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Jun Kang
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- In Hye Song
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
8. Jang HJ, Song IH, Lee SH. Deep Learning for Automatic Subclassification of Gastric Carcinoma Using Whole-Slide Histopathology Images. Cancers (Basel) 2021; 13:3811. [PMID: 34359712] [PMCID: PMC8345042] [DOI: 10.3390/cancers13153811]
Abstract
Histomorphologic types of gastric cancer (GC) have significant prognostic value that should be considered during treatment planning. Because the thorough quantitative review of a tissue slide is a laborious task for pathologists, deep learning (DL) can be a useful tool to support the pathologic workflow. In the present study, a fully automated approach was applied to distinguish differentiated/undifferentiated and non-mucinous/mucinous tumor types in GC tissue whole-slide images from The Cancer Genome Atlas (TCGA) stomach adenocarcinoma dataset (TCGA-STAD). By classifying small patches of tissue images into differentiated/undifferentiated and non-mucinous/mucinous tumor tissues, the relative proportion of GC tissue subtypes can be easily quantified, and the distribution of different tissue subtypes can be clearly visualized. The patch-level areas under the receiver operating characteristic curves for the differentiated/undifferentiated and non-mucinous/mucinous classifiers were 0.932 and 0.979, respectively. We also validated the classifiers on our own GC datasets and confirmed that their generalizability is excellent. The results indicate that the DL-based tissue classifier could be a useful tool for the quantitative analysis of cancer tissue slides. By combining DL-based classifiers for various molecular and morphologic variations in tissue slides, the heterogeneity of tumor tissues can be unveiled more efficiently.
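The quantification step mentioned above reduces to counting patch-level predictions per subtype within a slide. A short sketch is given below; the label names and counts are illustrative only.

```python
from collections import Counter

def subtype_proportions(patch_labels):
    """Relative proportion of each predicted tissue subtype within one slide."""
    counts = Counter(patch_labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical patch predictions for a single slide
preds = ["differentiated"] * 120 + ["undifferentiated"] * 60 + ["mucinous"] * 20
print(subtype_proportions(preds))
# {'differentiated': 0.6, 'undifferentiated': 0.3, 'mucinous': 0.1}
```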
Affiliation(s)
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- In-Hye Song
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Sung-Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
9. A State-of-the-Art Review for Gastric Histopathology Image Analysis Approaches and Future Development. Biomed Res Int 2021; 2021:6671417. [PMID: 34258279] [PMCID: PMC8257332] [DOI: 10.1155/2021/6671417]
Abstract
Gastric cancer is a common and deadly cancer worldwide. The gold standard for the detection of gastric cancer is histological examination by pathologists, where Gastric Histopathological Image Analysis (GHIA) contributes significant diagnostic information. The histopathological images of gastric cancer contain sufficient characterization information, which plays a crucial role in the diagnosis and treatment of gastric cancer. In order to improve the accuracy and objectivity of GHIA, Computer-Aided Diagnosis (CAD) has been widely used in the histological image analysis of gastric cancer. This review summarizes CAD techniques for pathological images of gastric cancer: it first summarizes image preprocessing methods, then introduces feature extraction methods, and finally surveys existing segmentation and classification techniques. These techniques are systematically introduced and analyzed for the convenience of future researchers.
10. Lee SH, Song IH, Jang HJ. Feasibility of deep learning-based fully automated classification of microsatellite instability in tissue slides of colorectal cancer. Int J Cancer 2021; 149:728-740. [PMID: 33851412] [DOI: 10.1002/ijc.33599]
Abstract
High-level microsatellite instability (MSI-H) occurs in about 15% of sporadic colorectal cancers (CRCs) and is an important predictive marker for response to immune checkpoint inhibitors. To test the feasibility of a deep learning (DL)-based classifier as a screening tool for MSI status, we built a fully automated DL-based MSI classifier using pathology whole-slide images (WSIs) of CRCs. On small image patches of The Cancer Genome Atlas (TCGA) CRC WSI dataset, tissue/non-tissue, normal/tumor, and MSS/MSI-H classifiers were applied sequentially for the fully automated prediction of MSI status. The classifiers were also tested on an independent cohort. Furthermore, to test how expanding the training data affects the performance of the DL-based classifier, an additional classifier trained on both the TCGA and external datasets was tested. The areas under the receiver operating characteristic curves were 0.892 and 0.972 for the TCGA and external datasets, respectively, with the classifier trained on both datasets. The performance of the DL-based classifier was much better than that of previously reported histomorphology-based methods. We speculate that about 40% of CRC slides could be screened for MSI status by the DL-based classifier without molecular testing. These results demonstrate that the DL-based method has potential as a screening tool to discriminate molecular alterations in tissue slides.
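The fully automated pipeline above chains three patch-level classifiers: tissue/non-tissue, then normal/tumor, then MSS/MSI-H. The sketch below shows that cascade and a simple slide-level MSI-H fraction; the classifier callables, the 0.5 thresholds, and the random "patches" are placeholders, not the study's implementation.

```python
import numpy as np

def msi_fraction(patches, tissue_clf, tumor_clf, msi_clf, threshold=0.5):
    """Apply the three-stage cascade patch by patch and return the MSI-H fraction
    among tumor patches (a slide-level score that could be thresholded for screening).

    Each *_clf is assumed to map one patch to a probability in [0, 1].
    """
    msi_votes = []
    for patch in patches:
        if tissue_clf(patch) < threshold:                  # stage 1: discard non-tissue/background
            continue
        if tumor_clf(patch) < threshold:                   # stage 2: discard normal tissue
            continue
        msi_votes.append(msi_clf(patch) >= threshold)      # stage 3: MSS vs MSI-H
    return float(np.mean(msi_votes)) if msi_votes else 0.0

# Toy stand-ins for trained classifiers (real ones would be CNNs over image patches)
rng = np.random.default_rng(0)
patches = [rng.random((360, 360, 3)) for _ in range(10)]
fake_clf = lambda p: float(p.mean())
print("MSI-H patch fraction:", msi_fraction(patches, fake_clf, fake_clf, fake_clf))
```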
Affiliation(s)
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary's Hospital, Seoul, South Korea
- In Hye Song
- Department of Hospital Pathology, Seoul St. Mary's Hospital, Seoul, South Korea
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul, South Korea
11. Generalizability of Deep Learning System for the Pathologic Diagnosis of Various Cancers. Appl Sci (Basel) 2021. [DOI: 10.3390/app11020808]
Abstract
Deep learning (DL)-based approaches in tumor pathology help to overcome the limitations of subjective visual examination by pathologists and improve diagnostic accuracy and objectivity. However, it is unclear how a DL system trained to discriminate normal/tumor tissues in a specific cancer would perform on other tumor types. Herein, we cross-validated DL-based normal/tumor classifiers separately trained on tissue slides of cancers from the bladder, lung, colon and rectum, stomach, bile duct, and liver. Furthermore, we compared the differences between classifiers trained on frozen or formalin-fixed paraffin-embedded (FFPE) tissues. The area under the curve (AUC) for the receiver operating characteristic (ROC) curve ranged from 0.982 to 0.999 when tissues were analyzed by classifiers trained on the same tissue preparation modality and cancer type. However, the AUC could drop to 0.476 and 0.439 when classifiers trained on different tissue modalities and cancer types were applied. Overall, optimal performance could be achieved only when tissue slides were analyzed by classifiers trained on the same preparation modality and cancer type.
12. Jang HJ, Lee A, Kang J, Song IH, Lee SH. Prediction of clinically actionable genetic alterations from colorectal cancer histopathology images using deep learning. World J Gastroenterol 2020; 26:6207-6223. [PMID: 33177794] [PMCID: PMC7596644] [DOI: 10.3748/wjg.v26.i40.6207]
Abstract
BACKGROUND Identifying genetic mutations in cancer patients has become increasingly important because distinctive mutational patterns can be very informative for determining the optimal therapeutic strategy. Recent studies have shown that deep learning-based molecular cancer subtyping can be performed directly from standard hematoxylin and eosin (H&E) sections in diverse tumors, including colorectal cancers (CRCs). Since H&E-stained tissue slides are ubiquitously available, mutation prediction from cancer pathology images can be a time- and cost-effective complementary method for personalized treatment. AIM To predict frequently occurring actionable mutations from H&E-stained CRC whole-slide images (WSIs) with deep learning-based classifiers. METHODS A total of 629 CRC patients from The Cancer Genome Atlas (TCGA-COAD and TCGA-READ) and 142 CRC patients from Seoul St. Mary's Hospital (SMH) were included. Based on the mutation frequencies in the TCGA and SMH datasets, the APC, KRAS, PIK3CA, SMAD4, and TP53 genes were chosen for the study. The classifiers were trained with 360 × 360-pixel patches of tissue images. The receiver operating characteristic (ROC) curves and areas under the curves (AUCs) for all the classifiers are presented. RESULTS The AUCs for the ROC curves ranged from 0.693 to 0.809 for the TCGA frozen WSIs and from 0.645 to 0.783 for the TCGA formalin-fixed paraffin-embedded WSIs. The prediction performance could be enhanced by expanding the datasets: when the classifiers were trained with both TCGA and SMH data, performance improved. CONCLUSION APC, KRAS, PIK3CA, SMAD4, and TP53 mutations can be predicted from H&E pathology images using deep learning-based classifiers, demonstrating the potential for deep learning-based mutation prediction in CRC tissue slides.
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, Department of Biomedicine and Health Sciences, Catholic Neuroscience Institute, The Catholic University of Korea, Seoul 06591, South Korea
- Ahwon Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Jun Kang
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- In Hye Song
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea