1. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608 PMCID: PMC10900832 DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that develops computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the directions and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We examine this cycle from the perspectives of data-centric, model-centric, and application-centric problems. Finally, we sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2. Akram F, de Bruyn DP, van den Bosch QCC, Trandafir TE, van den Bosch TPP, Verdijk RM, de Klein A, Kiliç E, Stubbs AP, Brosens E, von der Thüsen JH. Prediction of molecular subclasses of uveal melanoma by deep learning using routine haematoxylin-eosin-stained tissue slides. Histopathology 2024. [PMID: 38952117 DOI: 10.1111/his.15271]
Abstract
AIMS Uveal melanoma has a high propensity to metastasize. Prognosis is associated with specific driver mutations and copy number variations, which can only be obtained after genetic testing. In this study we evaluated the efficacy of patient outcome prediction using deep learning on haematoxylin and eosin (HE)-stained primary uveal melanoma slides in comparison to molecular testing. METHODS In this retrospective study, 113 patients with uveal melanoma from the Erasmus Medical Centre who underwent enucleation between 1993 and 2020 had tumour tissue analysed for molecular classification. Routine HE-stained slides were scanned to obtain whole-slide images (WSI). After annotation of regions of interest, tiles of 1024 × 1024 pixels were extracted at a magnification of 40×. An ablation study was carried out to select the best-performing of three state-of-the-art deep-learning models (EfficientNet, Vision Transformer, and Swin Transformer). RESULTS Models were trained on a training cohort (n = 40), tuned on a validation cohort (n = 20), and finally evaluated on a test cohort (n = 48). A k-fold cross-validation (k = 3) of the validation and test cohorts (n = 113 across three classes: BAP1, SF3B1, EIF1AX) identified the Swin Transformer as the best-performing deep-learning model for predicting molecular subclasses from HE stains. The model achieved an accuracy of 0.83 ± 0.09 on the validation cohort and 0.75 ± 0.04 on the test cohort. Within the subclasses, the model correctly predicted 70% of BAP1-mutated, 61% of SF3B1-mutated and 80% of EIF1AX-mutated UM in the test set. CONCLUSIONS This study showcases the potential of deep learning for predicting molecular subclasses in a multiclass manner from HE-stained WSI. This development holds promise for advanced prognostication of UM patients without the need for molecular or immunohistochemical testing. Additionally, this study suggests that there are distinct histopathological features per subclass: BAP1 classification relied mainly on epithelioid cellular morphology, whereas the feature distinguishing EIF1AX from SF3B1 remains unknown.
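The 3-fold evaluation protocol and the "mean ± standard deviation" accuracy reporting above can be sketched in outline. The toy labels and the majority-class baseline below are hypothetical stand-ins for the WSI tiles and the Swin Transformer; only the fold mechanics are meant to be illustrative:

```python
import numpy as np

# Hypothetical stand-in data: 30 "slides" over 3 molecular classes.
# A real pipeline would use WSI tile features and a trained classifier.
rng = np.random.default_rng(0)
y = np.array([0, 1, 2] * 10)

def majority_baseline(train_labels):
    """Predict the most frequent training class for every test sample."""
    vals, counts = np.unique(train_labels, return_counts=True)
    return vals[np.argmax(counts)]

# k-fold cross-validation with k = 3, as in the study.
idx = rng.permutation(len(y))
folds = np.array_split(idx, 3)
accuracies = []
for i in range(3):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(3) if j != i])
    pred = majority_baseline(y[train_idx])
    accuracies.append(float(np.mean(y[test_idx] == pred)))

# Report accuracy as mean ± standard deviation across folds,
# matching the "0.83 ± 0.09" style used in the abstract.
print(f"accuracy: {np.mean(accuracies):.2f} ± {np.std(accuracies):.2f}")
```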
Affiliation(s)
- Farhan Akram
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Daniël P de Bruyn
- Ophthalmology, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Clinical Genetics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Cancer Institute, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Quincy C C van den Bosch
- Ophthalmology, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Clinical Genetics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Cancer Institute, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Teodora E Trandafir
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Thierry P P van den Bosch
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Rob M Verdijk
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Annelies de Klein
- Ophthalmology, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Cancer Institute, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Emine Kiliç
- Ophthalmology, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Cancer Institute, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Andrew P Stubbs
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Erwin Brosens
- Clinical Genetics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Cancer Institute, Erasmus MC Rotterdam, Rotterdam, the Netherlands
- Jan H von der Thüsen
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Rotterdam, Rotterdam, the Netherlands
3. Chen J, Yang J, Wang J, Zhao Z, Wang M, Sun C, Song N, Feng S. Study on an Automatic Classification Method for Determining the Malignancy Grade of Glioma Pathological Sections Based on Hyperspectral Multi-Scale Spatial-Spectral Fusion Features. Sensors (Basel, Switzerland) 2024; 24:3803. [PMID: 38931588 PMCID: PMC11207485 DOI: 10.3390/s24123803]
Abstract
This study describes a novel method for grading pathological sections of gliomas. Our in-house integrated hyperspectral imaging system was employed to acquire 270 spectral bands from cancerous tissue samples on glioma microarray slides. These samples were then classified according to the guidelines developed by the World Health Organization, which define the subtypes and grades of diffuse gliomas. We explored a hyperspectral feature extraction model, SMLMER-ResNet, using microscopic hyperspectral images of brain gliomas of different malignancy grades. The model combines a channel attention mechanism with multi-scale image features to automatically learn the pathological organization of gliomas and obtain hierarchical feature representations, effectively removing the interference of redundant information. It also performs multi-modal, multi-scale spatial-spectral feature extraction to improve the automatic classification of glioma subtypes. The proposed classification method demonstrated high average classification accuracy (>97.3%) and a Kappa coefficient of 0.954, indicating its effectiveness in improving the automatic classification of hyperspectral glioma images. The method is readily applicable in a wide range of clinical settings, offering valuable assistance in alleviating the workload of clinical pathologists. Furthermore, the study contributes to the development of more personalized and refined treatment plans, as well as subsequent follow-up and treatment adjustment, by providing physicians with insights into the underlying pathological organization of gliomas.
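The Kappa coefficient quoted above measures classifier agreement with the true grades beyond what chance alone would produce. A minimal computation from a confusion matrix, using hypothetical counts rather than the study's data:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    # Expected chance agreement from the row and column marginals.
    p_expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical 2-class grading results: 85 of 100 sections classified correctly.
conf = [[40, 5],
        [10, 45]]
kappa = cohens_kappa(conf)
print(f"kappa = {kappa:.3f}")  # 0.700 for this matrix
```

A kappa of 0.954, as reported, indicates near-perfect agreement on this scale.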
Affiliation(s)
- Jiaqi Chen
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
- University of Chinese Academy of Sciences, Beijing 130033, China
| | - Jin Yang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
| | - Jinyu Wang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
- University of Chinese Academy of Sciences, Beijing 130033, China
| | - Zitong Zhao
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
- University of Chinese Academy of Sciences, Beijing 130033, China
| | - Mingjia Wang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
| | - Ci Sun
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
| | - Nan Song
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
| | - Shulong Feng
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; (J.C.)
| |
4. Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Computer Methods and Programs in Biomedicine 2024; 250:108178. [PMID: 38652995 DOI: 10.1016/j.cmpb.2024.108178]
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they have not yielded satisfactory boundary and region segmentation results for adjacent glands. Such glands often differ greatly in glandular appearance, and the statistical distributions of the training and test sets are inconsistent. These problems prevent networks from generalizing well to the test dataset, complicating gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network, named VENet, with a traditional variational energy (Lv) loss for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large WSIs using reliable nucleus width and nucleus-to-cytoplasm ratio features. RESULTS VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object Dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object Dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). On the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.90, a sensitivity of 0.98, and a specificity of 0.80 on the classification task over the 69 test WSIs. CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, assisting pathologists in analyzing large WSIs and making accurate diagnostic decisions.
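The object Dice used in GlaS additionally matches each predicted gland to a ground-truth gland; as a simpler illustration of the overlap measure underlying those scores, the plain pixel-level Dice coefficient can be computed directly from two binary masks (the toy 3×3 masks below are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Pixel-level Dice overlap: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy segmentation vs. ground truth: 3 overlapping pixels, 4 pixels per mask.
pred = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
truth = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```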
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China.
| | - Ziyang Yuan
- Academy of Military Sciences of the People's Liberation Army, Beijing, China.
| | - Xianchen Zhou
- Department of Mathematics, National University of Defense Technology, Changsha, China
| | - Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China.
| | - Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
| | - Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
| |
5. Norose T, Ohike N, Nakaya D, Kamiya K, Sugiura Y, Takatsuki M, Koizumi H, Okawa C, Ohya A, Sasaki M, Aoki R, Nakahara K, Kobayashi S, Tateishi K, Koike J. Investigation of the usefulness of a bile duct biopsy and bile cytology using a hyperspectral camera and machine learning. Pathol Int 2024. [PMID: 38787324 DOI: 10.1111/pin.13438]
Abstract
To improve the efficiency of pathological diagnoses, automatic pathological diagnostic systems using artificial intelligence (AI) are being developed; however, problems include the low interpretability of AI technology and the need for large amounts of data. We herein report the usefulness of a general-purpose method that combines a hyperspectral camera with machine learning. Analyzing bile duct biopsy and bile cytology specimens, which are especially difficult to classify as benign or malignant, with multiple machine learning models identified benign or malignant cells with an accuracy of more than 80% (93.3% for bile duct biopsy specimens and 83.2% for bile cytology specimens). This method has the potential to contribute to the diagnosis and treatment of bile duct cancer and is expected to be widely applied in general pathological diagnoses.
Affiliation(s)
- Tomoko Norose
- Division of Molecular Pathology, Department of Pathology, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Nobuyuki Ohike
- Division of Molecular Pathology, Department of Pathology, St. Marianna University School of Medicine, Kawasaki, Japan
| | | | | | - Yoshiya Sugiura
- Division of Molecular Pathology, Department of Pathology, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Misato Takatsuki
- Division of Molecular Pathology, Department of Pathology, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Hirotaka Koizumi
- Division of Molecular Pathology, Department of Pathology, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Chie Okawa
- Division of Pathology, St. Marianna University Hospital, Kawasaki, Japan
| | - Aya Ohya
- Division of Pathology, St. Marianna University Hospital, Kawasaki, Japan
| | - Miyu Sasaki
- Division of Pathology, St. Marianna University Hospital, Kawasaki, Japan
| | - Ruka Aoki
- Division of Pathology, St. Marianna University Hospital, Kawasaki, Japan
| | - Kazunari Nakahara
- Department of Gastroenterology, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Shinjiro Kobayashi
- Department of Gastroenterological and General Surgery, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Keisuke Tateishi
- Department of Gastroenterology, St. Marianna University School of Medicine, Kawasaki, Japan
| | - Junki Koike
- Department of Pathology, St. Marianna University School of Medicine, Kawasaki, Japan
| |
6. Zhou CM, Zhao SH. Evaluation of the value of combined detection of tumor markers CA724, carcinoembryonic antigen, CA242, and CA19-9 in gastric cancer. World J Gastrointest Oncol 2024; 16:1737-1744. [PMID: 38764828 PMCID: PMC11099441 DOI: 10.4251/wjgo.v16.i5.1737]
Abstract
BACKGROUND Gastric cancer is a global health concern that poses a significant threat to human well-being. AIM To detect changes in serum expression of carcinoembryonic antigen (CEA) and carbohydrate antigens (CA) 724, CA242, and CA19-9 among patients with gastric cancer. METHODS Eighty patients diagnosed with gastric cancer between January 2020 and January 2023 were included in the observation group, while 80 patients with benign gastric diseases were included in the control group. Both groups were tested for the tumor markers CA724, CEA, CA242, and CA19-9. Marker levels were compared between the two groups, and positive rates across tumor stages were assessed in the observation group. Additionally, single and combined detection of the markers was examined. RESULTS The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the combined detection of CA724, CEA, CA242, and CA19-9 were higher than those of each marker individually. The combined detection therefore has high diagnostic accuracy and could reduce missed or incorrect diagnoses, facilitating early diagnosis and treatment. CONCLUSION Serum levels of CA724, CEA, CA242, and CA19-9 in gastric cancer patients significantly surpassed those in non-gastric cancer patients (P < 0.05). Their combined detection can improve the diagnostic accuracy for gastric cancer, warranting clinical promotion.
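The five diagnostic metrics compared in this study all derive from a single 2×2 contingency table of test result versus disease status. A short sketch with hypothetical counts (not the study's data), sized to its 80-patient groups:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),   # positives detected among diseased
        "specificity": tn / (tn + fp),   # negatives among the disease-free
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical combined-marker results for 80 cancer and 80 benign patients.
m = diagnostic_metrics(tp=70, fp=8, fn=10, tn=72)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

Combining markers typically raises sensitivity (any one marker positive counts as positive) at some cost to specificity, which is why all five metrics are reported together.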
Affiliation(s)
- Chong-Mei Zhou
- Department of Clinical Laboratory, Henan Provincial People's Hospital, Zhengzhou 450003, Henan Province, China
| | - Shao-Hua Zhao
- Department of Clinical Laboratory, Henan Provincial People's Hospital, Zhengzhou 450003, Henan Province, China
| |
7. McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. [PMID: 38704465 PMCID: PMC11069583 DOI: 10.1038/s41746-024-01106-8]
Abstract
Ensuring the diagnostic performance of artificial intelligence (AI) before its introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported in recent years. The aim of this work is to examine the diagnostic accuracy of AI on digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. Studies were from a range of countries and included over 152,000 WSIs, representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design, and 99% of studies identified for inclusion had at least one area at high or unclear risk of bias or applicability concerns. Details on selection of cases, division of model development and validation data, and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the reported areas but requires more rigorous evaluation of its performance.
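The review pools per-study sensitivities and specificities with a bivariate random-effects model. As a much-simplified illustration of the underlying idea, a univariate fixed-effect stand-in pools sensitivities by inverse-variance weighting on the logit scale; the 2×2 counts below are hypothetical:

```python
import math

# Hypothetical (true positive, false negative) counts from three studies.
studies = [(90, 10), (45, 5), (190, 12)]

weights, logits = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    logits.append(math.log(sens / (1.0 - sens)))  # logit transform
    var = 1.0 / tp + 1.0 / fn                     # approx. variance of the logit
    weights.append(1.0 / var)

# Inverse-variance weighted mean on the logit scale, then back-transform.
pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.3f}")
```

The actual bivariate model additionally estimates between-study variance and the correlation between sensitivity and specificity, which this fixed-effect sketch deliberately omits.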
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK.
- Leeds Teaching Hospitals NHS Trust, Leeds, UK.
| | - Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
| | - Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
| | | | | | | | | | - Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| |
8. Yilmaz F, Brickman A, Najdawi F, Yakirevich E, Egger R, Resnick MB. Advancing Artificial Intelligence Integration Into the Pathology Workflow: Exploring Opportunities in Gastrointestinal Tract Biopsies. J Transl Med 2024; 104:102043. [PMID: 38431118 DOI: 10.1016/j.labinv.2024.102043]
Abstract
This review aims to present a comprehensive overview of the current landscape of artificial intelligence (AI) applications in the analysis of tubular gastrointestinal biopsies. The reviewed publications cover a spectrum of conditions, ranging from inflammatory ailments to malignancies. Moving beyond conventional diagnosis based on hematoxylin and eosin-stained whole-slide images, the review explores additional implications of AI, including its involvement in interpreting immunohistochemical results, molecular subtyping, and the identification of cellular spatial biomarkers. Furthermore, the review examines how AI can contribute to enhancing the quality and control of diagnostic processes, introducing new workflow options, and addressing the limitations and caveats associated with current AI platforms in this context.
Affiliation(s)
- Fazilet Yilmaz
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
| | - Arlen Brickman
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
| | - Fedaa Najdawi
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
| | - Evgeny Yakirevich
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
| | | | - Murray B Resnick
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island.
| |
9. Yuan L, Zhou H, Xiao X, Zhang X, Chen F, Liu L, Liu J, Bao S, Tao K. Development and external validation of a transfer learning-based system for the pathological diagnosis of colorectal cancer: a large emulated prospective study. Front Oncol 2024; 14:1365364. [PMID: 38725622 PMCID: PMC11079287 DOI: 10.3389/fonc.2024.1365364]
Abstract
Background Progress in colorectal cancer (CRC) screening and management has resulted in an unprecedented caseload for histopathological diagnosis. While artificial intelligence (AI) presents a potential solution, the predominant emphasis on slide-level aggregation performance, without thorough verification of cancer at each location, impedes both explainability and transparency. Effectively addressing these challenges is crucial to ensuring the reliability and efficacy of AI in histology applications. Method In this study, we created an AI algorithm using transfer learning from a polyp segmentation model in endoscopy. The algorithm localized CRC targets within 0.25 mm² grids of whole slide images (WSIs). We assessed CRC detection at this fine granularity and examined the influence of AI on the diagnostic behavior of pathologists. The evaluation utilized an extensive dataset comprising 858 consecutive patient cases with 1418 WSIs obtained from an external center. Results We observed a sensitivity of 90.25% and a specificity of 96.60% at the grid level, with an area under the curve (AUC) of 0.962. This translates to a sensitivity of 99.39% at the slide level, coupled with a negative likelihood ratio below 0.01, indicating that the AI system can be depended upon to rule out diagnostic considerations. The positive likelihood ratio of 26.54, well above 10 at the grid level, underscores the need for careful scrutiny of any AI-generated highlights. Consistent with this, all four participating pathologists demonstrated statistically significant diagnostic improvements with AI assistance. Conclusion Our transfer learning approach yielded an algorithm validated for CRC histological localization in whole slide imaging. The outcome advocates integrating the AI system into histopathological diagnosis, serving either as a diagnostic exclusion application or as a computer-aided detection (CADe) tool. This integration has the potential to alleviate the workload of pathologists and ultimately benefit patients.
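The likelihood ratios quoted in the results follow directly from the grid-level sensitivity and specificity, and can be verified in a few lines:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)  # how much a positive result raises the odds
    lr_neg = (1.0 - sensitivity) / specificity  # how much a negative result lowers the odds
    return lr_pos, lr_neg

# Grid-level figures from the abstract: 90.25% sensitivity, 96.60% specificity.
lr_pos, lr_neg = likelihood_ratios(0.9025, 0.9660)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.3f}")  # LR+ reproduces the reported 26.54
```

The slide-level negative likelihood ratio below 0.01 reported in the abstract additionally depends on the (unstated) slide-level specificity, so it is not reproduced here.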
Affiliation(s)
- Liuhong Yuan
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Pathology, Tongren Hospital, School of Medicine Shanghai Jiaotong University, Shanghai, China
| | - Henghua Zhou
- Department of Pathology, Tongren Hospital, School of Medicine Shanghai Jiaotong University, Shanghai, China
| | | | - Xiuqin Zhang
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Pathology, Tongren Hospital, School of Medicine Shanghai Jiaotong University, Shanghai, China
| | - Feier Chen
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Pathology, Tongren Hospital, School of Medicine Shanghai Jiaotong University, Shanghai, China
| | - Lin Liu
- Institute of Natural Sciences, MOE-LSC, School of Mathematical Sciences, CMA-Shanghai, SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
| | | | - Shisan Bao
- Department of Pathology, Tongren Hospital, School of Medicine Shanghai Jiaotong University, Shanghai, China
| | - Kun Tao
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Pathology, Tongren Hospital, School of Medicine Shanghai Jiaotong University, Shanghai, China
| |
10. Wang BS, Zhang CL, Cui X, Li Q, Yang L, He ZY, Yang Z, Zeng MM, Cao N. Curcumin inhibits the growth and invasion of gastric cancer by regulating long noncoding RNA AC022424.2. World J Gastrointest Oncol 2024; 16:1437-1452. [PMID: 38660661 PMCID: PMC11037052 DOI: 10.4251/wjgo.v16.i4.1437]
Abstract
BACKGROUND Gastric cancer, characterized by a multifactorial etiology and high heterogeneity, continues to confound researchers in terms of its pathogenesis. Curcumin, a natural anticancer agent, exhibits therapeutic promise in gastric cancer. Its effects include promoting cell apoptosis, curtailing tumor angiogenesis, and enhancing sensitivity to radiation and chemotherapy. Long noncoding RNAs (lncRNAs) have garnered significant attention as biomarkers for early screening, diagnosis, treatment, and drug response because of their remarkable specificity and sensitivity. Recent investigations have revealed an association between aberrant lncRNA expression and early diagnosis, clinical staging, metastasis, drug sensitivity, and prognosis in gastric cancer. A profound understanding of the intricate mechanisms through which lncRNAs influence gastric cancer development can provide novel insights for precision treatment and tailored management of patients with gastric cancer. This study aimed to unravel the potential of curcumin in suppressing the malignant behavior of gastric cancer cells by upregulating specific lncRNAs and modulating gastric cancer onset and progression. AIM To identify lncRNAs associated with curcumin treatment and investigate the role of lncRNA AC022424.2 in the effects of curcumin on gastric cancer cell apoptosis, proliferation, and invasion. Furthermore, these findings were validated in clinical samples. METHODS The study employed CCK-8 assays to assess the impact of curcumin on gastric cancer cell proliferation, flow cytometry to investigate its effects on apoptosis, and scratch and Transwell assays to evaluate its influence on the migration and invasion of BGC-823 and MGC-803 cells. Western blotting was used to gauge changes in the protein expression levels of CDK6, CDK4, Bax, Bcl-2, caspase-3, P65, and the PI3K/Akt/mTOR pathway in gastric cancer cell lines after curcumin treatment. 
Differential expression of lncRNAs before and after curcumin treatment was assessed using lncRNA sequencing and validated using quantitative reverse transcription polymerase chain reaction (qRT-PCR) in BGC-823 and MGC-803 cells. AC022424.2-1 knockdown BGC-823 and MGC-803 cells were generated to scrutinize the impact of lncRNA AC022424.2 on apoptosis, proliferation, migration, and invasion of gastric cancer cells. Western blotting was performed to ascertain changes in the expression of proteins implicated in the PI3K/Akt/mTOR and NF-κB signaling pathways. RT-PCR was employed to measure lncRNA AC022424.2 expression in clinical gastric cancer tissues and to correlate its expression with clinical pathological characteristics. RESULTS Curcumin induced apoptosis and hindered proliferation, migration, and invasion of gastric cancer cells in a dose- and time-dependent manner. LncRNA AC022424.2 was upregulated after curcumin treatment, and its knockdown enhanced cancer cell aggressiveness. LncRNA AC022424.2 may have affected cancer cells via the PI3K/Akt/mTOR and NF-κB signaling pathways. LncRNA AC022424.2 downregulation was correlated with lymph node metastasis, making it a potential diagnostic and prognostic marker. CONCLUSION Curcumin has potential anticancer effects on gastric cancer cells by regulating lncRNA AC022424.2. This lncRNA plays a significant role in cancer cell behavior and may have clinical implications in diagnosis and prognosis evaluation. The results of this study enhance our understanding of gastric cancer development and precision treatment.
Affiliation(s)
- Bin-Sheng Wang
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Chen-Li Zhang
- School of Basic Medical Sciences, Lanzhou University, Lanzhou 730000, Gansu Province, China
- Xiang Cui
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Qiang Li
- Third Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Lei Yang
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Zhi-Yun He
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Ze Yang
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Miao-Miao Zeng
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China
- Nong Cao
- Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, Gansu Province, China

11
Tam KH, Soares MF, Kers J, Sharples EJ, Ploeg RJ, Kaisar M, Rittscher J. Predicting clinical endpoints and visual changes with quality-weighted tissue-based renal histological features. FRONTIERS IN TRANSPLANTATION 2024; 3:1305468. [PMID: 38993786 PMCID: PMC11235227 DOI: 10.3389/frtra.2024.1305468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2023] [Accepted: 03/15/2024] [Indexed: 07/13/2024]
Abstract
Two common obstacles limiting the performance of data-driven algorithms in digital histopathology classification tasks are the lack of expert annotations and the narrow diversity of datasets. Multi-instance learning (MIL) can address the former challenge for the analysis of whole slide images (WSI), but performance is often inferior to full supervision. We show that the inclusion of weak annotations can significantly enhance the effectiveness of MIL while keeping the approach scalable. An analysis framework was developed to process periodic acid-Schiff (PAS) and Sirius Red (SR) slides of renal biopsies. The workflow segments tissues into coarse tissue classes. Handcrafted and deep features were extracted from these tissues and combined using a soft attention model to predict several slide-level labels: delayed graft function (DGF), acute tubular injury (ATI), and Remuzzi grade components. A tissue segmentation quality metric was also developed to reduce the adverse impact of poorly segmented instances. The soft attention model was trained using 5-fold cross-validation on a mixed dataset and tested on the QUOD dataset containing n = 373 PAS and n = 195 SR biopsies. The average ROC-AUC over different prediction tasks was found to be 0.598 ± 0.011, significantly higher than using only ResNet50 (0.545 ± 0.012), only handcrafted features (0.542 ± 0.011), and the state-of-the-art baseline (0.532 ± 0.012). In conjunction with soft attention, weighting tissues by segmentation quality led to further improvement (AUC = 0.618 ± 0.010). Using an intuitive visualisation scheme, we show that our approach may also be used to support clinical decision making, as it allows pinpointing individual tissues relevant to the predictions.
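The slide-level pooling this abstract describes — soft attention over tissue instances, down-weighted by a segmentation-quality score — can be sketched roughly as below. This is a minimal NumPy illustration, not the authors' implementation: the feature dimension, the (here random, in practice learned) attention projection, and the log-quality weighting scheme are all assumptions.

```python
import numpy as np

def soft_attention_pool(features, quality, w_att):
    """Pool per-tissue feature vectors into one slide-level embedding.

    features : (n, d) array, one feature vector per segmented tissue
    quality  : (n,) segmentation-quality scores in (0, 1]
    w_att    : (d,) attention projection vector (learned in practice)
    """
    scores = features @ w_att                      # raw attention score per tissue
    scores = scores + np.log(quality + 1e-8)       # down-weight poorly segmented tissue
    scores = scores - scores.max()                 # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    return alpha @ features, alpha                 # (d,) slide embedding + weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))                    # 5 tissues, 8-dim features
qual = np.array([1.0, 1.0, 0.9, 0.05, 0.8])        # 4th tissue poorly segmented
w = rng.normal(size=8)
embedding, alpha = soft_attention_pool(feats, qual, w)
```

With uniform features the attention weights reduce to the normalised quality scores, which is the intended behaviour of the weighting term.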
Affiliation(s)
- Ka Ho Tam
- Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Maria F. Soares
- Department of Cellular Pathology, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, United Kingdom
- Jesper Kers
- Department of Pathology, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- Department of Pathology, Leiden Transplant Center, Leiden University Medical Center, Leiden, Netherlands
- Van’t Hoff Institute for Molecular Sciences, University of Amsterdam, Amsterdam, Netherlands
- Edward J. Sharples
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
- Rutger J. Ploeg
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
- Research and Development, NHS Blood and Transplant Filton and Oxford, Oxford, United Kingdom
- Maria Kaisar
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
- Research and Development, NHS Blood and Transplant Filton and Oxford, Oxford, United Kingdom
- Jens Rittscher
- Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom

12
12
|
Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. An effective colorectal polyp classification for histopathological images based on supervised contrastive learning. Comput Biol Med 2024; 172:108267. [PMID: 38479197 DOI: 10.1016/j.compbiomed.2024.108267] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2023] [Revised: 03/06/2024] [Accepted: 03/06/2024] [Indexed: 03/26/2024]
Abstract
Early detection of colon adenomatous polyps is pivotal in reducing colon cancer risk. In this context, accurately distinguishing between adenomatous polyp subtypes, especially tubular and tubulovillous, from hyperplastic variants is crucial. This study introduces a cutting-edge computer-aided diagnosis system optimized for this task. Our system employs advanced Supervised Contrastive learning to ensure precise classification of colon histopathology images. Significantly, we have integrated the Big Transfer model, which has gained prominence for its exemplary adaptability to visual tasks in medical imaging. Our novel approach discerns between in-class and out-of-class images, thereby elevating its discriminatory power for polyp subtypes. We validated our system using two datasets: a specially curated one and the publicly accessible UniToPatho dataset. The results reveal that our model markedly surpasses traditional deep convolutional neural networks, registering classification accuracies of 87.1% and 70.3% for the custom and UniToPatho datasets, respectively. Such results emphasize the transformative potential of our model in polyp classification endeavors.
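The supervised contrastive objective this system builds on (the SupCon loss of Khosla et al., 2020) pulls same-class embeddings together and pushes different-class embeddings apart. The sketch below is a plain NumPy rendering of that loss for a single batch — an illustration of the published formulation, not the authors' training code.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over one batch.

    z      : (n, d) L2-normalised embeddings
    labels : (n,) integer class labels
    tau    : temperature
    """
    sim = z @ z.T / tau                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-comparisons
    # log-softmax of each row over all other samples in the batch
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)             # positives exclude the anchor itself
    pos_counts = same.sum(axis=1)
    # average negative log-probability over each anchor's positives
    loss = -(np.where(same, log_prob, 0.0).sum(axis=1) / np.maximum(pos_counts, 1))
    return loss[pos_counts > 0].mean()

# toy batch: two classes, embeddings already L2-normalised
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
loss = supcon_loss(z, np.array([0, 0, 1, 1]))
```

Well-clustered embeddings with correct labels yield a lower loss than the same embeddings with mismatched labels, which is the property the training exploits.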
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom.
- Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Bulent Yilmaz
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait

13
13
|
Wu D, Lu J, Zheng N, Elsehrawy MG, Alfaiz FA, Zhao H, Alqahtani MS, Xu H. Utilizing nanotechnology and advanced machine learning for early detection of gastric cancer surgery. ENVIRONMENTAL RESEARCH 2024; 245:117784. [PMID: 38065392 DOI: 10.1016/j.envres.2023.117784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 11/20/2023] [Accepted: 11/23/2023] [Indexed: 01/06/2024]
Abstract
Nanotechnology has emerged as a promising frontier in revolutionizing the early diagnosis and surgical management of gastric cancers. The primary factors influencing curative efficacy in gastrointestinal cancer (GIC) patients are drug inefficacy and high surgical and pharmacological therapy recurrence rates. Due to its unique optical features, good biocompatibility, surface effects, and small size effects, nanotechnology is a developing and advanced area of study for detecting and treating cancer. Considering the limitations of GIC MRI and endoscopy and the complexity of gastric surgery, the early diagnosis and prompt treatment of gastric illnesses by nanotechnology has been a promising development. Nanoparticles can directly target tumor cells, allowing their detection and removal. They can also be engineered to carry specific payloads, such as drugs or contrast agents, enhancing the efficacy and precision of cancer treatment. In this research, the boosting technique of machine learning was utilized to capture nonlinear interactions between a large number of input variables and outputs, using XGBoost and an RNN-CNN as classification methods. The research sample included 350 patients, comprising 200 males and 150 females, with a mean age (± SD) of 50.34 ± 13.04 years. High-risk behaviors (P = 0.070), age at diagnosis (P = 0.034), distant metastasis (P = 0.004), and tumor stage (P = 0.014) were shown to have a statistically significant link with GC patient survival. When analyzing gastric images, the AUC was 93.54%, accuracy 93.54%, F1-score 93.57%, precision 93.65%, and recall 93.87%. Integrating nanotechnology with advanced machine learning techniques holds promise for improving the diagnosis and treatment of gastric cancer, providing new avenues for precision medicine and better patient outcomes.
Affiliation(s)
- Dan Wu
- Department of Gastrointestinal Surgery, Lishui Municipal Central Hospital, Lishui, 323000, Zhejiang, China
- Jianhua Lu
- Department of Gastrointestinal Surgery, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Nan Zheng
- School of Pharmacy, Wenzhou Medicine University, Wenzhou, 325000, China
- Mohamed Gamal Elsehrawy
- Prince Sattam Bin Abdulaziz University, College of Applied Medical Sciences, Kingdom of Saudi Arabia; Nursing Faculty, Port-Said University, Egypt.
- Faiz Abdulaziz Alfaiz
- Department of Biology, College of Science, Majmaah University, Al-Majmaah, 11952, Saudi Arabia.
- Huajun Zhao
- School of Pharmacy, Wenzhou Medicine University, Wenzhou, 325000, China.
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, Abha, 61421, Saudi Arabia; BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE1 7RH, UK
- Hongtao Xu
- Department of Gastrointestinal Surgery, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui, 323000, Zhejiang, China.

14
14
|
Bouzid K, Sharma H, Killcoyne S, Castro DC, Schwaighofer A, Ilse M, Salvatelli V, Oktay O, Murthy S, Bordeaux L, Moore L, O'Donovan M, Thieme A, Nori A, Gehrung M, Alvarez-Valle J. Enabling large-scale screening of Barrett's esophagus using weakly supervised deep learning in histopathology. Nat Commun 2024; 15:2026. [PMID: 38467600 PMCID: PMC10928093 DOI: 10.1038/s41467-024-46174-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 02/15/2024] [Indexed: 03/13/2024] Open
Abstract
Timely detection of Barrett's esophagus, the pre-malignant condition of esophageal adenocarcinoma, can improve patient survival rates. The Cytosponge-TFF3 test, a non-endoscopic, minimally invasive procedure, has been used for diagnosing intestinal metaplasia in Barrett's. However, it depends on a pathologist's assessment of two slides stained with H&E and the immunohistochemical biomarker TFF3. This resource-intensive clinical workflow limits large-scale screening in the at-risk population. To improve screening capacity, we propose a deep learning approach for detecting Barrett's from routinely stained H&E slides. The approach relies solely on diagnostic labels, eliminating the need for expensive localized expert annotations. We train and independently validate our approach on two clinical trial datasets totaling 1866 patients. We achieve 91.4% and 87.3% AUROCs on the discovery and external test datasets for the H&E model, comparable to the TFF3 model. Our proposed semi-automated clinical workflow can reduce pathologists' workload to 48% without sacrificing diagnostic performance, enabling pathologists to prioritize high-risk cases.
Affiliation(s)
- Max Ilse
- Microsoft Health Futures, Cambridge, UK
- Luiza Moore
- Department of Histopathology, Addenbrookes Hospital, Cambridge University NHS Foundation Trust, Cambridge, UK
- Maria O'Donovan
- Cyted Ltd, Cambridge, UK
- Department of Histopathology, Addenbrookes Hospital, Cambridge University NHS Foundation Trust, Cambridge, UK

15
Neto PC, Montezuma D, Oliveira SP, Oliveira D, Fraga J, Monteiro A, Monteiro J, Ribeiro L, Gonçalves S, Reinhard S, Zlobec I, Pinto IM, Cardoso JS. An interpretable machine learning system for colorectal cancer diagnosis from pathology slides. NPJ Precis Oncol 2024; 8:56. [PMID: 38443695 PMCID: PMC10914836 DOI: 10.1038/s41698-024-00539-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Accepted: 02/08/2024] [Indexed: 03/07/2024] Open
Abstract
Considering the profound transformation affecting pathology practice, we aimed to develop a scalable artificial intelligence (AI) system to diagnose colorectal cancer from whole-slide images (WSI). For this, we propose a deep learning (DL) system that learns from weak labels, a sampling strategy that reduces the number of training samples by a factor of six without compromising performance, an approach to leverage a small subset of fully annotated samples, and a prototype with explainable predictions, active learning features and parallelisation. Noting some problems in the literature, this study is conducted with one of the largest colorectal WSI datasets, comprising approximately 10,500 WSIs. Of these samples, 900 are testing samples. Furthermore, the robustness of the proposed method is assessed with two additional external datasets (TCGA and PAIP) and a dataset of samples collected directly from the proposed prototype. Our proposed method predicts, for the patch-based tiles, a class based on the severity of the dysplasia and uses that information to classify the whole slide. It is trained with an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists through spatial annotations. The mixed-supervision scheme allowed for an intelligent sampling strategy effectively evaluated in several different scenarios without compromising the performance. On the internal dataset, the method shows an accuracy of 93.44% and a sensitivity between positive (low-grade and high-grade dysplasia) and non-neoplastic samples of 0.996. Performance on the external test samples varied, with TCGA being the most challenging dataset (overall accuracy of 84.91% and sensitivity of 0.996).
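The patch-to-slide step described above — predicting a dysplasia severity class per tile, then classifying the whole slide from that information — can be pictured with the toy aggregation rule below. The `min_support` threshold and the three-class severity scale are illustrative assumptions, not the paper's actual aggregation logic.

```python
from collections import Counter

SEVERITY = ["non-neoplastic", "low-grade dysplasia", "high-grade dysplasia"]

def slide_label(tile_preds, min_support=3):
    """Aggregate per-tile severity predictions into one slide-level call.

    tile_preds  : iterable of indices into SEVERITY, one per tile
    min_support : a severity must appear on at least this many tiles to count,
                  suppressing isolated noisy tile predictions
    """
    counts = Counter(tile_preds)
    supported = [c for c in sorted(counts) if counts[c] >= min_support]
    # report the worst sufficiently supported severity; default to benign
    return SEVERITY[max(supported)] if supported else SEVERITY[0]
```

A single spurious high-grade tile among dozens of benign tiles is ignored, while a consistent cluster of dysplastic tiles drives the slide label — the intuition behind severity-based slide classification.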
Affiliation(s)
- Pedro C Neto
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), R. Dr. Roberto Frias, Porto, 4200-465, Porto, Portugal.
- Faculty of Engineering, University of Porto (FEUP), R. Dr. Roberto Frias, Porto, 4200-465, Porto, Portugal.
- Diana Montezuma
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal.
- Cancer Biology and Epigenetics Group, Research Center of IPO Porto (CI-IPOP) / RISE@CI-IPOP (Health Research Network), Portuguese Oncology Institute of Porto (IPO Porto) / Porto Comprehensive Cancer Center (Porto.CCC), R. Dr. António Bernardino de Almeida 865, Porto, 4200-072, Porto, Portugal.
- Doctoral Programme in Medical Sciences, School of Medicine and Biomedical Sciences - University of Porto (ICBAS-UP), R. Jorge de Viterbo Ferreira 228, Porto, 4050-313, Porto, Portugal.
- Sara P Oliveira
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), R. Dr. Roberto Frias, Porto, 4200-465, Porto, Portugal.
- Faculty of Engineering, University of Porto (FEUP), R. Dr. Roberto Frias, Porto, 4200-465, Porto, Portugal.
- Domingos Oliveira
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal
- João Fraga
- Department of Pathology, IPO-Porto, R. Dr. António Bernardino de Almeida 865, Porto, 4200-072, Porto, Portugal
- Ana Monteiro
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal
- João Monteiro
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal
- Liliana Ribeiro
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal
- Sofia Gonçalves
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal
- Stefan Reinhard
- Institute of Pathology, University of Bern, Uni Bern, Murtenstrasse 31, Bern, 3008, Bern, Switzerland
- Inti Zlobec
- Institute of Pathology, University of Bern, Uni Bern, Murtenstrasse 31, Bern, 3008, Bern, Switzerland
- Isabel M Pinto
- IMP Diagnostics, Praça do Bom Sucesso, 61, sala 808, Porto, 4150-146, Porto, Portugal
- Jaime S Cardoso
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), R. Dr. Roberto Frias, Porto, 4200-465, Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), R. Dr. Roberto Frias, Porto, 4200-465, Porto, Portugal

16
Schacherer DP, Herrmann MD, Clunie DA, Höfener H, Clifford W, Longabaugh WJR, Pieper S, Kikinis R, Fedorov A, Homeyer A. The NCI Imaging Data Commons as a platform for reproducible research in computational pathology. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107839. [PMID: 37832430 PMCID: PMC10841477 DOI: 10.1016/j.cmpb.2023.107839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2023] [Revised: 09/20/2023] [Accepted: 10/01/2023] [Indexed: 10/15/2023]
Abstract
BACKGROUND AND OBJECTIVES Reproducibility is a major challenge in developing machine learning (ML)-based solutions in computational pathology (CompPath). The NCI Imaging Data Commons (IDC) provides >120 cancer image collections according to the FAIR principles and is designed to be used with cloud ML services. Here, we explore its potential to facilitate reproducibility in CompPath research. METHODS Using the IDC, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets. To assess reproducibility, the experiments were run multiple times with separate but identically configured instances of common ML services. RESULTS The results of different runs of the same experiment were reproducible to a large extent. However, we observed occasional, small variations in AUC values, indicating a practical limit to reproducibility. CONCLUSIONS We conclude that the IDC facilitates approaching the reproducibility limit of CompPath research (i) by enabling researchers to reuse exactly the same datasets and (ii) by integrating with cloud ML services so that experiments can be run in identically configured computing environments.
Affiliation(s)
- Daniela P Schacherer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Markus D Herrmann
- Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Henning Höfener
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Andrey Fedorov
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany.

17
Shi J, Tang L, Gao Z, Li Y, Wang C, Gong T, Li C, Fu H. MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3871-3883. [PMID: 37682644 DOI: 10.1109/tmi.2023.3313252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2023]
Abstract
Multiple instance learning (MIL)-based methods have become the mainstream for processing the megapixel-sized whole slide image (WSI) with pyramid structure in the field of digital pathology. Current MIL-based methods usually crop a large number of patches from the WSI at the highest magnification, resulting in considerable redundancy in the input and feature space. Moreover, the spatial relations between patches cannot be sufficiently modeled, which may weaken the model's discriminative ability on fine-grained features. To address these limitations, we propose a Multi-scale Graph Transformer (MG-Trans) with information bottleneck for whole slide image classification. MG-Trans is composed of three modules: a patch anchoring module (PAM), a dynamic structure information learning module (SILM), and a multi-scale information bottleneck module (MIBM). Specifically, PAM utilizes the class attention map generated from the multi-head self-attention of a vision Transformer to identify and sample the informative patches. SILM explicitly introduces local tissue structure information into the Transformer block to sufficiently model the spatial relations between patches. MIBM effectively fuses the multi-scale patch features by applying the principle of the information bottleneck to generate a robust and compact bag-level representation. In addition, we propose a semantic consistency loss to stabilize the training of the whole model. Extensive studies on three subtyping datasets and seven gene mutation detection datasets demonstrate the superiority of MG-Trans.
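The patch anchoring idea (PAM) — keeping only the patches with the highest class-attention scores — reduces, at its core, to a top-k selection over per-patch scores. The sketch below is purely illustrative and assumes the class-attention scores have already been extracted from the Transformer; it is not the MG-Trans implementation.

```python
import numpy as np

def anchor_patches(attention, k):
    """Select the k most informative patch indices by class-attention score.

    attention : (n,) class-attention score per patch (e.g. from a ViT [CLS] head)
    k         : number of patches to keep
    """
    k = min(k, attention.shape[0])
    # sort ascending, reverse for descending order, keep the top k
    return np.argsort(attention)[::-1][:k]

scores = np.array([0.1, 0.7, 0.05, 0.9, 0.3])
idx = anchor_patches(scores, 2)
```

Discarding low-attention patches before the heavier graph and bottleneck modules is what keeps the redundancy of the input space in check.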
18
Yousif M, Pantanowitz L. Artificial Intelligence-Enabled Gastric Cancer Interpretations: Are We There yet? Surg Pathol Clin 2023; 16:673-686. [PMID: 37863559 DOI: 10.1016/j.path.2023.05.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2023]
Abstract
The integration of digital pathology and artificial intelligence (AI) is revolutionizing pathology by providing pathologists with new tools to improve workflow, enhance diagnostic accuracy, and undertake novel discovery. The capability of AI to recognize patterns and features in digital images beyond human perception is particularly valuable, thereby providing additional information for prognostic and predictive purposes. AI-based tools diagnose gastric carcinoma in digital images, detect gastric carcinoma metastases in lymph nodes, automate Ki-67 scoring in gastric neuroendocrine tumors, and quantify tumor-infiltrating lymphocytes. This article provides an overview of all of these applications of AI pertaining to gastric cancer.
Affiliation(s)
- Mustafa Yousif
- Department of Pathology, University of Michigan, NCRC Building 35, 2800 Plymouth Road, Ann Arbor, MI 48109, USA.
- Liron Pantanowitz
- Department of Pathology, UPMC Shadyside Hospital, 5150 Centre Avenue Cancer Pavilion, POB2, Suite 3B, Room 347, Pittsburgh, PA 15232, USA

19
Kourounis G, Elmahmudi AA, Thomson B, Hunter J, Ugail H, Wilson C. Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals. Postgrad Med J 2023; 99:1287-1294. [PMID: 37794609 PMCID: PMC10658730 DOI: 10.1093/postmj/qgad095] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 09/06/2023] [Accepted: 09/13/2023] [Indexed: 10/06/2023]
Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive, diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and highlights their relevance in medical image analysis. CNNs have shown themselves to be exceptionally useful in computer vision, a field that enables machines to 'see' and interpret visual data. Understanding how these models work can help clinicians leverage their full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs have been used to assess and classify colorectal polyps, gastric epithelial tumours, as well as assist in the assessment of multiple malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the ultimate goal of delivering further improvements in patient and healthcare outcomes.
Affiliation(s)
- Georgios Kourounis
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
- Ali Ahmed Elmahmudi
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Brian Thomson
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- James Hunter
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, OX3 9DU, United Kingdom
- Hassan Ugail
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Colin Wilson
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom

20
Dörrich M, Hecht M, Fietkau R, Hartmann A, Iro H, Gostian AO, Eckstein M, Kist AM. Explainable convolutional neural networks for assessing head and neck cancer histopathology. Diagn Pathol 2023; 18:121. [PMID: 37924082 PMCID: PMC10623808 DOI: 10.1186/s13000-023-01407-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 10/24/2023] [Indexed: 11/06/2023] Open
Abstract
PURPOSE Although neural networks have shown remarkable performance in medical image analysis, their translation into clinical practice remains difficult due to their lack of interpretability. An emerging field that addresses this problem is Explainable AI. METHODS Here, we aimed to investigate the ability of Convolutional Neural Networks (CNNs) to classify head and neck cancer histopathology. To this end, we manually annotated 101 histopathological slides of locally advanced head and neck squamous cell carcinoma. We trained a CNN to classify tumor and non-tumor tissue, and another CNN to semantically segment four classes - tumor, non-tumor, non-specified tissue, and background. We applied Explainable AI techniques, namely Grad-CAM and HR-CAM, to both networks and explored important features that contributed to their decisions. RESULTS The classification network achieved an accuracy of 89.9% on previously unseen data. Our segmentation network achieved a class-averaged Intersection over Union score of 0.690, and 0.782 for tumor tissue in particular. Explainable AI methods demonstrated that both networks rely on features agreeing with the pathologist's expert opinion. CONCLUSION Our work suggests that CNNs can predict head and neck cancer with high accuracy. Especially if accompanied by visual explanations, CNNs seem promising for assisting pathologists in the assessment of cancer sections.
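The Grad-CAM explanations used in this study follow a simple recipe: per-channel weights obtained by globally averaging the gradients of the target class score, then a ReLU-rectified weighted sum of the feature maps. Below is a minimal NumPy sketch of that computation, assuming the activations and gradients have already been extracted from the network; it is not the authors' code.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one conv layer (Selvaraju et al., 2017).

    activations : (k, h, w) feature maps from the chosen conv layer
    gradients   : (k, h, w) gradients of the target class score w.r.t. them
    """
    weights = gradients.mean(axis=(1, 2))             # per-channel importance
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1]
    return cam
```

In practice the (h, w) map is upsampled to the input resolution and overlaid on the histology tile, which is how the visual explanations in the paper are rendered.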
Affiliation(s)
- Marion Dörrich: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91052, Germany
- Markus Hecht: Department of Radiotherapy and Radiation Oncology, Saarland University Medical Center, Homburg/Saar, 66421, Germany; Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany
- Rainer Fietkau: Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany; Comprehensive Cancer Center EMN, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany
- Arndt Hartmann: Comprehensive Cancer Center EMN, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany; Institute of Pathology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany; Bavarian Cancer Research Center (BZKF), Bavaria, Germany
- Heinrich Iro: Department of Otolaryngology - Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany
- Antoniu-Oreste Gostian: Comprehensive Cancer Center EMN, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany; Bavarian Cancer Research Center (BZKF), Bavaria, Germany; Department of Otolaryngology - Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany
- Markus Eckstein: Comprehensive Cancer Center EMN, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany; Institute of Pathology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91054, Germany; Bavarian Cancer Research Center (BZKF), Bavaria, Germany
- Andreas M Kist: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, 91052, Germany
21
Bilal M, Tsang YW, Ali M, Graham S, Hero E, Wahab N, Dodd K, Sahota H, Wu S, Lu W, Jahanifar M, Robinson A, Azam A, Benes K, Nimir M, Hewitt K, Bhalerao A, Eldaly H, Raza SEA, Gopalakrishnan K, Minhas F, Snead D, Rajpoot N. Development and validation of artificial intelligence-based prescreening of large-bowel biopsies taken in the UK and Portugal: a retrospective cohort study. Lancet Digit Health 2023; 5:e786-e797. PMID: 37890902; DOI: 10.1016/s2589-7500(23)00148-6.
Abstract
BACKGROUND Histopathological examination is a crucial step in the diagnosis and treatment of many major diseases. Aiming to facilitate diagnostic decision making and reduce the workload of pathologists, we developed an artificial intelligence (AI)-based prescreening tool that analyses whole-slide images (WSIs) of large-bowel biopsies to identify typical, non-neoplastic, and neoplastic biopsies. METHODS This retrospective cohort study was conducted with an internal development cohort of slides acquired from a hospital in the UK and three external validation cohorts of WSIs acquired from two hospitals in the UK and one clinical laboratory in Portugal. To learn the differential histological patterns from digitised WSIs of large-bowel biopsy slides, our proposed weakly supervised deep-learning model (Colorectal AI Model for Abnormality Detection [CAIMAN]) used slide-level diagnostic labels and no detailed cell or region-level annotations. The method was developed with an internal development cohort of 5054 biopsy slides from 2080 patients that were labelled with corresponding diagnostic categories assigned by pathologists. The three external validation cohorts, with a total of 1536 slides, were used for independent validation of CAIMAN. Each WSI was classified into one of three classes (ie, typical, atypical non-neoplastic, and atypical neoplastic). Prediction scores of image tiles were aggregated into three prediction scores for the whole slide, one for its likelihood of being typical, one for its likelihood of being non-neoplastic, and one for its likelihood of being neoplastic. The assessment of the external validation cohorts was conducted by the trained and frozen CAIMAN model. To evaluate model performance, we calculated area under the convex hull of the receiver operating characteristic curve (AUROC), area under the precision-recall curve, and specificity compared with our previously published iterative draw and rank sampling (IDaRS) algorithm.
We also generated heat maps and saliency maps to analyse and visualise the relationship between the WSI diagnostic labels and spatial features of the tissue microenvironment. The main outcome of this study was the ability of CAIMAN to accurately identify typical and atypical WSIs of colon biopsies, which could potentially facilitate the automatic removal of typical biopsies from the diagnostic workload in clinics. FINDINGS A randomly selected subset of all large bowel biopsies was obtained between Jan 1, 2012, and Dec 31, 2017. The AI training, validation, and assessments were done between Jan 1, 2021, and Sept 30, 2022. WSIs with diagnostic labels were collected between Jan 1 and Sept 30, 2022. Our analysis showed no statistically significant differences across prediction scores from CAIMAN for typical and atypical classes based on anatomical sites of the biopsy. At 0·99 sensitivity, CAIMAN (specificity 0·5592) was more accurate than an IDaRS-based weakly supervised WSI-classification pipeline (0·4629) in identifying typical and atypical biopsies on cross-validation in the internal development cohort (p<0·0001). At 0·99 sensitivity, CAIMAN was also more accurate than IDaRS for two external validation cohorts (p<0·0001), but not for a third external validation cohort (p=0·10). CAIMAN provided higher specificity than IDaRS at some high-sensitivity thresholds (0·7763 vs 0·6222 for 0·95 sensitivity, 0·7126 vs 0·5407 for 0·97 sensitivity, and 0·5615 vs 0·3970 for 0·99 sensitivity on one of the external validation cohorts) and showed high classification performance in distinguishing between neoplastic biopsies (AUROC 0·9928, 95% CI 0·9927-0·9929), inflammatory biopsies (0·9658, 0·9655-0·9661), and atypical biopsies (0·9789, 0·9786-0·9792). On the three external validation cohorts, CAIMAN had AUROC values of 0·9431 (95% CI 0·9165-0·9697), 0·9576 (0·9568-0·9584), and 0·9636 (0·9615-0·9657) for the detection of atypical biopsies.
Saliency maps supported the representation of disease heterogeneity in model predictions and its association with relevant histological features. INTERPRETATION CAIMAN, with its high sensitivity in detecting atypical large-bowel biopsies, might be a promising improvement in clinical workflow efficiency and diagnostic decision making in prescreening of typical colorectal biopsies. FUNDING The Pathology Image Data Lake for Analytics, Knowledge and Education Centre of Excellence; the UK Government's Industrial Strategy Challenge Fund; and Innovate UK on behalf of UK Research and Innovation.
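The abstract does not specify CAIMAN's exact tile-to-slide aggregation rule or thresholding procedure, so the following NumPy sketch is only illustrative: it aggregates per-tile class probabilities into slide-level scores with a top-k mean (a common weakly supervised choice, assumed here) and reports the specificity achieved at a fixed sensitivity, mirroring the 0.99-sensitivity comparisons above.

```python
import numpy as np

def slide_scores(tile_probs: np.ndarray, top_k: int = 2) -> np.ndarray:
    """Aggregate per-tile class probabilities (n_tiles x 3; columns:
    typical / atypical non-neoplastic / atypical neoplastic) into three
    slide-level scores by averaging each class's top-k tile probabilities."""
    k = min(top_k, tile_probs.shape[0])
    return np.sort(tile_probs, axis=0)[-k:].mean(axis=0)

def specificity_at_sensitivity(y_true, scores, target_sensitivity=0.99):
    """Choose the highest threshold that keeps sensitivity >= target
    (atypical = positive class) and return the specificity there."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = np.sort(scores[y_true == 1])
    # Index of the lowest positive we are still allowed to miss.
    idx = int(np.floor((1 - target_sensitivity) * len(pos)))
    threshold = pos[idx]
    neg = scores[y_true == 0]
    return float((neg < threshold).mean())

tile_probs = np.array([[0.9, 0.05, 0.05],
                       [0.2, 0.30, 0.50],
                       [0.1, 0.20, 0.70]])
scores = slide_scores(tile_probs, top_k=2)
spec = specificity_at_sensitivity([1, 1, 1, 0, 0, 0],
                                  [0.6, 0.7, 0.8, 0.1, 0.2, 0.65],
                                  target_sensitivity=0.99)
```

At a 0.99 sensitivity target every positive slide must stay above the threshold, which is exactly why specificity drops as the sensitivity requirement tightens in the figures reported above.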
Affiliation(s)
- Mohsin Bilal: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Department of Artificial Intelligence and Data Science, National University of Computer and Emerging Sciences, Islamabad, Pakistan
- Yee Wah Tsang: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Mahmoud Ali: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Simon Graham: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Histofy, Birmingham, UK
- Emily Hero: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK; Department of Pathology, University Hospitals of Leicester National Health Service Trust, Leicester, UK
- Noorul Wahab: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Katherine Dodd: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Harvir Sahota: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Shaobin Wu: Department of Pathology, East Suffolk and North Essex National Health Service Foundation Trust, Colchester, UK
- Wenqi Lu: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mostafa Jahanifar: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Andrew Robinson: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Ayesha Azam: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Ksenija Benes: Department of Pathology, The Royal Wolverhampton National Health Service Trust, Wolverhampton, UK
- Mohammed Nimir: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Katherine Hewitt: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Abhir Bhalerao: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Hesham Eldaly: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Shan E Ahmed Raza: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kishore Gopalakrishnan: Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK
- Fayyaz Minhas: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- David Snead: Warwick Medical School, University of Warwick, Coventry, UK; Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK; Histofy, Birmingham, UK
- Nasir Rajpoot: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Department of Pathology, University Hospitals Coventry and Warwickshire National Health Service Trust, Coventry, UK; Histofy, Birmingham, UK; The Alan Turing Institute, London, UK
22
Anjum S, Ahmed I, Asif M, Aljuaid H, Alturise F, Ghadi YY, Elhabob R. Lung Cancer Classification in Histopathology Images Using Multiresolution Efficient Nets. Comput Intell Neurosci 2023; 2023:7282944. PMID: 37876944; PMCID: PMC10593544; DOI: 10.1155/2023/7282944.
Abstract
Histopathological images are very effective for investigating the status of various biological structures and diagnosing diseases like cancer. In addition, digital histopathology increases diagnostic precision and provides better image quality and more detail for the pathologist, with multiple viewing options and team annotations. These benefits enable faster treatment, increasing therapy success rates and patients' chances of recovery and survival. However, the present manual examination of these images is tedious and time-consuming for pathologists. Therefore, reliable automated techniques are needed to effectively classify normal and malignant cancer images. This paper applied a deep learning approach, namely EfficientNet and its variants B0 to B7, using a different image resolution for each model, from 224 × 224 pixels to 600 × 600 pixels. We also applied transfer learning and parameter-tuning techniques to improve the results and overcome overfitting. We used the LC25000 Lung and Colon Cancer Histopathological Image dataset, which consists of 25,000 histopathology images in five classes (lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and colon benign tissue). We then preprocessed the dataset to remove noisy images and bring the rest into a standard format. Model performance was evaluated in terms of classification accuracy and loss. All variants achieved good accuracy, but EfficientNetB2 stood out with an accuracy of 97% on 260 × 260 pixel images.
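The variant-to-resolution pairing the study uses follows the standard native input sizes of the EfficientNet family, which scale from 224 px for B0 to 600 px for B7. A small sketch of that mapping (the helper function name is ours, not from the paper):

```python
# Native input resolutions commonly used with the EfficientNet family (B0-B7);
# each compound-scaled variant pairs a deeper/wider network with a larger input.
EFFICIENTNET_RESOLUTION = {
    "B0": 224, "B1": 240, "B2": 260, "B3": 300,
    "B4": 380, "B5": 456, "B6": 528, "B7": 600,
}

def input_shape(variant: str) -> tuple[int, int, int]:
    """Return the (H, W, C) input shape expected for a given variant,
    e.g. the 260 x 260 RGB patches used with B2 above."""
    side = EFFICIENTNET_RESOLUTION[variant]
    return (side, side, 3)
```

Feeding each variant its native resolution matters in practice: histology patches resized below a variant's expected input discard the fine nuclear detail the larger models were scaled up to exploit.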
Affiliation(s)
- Sunila Anjum: Center of Excellence in Information Technology, Institute of Management Sciences, Hayatabad, Peshawar 25000, Pakistan
- Imran Ahmed: School of Computing and Information Science, Anglia Ruskin University, Cambridge, UK
- Muhammad Asif: Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Hanan Aljuaid: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Fahad Alturise: Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass, Qassim, Saudi Arabia
- Yazeed Yasin Ghadi: Department of Software Engineering/Computer Science, Al Ain University, Al Ain, UAE
- Rashad Elhabob: College of Computer Science and Information Technology, Karary University, Omdurman, Sudan
23
Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. PMID: 37835858; PMCID: PMC10572440; DOI: 10.3390/diagnostics13193115.
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in the use of artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep learning-based AIP in laboratory settings often proves challenging to replicate in clinical practice. Because data preparation is critical for AIP, this paper reviews AIP-related studies in the PubMed database published from January 2017 to February 2022; 118 studies were included. An in-depth analysis of data preparation methods is conducted, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into the reasons behind the challenges in reproducing the high performance of AIP in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on a randomized collection of representative disease slides, incorporating rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization together with weakly supervised learning based on whole slide images (WSIs) are effective ways to overcome obstacles to performance reproduction. The key to performance reproducibility lies in having representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully enable clinical-grade AIP.
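One concrete preparation step the review stresses, dataset division, is usually done at the patient level so that no patient's slides leak across the training/validation boundary (slide-level splits inflate apparent performance). A hedged Python sketch of such a split (the function name and the simple shuffled-fraction rule are illustrative assumptions, not the paper's protocol):

```python
import random
from collections import defaultdict

def patient_level_split(slide_to_patient: dict, val_fraction: float = 0.2,
                        seed: int = 0):
    """Split slides into train/validation sets at the *patient* level,
    so no patient contributes slides to both sets."""
    patients = defaultdict(list)
    for slide, patient in slide_to_patient.items():
        patients[patient].append(slide)
    ids = sorted(patients)                      # deterministic base order
    random.Random(seed).shuffle(ids)            # seeded shuffle of patients
    n_val = max(1, int(len(ids) * val_fraction))
    train = [s for p in ids[n_val:] for s in patients[p]]
    val = [s for p in ids[:n_val] for s in patients[p]]
    return train, val
```

The same discipline extends to multi-center data: holding out whole centers, not just whole patients, is what probes the cross-site consistency the authors call key to reproducibility.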
Affiliation(s)
- Yuanqing Yang: Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Kai Sun: Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; Furong Laboratory, Changsha 410013, China
- Yanhua Gao: Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an 710068, China
- Kuansong Wang: Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China; Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China
- Gang Yu: Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
24
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. PMID: 37696178; DOI: 10.1016/j.compbiomed.2023.107388.
Abstract
Colorectal Cancer (CRC) is currently one of the most common and deadly cancers: the third most common malignancy and the fourth leading cause of cancer death worldwide, and the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of image analysis in intestinal histopathology, Computer-aided Diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, along with the medically relevant knowledge of intestinal histopathology they presuppose. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. We then provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, we analyze the existing methods and discuss their application prospects in this field.
Affiliation(s)
- Yujie Jing: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tianming Du: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang: School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun: Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek: Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li: Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
25
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. PMID: 37928897; PMCID: PMC10622844; DOI: 10.1016/j.jpi.2023.100335.
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features serve several digital pathology tasks, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
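As a toy contrast to learned deep features, the "engineered" descriptors this survey covers can be as simple as a normalised per-channel colour histogram of a stained tissue patch, producing a fixed-length vector any conventional classifier can consume. A NumPy sketch (illustrative only; not a method from the survey):

```python
import numpy as np

def rgb_histogram_features(patch: np.ndarray, bins: int = 8) -> np.ndarray:
    """A classic hand-engineered WSI patch feature: per-channel colour
    histograms of an RGB patch (H x W x 3, uint8), each normalised to
    sum to 1 and concatenated into one fixed-length vector."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(patch[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

patch = np.zeros((32, 32, 3), dtype=np.uint8)  # toy all-black patch
vec = rgb_histogram_features(patch)            # length 3 * bins = 24
```

Colour statistics of this kind capture staining intensity but none of the spatial morphology; that gap is precisely what deep feature extractors close.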
Affiliation(s)
- Khaled Al-Thelaya: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
26
Matsushima J, Sato T, Ohnishi T, Yoshimura Y, Mizutani H, Koto S, Ikeda JI, Kano M, Matsubara H, Hayashi H. The Use of Deep Learning-Based Computer Diagnostic Algorithm for Detection of Lymph Node Metastases of Gastric Adenocarcinoma. Int J Surg Pathol 2023; 31:975-981. PMID: 35898183; DOI: 10.1177/10668969221113475.
Abstract
Objectives. The diversifying modalities of treatment for gastric cancer raise urgent demands for the rapid and precise diagnosis of metastases in regional lymph nodes, thereby significantly impacting the workload of pathologists. Meanwhile, the recent advent of whole-slide scanners and deep-learning techniques has enabled the computer-assisted analysis of histopathological images, which could help to alleviate this impact. Thus, we developed a deep learning-based diagnostic algorithm to detect lymph node metastases of gastric adenocarcinoma and evaluated its performance. Methods. We randomly selected 20 patients with gastric adenocarcinoma who underwent surgery as definitive treatment and were found to be node metastasis-positive. Hematoxylin-eosin (HE)-stained glass slides, including a total of 51 metastasis-positive nodes, were retrieved from the specimens of these cases. Other slides with 776 metastasis-negative nodes were also retrieved from twenty other cases with the same disease that were diagnosed as metastasis-negative by the final pathological examinations. All glass slides were digitized using a whole-slide scanner. A deep-learning algorithm to detect metastases was developed using data in which the metastasis-positive parts of the images were annotated by a well-trained pathologist, and its performance in detecting metastases was evaluated. Results. Cross-validation analysis indicated an area under the receiver operating characteristic curve of 0.9994. Free-response receiver operating characteristic (FROC) analysis indicated a sensitivity of 1.00 with three false positives. Further evaluation using an independent dataset showed similar accuracy. Conclusion. This deep learning-based diagnosis-aid system is a promising tool that can assist pathologists involved in gastric cancer care and reduce their workload.
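The headline metric above, the area under the ROC curve, has a useful probabilistic reading: it is the probability that a randomly chosen metastasis-positive case scores higher than a randomly chosen negative one (ties counting half). A tiny pure-Python sketch of that rank formulation (illustrative, not the authors' evaluation code):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    scores higher, with ties counted as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.9994, as reported here, therefore means a positive node outranks a negative one in essentially every pairing; the complementary FROC analysis instead counts false positives per slide at a given lesion-level sensitivity.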
Affiliation(s)
- Jun Matsushima: Department of Pathology, Saitama Medical Center, Dokkyo Medical University, Saitama, Japan; Center for Frontier Medical Engineering, Chiba University, Chiba, Japan; Department of Diagnostic Pathology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Tamotsu Sato: Toshiba Digital Solutions Corporation, Kanagawa, Japan
- Takashi Ohnishi: Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
- Jun-Ichiro Ikeda: Department of Diagnostic Pathology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Masayuki Kano: Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Hisahiro Matsubara: Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Hideki Hayashi: Center for Frontier Medical Engineering, Chiba University, Chiba, Japan; Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
27
Kaczmarzyk JR, Gupta R, Kurc TM, Abousamra S, Saltz JH, Koo PK. ChampKit: A framework for rapid evaluation of deep neural networks for patch-based histopathology classification. Comput Methods Programs Biomed 2023; 239:107631. PMID: 37271050; PMCID: PMC11093625; DOI: 10.1016/j.cmpb.2023.107631.
Abstract
BACKGROUND AND OBJECTIVE Histopathology is the gold standard for diagnosis of many cancers. Recent advances in computer vision, specifically deep learning, have facilitated the analysis of histopathology images for many tasks, including the detection of immune cells and microsatellite instability. However, it remains difficult to identify optimal models and training configurations for different histopathology classification tasks due to the abundance of available architectures and the lack of systematic evaluations. Our objective in this work is to present a software tool that addresses this need and enables robust, systematic evaluation of neural network models for patch classification in histology in a lightweight, easy-to-use package for both algorithm developers and biomedical researchers. METHODS Here we present ChampKit (Comprehensive Histopathology Assessment of Model Predictions toolKit): an extensible, fully reproducible evaluation toolkit that is a one-stop-shop to train and evaluate deep neural networks for patch classification. ChampKit curates a broad range of public datasets. It enables training and evaluation of models supported by timm directly from the command line, without the need for users to write any code. External models are enabled through a straightforward API and minimal coding. As a result, ChampKit facilitates the evaluation of existing and new models and deep learning architectures on pathology datasets, making it more accessible to the broader scientific community. To demonstrate the utility of ChampKit, we establish baseline performance for a subset of possible models that could be employed with ChampKit, focusing on several popular deep learning models, namely ResNet18, ResNet50, and R26-ViT, a hybrid vision transformer. In addition, we compare each model trained either from random weight initialization or with transfer learning from ImageNet pretrained models. For ResNet18, we also consider transfer learning from a self-supervised pretrained model. RESULTS The main result of this paper is the ChampKit software. Using ChampKit, we were able to systematically evaluate multiple neural networks across six datasets. We observed mixed results when evaluating the benefits of pretraining versus random initialization, with no clear benefit except in the low data regime, where transfer learning was found to be beneficial. Surprisingly, we found that transfer learning from self-supervised weights rarely improved performance, which is counter to other areas of computer vision. CONCLUSIONS Choosing the right model for a given digital pathology dataset is nontrivial. ChampKit provides a valuable tool to fill this gap by enabling the evaluation of hundreds of existing (or user-defined) deep learning models across a variety of pathology tasks. Source code and data for the tool are freely accessible at https://github.com/SBU-BMI/champkit.
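The abstract does not expose ChampKit's internal API, so the following pure-Python sketch only illustrates the bookkeeping behind the kind of systematic sweep described above: every model × dataset × initialization combination is enumerated up front so no configuration is cherry-picked (all names below are hypothetical, not ChampKit identifiers).

```python
from itertools import product

def evaluation_grid(models, datasets, init_schemes):
    """Enumerate every (model, dataset, initialization) run for a
    systematic benchmark, in a fixed, reproducible order."""
    return [
        {"model": m, "dataset": d, "init": i}
        for m, d, i in product(models, datasets, init_schemes)
    ]

# Mirrors the paper's setup: three architectures, six datasets,
# random vs. ImageNet-pretrained initialization.
runs = evaluation_grid(
    ["resnet18", "resnet50", "r26_vit"],
    ["dataset_%d" % k for k in range(1, 7)],
    ["random", "imagenet"],
)
```

Enumerating the full grid is what makes comparisons like "pretraining vs. random initialization" fair: every architecture sees every dataset under every scheme.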
Affiliation(s)
- Jakub R Kaczmarzyk
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA; Simons Center for Quantitative Biology, 1 Bungtown Rd, Cold Spring Harbor, 11724, NY, USA.
- Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA
- Tahsin M Kurc
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA
- Shahira Abousamra
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Joel H Saltz
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA.
- Peter K Koo
- Simons Center for Quantitative Biology, 1 Bungtown Rd, Cold Spring Harbor, 11724, NY, USA.
28
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 05/18/2023] [Accepted: 05/20/2023] [Indexed: 06/15/2023] Open
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
29
Xu H, Wu A, Ren H, Yu C, Liu G, Liu L. Classification of colorectal cancer consensus molecular subtypes using attention-based multi-instance learning network on whole-slide images. Acta Histochem 2023; 125:152057. [PMID: 37300984 DOI: 10.1016/j.acthis.2023.152057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 06/01/2023] [Indexed: 06/12/2023]
Abstract
Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous, with different clinical-pathological characteristics, prognostic status, and therapy responses. Thus, precise diagnosis of CRC subtypes is of great significance for improving the prognosis and survival of CRC patients. Nowadays, the most commonly used molecular-level CRC classification system is the Consensus Molecular Subtypes (CMSs). In this study, we applied a weakly supervised deep learning method, named attention-based multi-instance learning (MIL), on formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish the CMS1 subtype from the CMS2, CMS3, and CMS4 subtypes, as well as to distinguish CMS4 from the CMS1, CMS2, and CMS3 subtypes. The advantage of MIL is that it trains on bags of tiled instances with bag-level labels only. Our experiment was performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling and mean-pooling operators to aggregate bag-level scores. The results showed that the 3-layer model achieved the best performance in both comparison groups. When comparing CMS1 with CMS234, max-pooling reached an accuracy (ACC) of 83.86% and the mean-pooling operator reached an AUC of 0.731. When comparing CMS4 with CMS123, mean-pooling reached an ACC of 74.26% and max-pooling reached an AUC of 0.609. Our results imply that WSIs can be utilized to classify CMSs and that manual pixel-level annotation is not a necessity for computational pathology imaging analysis.
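As a rough illustration of the bag-level aggregation this abstract describes (a sketch only, not the authors' implementation; the function names are invented for this example), the max- and mean-pooling operators collapse per-tile scores into one slide-level score:

```python
# Illustrative sketch of multi-instance learning (MIL) aggregation:
# a whole-slide image is a "bag" of tile scores, and only the bag-level
# label is available. Function names here are hypothetical.

def aggregate_bag(instance_scores, operator="max"):
    """Collapse per-tile scores into one slide-level score.

    instance_scores: list of floats in [0, 1], one per tile.
    operator: "max" (the most suspicious tile decides) or
              "mean" (average evidence across all tiles).
    """
    if not instance_scores:
        raise ValueError("bag must contain at least one instance")
    if operator == "max":
        return max(instance_scores)
    if operator == "mean":
        return sum(instance_scores) / len(instance_scores)
    raise ValueError(f"unknown operator: {operator}")

def classify_slide(instance_scores, operator="max", threshold=0.5):
    """Bag-level label from tile scores, using only a slide-level threshold."""
    return int(aggregate_bag(instance_scores, operator) >= threshold)
```

Note how the two operators can disagree on the same bag: a single high-scoring tile triggers a positive call under max-pooling but may be diluted under mean-pooling, which mirrors why the two operators scored differently in the two comparison groups.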
Affiliation(s)
- Huilin Xu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- Aoshen Wu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- He Ren
- Faculty of Medical Instrumentation, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
- Chenghang Yu
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research), Key Laboratory of Parasite and Vector Biology, National Health Commission of the People's Republic of China, WHO Collaborating Center for Tropical Diseases, Shanghai 200025, China
- Gang Liu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China.
- Lei Liu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China.
30
Sengun B, Iscan Y, Tataroglu Ozbulak GA, Kumbasar N, Egriboz E, Sormaz IC, Aksakal N, Deniz SM, Haklidir M, Tunca F, Giles Senyurek Y. Artificial Intelligence in Minimally Invasive Adrenalectomy: Using Deep Learning to Identify the Left Adrenal Vein. Surg Laparosc Endosc Percutan Tech 2023; 33:327-331. [PMID: 37311027 DOI: 10.1097/sle.0000000000001185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 04/18/2023] [Indexed: 06/15/2023]
Abstract
BACKGROUND Minimally invasive adrenalectomy is the main surgical treatment option for the resection of adrenal masses. Recognition and ligation of adrenal veins are critical parts of adrenal surgery. Artificial intelligence and deep learning algorithms that identify anatomic structures during laparoscopic and robot-assisted surgery can be used to provide real-time guidance. METHODS In this experimental feasibility study, intraoperative videos of patients who underwent minimally invasive transabdominal left adrenalectomy procedures between 2011 and 2022 in a tertiary endocrine referral center were retrospectively analyzed and used to develop an artificial intelligence model. Semantic segmentation of the left adrenal vein was performed with deep learning. To train a model, 50 random images per patient were captured during the identification and dissection of the left adrenal vein. A randomly selected 70% of the data was used to train models, 15% for testing, and 15% for validation, with three efficient stage-wise feature pyramid networks (ESFPNet). Dice similarity coefficient (DSC) and intersection-over-union scores were used to evaluate segmentation accuracy. RESULTS A total of 40 videos were analyzed. Annotation of the left adrenal vein was performed in 2000 images. The segmentation network trained on 1400 images was used to identify the left adrenal vein in 300 test images. The mean DSC and sensitivity for the highest-scoring ESFPNet B-2 network were 0.77 (±0.16 SD) and 0.82 (±0.15 SD), respectively, while the maximum DSC was 0.93, suggesting a successful prediction of the anatomy. CONCLUSIONS Deep learning algorithms can predict the left adrenal vein anatomy with high performance and can potentially be utilized to identify critical anatomy during adrenal surgery and provide real-time guidance in the near future.
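The two segmentation metrics named above have simple closed forms over binary masks. A minimal sketch (with an invented helper name, not the study's code) that computes both from flattened 0/1 pixel labels:

```python
# Illustrative sketch: Dice similarity coefficient (DSC) and
# intersection-over-union (IoU) for binary segmentation masks.
# DSC = 2|A∩B| / (|A|+|B|);  IoU = |A∩B| / |A∪B|.

def dice_and_iou(pred, truth):
    """pred, truth: same-length sequences of 0/1 pixel labels."""
    if len(pred) != len(truth):
        raise ValueError("masks must have the same size")
    intersection = sum(p and t for p, t in zip(pred, truth))
    pred_area, truth_area = sum(pred), sum(truth)
    union = pred_area + truth_area - intersection
    # Convention: two empty masks agree perfectly.
    dsc = 2 * intersection / (pred_area + truth_area) if (pred_area + truth_area) else 1.0
    iou = intersection / union if union else 1.0
    return dsc, iou
```

The two metrics are monotonically related (DSC = 2·IoU/(1+IoU)), so they rank predictions identically; DSC is simply more forgiving of partial overlap.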
Affiliation(s)
- Berke Sengun
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Yalin Iscan
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Ismail C Sormaz
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Nihat Aksakal
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Fatih Tunca
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Yasemin Giles Senyurek
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
31
Oh Y, Bae GE, Kim KH, Yeo MK, Ye JC. Multi-Scale Hybrid Vision Transformer for Learning Gastric Histology: AI-Based Decision Support System for Gastric Cancer Treatment. IEEE J Biomed Health Inform 2023; 27:4143-4153. [PMID: 37192031 DOI: 10.1109/jbhi.2023.3276778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Gastric endoscopic screening is an effective way to decide on appropriate gastric cancer treatment at an early stage, reducing the gastric cancer-associated mortality rate. Although artificial intelligence has brought great promise for assisting pathologists in screening digitized endoscopic biopsies, existing artificial intelligence systems are of limited use in planning gastric cancer treatment. We propose a practical artificial intelligence-based decision support system that enables five subclassifications of gastric cancer pathology, which can be directly matched to general gastric cancer treatment guidance. The proposed framework is designed to efficiently differentiate multiple classes of gastric cancer through a multiscale self-attention mechanism using two-stage hybrid vision transformer networks, mimicking the way human pathologists understand histology. The proposed system demonstrates reliable diagnostic performance by achieving a class-average sensitivity above 0.85 in multicentric cohort tests. Moreover, the proposed system demonstrates strong generalization capability on gastrointestinal tract organ cancer by achieving the best class-average sensitivity among contemporary networks. Furthermore, in an observational study, artificial intelligence-assisted pathologists showed significantly improved diagnostic sensitivity with reduced screening time compared to unassisted pathologists. Our results demonstrate that the proposed artificial intelligence system has great potential for providing presumptive pathologic opinions and supporting decisions on appropriate gastric cancer treatment in practical clinical settings.
32
Asif A, Rajpoot K, Graham S, Snead D, Minhas F, Rajpoot N. Unleashing the potential of AI for pathology: challenges and recommendations. J Pathol 2023; 260:564-577. [PMID: 37550878 PMCID: PMC10952719 DOI: 10.1002/path.6168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 06/21/2023] [Accepted: 06/22/2023] [Indexed: 08/09/2023]
Abstract
Computational pathology is currently witnessing a surge in the development of AI techniques, offering promise for achieving breakthroughs and significantly impacting the practices of pathology and oncology. These AI methods bring with them the potential to revolutionize diagnostic pipelines as well as treatment planning and overall patient care. Numerous peer-reviewed studies reporting remarkable performance across diverse tasks serve as a testimony to the potential of AI in the field. However, widespread adoption of these methods in clinical and pre-clinical settings still remains a challenge. In this review article, we present a detailed analysis of the major obstacles encountered during the development of effective models and their deployment in practice. We aim to provide readers with an overview of the latest developments, assist them with insights into identifying some specific challenges that may require resolution, and suggest recommendations and potential future research directions. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, Birmingham, UK
- Simon Graham
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- David Snead
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
33
Li W, Zhang M, Cai S, Wu L, Li C, He Y, Yang G, Wang J, Pan Y. Neural network-based prognostic predictive tool for gastric cardiac cancer: the worldwide retrospective study. BioData Min 2023; 16:21. [PMID: 37464415 DOI: 10.1186/s13040-023-00335-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 07/03/2023] [Indexed: 07/20/2023] Open
Abstract
BACKGROUND The incidence of gastric cardiac cancer (GCC) has increased markedly in recent years, and its prognosis is poor. It is necessary to compare the prognosis of GCC with that of carcinomas at other gastric sites and to set up an effective neural network-based prognostic model to predict the survival of GCC patients. METHODS In this population-based cohort study, we first enrolled the clinical features from the Surveillance, Epidemiology and End Results (SEER) data (n = 31,397) as well as public Chinese data from different hospitals (n = 1049). According to diagnostic time, the SEER data were then divided into two cohorts: the train cohort (patients diagnosed with GCC in 2010-2014, n = 4414) and the test cohort (diagnosed in 2015, n = 957). Age, sex, pathology, tumor, node, and metastasis (TNM) stage, tumor size, surgery or not, radiotherapy or not, chemotherapy or not, and history of malignancy were chosen as the predictive clinical features. The train cohort was utilized to construct the neural network-based prognostic predictive model, which was validated on itself and on the test cohort. Area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. RESULTS The prognosis of GCC patients in the SEER database was worse than that of non-GCC (NGCC) patients, while it was not worse in the Chinese data. A total of 5371 patients were used to construct the model, following inclusion and exclusion criteria. The neural network-based prognostic predictive model had satisfactory performance for GCC overall survival (OS) prediction, achieving an AUC of 0.7431 in the train cohort (95% confidence interval, CI, 0.7423-0.7439) and 0.7419 in the test cohort (95% CI, 0.7411-0.7428). CONCLUSIONS GCC patients indeed have different survival times compared with non-GCC patients, and the neural network-based prognostic predictive tool developed in this study is a novel and promising software tool for the clinical outcome analysis of GCC patients.
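The AUC used above to evaluate the model can be computed directly from its rank-statistic definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch (generic metric code, not the authors'):

```python
# Illustrative sketch: AUC as the Wilcoxon-Mann-Whitney rank statistic.
# Each positive/negative pair contributes 1 if the positive is scored
# higher, 0.5 on a tie, 0 otherwise.

def auc(labels, scores):
    """labels: 0/1 ground truth; scores: model outputs (higher = positive)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n²) pairwise form is the clearest statement of what an AUC of 0.74 means: the model ranks a random event case above a random non-event case about 74% of the time.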
Affiliation(s)
- Wei Li
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, No.9 Beiguan Street, Tongzhou District, Beijing, 101149, China
- Minghang Zhang
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, No.9 Beiguan Street, Tongzhou District, Beijing, 101149, China
- Siyu Cai
- Dermatology Department, General Hospital of Western Theater Command, No.270 Tianhui Road, Chengdu, 610083, Sichuan Province, China
- Liangliang Wu
- Institute of Oncology, Senior Department of Oncology, the First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing, 100853, China
- Chao Li
- Department of Gastroenterology, Peking University Aerospace School of Clinical Medicine, No.15 Yuquan Road, Haidian District, Beijing, 100049, China
- Yuqi He
- Department of Gastroenterology, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, No.9 Beiguan Street, Tongzhou District, Beijing, 101149, China
- Guibin Yang
- Department of Gastroenterology, Peking University Aerospace School of Clinical Medicine, No.15 Yuquan Road, Haidian District, Beijing, 100049, China
- Jinghui Wang
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, No.9 Beiguan Street, Tongzhou District, Beijing, 101149, China.
- Yuanming Pan
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, No.9 Beiguan Street, Tongzhou District, Beijing, 101149, China.
34
Kim S, Lee E. A deep attention LSTM embedded aggregation network for multiple histopathological images. PLoS One 2023; 18:e0287301. [PMID: 37384648 PMCID: PMC10310006 DOI: 10.1371/journal.pone.0287301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 06/03/2023] [Indexed: 07/01/2023] Open
Abstract
Recent advancements in computer vision and neural networks have facilitated medical imaging survival analysis for various medical applications. However, challenges arise when patients have multiple images from multiple lesions, as current deep learning methods provide multiple survival predictions for each patient, complicating result interpretation. To address this issue, we developed a deep learning survival model that can provide accurate predictions at the patient level. We propose a deep attention long short-term memory embedded aggregation network (DALAN) for histopathology images, designed to simultaneously perform feature extraction and aggregation of lesion images. This design enables the model to efficiently learn imaging features from lesions and aggregate lesion-level information to the patient level. DALAN comprises a weight-shared CNN, attention layers, and LSTM layers. The attention layer calculates the significance of each lesion image, while the LSTM layer combines the weighted information to produce an all-encompassing representation of the patient's lesion data. We evaluated DALAN against several naive aggregation methods and competing models on simulated and real datasets. Our results showed that DALAN outperformed the competing methods in terms of c-index on the MNIST and Cancer dataset simulations. On the real TCGA dataset, DALAN also achieved a higher c-index of 0.803±0.006 than the naive methods and the competing models. DALAN effectively aggregates multiple histopathology images, demonstrating a comprehensive survival model using attention and LSTM mechanisms.
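The c-index reported above measures how often a model ranks patient risks consistently with observed survival. A minimal sketch of Harrell's concordance index (a generic metric implementation, not part of DALAN; the function name is invented):

```python
# Illustrative sketch: Harrell's concordance index (c-index) for
# right-censored survival data. A pair (i, j) is comparable when we
# know subject i failed before subject j; the pair is concordant when
# the model assigns subject i the higher risk score.

def concordance_index(times, events, risks):
    """times: survival/censoring times; events: 1=event, 0=censored;
    risks: model risk scores (higher = worse predicted outcome)."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # comparable only if i's event is observed before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return (concordant + 0.5 * ties) / comparable
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.803 reported for DALAN means roughly four of five comparable patient pairs are ordered correctly.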
Affiliation(s)
- Sunghun Kim
- Department of Information and Statistics, Chungnam National University, Daejeon, Republic of Korea
- Department of Artificial Intelligence, Sungkyunkwan University, Suwon, Republic of Korea
- Eunjee Lee
- Department of Information and Statistics, Chungnam National University, Daejeon, Republic of Korea
35
Khazaee Fadafen M, Rezaee K. Ensemble-based multi-tissue classification approach of colorectal cancer histology images using a novel hybrid deep learning framework. Sci Rep 2023; 13:8823. [PMID: 37258631 DOI: 10.1038/s41598-023-35431-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 05/17/2023] [Indexed: 06/02/2023] Open
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world, so digital pathology is essential for assessing prognosis. Due to the increasing resolution and quantity of whole slide images (WSIs), as well as the lack of annotated information, previous methodologies cannot be generalized as effective decision-making systems. Since deep learning (DL) methods can handle large-scale applications, they provide a viable alternative for histopathology image (HI) analysis. DL architectures alone, however, may not be sufficient to classify CRC tissues from anatomical histopathology data. A dilated ResNet (dResNet) structure and attention module are used to generate deep feature maps in order to classify multiple tissues in HIs. In addition, neighborhood component analysis (NCA) overcomes the constraint of computational complexity. After feature selection, the data are fed into a deep support vector machine (SVM) based on an ensemble learning algorithm called DeepSVM. The CRC-5000 and NCT-CRC-HE-100K datasets were analyzed to validate and test the hybrid procedure. We demonstrate that the hybrid model achieves 98.75% and 99.76% accuracy on the CRC datasets. The results showed that unseen WSIs could be successfully classified using only pathologists' labels. Furthermore, the hybrid deep learning method outperforms state-of-the-art approaches in terms of computational efficiency and time. Using the proposed mechanism for tissue analysis, it will be possible to correctly predict CRC based on accurate pathology image classification.
Affiliation(s)
- Masoud Khazaee Fadafen
- Department of Electrical Engineering, Technical and Vocational University (TVU), Tehran, Iran
- Khosro Rezaee
- Department of Biomedical Engineering, Meybod University, Meybod, Iran.
36
Yong MP, Hum YC, Lai KW, Lee YL, Goh CH, Yap WS, Tee YK. Histopathological Gastric Cancer Detection on GasHisSDB Dataset Using Deep Ensemble Learning. Diagnostics (Basel) 2023; 13:diagnostics13101793. [PMID: 37238277 DOI: 10.3390/diagnostics13101793] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 05/08/2023] [Accepted: 05/14/2023] [Indexed: 05/28/2023] Open
Abstract
Gastric cancer is a leading cause of cancer-related deaths worldwide, underscoring the need for early detection to improve patient survival rates. The current clinical gold standard for detection is histopathological image analysis, but this process is manual, laborious, and time-consuming. As a result, there has been growing interest in developing computer-aided diagnosis to assist pathologists. Deep learning has shown promise in this regard, but each model can only extract a limited number of image features for classification. To overcome this limitation and improve classification performance, this study proposes ensemble models that combine the decisions of several deep learning models. To evaluate the effectiveness of the proposed models, we tested their performance on the publicly available gastric cancer dataset, the Gastric Histopathology Sub-size Image Database. Our experimental results showed that the top-5 ensemble model achieved state-of-the-art detection accuracy in all sub-databases, with the highest detection accuracy of 99.20% in the 160 × 160 pixels sub-database. These results demonstrated that ensemble models could extract important features from smaller patch sizes and achieve promising performance. Overall, our proposed work could assist pathologists in detecting gastric cancer through histopathological image analysis and contribute to early gastric cancer detection to improve patient survival rates.
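The simplest form of the decision-combining described above is majority voting over several models' predicted labels. A sketch of the idea (the paper's ensembles are built from trained deep networks; the toy callables here only stand in for them):

```python
# Illustrative sketch: hard-voting ensemble. Each "model" is any
# callable mapping an input to a class label; in practice these would
# be trained networks with differing architectures, so their errors
# are partly independent and the vote cancels some of them out.
from collections import Counter

def ensemble_predict(models, x):
    """Return the majority-vote label among the models' predictions."""
    if not models:
        raise ValueError("need at least one model")
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```

Using an odd number of models avoids exact ties for binary labels; soft voting (averaging predicted probabilities before thresholding) is a common refinement of the same idea.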
Affiliation(s)
- Ming Ping Yong
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Yan Chai Hum
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Ying Loong Lee
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Choon-Hian Goh
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Wun-She Yap
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Yee Kai Tee
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
37
Dehkharghanian T, Bidgoli AA, Riasatian A, Mazaheri P, Campbell CJV, Pantanowitz L, Tizhoosh HR, Rahnamayan S. Biased data, biased AI: deep networks predict the acquisition site of TCGA images. Diagn Pathol 2023; 18:67. [PMID: 37198691 DOI: 10.1186/s13000-023-01355-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 05/07/2023] [Indexed: 05/19/2023] Open
Abstract
BACKGROUND Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on The Cancer Genome Atlas (TCGA) atlas of digital images, or use it as a validation source. One crucial factor that seems to have been widely ignored is the internal bias that originates from the institutions that contributed WSIs to the TCGA dataset, and its effects on models trained on this dataset. METHODS 8,579 paraffin-embedded, hematoxylin and eosin stained, digital slides were selected from the TCGA dataset. More than 140 medical institutions (acquisition sites) contributed to this dataset. Two deep neural networks (DenseNet121 and KimiaNet) were used to extract deep features at 20× magnification. DenseNet121 was pre-trained on non-medical objects; KimiaNet has the same structure but was trained for cancer type classification on TCGA images. The extracted deep features were later used to detect each slide's acquisition site, and also for slide representation in image search. RESULTS DenseNet121's deep features could distinguish acquisition sites with 70% accuracy, whereas KimiaNet's deep features could reveal acquisition sites with more than 86% accuracy. These findings suggest that there are acquisition site-specific patterns that could be picked up by deep neural networks. It has also been shown that these medically irrelevant patterns can interfere with other applications of deep learning in digital pathology, namely image search. This study shows that such acquisition site-specific patterns can be used to identify tissue acquisition sites without any explicit training. Furthermore, it was observed that a model trained for cancer subtype classification exploited such medically irrelevant patterns to classify cancer types.
Digital scanner configuration and noise, tissue stain variation and artifacts, and source site patient demographics are among factors that likely account for the observed bias. Therefore, researchers should be cautious of such bias when using histopathology datasets for developing and training deep networks.
Affiliation(s)
- Taher Dehkharghanian
- University Health Network, Toronto, ON, Canada
- Department of Pathology and Molecular Medicine, Faculty of Health Science, McMaster University, Hamilton, ON, Canada
- Azam Asilian Bidgoli
- Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada
- Nature Inspired Computational Intelligence (NICI) Lab, Department of Engineering, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada
- Bharti School of Engineering and Computer Science, Laurentian University, Sudbury, ON, Canada
- Pooria Mazaheri
- Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada
- Clinton J V Campbell
- Department of Pathology and Molecular Medicine, Faculty of Health Science, McMaster University, Hamilton, ON, Canada
- William Osler Health System, Brampton, ON, Canada
- H R Tizhoosh
- KIMIA Lab, University of Waterloo, Waterloo, ON, Canada
- Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
- Shahryar Rahnamayan
- Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada.
- Nature Inspired Computational Intelligence (NICI) Lab, Department of Engineering, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada.
38
Xiao Y, Wang S, Ling R, Song Y. Application of artificial neural network algorithm in pathological diagnosis and prognosis prediction of digestive tract malignant tumors. Zhejiang Da Xue Xue Bao Yi Xue Ban 2023; 52:243-248. [PMID: 37283110 DOI: 10.3724/zdxbyxb-2022-0569] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The application of artificial neural network algorithms to the pathological diagnosis of gastrointestinal malignant tumors has become a research hotspot. In previous studies, algorithm research mainly focused on model development based on convolutional neural networks, while only a few studies combined convolutional neural networks with recurrent neural networks. The research has covered classical histopathological diagnosis and molecular typing of malignant tumors, as well as the prediction of patient prognosis using artificial neural networks. This article reviews the research progress of artificial neural network algorithms in the pathological diagnosis and prognosis prediction of digestive tract malignant tumors.
Affiliation(s)
- Ya Xiao
- Health Science Center, Ningbo University, Ningbo 315211, Zhejiang Province, China.
- Shuyang Wang
- Department of Pathology, School of Basic Medical Sciences, Fudan University, Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai 200032, China
- Ren Ling
- Shanghai Laizi Software Technology Co. Ltd., Shanghai 201499, China
- Yufei Song
- Department of Gastroenterology, the Affiliated Lihuili Hospital, Ningbo University, Ningbo 315046, Zhejiang Province, China.
Collapse
|
39
|
Lan J, Chen M, Wang J, Du M, Wu Z, Zhang H, Xue Y, Wang T, Chen L, Xu C, Han Z, Hu Z, Zhou Y, Zhou X, Tong T, Chen G. Using less annotation workload to establish a pathological auxiliary diagnosis system for gastric cancer. Cell Rep Med 2023; 4:101004. PMID: 37044091; PMCID: PMC10140598; DOI: 10.1016/j.xcrm.2023.101004.
Abstract
Pathological diagnosis of gastric cancer requires pathologists to have extensive clinical experience. To help pathologists improve diagnostic accuracy and efficiency, we collected 1,514 cases of stomach H&E-stained specimens with complete diagnostic information to establish a deep learning-based pathological auxiliary diagnosis system. At the slide level, our system achieves a specificity of 0.8878 while maintaining a high sensitivity close to 1.0 on 269 biopsy specimens (147 malignancies) and 163 surgical specimens (80 malignancies). The slide-level classification accuracy of our system is 0.9034 for 352 biopsy specimens (201 malignancies) from 50 medical centers. With the help of our system, pathologists' average false-negative and false-positive rates on 100 biopsy specimens (50 malignancies) are reduced to 1/5 and 1/2 of the original rates, respectively. At the same time, the average uncertainty rate and the average diagnosis time are reduced by approximately 22% and 20%, respectively.
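The slide-level sensitivity and specificity reported above can be illustrated with a minimal sketch. The aggregation rule below (call a slide malignant if any tile's score crosses a threshold) is a common convention assumed for illustration, not necessarily the paper's exact rule, and all scores and labels are invented toy values.

```python
def slide_prediction(tile_scores, threshold=0.5):
    """Call a slide malignant (1) if any tile's malignancy score reaches the threshold."""
    return 1 if max(tile_scores) >= threshold else 0

def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred: slide labels, 0 = benign, 1 = malignant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# four toy slides, each a list of per-tile malignancy scores
slides = [[0.1, 0.9, 0.3], [0.2, 0.1], [0.4, 0.7], [0.05, 0.6]]
labels = [1, 0, 1, 0]
preds = [slide_prediction(s) for s in slides]
sens, spec = sensitivity_specificity(labels, preds)
# here sens = 1.0 (no malignant slide missed) and spec = 0.5 (one false positive)
```

The max-over-tiles rule favors sensitivity at the cost of specificity, which matches the paper's reported trade-off (sensitivity near 1.0, specificity 0.8878).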
Affiliation(s)
- Junlin Lan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Musheng Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Jianchao Wang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Min Du
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Zhida Wu
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Hejun Zhang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Yuyang Xue
- School of Engineering, University of Edinburgh, Edinburgh EH8 9JU, UK
- Tao Wang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Lifan Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Chaohui Xu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Zixin Han
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Ziwei Hu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Yuanbo Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Xiaogen Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China; Imperial Vision Technology, Fuzhou, Fujian 350100, China.
- Gang Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China.

40
Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. Improved classification of colorectal polyps on histopathological images with ensemble learning and stain normalization. Comput Methods Programs Biomed 2023; 232:107441. PMID: 36905748; DOI: 10.1016/j.cmpb.2023.107441.
Abstract
BACKGROUND AND OBJECTIVE Early detection of colon adenomatous polyps is critically important because it significantly reduces the risk of developing colon cancer in the future. The key challenge is differentiating adenomatous polyps from visually similar non-adenomatous tissue, which currently depends solely on the experience of the pathologist. To assist pathologists, the objective of this work is to provide a novel non-knowledge-based Clinical Decision Support System (CDSS) for improved detection of adenomatous polyps in colon histopathology images. METHODS The domain shift problem arises when training and test data come from different distributions, with diverse settings and unequal color levels. This problem, which can be tackled with stain normalization techniques, otherwise prevents machine learning models from attaining higher classification accuracy. The proposed method integrates stain normalization with an ensemble of ConvNeXts, competitively accurate, scalable, and robust CNN variants. The improvement is empirically analyzed for five widely used stain normalization techniques. Classification performance is evaluated on three datasets comprising more than 10,000 colon histopathology images. RESULTS Comprehensive experiments demonstrate that the proposed method outperforms state-of-the-art deep convolutional neural network models, attaining 95% classification accuracy on the curated dataset, and 91.1% and 90% on the public EBHI and UniToPatho datasets, respectively. CONCLUSIONS These results show that the proposed method can accurately classify colon adenomatous polyps in histopathology images and retains strong performance on datasets drawn from different distributions, indicating notable generalization ability.
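The ensemble step can be sketched as soft voting, averaging the class-probability vectors produced by several models. This is one standard fusion rule and is shown only as an illustration, not necessarily the paper's exact combination scheme; the model outputs below are invented.

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several models (soft voting)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    return [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]

def predict_class(prob_lists):
    """Return the index of the class with the highest averaged probability."""
    avg = soft_vote(prob_lists)
    return max(range(len(avg)), key=avg.__getitem__)

# three hypothetical ConvNeXt variants scoring one stain-normalized patch;
# classes: 0 = non-adenomatous, 1 = adenomatous
model_outputs = [[0.40, 0.60], [0.55, 0.45], [0.30, 0.70]]
avg = soft_vote(model_outputs)       # averaged probabilities per class
label = predict_class(model_outputs) # 1, i.e. adenomatous
```

Soft voting lets a confident minority of models outvote an uncertain one, which is why ensembles of diverse CNN variants often generalize better than any single member.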
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey.
- Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Bulent Yilmaz
- Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey.

41
Giuste FO, Sequeira R, Keerthipati V, Lais P, Mirzazadeh A, Mohseni A, Zhu Y, Shi W, Marteau B, Zhong Y, Tong L, Das B, Shehata B, Deshpande S, Wang MD. Explainable synthetic image generation to improve risk assessment of rare pediatric heart transplant rejection. J Biomed Inform 2023; 139:104303. PMID: 36736449; PMCID: PMC10031799; DOI: 10.1016/j.jbi.2023.104303.
Abstract
Expert microscopic analysis of cells obtained from frequent heart biopsies is vital for early detection of pediatric heart transplant rejection to prevent heart failure. Detection of this rare condition is prone to low levels of expert agreement owing to the difficulty of identifying subtle rejection signs within biopsy samples. The rarity of pediatric heart transplant rejection also means that very few gold-standard images are available for developing machine learning models. To address this urgent clinical challenge, we developed a deep learning model that automatically quantifies rejection risk within digital images of biopsied tissue, using an explainable synthetic data augmentation approach. The explainable AI framework illustrates how our progressive and inspirational generative adversarial network models distinguish between normal tissue images and those containing cellular rejection signs. To quantify biopsy-level rejection risk, we first detect local rejection features using a binary image classifier trained with expert-annotated and synthetic examples, then convert these local predictions into a biopsy-wide rejection score via an interpretable histogram-based approach. Our model significantly improves on prior work using the same dataset, with an area under the receiver operating characteristic curve (AUROC) of 98.84% for local rejection detection and 95.56% for biopsy-level rejection prediction. A biopsy-level sensitivity of 83.33% makes our approach suitable for early screening of biopsies to prioritize expert analysis. Our framework provides a solution to rare medical imaging challenges currently limited by small datasets.
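The histogram-based aggregation of local (tile-level) predictions into a biopsy-wide score can be sketched as follows. The bin count, the high-risk cutoff, and the tile probabilities are all illustrative assumptions, not values taken from the paper.

```python
def tile_histogram(tile_probs, n_bins=10):
    """Histogram of per-tile rejection probabilities (counts per bin).

    The histogram itself is interpretable: a pathologist can see how many
    tiles fall in each risk band rather than trusting a single opaque score.
    """
    counts = [0] * n_bins
    for p in tile_probs:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        counts[idx] += 1
    return counts

def biopsy_score(tile_probs, high_risk_cutoff=0.7):
    """Fraction of tiles whose rejection probability exceeds the cutoff,
    used here as a simple biopsy-wide risk score."""
    return sum(1 for p in tile_probs if p > high_risk_cutoff) / len(tile_probs)

# invented tile-level rejection probabilities for one biopsy
probs = [0.05, 0.10, 0.80, 0.95, 0.30, 0.75, 0.20, 0.15]
hist = tile_histogram(probs)
score = biopsy_score(probs)  # 3 of 8 tiles above the cutoff
```

A biopsy-level threshold on `score` would then separate low-risk biopsies from those flagged for expert review.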
Affiliation(s)
- Felipe O Giuste
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA.
- Ryan Sequeira
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Vikranth Keerthipati
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Peter Lais
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Ali Mirzazadeh
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Arshawn Mohseni
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Yuanda Zhu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Wenqi Shi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Benoit Marteau
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Yishan Zhong
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Li Tong
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
- Bibhuti Das
- Department of Pediatric Cardiology, University of Mississippi Medical Center, Jackson, 39216, MS, USA
- Bahig Shehata
- Department of Pathology, Wayne State University School of Medicine, Detroit, 48201, MI, USA
- Shriprasad Deshpande
- Department of Pediatric Cardiology, Children's National Health System, Washington, 20010, DC, USA
- May D Wang
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA.

42
Haughey MJ, Bassolas A, Sousa S, Baker AM, Graham TA, Nicosia V, Huang W. First passage time analysis of spatial mutation patterns reveals sub-clonal evolutionary dynamics in colorectal cancer. PLoS Comput Biol 2023; 19:e1010952. PMID: 36913406; PMCID: PMC10035892; DOI: 10.1371/journal.pcbi.1010952.
Abstract
The signature of early cancer dynamics on the spatial arrangement of tumour cells is poorly understood, and yet could encode information about how sub-clones grew within the expanding tumour. Novel methods of quantifying spatial tumour data at the cellular scale are required to link evolutionary dynamics to the resulting spatial architecture of the tumour. Here, we propose a framework using first passage times of random walks to quantify the complex spatial patterns of tumour cell population mixing. First, using a simple model of cell mixing we demonstrate how first passage time statistics can distinguish between different pattern structures. We then apply our method to simulated patterns of mutated and non-mutated tumour cell population mixing, generated using an agent-based model of expanding tumours, to explore how first passage times reflect mutant cell replicative advantage, time of emergence and strength of cell pushing. Finally, we explore applications to experimentally measured human colorectal cancer, and estimate parameters of early sub-clonal dynamics using our spatial computational model. We infer a wide range of sub-clonal dynamics, with mutant cell division rates varying between 1 and 4 times the rate of non-mutated cells across our sample set. Some mutated sub-clones emerged after as few as 100 non-mutant cell divisions, and others only after 50,000 divisions. The majority were consistent with boundary driven growth or short-range cell pushing. By analysing multiple sub-sampled regions in a small number of samples, we explore how the distribution of inferred dynamics could inform about the initial mutational event. Our results demonstrate the efficacy of first passage time analysis as a new methodology in spatial analysis of solid tumour tissue, and suggest that patterns of sub-clonal mixing can provide insights into early cancer dynamics.
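The core idea, that first passage times of random walks distinguish finely mixed from segregated cell populations, can be demonstrated with a toy one-dimensional version. The paper analyzes two-dimensional tumour tissue; this sketch only illustrates the statistic on a periodic ring of invented labels.

```python
import random

def mean_first_passage_time(labels, n_walks=500, max_steps=10_000, seed=0):
    """Mean number of 1-D random-walk steps until the walker first reaches a
    cell whose label differs from the starting cell's, averaged over random
    starting positions on a periodic ring."""
    rng = random.Random(seed)
    n = len(labels)
    total = 0
    for _ in range(n_walks):
        pos = rng.randrange(n)
        start_label = labels[pos]
        steps = 0
        while labels[pos] == start_label and steps < max_steps:
            pos = (pos + rng.choice((-1, 1))) % n  # unbiased nearest-neighbour step
            steps += 1
        total += steps
    return total / n_walks

mixed      = [0, 1] * 20           # finely interleaved populations
segregated = [0] * 20 + [1] * 20   # two solid blocks

fpt_mixed = mean_first_passage_time(mixed)           # always 1 step
fpt_segregated = mean_first_passage_time(segregated) # much longer on average
```

Segregated patterns yield systematically longer first passage times than mixed ones, which is the signal the authors exploit to infer sub-clonal mixing from spatial mutation data.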
Affiliation(s)
- Magnus J. Haughey
- School of Mathematical Sciences, Queen Mary University of London, London, United Kingdom
- Aleix Bassolas
- School of Mathematical Sciences, Queen Mary University of London, London, United Kingdom
- Sandro Sousa
- School of Mathematical Sciences, Queen Mary University of London, London, United Kingdom
- Ann-Marie Baker
- Centre for Evolution and Cancer, Institute of Cancer Research, London, United Kingdom
- Centre for Genomics and Computational Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Trevor A. Graham
- Centre for Evolution and Cancer, Institute of Cancer Research, London, United Kingdom
- Centre for Genomics and Computational Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Vincenzo Nicosia
- School of Mathematical Sciences, Queen Mary University of London, London, United Kingdom
- Weini Huang
- School of Mathematical Sciences, Queen Mary University of London, London, United Kingdom

43
Ohshima H, Mishima K. Oral biosciences: The annual review 2022. J Oral Biosci 2023; 65:1-12. PMID: 36740188; DOI: 10.1016/j.job.2023.01.008.
Abstract
BACKGROUND The Journal of Oral Biosciences is devoted to advancing and disseminating fundamental knowledge concerning every aspect of oral biosciences. HIGHLIGHT This review features review articles in the fields of "Bone Cell Biology," "Tooth Development & Regeneration," "Tooth Bleaching," "Adipokines," "Milk Thistle," "Epithelial-Mesenchymal Transition," "Periodontitis," "Diagnosis," "Salivary Glands," "Tooth Root," "Exosome," "New Perspectives of Tooth Identification," "Dental Pulp," and "Saliva" in addition to the review articles by the winner of the "Lion Dental Research Award" ("Plastic changes in nociceptive pathways contributing to persistent orofacial pain") presented by the Japanese Association for Oral Biology. CONCLUSION The review articles in the Journal of Oral Biosciences have inspired its readers to broaden their knowledge about various aspects of oral biosciences. The current editorial review introduces these exciting review articles.
Affiliation(s)
- Hayato Ohshima
- Division of Anatomy and Cell Biology of the Hard Tissue, Department of Tissue Regeneration and Reconstruction, Niigata University Graduate School of Medical and Dental Sciences, 2-5274 Gakkocho-dori, Chuo-ku, Niigata 951-8514, Japan.
- Kenji Mishima
- Division of Pathology, Department of Oral Diagnostic Sciences, Showa University School of Dentistry, 1-5-8, Hatanodai, Shinagawa-ku, Tokyo 142-8555, Japan

44
Wen Z, Wang S, Yang DM, Xie Y, Chen M, Bishop J, Xiao G. Deep learning in digital pathology for personalized treatment plans of cancer patients. Semin Diagn Pathol 2023; 40:109-119. PMID: 36890029; DOI: 10.1053/j.semdp.2023.02.003.
Abstract
Over the past decade, many new cancer treatments have been developed and made available to patients. However, in most cases these treatments benefit only a specific subgroup of patients, making treatment selection for an individual patient an essential but challenging task for oncologists. Although some biomarkers have been found to be associated with treatment response, their manual assessment is time-consuming and subjective. With the rapid development and expanding implementation of artificial intelligence (AI) in digital pathology, many biomarkers can now be quantified automatically from histopathology images. This approach allows a more efficient and objective assessment of biomarkers, aiding oncologists in formulating personalized treatment plans for cancer patients. This review presents an overview and summary of recent studies on biomarker quantification and treatment response prediction using hematoxylin and eosin (H&E)-stained pathology images. These studies show that an AI-based digital pathology approach can be practical and will become increasingly important in improving the selection of cancer treatments for patients.
Affiliation(s)
- Zhuoyu Wen
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Shidan Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Donghan M Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Yang Xie
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Mingyi Chen
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Justin Bishop
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA.

45
Reis HC, Turk V. Transfer Learning Approach and Nucleus Segmentation with MedCLNet Colon Cancer Database. J Digit Imaging 2023; 36:306-325. PMID: 36127531; PMCID: PMC9984669; DOI: 10.1007/s10278-022-00701-z.
Abstract
Machine learning has recently seen wide use in the medical field. In the diagnosis of serious diseases such as cancer, deep learning techniques can reduce the workload of experts and produce quick solutions. Nuclei in histopathology images are an essential parameter in disease detection. In this study, nucleus segmentation was performed on the colorectal histology MNIST dataset using graph theory, PSO, watershed, and random walker algorithms. In addition, we present the 10-class MedCLNet visual dataset, composed of the NCT-CRC-HE-100K, LC25000, and GlaS datasets, for use in transfer learning studies with deep learning techniques. The study proposes a transfer learning technique using the MedCLNet database: deep neural networks pre-trained with the proposed method were then used for classification on the colorectal histology MNIST dataset. DenseNet201, DenseNet169, InceptionResNetV2, InceptionV3, ResNet152V2, ResNet101V2, and Xception architectures were used in the transfer learning and classification studies, and the proposed approach was analyzed before and after transfer learning with different methods (DenseNet169 + SVM, DenseNet169 + GRU). On the colorectal histology MNIST dataset, the DenseNet169 model initialized with random weights achieved 94.29% accuracy in the multi-class study, rising to 95.00% after transfer learning was applied. Comparison with results from empirical studies demonstrated that the proposed method produces satisfactory outcomes. The application is expected to provide a secondary evaluation for physicians in colon cancer detection and segmentation.
Affiliation(s)
- Hatice Catal Reis
- Department of Geomatics Engineering, Gumushane University, Gumushane, 2900, Turkey.
- Veysel Turk
- Department of Computer Engineering, University of Harran, Sanliurfa, Turkey

46
Mansur A, Saleem Z, Elhakim T, Daye D. Role of artificial intelligence in risk prediction, prognostication, and therapy response assessment in colorectal cancer: current state and future directions. Front Oncol 2023; 13:1065402. PMID: 36761957; PMCID: PMC9905815; DOI: 10.3389/fonc.2023.1065402.
Abstract
Artificial Intelligence (AI) is a branch of computer science that uses optimization, probabilistic, and statistical approaches to analyze and make predictions based on vast amounts of data. In recent years, AI has revolutionized the field of oncology and spearheaded novel approaches in the management of various cancers, including colorectal cancer (CRC). Notably, applications of AI to diagnose, prognosticate, and predict response to therapy in CRC are gaining traction and proving promising. There have also been several advancements in AI technologies to help predict metastases in CRC, and in Computer-Aided Detection (CAD) systems to reduce miss rates for colorectal neoplasia. This article provides a comprehensive review of the role of AI in predicting risk, prognosis, and response to therapies among patients with CRC.
Affiliation(s)
- Arian Mansur
- Harvard Medical School, Boston, MA, United States
- Tarig Elhakim
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States. Correspondence: Dania Daye.

47
Sulyok M, Luibrand J, Strohäker J, Karacsonyi P, Frauenfeld L, Makky A, Mattern S, Zhao J, Nadalin S, Fend F, Schürch CM. Implementing deep learning models for the classification of Echinococcus multilocularis infection in human liver tissue. Parasit Vectors 2023; 16:29. PMID: 36694210; PMCID: PMC9875509; DOI: 10.1186/s13071-022-05640-w.
Abstract
BACKGROUND The histological diagnosis of alveolar echinococcosis can be challenging. Decision support models based on deep learning (DL) are increasingly used to aid pathologists, but data on the histology of tissue-invasive parasitic infections are missing. The aim of this study was to implement DL methods to classify Echinococcus multilocularis liver lesions and normal liver tissue, and to assess which regions and structures play the most important role in classification decisions. METHODS We extracted 15,756 Echinococcus tiles from 28 patients using 59 whole slide images (WSI); 11,602 tiles of normal liver parenchyma from 18 patients using 33 WSI served as a control group. Different pretrained model architectures were used with a random 60-20-20% split. We visualized the predictions using probability-thresholded heat maps of WSI. The area under the curve (AUC) and other performance metrics were calculated. The GradCAM method was used to calculate and visualize important spatial features. RESULTS The models achieved high validation and test set accuracy, and the calculated AUC values were 1.0 in all models. Pericystic fibrosis and necrotic areas, as well as the germinative and laminated layers of the metacestodes, played an important role in decision tasks according to the superimposed GradCAM heatmaps. CONCLUSION Deep learning models achieved a high predictive performance in classifying E. multilocularis liver lesions. A possible next step could be to validate the model using other datasets and to test it against other pathologic entities, such as Echinococcus granulosus infection.
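A 60-20-20% split of tile data is most meaningful when done at the patient level, so that tiles from one patient never appear in both training and test partitions (leakage would inflate the reported AUCs). A minimal sketch, with placeholder patient IDs standing in for the study's 28 Echinococcus patients:

```python
import random

def patient_level_split(patient_ids, fractions=(0.6, 0.2, 0.2), seed=42):
    """Randomly split *patients* (not tiles) into train/val/test partitions,
    so all tiles from one patient land in the same partition."""
    ids = sorted(set(patient_ids))       # deterministic base order
    rng = random.Random(seed)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])       # remainder goes to the test set

patients = [f"P{i:02d}" for i in range(28)]  # hypothetical IDs
train, val, test = patient_level_split(patients)
```

Tiles would then be assigned to partitions by looking up their patient's partition, rather than shuffled independently.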
Affiliation(s)
- Mihaly Sulyok
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Julia Luibrand
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Jens Strohäker
- Department of Surgery, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Peter Karacsonyi
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Leonie Frauenfeld
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Ahmad Makky
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Sven Mattern
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Jing Zhao
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Silvio Nadalin
- Department of Surgery, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Falko Fend
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Christian M. Schürch
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany

48
Tsuneki M, Abe M, Ichihara S, Kanavati F. Inference of core needle biopsy whole slide images requiring definitive therapy for prostate cancer. BMC Cancer 2023; 23:11. PMID: 36600203; DOI: 10.1186/s12885-022-10488-5.
Abstract
BACKGROUND Prostate cancer is often a slowly progressive, indolent disease. Unnecessary treatment resulting from overdiagnosis is a significant concern, particularly for low-grade disease. Active surveillance has been considered as a risk-management strategy to avoid the potential side effects of unnecessary radical treatment. In 2016, the American Society of Clinical Oncology (ASCO) endorsed the Cancer Care Ontario (CCO) Clinical Practice Guideline on active surveillance for the management of localized prostate cancer. METHODS Based on this guideline, we developed a deep learning model to classify prostate adenocarcinoma on core needle biopsy whole slide images (WSIs) as indolent (suitable for active surveillance) or aggressive (requiring definitive therapy). We trained deep learning models using a combination of transfer, weakly supervised, and fully supervised learning approaches on a dataset of core needle biopsy WSIs (n=1300). In addition, we performed an inter-rater reliability evaluation of the WSI classification. RESULTS We evaluated the models on a test set (n=645), achieving ROC-AUCs of 0.846 for indolent and 0.980 for aggressive. The inter-rater reliability evaluation showed s-scores ranging from 0.10 to 0.95, lowest on WSIs classified by the model as both indolent and aggressive, and highest on benign WSIs. CONCLUSION The results demonstrate the promising potential of deployment in a practical prostate adenocarcinoma histopathological diagnostic workflow.
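The ROC-AUCs reported above can be computed without any ML library: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting half (the Mann-Whitney U interpretation). A minimal sketch with invented classifier scores:

```python
def roc_auc(y_true, scores):
    """ROC-AUC as the probability that a random positive outscores a random
    negative (ties count 0.5); equivalent to the Mann-Whitney U statistic."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy scores from a hypothetical "aggressive vs. not" WSI classifier
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(y_true, scores)  # 8/9: one positive is outscored by one negative
```

This pairwise definition also clarifies why AUC is threshold-free: it depends only on the ranking of scores, not on any single operating point.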
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., 2-4-5-104, Akasaka, Chuo-ku, Fukuoka, 810-0042, Japan.
- Makoto Abe
- Department of Pathology, Tochigi Cancer Center, 4-9-13 Yohnan, Utsunomiya, 320-0834, Japan
- Shin Ichihara
- Department of Surgical Pathology, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo, 060-0033, Japan
- Fahdi Kanavati
- Medmain Research, Medmain Inc., 2-4-5-104, Akasaka, Chuo-ku, Fukuoka, 810-0042, Japan
49
Liao J, Li X, Gan Y, Han S, Rong P, Wang W, Li W, Zhou L. Artificial intelligence assists precision medicine in cancer treatment. Front Oncol 2023; 12:998222. [PMID: 36686757 PMCID: PMC9846804 DOI: 10.3389/fonc.2022.998222] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Accepted: 11/22/2022] [Indexed: 01/06/2023] Open
Abstract
Cancer is a major medical problem worldwide. Because of its high heterogeneity, the same drugs or surgical methods may have different curative effects in patients with the same tumor, creating a need for more accurate treatment methods and personalized therapies. Precise treatment of tumors is essential, which makes it urgent to gain an in-depth understanding of the changes tumors undergo, including changes in their genes, proteins, and cancer cell phenotypes, in order to develop targeted treatment strategies for patients. Artificial intelligence (AI) based on big data can extract the hidden patterns, important information, and corresponding knowledge behind enormous amounts of data. For example, machine learning and deep learning, subsets of AI, can be used to mine deep-level information in genomics, transcriptomics, proteomics, radiomics, digital pathology images, and other data, helping clinicians understand tumors synthetically and comprehensively. In addition, AI can discover new biomarkers from data to assist tumor screening, detection, diagnosis, treatment, and prognosis prediction, so as to provide the best treatment for individual patients and improve their clinical outcomes.
Affiliation(s)
- Jinzhuang Liao
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Xiaoying Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Yu Gan
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Shuangze Han
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Pengfei Rong
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China; Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China. *Correspondence: Pengfei Rong; Wei Wang; Wei Li; Li Zhou
- Wei Wang
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China; Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Wei Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China; Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Li Zhou
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China; Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China; Department of Pathology, The Xiangya Hospital of Central South University, Changsha, Hunan, China
50
Yin Z, Yao C, Zhang L, Qi S. Application of artificial intelligence in diagnosis and treatment of colorectal cancer: A novel Prospect. Front Med (Lausanne) 2023; 10:1128084. [PMID: 36968824 PMCID: PMC10030915 DOI: 10.3389/fmed.2023.1128084] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 02/13/2023] [Indexed: 03/29/2023] Open
Abstract
In the past few decades, with the rapid development of information technology, artificial intelligence (AI) has also made significant progress in the medical field. Colorectal cancer (CRC) is the third most commonly diagnosed cancer worldwide, and its incidence and mortality rates are increasing yearly, especially in developing countries. This article reviews the latest progress of AI in diagnosing and treating CRC based on a systematic collection of previous literature. Most CRCs arise from polyp mutations. Computer-aided detection systems can significantly improve the polyp and adenoma detection rate in early colonoscopy screening, thereby lowering the possibility of progression to CRC. Machine learning and bioinformatics analysis can help screen and identify more CRC biomarkers to provide a basis for non-invasive screening. Convolutional neural networks can assist in reading histopathologic tissue images, reducing differences in experience among doctors. Various studies have shown that AI-based high-level auxiliary diagnostic systems can significantly improve the readability of medical images and help clinicians make more accurate diagnostic and therapeutic decisions. Moreover, robotic surgery systems such as da Vinci have been used more and more commonly to treat CRC patients, owing to their precise operating performance. The application of AI in neoadjuvant chemoradiotherapy has further improved the treatment and efficacy evaluation of CRC. In addition, AI, represented by deep learning in gene sequencing research, offers a new treatment option. All of these developments show that AI has a promising prospect in the era of precision medicine.
Affiliation(s)
- Zugang Yin
- Department of General Surgery, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Chenhui Yao
- Department of General Surgery, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- *Correspondence: Chenhui Yao
- Limin Zhang
- Department of Respiratory, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Shaohua Qi
- Institute of Laboratory Animal Science, Chinese Academy of Medical Sciences and Comparative Medicine Center, Peking Union Medical College, Beijing, China