1
McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. PMID: 38704465; PMCID: PMC11069583; DOI: 10.1038/s41746-024-01106-8.
Abstract
Ensuring the diagnostic performance of artificial intelligence (AI) before its introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported in recent years. The aim of this work was to examine the diagnostic accuracy of AI on digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators, and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. The studies came from a range of countries and included over 152,000 WSIs representing many diseases. They reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design, and 99% of the included studies had at least one area at high or unclear risk of bias or applicability concerns. Details on case selection, the division of data between model development and validation, and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the areas studied, but requires more rigorous evaluation of its performance.
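As a rough illustration of the pooling step, the sketch below combines per-study sensitivity and specificity on the logit scale with inverse-variance weights. This is a deliberate simplification of the bivariate random-effects model the review actually used (which models sensitivity and specificity jointly with between-study variance), and the per-study 2x2 counts are invented for the example.

```python
import math

# Hypothetical per-study confusion-matrix counts: (TP, FN, TN, FP)
studies = [(90, 10, 80, 20), (45, 5, 40, 10), (190, 10, 170, 30)]

def logit(p): return math.log(p / (1 - p))
def inv_logit(x): return 1 / (1 + math.exp(-x))

def pool_logit(pairs):
    """Inverse-variance pooling on the logit scale.
    pairs = [(events, non_events), ...]; the variance of a logit
    proportion is approximately 1/events + 1/non_events."""
    num = den = 0.0
    for e, n in pairs:
        v = 1.0 / e + 1.0 / n          # approximate variance of logit(p)
        w = 1.0 / v                    # inverse-variance weight
        num += w * logit(e / (e + n))
        den += w
    return inv_logit(num / den)

pooled_sens = pool_logit([(tp, fn) for tp, fn, tn, fp in studies])
pooled_spec = pool_logit([(tn, fp) for tp, fn, tn, fp in studies])
print(pooled_sens, pooled_spec)
```

The logit transform keeps the pooled estimate inside (0, 1); a full bivariate model would additionally estimate between-study heterogeneity and the sensitivity-specificity correlation.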
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
2
Migliorelli G, Fiorentino MC, Di Cosmo M, Villani FP, Mancini A, Moccia S. On the use of contrastive learning for standard-plane classification in fetal ultrasound imaging. Comput Biol Med 2024; 174:108430. PMID: 38613892; DOI: 10.1016/j.compbiomed.2024.108430.
Abstract
BACKGROUND To investigate the effectiveness of contrastive learning, in particular SimCLR, in reducing the need for large annotated ultrasound (US) image datasets for fetal standard-plane identification. METHODS We explore SimCLR's advantage in cases of both low and high inter-class variability, considering at the same time how classification performance varies with the amount of labels used. This evaluation is performed by exploiting contrastive learning through different training strategies. We apply both quantitative and qualitative analyses, using standard metrics (F1-score, sensitivity, and precision), Class Activation Mapping (CAM), and t-Distributed Stochastic Neighbor Embedding (t-SNE). RESULTS For high inter-class variability classification tasks, contrastive learning does not bring a significant advantage, whereas it proves relevant for low inter-class variability classification, specifically when initialized with ImageNet weights. CONCLUSIONS Contrastive learning approaches are typically used when large amounts of unlabeled data are available, which is not representative of US datasets. We show that SimCLR, either as pre-training with the backbone initialized via ImageNet weights or used in an end-to-end dual-task setting, can positively impact performance over standard transfer learning approaches when the dataset is small and characterized by low inter-class variability.
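A minimal sketch of the SimCLR objective may help: the NT-Xent loss below treats each image's second augmented view as the positive among all other embeddings in the batch. This is a NumPy illustration with random stand-in embeddings, not fetal ultrasound features or the paper's training code.

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), the
    SimCLR contrastive loss. z has shape (2N, d): rows 2k and 2k+1
    are embeddings of two augmented views of the same image."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n = z.shape[0]
    pos = np.array([i + 1 if i % 2 == 0 else i - 1 for i in range(n)])
    # cross-entropy with the positive pair as the "correct class"
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(n), pos]))

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))                           # 4 images x 2 random "views"
aligned = np.repeat(rng.normal(size=(4, 16)), 2, axis=0)   # identical positive pairs
print(nt_xent(views), nt_xent(aligned))
```

Embeddings whose positive pairs agree (the `aligned` batch) yield a lower loss than unrelated pairs, which is exactly the gradient signal that pulls augmented views of the same image together.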
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy
- Adriano Mancini
- Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
3
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. PMID: 37696178; DOI: 10.1016/j.compbiomed.2023.107388.
Abstract
Colorectal Cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related death in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of image analysis in intestinal histopathology, Computer-aided Diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, with medically relevant background on intestinal histopathology. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, the existing methods are summarized, and the application prospects of these methods in this field are given.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
4
Mohammadi M, Cooper J, Arandelović O, Fell C, Morrison D, Syed S, Konanahalli P, Bell S, Bryson G, Harrison DJ, Harris-Birtill D. Weakly supervised learning and interpretability for endometrial whole slide image diagnosis. Exp Biol Med (Maywood) 2022; 247:2025-2037. PMID: 36281799; PMCID: PMC9791308; DOI: 10.1177/15353702221126560.
Abstract
Fully supervised learning for whole slide image-based diagnostic tasks in histopathology is problematic due to the requirement for costly and time-consuming manual annotation by experts. Weakly supervised learning that utilizes only slide-level labels during training is becoming more widespread as it relieves this burden, but it has not yet been applied to endometrial whole slide images in iSyntax format. In this work, we apply a weakly supervised learning algorithm to a real-world dataset of this type for the first time, achieving over 85% validation accuracy and over 87% test accuracy. We then employ interpretability methods, including attention heatmapping, feature visualization, and a novel end-to-end saliency-mapping approach, to identify distinct morphologies learned by the model and build an understanding of its behavior. These interpretability methods, alongside consultation with expert pathologists, allow us to compare machine-learned knowledge with consensus in the field. This work contributes to the state of the art by demonstrating a robust practical application of weakly supervised learning on a real-world digital pathology dataset and shows the importance of fine-grained interpretability for understanding and evaluating model performance in this high-stakes use case.
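One common weakly supervised design for WSIs is attention-based multiple-instance learning, where the slide label is predicted from a pooled bag of patch embeddings and the learned attention weights double as an interpretability heatmap. The sketch below follows that general recipe (in the style of Ilse et al.'s attention MIL) with random made-up weights and embeddings; it is not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

def attention_mil(patches, V, w, clf_w, clf_b):
    """Attention-based multiple-instance pooling: score each patch,
    softmax the scores into attention weights, predict the slide
    label from the attention-weighted sum of patch embeddings."""
    scores = np.tanh(patches @ V) @ w          # one scalar score per patch
    a = np.exp(scores - scores.max())
    a = a / a.sum()                            # attention weights, sum to 1
    slide_vec = a @ patches                    # weighted slide embedding
    logit = slide_vec @ clf_w + clf_b
    prob = 1 / (1 + np.exp(-logit))            # slide-level probability
    return prob, a                             # a doubles as a patch heatmap

d, k = 32, 16
patches = rng.normal(size=(50, d))             # 50 patch embeddings per slide
V = rng.normal(size=(d, k)); w = rng.normal(size=k)
clf_w = rng.normal(size=d); clf_b = 0.0
prob, heat = attention_mil(patches, V, w, clf_w, clf_b)
print(prob, heat.argmax())
```

Because only `prob` needs a slide-level label for training, no patch annotations are required, and `heat` can be painted back onto the slide to show which regions drove the prediction.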
Affiliation(s)
- Mahnaz Mohammadi
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Jessica Cooper
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Ognjen Arandelović
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Christina Fell
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- David Morrison
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Sheeba Syed
- Department of Pathology, Queen Elizabeth University Hospital, Glasgow G51 4TF, UK
- Prakash Konanahalli
- Department of Pathology, Queen Elizabeth University Hospital, Glasgow G51 4TF, UK
- Sarah Bell
- Department of Pathology, Queen Elizabeth University Hospital, Glasgow G51 4TF, UK
- Gareth Bryson
- Department of Pathology, Queen Elizabeth University Hospital, Glasgow G51 4TF, UK
- David J Harrison
- School of Medicine, University of St Andrews, St Andrews KY16 9TF, UK
5
Yang P, Yin X, Lu H, Hu Z, Zhang X, Jiang R, Lv H. CS-CO: A Hybrid Self-Supervised Visual Representation Learning Method for H&E-stained Histopathological Images. Med Image Anal 2022; 81:102539. DOI: 10.1016/j.media.2022.102539.
6
Liu Y, Bilodeau E, Pollack B, Batmanghelich K. Automated detection of premalignant oral lesions on whole slide images using convolutional neural networks. Oral Oncol 2022; 134:106109. PMID: 36126604; DOI: 10.1016/j.oraloncology.2022.106109.
Abstract
INTRODUCTION Oral epithelial dysplasia (OED) is a precursor lesion to oral squamous cell carcinoma, a disease with a reported overall survival rate of 56 percent across all stages. Accurate detection of OED is critical, as progression to oral cancer can be impeded by complete excision of premalignant lesions. However, previous research has demonstrated that grading of OED, even when performed by highly trained experts, is subject to high rates of reader variability and misdiagnosis. Thus, our study aims to develop a convolutional neural network (CNN) model that can identify regions suspicious for OED in whole-slide pathology images. METHODS During model development, we optimized key training hyperparameters, including the loss function, on 112 pathologist-annotated cases split between the training and validation sets. We then compared OED segmentation and classification metrics between two well-established CNN architectures for medical imaging, DeepLabv3+ and UNet++. To further assess generalizability, we evaluated case-level performance on a held-out test set of 44 whole-slide images. RESULTS DeepLabv3+ outperformed UNet++ in overall accuracy, precision, and segmentation metrics in a 4-fold cross-validation study. When applied to the held-out test set, our best performing DeepLabv3+ model achieved an overall accuracy of 93.3 percent and an F1-score of 90.9 percent. CONCLUSION The present study trained and implemented a CNN-based deep learning model for identification and segmentation of OED with reasonable success. Computer-assisted detection was shown to be feasible for premalignant/precancerous oral lesions, laying the groundwork for eventual clinical implementation.
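The metrics used to compare such segmentation models can be illustrated on a toy binary mask. The helper below computes pixel-wise accuracy and F1 (equivalently the Dice coefficient, since for binary masks F1 = 2·TP / (2·TP + FP + FN) = Dice); the prediction and ground-truth masks are hypothetical, not data from the study.

```python
import numpy as np

def seg_metrics(pred, target):
    """Pixel-wise accuracy and F1/Dice for a binary segmentation mask."""
    tp = np.sum((pred == 1) & (target == 1))
    fp = np.sum((pred == 1) & (target == 0))
    fn = np.sum((pred == 0) & (target == 1))
    tn = np.sum((pred == 0) & (target == 0))
    acc = (tp + tn) / pred.size
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return acc, f1

# Hypothetical 8x8 masks: a 4x4 lesion, and a prediction shifted by one pixel
target = np.zeros((8, 8), dtype=int); target[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1
acc, f1 = seg_metrics(pred, target)
print(acc, f1)   # 0.78125 0.5625
```

Note how a small spatial shift leaves accuracy high (most background pixels are still correct) while F1/Dice drops sharply, which is why segmentation studies report both.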
Affiliation(s)
- Yingci Liu
- University of Pittsburgh, Department of Biomedical Informatics, 5607 Baum Boulevard, Pittsburgh, PA 15206, USA; Rutgers School of Dental Medicine, 110 Bergen St, Newark, NJ 07101, USA
- Elizabeth Bilodeau
- University of Pittsburgh School of Dental Medicine, 3501 Terrace St., Pittsburgh, PA 15206, USA
- Brian Pollack
- University of Pittsburgh, Department of Biomedical Informatics, 5607 Baum Boulevard, Pittsburgh, PA 15206, USA
- Kayhan Batmanghelich
- University of Pittsburgh, Department of Biomedical Informatics, 5607 Baum Boulevard, Pittsburgh, PA 15206, USA
7
Sheikh TS, Kim JY, Shim J, Cho M. Unsupervised Learning Based on Multiple Descriptors for WSIs Diagnosis. Diagnostics (Basel) 2022; 12:1480. PMID: 35741289; PMCID: PMC9222016; DOI: 10.3390/diagnostics12061480.
Abstract
Automatic pathological diagnosis is a challenging task because histopathological images with different representations of cellular heterogeneity are sometimes limited. To overcome this, we investigated how holistic and local appearance features with limited information can be fused to enhance analysis performance. We propose an unsupervised deep learning model for whole-slide image diagnosis that uses stacked autoencoders simultaneously fed with multiple image descriptors, such as the histogram of oriented gradients (HOG) and local binary patterns (LBP), along with the original image, to fuse the heterogeneous features. Pre-trained latent vectors are extracted from each autoencoder, and these fused feature representations are utilized for classification. Through various experiments, we observed that training with additional descriptors helps the model overcome the limitations of the many variants and the intricate cellular structure of histopathology data. Our model outperforms existing state-of-the-art approaches, achieving the highest accuracies of 87.2% on ICIAR2018 and 94.6% on Dartmouth, along with other significant metrics on public benchmark datasets. Our model does not rely on a specific set of classifier-dependent pre-trained features to achieve high performance: unsupervised spaces are learned from a number of independent descriptors and can be used with different variants of classifiers to classify cancer diseases from whole-slide images. Furthermore, visualization shows that the proposed model classifies types of breast and lung cancer in a manner similar to the viewpoint of pathologists. We also designed a whole-slide image processing toolbox to extract and process patches from whole-slide images.
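As a sketch of the descriptor-fusion idea, the code below computes a minimal 8-neighbour local binary pattern (LBP) histogram in pure NumPy and concatenates it with an intensity histogram to form a fused patch descriptor. The LBP variant, the bin counts, and the random patch are illustrative simplifications, not the paper's exact pipeline (which feeds such descriptors into stacked autoencoders).

```python
import numpy as np

def lbp_histogram(img):
    """Minimal 8-neighbour LBP: each interior pixel is coded by which
    of its neighbours are >= it (one bit per neighbour), then the
    codes are pooled into a normalized 256-bin texture descriptor."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(7)
patch = rng.integers(0, 256, size=(32, 32))            # stand-in image patch
intensity = np.bincount(patch.ravel() // 16, minlength=16).astype(float)
intensity /= intensity.sum()                           # 16-bin intensity histogram
fused = np.concatenate([lbp_histogram(patch), intensity])  # fused descriptor
print(fused.shape)
```

Each descriptor captures a different aspect of appearance (local texture vs. global intensity), which is the motivation for feeding several of them into parallel autoencoders and concatenating the latent vectors.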
Affiliation(s)
- Jee-Yeon Kim
- Department of Pathology, Pusan National University Yangsan Hospital, School of Medicine, Pusan National University, Yangsan-si 50612, Korea
- Jaesool Shim
- School of Mechanical Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Correspondence: (J.S.); (M.C.)
- Migyung Cho
- Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Korea
- Correspondence: (J.S.); (M.C.)