1
Yang X, Zhang R, Yang Y, Zhang Y, Chen K. PathEX: Make good choice for whole slide image extraction. PLoS One 2024; 19:e0304702. [PMID: 39208135; PMCID: PMC11361590; DOI: 10.1371/journal.pone.0304702] [Received: 03/01/2024; Accepted: 05/17/2024; Indexed: 09/04/2024] Open
Abstract
BACKGROUND The tile-based approach has been widely used for slide-level predictions in whole slide image (WSI) analysis. However, the irregular shapes and variable dimensions of tumor regions pose challenges for the process. To address this issue, we proposed PathEX, a framework that integrates intersection over tile (IoT) and background over tile (BoT) algorithms to extract tile images around the boundaries of annotated regions while excluding the blank tile images within these regions. METHODS We developed PathEX, which incorporates IoT and BoT into tile extraction, for training a classification model on the CAM (239 WSIs) and PAIP (40 WSIs) datasets. By adjusting the IoT and BoT parameters, we generated eight training sets and corresponding models for each dataset. The performance of PathEX was assessed on a testing set comprising 13,076 tile images from 48 WSIs of the CAM dataset and 6,391 tile images from 10 WSIs of the PAIP dataset. RESULTS PathEX could extract tile images around the boundaries of annotated regions differently by adjusting the IoT parameter, while the exclusion of blank tile images within annotated regions was achieved by setting the BoT parameter. By adjusting IoT from 0.1 to 1.0 and 1-BoT from 0.0 to 0.5, we generated eight training sets. Experimentation revealed that set C demonstrated potential as the most optimal candidate. Nevertheless, combinations of IoT values ranging from 0.2 to 0.5 and 1-BoT values ranging from 0.2 to 0.5 also yielded favorable outcomes. CONCLUSIONS In this study, we proposed PathEX, a framework that integrates IoT and BoT algorithms for tile image extraction at the boundaries of annotated regions while excluding blank tiles within these regions. Researchers can conveniently set the thresholds for IoT and BoT to facilitate tile image extraction in their own studies. The insights gained from this research provide valuable guidance for tile image extraction in digital pathology applications.
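The selection rule this abstract describes — keep a tile only if its overlap with the annotated region (intersection over tile, IoT) is high enough and its blank-background fraction (background over tile, BoT) is low enough — can be sketched as below. This is an illustrative reading with hypothetical function names and rectangle-shaped annotations, not the PathEX implementation.

```python
def rect_intersection_area(a, b):
    # a, b: axis-aligned rectangles as (x0, y0, x1, y1)
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def keep_tile(tile, annotation, background_fraction, iot_thresh=0.3, bot_thresh=0.5):
    """Keep a tile when enough of it lies inside the annotated region (IoT)
    and it is not dominated by blank background (BoT)."""
    tile_area = (tile[2] - tile[0]) * (tile[3] - tile[1])
    iot = rect_intersection_area(tile, annotation) / tile_area
    return iot >= iot_thresh and background_fraction <= bot_thresh
```

Sweeping `iot_thresh` and `bot_thresh` over a grid reproduces, in spirit, how the authors generated their eight training sets.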
Affiliation(s)
- Xinda Yang
- Renmin University of China School of Information, Beijing, P.R. China
- Ranze Zhang
- Breast Tumor Center, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Breast Tumor Center, Sun Yat-sen Breast Tumor Hospital, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yuan Yang
- Department of Research and Development, Health Data (Beijing) Technology Co., Ltd, Guangzhou, Guangdong, P.R. China
- Yu Zhang
- Renmin University of China School of Information, Beijing, P.R. China
- Kai Chen
- Breast Tumor Center, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Breast Tumor Center, Sun Yat-sen Breast Tumor Hospital, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Artificial Intelligence Lab, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
2
Ghezloo F, Chang OH, Knezevich SR, Shaw KC, Thigpen KG, Reisch LM, Shapiro LG, Elmore JG. Robust ROI Detection in Whole Slide Images Guided by Pathologists' Viewing Patterns. J Imaging Inform Med 2024. [PMID: 39122892; DOI: 10.1007/s10278-024-01202-x] [Received: 03/01/2024; Revised: 06/24/2024; Accepted: 07/05/2024; Indexed: 08/12/2024]
Abstract
Deep learning techniques offer improvements in computer-aided diagnosis systems. However, acquiring image domain annotations is challenging due to the knowledge and commitment required of expert pathologists. Pathologists often identify regions in whole slide images with diagnostic relevance rather than examining the entire slide, with a positive correlation between the time spent on these critical image regions and diagnostic accuracy. In this paper, a heatmap is generated to represent pathologists' viewing patterns during diagnosis and used to guide a deep learning architecture during training. The proposed system outperforms traditional approaches based on color and texture image characteristics, integrating pathologists' domain expertise to enhance region of interest detection without needing individual case annotations. Evaluating our best model, a U-Net model with a pre-trained ResNet-18 encoder, on a skin biopsy whole slide image dataset for melanoma diagnosis, shows its potential in detecting regions of interest, surpassing conventional methods with an increase of 20%, 11%, 22%, and 12% in precision, recall, F1-score, and Intersection over Union, respectively. In a clinical evaluation, three dermatopathologists agreed on the model's effectiveness in replicating pathologists' diagnostic viewing behavior and accurately identifying critical regions. Finally, our study demonstrates that incorporating heatmaps as supplementary signals can enhance the performance of computer-aided diagnosis systems. Without the availability of eye tracking data, identifying precise focus areas is challenging, but our approach shows promise in assisting pathologists in improving diagnostic accuracy and efficiency, streamlining annotation processes, and aiding the training of new pathologists.
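One simple way to use such a viewing-pattern heatmap as a supplementary training signal is to up-weight the per-pixel loss where pathologists looked longer. The sketch below (plain NumPy, hypothetical names) illustrates that idea only; it is not the authors' U-Net training code.

```python
import numpy as np

def heatmap_weighted_bce(pred, target, heatmap, eps=1e-7):
    """Per-pixel binary cross-entropy, up-weighted where the normalized
    viewing-time heatmap (values in [0, 1]) is high."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weights = 1.0 + heatmap  # base weight 1 everywhere, extra weight under gaze
    return float((weights * bce).sum() / weights.sum())
```

With a zero heatmap this reduces to ordinary mean binary cross-entropy, so the heatmap acts as a soft prior rather than a hard annotation.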
Affiliation(s)
- Fatemeh Ghezloo
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Oliver H Chang
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Lisa M Reisch
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Linda G Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
3
Abstract
Multiplex imaging has emerged as an invaluable tool for immune-oncologists and translational researchers, enabling them to examine intricate interactions among immune cells, stroma, matrix, and malignant cells within the tumor microenvironment (TME). It holds significant promise in the quest to discover improved biomarkers for treatment stratification and identify novel therapeutic targets. Nonetheless, several challenges exist in the realms of study design, experiment optimization, and data analysis. In this review, our aim is to present an overview of the utilization of multiplex imaging in immuno-oncology studies and inform novice researchers about the fundamental principles at each stage of the imaging and analysis process.
Affiliation(s)
- Chen Zhao
- Thoracic and GI Malignancies Branch, CCR, NCI, Bethesda, Maryland, USA
- Lymphocyte Biology Section, Laboratory of Immune System Biology, NIAID, Bethesda, Maryland, USA
- Ronald N Germain
- Lymphocyte Biology Section, Laboratory of Immune System Biology, NIAID, Bethesda, Maryland, USA
4
Akerman M, Choudhary S, Liebmann JM, Cioffi GA, Chen RWS, Thakoor KA. Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise. Front Med (Lausanne) 2023; 10:1251183. [PMID: 37841006; PMCID: PMC10571140; DOI: 10.3389/fmed.2023.1251183] [Received: 07/03/2023; Accepted: 09/14/2023; Indexed: 10/17/2023] Open
Abstract
This study aimed to investigate the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, while senior residents had 2-3 years of experience. The expert group consisted of fellows and faculty with 3 to more than 30 years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and rate their diagnostic confidence. The eye movements of each participant were recorded with a Pupil Labs Core eye tracker as they diagnosed the reports. Expert ophthalmologists exhibited more refined and focused eye fixations, particularly on specific regions of the OCT reports, such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL B-scan. The binary classification models developed using the derived features demonstrated accuracy of up to 94.0% in differentiating between expert and novice clinicians. The derived features and trained binary classification models hold promise for improving the accuracy of glaucoma detection and distinguishing between expert and novice ophthalmologists. These findings have implications for enhancing ophthalmic education and for the development of effective diagnostic tools.
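Features of the kind described here — fixation counts, durations, and spatial spread — are straightforward to derive from raw fixation data. A minimal sketch follows, assuming a hypothetical input format of one `(x, y, duration_ms)` tuple per fixation; it is not the authors' feature set.

```python
import math

def fixation_features(fixations):
    """Summarize a list of (x, y, duration_ms) fixations into simple
    gaze metrics usable as classifier inputs."""
    n = len(fixations)
    mean_dur = sum(d for _, _, d in fixations) / n
    cx = sum(x for x, _, _ in fixations) / n
    cy = sum(y for _, y, _ in fixations) / n
    # Root-mean-square distance from the centroid: low values correspond
    # to the focused fixation patterns reported for experts.
    spread = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                           for x, y, _ in fixations) / n)
    return {"n_fixations": n, "mean_duration_ms": mean_dur, "spread_px": spread}
```

Vectors like these could then be fed to any binary classifier to separate expert from novice readers.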
Affiliation(s)
- Michelle Akerman
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Sanmati Choudhary
- Department of Computer Science, Columbia University, New York, NY, United States
- Jeffrey M. Liebmann
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- George A. Cioffi
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Royce W. S. Chen
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Kaveri A. Thakoor
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Department of Computer Science, Columbia University, New York, NY, United States
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
5
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897; PMCID: PMC10622844; DOI: 10.1016/j.jpi.2023.100335] [Received: 05/29/2023; Revised: 07/17/2023; Accepted: 07/19/2023; Indexed: 11/07/2023] Open
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating the storing, viewing, processing, and sharing of digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSIs and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features support several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
6
Xu Y, Zheng X, Li Y, Ye X, Cheng H, Wang H, Lyu J. Exploring patient medication adherence and data mining methods in clinical big data: A contemporary review. J Evid Based Med 2023; 16:342-375. [PMID: 37718729; DOI: 10.1111/jebm.12548] [Received: 07/04/2023; Accepted: 08/30/2023; Indexed: 09/19/2023]
Abstract
BACKGROUND Increasingly, patient medication adherence data are being consolidated from claims databases and electronic health records (EHRs). Such databases offer an indirect avenue to gauge medication adherence in our data-rich healthcare milieu. The surge in data accessibility, coupled with the pressing need for its conversion to actionable insights, has spotlighted data mining, with machine learning (ML) emerging as a pivotal technique. Nonadherence poses heightened health risks and escalates medical costs. This paper elucidates the synergistic interaction between medical database mining for medication adherence and the role of ML in fostering knowledge discovery. METHODS We conducted a comprehensive review of EHR applications in the realm of medication adherence, leveraging ML techniques. We expounded on the evolution and structure of medical databases pertinent to medication adherence and harnessed both supervised and unsupervised ML paradigms to delve into adherence and its ramifications. RESULTS Our study underscores the applications of medical databases and ML, encompassing both supervised and unsupervised learning, for medication adherence in clinical big data. Databases like SEER and NHANES, often underutilized due to their intricacies, have gained prominence. Employing ML to excavate patient medication logs from these databases facilitates adherence analysis. Such findings are pivotal for clinical decision-making, risk stratification, and scholarly pursuits, aiming to elevate healthcare quality. CONCLUSION Advanced data mining in the era of big data has revolutionized medication adherence research, thereby enhancing patient care. Emphasizing bespoke interventions and research could herald transformative shifts in therapeutic modalities.
Affiliation(s)
- Yixian Xu
- Department of Anesthesiology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Xinkai Zheng
- Department of Dermatology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Yuanjie Li
- Planning & Discipline Construction Office, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Xinmiao Ye
- Department of Anesthesiology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Hongtao Cheng
- School of Nursing, Jinan University, Guangzhou, China
- Hao Wang
- Department of Anesthesiology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Jun Lyu
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Guangzhou, China
7
Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023; 163:107083. [PMID: 37315382; DOI: 10.1016/j.compbiomed.2023.107083] [Received: 12/20/2022; Revised: 05/10/2023; Accepted: 05/27/2023; Indexed: 06/16/2023]
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a marked increase in publications on the topic. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology, focused on melanoma. In comparison to well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application comprises a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. Thus, we are particularly interested in the pathology-specific technical state of the art. We also aim to summarize the best performances achieved thus far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches, to identify 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved. DL was adopted relatively late in this field, which still lacks the wider adoption of DL methods already shown to be effective for other applications. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks is still inferior to, for example, wet-lab testing. Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
Affiliation(s)
- Daniel Sauter
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa
- Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
8
Hossain MS, Shahriar GM, Syeed MMM, Uddin MF, Hasan M, Shivam S, Advani S. Region of interest (ROI) selection using vision transformer for automatic analysis using whole slide images. Sci Rep 2023; 13:11314. [PMID: 37443188; PMCID: PMC10344922; DOI: 10.1038/s41598-023-38109-6] [Received: 05/09/2023; Accepted: 07/03/2023; Indexed: 07/15/2023] Open
Abstract
Selecting regions of interest (ROI) is a common step in medical image analysis across all imaging modalities. An ROI is a subset of an image appropriate for the intended analysis, identified manually by experts. In modern pathology, the analysis involves automatically processing multidimensional, high-resolution whole slide image (WSI) tiles carrying an overwhelming quantity of structural and functional information. Despite recent improvements in computing capacity, analyzing such a plethora of data is challenging but vital to accurate analysis. Automatic ROI detection can significantly reduce the number of pixels to be processed, speed up the analysis, improve accuracy, and reduce dependency on pathologists. In this paper, we present an ROI detection method for WSI and demonstrate it for human epidermal growth factor receptor 2 (HER2) grading in breast cancer patients. Existing HER2 grading relies on manual ROI selection, which is tedious, time-consuming, and suffers from inter-observer and intra-observer variability. This study found that the HER2 grade changes with ROI selection. We propose an ROI detection method using a Vision Transformer and investigate the role of image magnification in ROI detection. This method yielded an accuracy of 99% using 20× WSI and 97% using 10× WSI for ROI detection. In the demonstration, the proposed method increased the diagnostic agreement with the clinical scores to 99.3% and reduced the time for automated HER2 grading to 15 seconds.
Affiliation(s)
- Md Shakhawat Hossain
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- M M Mahbubul Syeed
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- Mohammad Faisal Uddin
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- Mahady Hasan
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- Shingla Shivam
- Department of Pathology, SL Raheja Hospital, Mumbai, 400016, India
- Suresh Advani
- Department of Pathology, SL Raheja Hospital, Mumbai, 400016, India
9
Improving the Diagnosis of Skin Biopsies Using Tissue Segmentation. Diagnostics (Basel) 2022; 12:1713. [PMID: 35885617; PMCID: PMC9316584; DOI: 10.3390/diagnostics12071713] [Received: 06/01/2022; Revised: 07/04/2022; Accepted: 07/12/2022; Indexed: 11/16/2022] Open
Abstract
Invasive melanoma, a common type of skin cancer, is considered one of the deadliest. Pathologists routinely evaluate melanocytic lesions to determine the amount of atypia and, if the lesion represents an invasive melanoma, its stage. However, due to the complicated nature of these assessments, inter- and intra-observer variability among pathologists in their interpretations is very common. Machine-learning techniques have shown impressive and robust performance on a variety of tasks, including in healthcare. In this work, we study the potential of including semantic segmentation of clinically important tissue structures to improve the diagnosis of skin biopsy images. Our experimental results show a 6% improvement in F-score when using whole slide images along with epidermal nest and cancerous dermal nest segmentation masks, compared to using whole slide images alone, in training and testing the diagnosis pipeline.
10
Du H, Wen S, Guo Y, Jin F, Gallas BD. Single reader between-cases AUC estimator with nested data. Stat Methods Med Res 2022; 31:2069-2086. [PMID: 35790462; DOI: 10.1177/09622802221111539] [Indexed: 11/17/2022]
Abstract
The area under the receiver operating characteristic curve (AUC) is widely used in evaluating diagnostic performance for many clinical tasks. It is still challenging to evaluate the reading performance of distinguishing between positive and negative regions of interest (ROIs) in the nested-data problem, where multiple ROIs are nested within the cases. To address this issue, we identify two kinds of AUC estimators, within-cases AUC and between-cases AUC. We focus on the between-cases AUC estimator, since our main research interest is in patient-level diagnostic performance rather than location-level performance (the ability to separate ROIs with and without disease within each patient). Another reason is that as the case number increases, the number of between-cases paired ROIs is much larger than the number of within-cases ROIs. We provide estimators for the variance of the between-cases AUC and for the covariance when there are two readers. We derive and prove the above estimators' theoretical values based on a simulation model and characterize their behavior using Monte Carlo simulation results. We also provide a real-data example. Moreover, we connect the distribution-based simulation model with the simulation model based on the linear mixed-effect model, which helps better understand the sources of variation in the simulated dataset.
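The basic between-cases estimator the abstract distinguishes — an empirical AUC computed only over positive/negative ROI pairs drawn from different cases — can be sketched as follows. This shows the point estimate only; the paper's contribution also covers variance and covariance estimators, which are omitted here, and the data layout is an illustrative assumption.

```python
def between_cases_auc(rois):
    """rois: iterable of (case_id, label, score) with label 1 for diseased
    ROIs and 0 for normal ones. Returns the empirical AUC over all
    positive/negative ROI pairs whose ROIs come from different cases."""
    pos = [(c, s) for c, y, s in rois if y == 1]
    neg = [(c, s) for c, y, s in rois if y == 0]
    wins = pairs = 0.0
    for case_p, score_p in pos:
        for case_n, score_n in neg:
            if case_p == case_n:  # within-cases pair: excluded by design
                continue
            pairs += 1
            wins += 1.0 if score_p > score_n else 0.5 if score_p == score_n else 0.0
    return wins / pairs
```

As the abstract notes, the number of between-cases pairs grows much faster with the number of cases than the number of within-cases pairs, which is one reason to prefer this estimator for patient-level performance.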
Affiliation(s)
- Hongfei Du
- Statistics Department, The George Washington University, Washington, USA
- Si Wen
- U.S. Food and Drug Administration, CDRH, OSEL, DIDSR, Silver Spring, USA
- Yufei Guo
- Statistics Department, The George Washington University, Washington, USA
- Fang Jin
- Statistics Department, The George Washington University, Washington, USA
- Brandon D Gallas
- U.S. Food and Drug Administration, CDRH, OSEL, DIDSR, Silver Spring, USA
11
Rojas F, Hernandez S, Lazcano R, Laberiano-Fernandez C, Parra ER. Multiplex Immunofluorescence and the Digital Image Analysis Workflow for Evaluation of the Tumor Immune Environment in Translational Research. Front Oncol 2022; 12:889886. [PMID: 35832550; PMCID: PMC9271766; DOI: 10.3389/fonc.2022.889886] [Received: 03/04/2022; Accepted: 05/27/2022; Indexed: 11/13/2022] Open
Abstract
A robust understanding of the tumor immune environment has important implications for cancer diagnosis, prognosis, research, and immunotherapy. Traditionally, immunohistochemistry (IHC) has been regarded as the standard method for detecting proteins in situ, but this technique allows for the evaluation of only one cell marker per tissue sample at a time. Multiplexed imaging technologies, by contrast, enable multiparametric analysis of a tissue section at the same time. Also, through the curation of specific antibody panels, these technologies enable researchers to study the cell subpopulations within a single immunological cell group. Thus, multiplexed imaging gives investigators the opportunity to better understand tumor cells, immune cells, and the interactions between them. In the multiplexed imaging technology workflow, once the protocol for a tumor immune microenvironment study has been defined, histological slides are digitized to produce high-resolution images in which regions of interest are selected for the interrogation of simultaneously expressed immunomarkers (including those co-expressed by the same cell) using image analysis software and algorithms. Most currently available image analysis software packages use similar machine learning approaches in which tissue segmentation first defines the different components that make up the regions of interest, and cell segmentation then defines the different parameters, such as the nucleus and cytoplasm, that the software must use to segment single cells. Image analysis tools have driven dramatic evolution in the field of digital pathology over the past several decades and provided the data necessary for translational research and the discovery of new therapeutic targets. The next step in the growth of digital pathology is the optimization and standardization of these tasks, including image analysis algorithm creation, to increase the amount and accuracy of the data generated in a short time. The aim of this review is to describe this process, including algorithm creation for multiplex immunofluorescence analysis, as an essential part of that optimization and standardization in cancer research.
12
Chronic Lymphocytic Leukemia Progression Diagnosis with Intrinsic Cellular Patterns via Unsupervised Clustering. Cancers (Basel) 2022; 14:2398. [PMID: 35626003; PMCID: PMC9139505; DOI: 10.3390/cancers14102398] [Received: 03/22/2022; Revised: 04/21/2022; Accepted: 04/25/2022; Indexed: 12/12/2022] Open
Abstract
Simple Summary
Distinguishing between chronic lymphocytic leukemia (CLL), accelerated CLL (aCLL), and full-blown transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications. Identifying cellular phenotypes via unsupervised clustering provides the most robust analytic performance in analyzing digitized pathology slides. This study serves as a proof of concept that using an unsupervised machine learning scheme can enhance diagnostic accuracy.
Abstract
Identifying the progression of chronic lymphocytic leukemia (CLL) to accelerated CLL (aCLL) or transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications as it prompts a major change in patient management. However, the differentiation between these disease phases may be challenging in routine practice. Unsupervised learning has gained increased attention because of its substantial potential in data intrinsic pattern discovery. Here, we demonstrate that cellular feature engineering, identifying cellular phenotypes via unsupervised clustering, provides the most robust analytic performance in analyzing digitized pathology slides (accuracy = 0.925, AUC = 0.978) when compared to alternative approaches, such as mixed features, supervised features, unsupervised/mixed/supervised feature fusion and selection, as well as patch-based convolutional neural network (CNN) feature extraction. We further validate the reproducibility and robustness of unsupervised feature extraction via stability and repeated splitting analysis, supporting its utility as a diagnostic aid in identifying CLL patients with histologic evidence of disease progression. The outcome of this study serves as proof of principle using an unsupervised machine learning scheme to enhance the diagnostic accuracy of the heterogeneous histology patterns that pathologists might not easily see.
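The cellular feature-engineering idea described above — cluster single-cell measurements into phenotypes without labels, then describe each slide by its phenotype composition — can be sketched as below. The cluster count and feature layout are illustrative assumptions, and the tiny k-means is written out only to keep the sketch self-contained; it is not the study's pipeline.

```python
import numpy as np

def kmeans_labels(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers, keeping the old center if a cluster empties.
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def phenotype_histogram(cell_features, k=3):
    """Cluster cells into k phenotypes and return the slide-level
    composition vector (fraction of cells per phenotype)."""
    X = np.asarray(cell_features, dtype=float)
    labels = kmeans_labels(X, k)
    return np.bincount(labels, minlength=k) / len(labels)
```

Slide-level composition vectors like these can then feed a downstream CLL/aCLL/RT classifier.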
|
13
|
Mehta S, Lu X, Wu W, Weaver D, Hajishirzi H, Elmore JG, Shapiro LG. End-to-End Diagnosis of Breast Biopsy Images with Transformers. Med Image Anal 2022; 79:102466. [PMID: 35525135 PMCID: PMC10162595 DOI: 10.1016/j.media.2022.102466] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 02/25/2022] [Accepted: 04/18/2022] [Indexed: 01/18/2023]
Abstract
Diagnostic disagreements among pathologists occur throughout the spectrum of benign to malignant lesions. A computer-aided diagnostic system capable of reducing these uncertainties would have important clinical impact. To develop a computer-aided diagnosis method for classifying breast biopsy images into a range of diagnostic categories (benign, atypia, ductal carcinoma in situ, and invasive breast cancer), we introduce a transformer-based holistic attention network called HATNet. Unlike state-of-the-art histopathological image classification systems that use a two-pronged approach, i.e., first learning local representations using a multi-instance learning framework and then combining these local representations to produce image-level decisions, HATNet streamlines the histopathological image classification pipeline and shows how to learn representations from gigapixel-size images end-to-end. HATNet extends the bag-of-words approach and uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision. It outperforms the previous best network, Y-Net, which uses supervision in the form of tissue-level segmentation masks, by 8%. Importantly, our analysis reveals that HATNet learns representations from clinically relevant structures, and it matches the classification accuracy of 87 U.S. pathologists on this challenging test set.
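As a minimal illustration of attention-based pooling over patch "words", the building block HATNet's self-attention extends, one slide-level vector can be formed as a softmax-weighted average of patch embeddings. This is a hand-rolled sketch with toy values, not the HATNet implementation:

```python
import math

def attention_pool(patch_embeddings, scores):
    """Softmax-weighted average of patch embedding vectors: a minimal
    stand-in for attention pooling over a slide's patches."""
    m = max(scores)                              # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(patch_embeddings[0])
    return [sum(w * p[d] for w, p in zip(weights, patch_embeddings))
            for d in range(dim)]

# With equal attention scores the pooled vector is the plain average
pooled = attention_pool([[1.0, 0.0], [0.0, 1.0]], scores=[0.0, 0.0])
# -> [0.5, 0.5]
```

In a real network the scores themselves are learned from the patch embeddings, so informative tissue regions receive higher weight.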
Affiliation(s)
- Ximing Lu
- University of Washington, Seattle, USA
- Wenjun Wu
- University of Washington, Seattle, USA
- Donald Weaver
- Department of Pathology, The University of Vermont College of Medicine, USA
- Joann G Elmore
- David Geffen School of Medicine, University of California, Los Angeles, USA
|
14
|
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. [DOI: 10.1007/s10462-021-10121-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
15
|
McAlpine ED, Michelow P, Celik T. The Utility of Unsupervised Machine Learning in Anatomic Pathology. Am J Clin Pathol 2022; 157:5-14. [PMID: 34302331 DOI: 10.1093/ajcp/aqab085] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 04/18/2021] [Indexed: 01/29/2023] Open
Abstract
OBJECTIVES Developing accurate supervised machine learning algorithms is hampered by the lack of representative annotated datasets. Most data in anatomic pathology are unlabeled, and creating large, annotated datasets is a time-consuming and laborious process. Unsupervised learning, which does not require annotated data, has the potential to assist with this challenge. This review aims to introduce the concept of unsupervised learning and illustrate how clustering, generative adversarial networks (GANs), and autoencoders have the potential to address the lack of annotated data in anatomic pathology. METHODS A review of unsupervised learning with examples from the literature was carried out. RESULTS Clustering can be used as part of semisupervised learning, where labels are propagated from a subset of annotated data points to the remaining unlabeled data points in a dataset. GANs may assist by generating large amounts of synthetic data and performing color normalization. Autoencoders allow training of a network on a large, unlabeled dataset and transferring learned representations to a classifier using a smaller, labeled subset (unsupervised pretraining). CONCLUSIONS Unsupervised machine learning techniques such as clustering, GANs, and autoencoders, used individually or in combination, may help address the lack of annotated data in pathology and improve the process of developing supervised learning models.
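The cluster-based semisupervised step this review describes, propagating labels from annotated points to unlabeled points in the same cluster, can be sketched as follows (a hypothetical helper of our own; cluster assignments are assumed to be computed beforehand, and `None` marks unlabeled points):

```python
from collections import Counter, defaultdict

def propagate_labels(clusters, labels):
    """Assign each unlabeled point (label None) the majority label of
    the annotated points that share its cluster."""
    by_cluster = defaultdict(list)
    for c, y in zip(clusters, labels):
        if y is not None:
            by_cluster[c].append(y)
    majority = {c: Counter(ys).most_common(1)[0][0]
                for c, ys in by_cluster.items()}
    return [y if y is not None else majority.get(c)
            for c, y in zip(clusters, labels)]

out = propagate_labels([0, 0, 1, 1], ["benign", None, "tumor", None])
# -> ["benign", "benign", "tumor", "tumor"]
```

The propagated labels can then train an ordinary supervised classifier, which is the appeal of the scheme when annotation budgets are small.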
Affiliation(s)
- Ewen D McAlpine
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, South Africa
- Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
|
16
|
Mikhailov IA, Khvostikov AV, Krylov AS. [Methodical approaches to annotation and labeling of histological images in order to automatically detect the layers of the stomach wall and the depth of invasion of gastric cancer]. Arkh Patol 2022; 84:67-73. [PMID: 36469721 DOI: 10.17116/patol20228406167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
OBJECTIVE To develop original methodological approaches to the annotation and labeling of histological images for the problem of automatic segmentation of the layers of the stomach wall. MATERIAL AND METHODS Three image collections were used in the study: NCT-CRC-HE-100K, CRC-VAL-HE-7K, and part of the PATH-DT-MSU collection. The part of the original PATH-DT-MSU collection used here contains 20 histological images obtained with a high-performance digital scanning microscope. Each image is a fragment of the stomach wall, cut from surgical gastric cancer material and stained with hematoxylin and eosin. Images were acquired with a Leica Aperio AT2 scanning microscope (Leica Microsystems Inc., Germany), and annotations were made in Aperio ImageScope 12.3.3 (Leica Microsystems Inc., Germany). RESULTS A labeling system with 5 classes (tissue types) is proposed: areas of gastric adenocarcinoma (TUM), unchanged areas of the lamina propria (LP), unchanged areas of the muscular lamina of the mucosa (MM), underlying tissues (AT), comprising the submucosa, the muscular layer proper of the stomach, and subserosal areas, and image background (BG). The advantage of this labeling scheme is that it enables highly reliable recognition of the muscular lamina (MM), the natural "line" separating the lamina propria of the mucous membrane from all the underlying layers of the stomach wall. Its disadvantage is the small number of classes, which limits the detail of the automatic segmentation. CONCLUSION An original technique for labeling and annotating images, comprising 5 classes (tissue types), was developed. This technique is effective at the initial stages of training classification and segmentation algorithms for histological images. Further development of a practical diagnostic algorithm that automatically determines the depth of invasion of gastric cancer will require refining and extending the presented labeling and annotation method.
Affiliation(s)
- A S Krylov
- Lomonosov Moscow State University, Moscow, Russia
|
17
|
Biloborodova T, Lomakin S, Skarga-Bandurova I, Krytska Y. Region of Interest Identification in the Cervical Digital Histology Images. Progress in Artificial Intelligence 2022. [DOI: 10.1007/978-3-031-16474-3_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
18
|
Elmes S, Chakraborti T, Fan M, Uhlig H, Rittscher J. Automated Annotator: Capturing Expert Knowledge for Free. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2664-2667. [PMID: 34891800 DOI: 10.1109/embc46164.2021.9630309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep learning-enabled medical image analysis is heavily reliant on expert annotations, which are costly to obtain. We present a simple yet effective automated annotation pipeline that uses autoencoder-based heatmaps to exploit high-level information that can be extracted from a histology viewer in an unobtrusive fashion. By predicting heatmaps on unseen images, the model effectively acts as a robot annotator. The method is demonstrated in the context of coeliac disease histology images in this initial work, but the approach is task-agnostic and may be used for other medical image annotation applications. The results are evaluated by a pathologist and also empirically using a deep network for coeliac disease classification. Initial results using this simple but effective approach are encouraging and merit further investigation, especially considering the possibility of scaling it up to a large number of users.
|
19
|
Aust J, Mitrovic A, Pons D. Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study. SENSORS (BASEL, SWITZERLAND) 2021; 21:6135. [PMID: 34577343 PMCID: PMC8473167 DOI: 10.3390/s21186135] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 09/03/2021] [Accepted: 09/07/2021] [Indexed: 01/20/2023]
Abstract
Background-The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor 'cleanliness' was analysed among other factors. Method-Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of different levels of expertise for various part conditions. Results-The results show an inspection accuracy of 86.8% and 66.8% for clean and dirty blades, respectively. The statistical analysis showed that cleanliness and defect type influenced the inspection accuracy, while expertise was surprisingly not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search with fewer fixations and revisits compared to other groups. Conclusions-Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
Affiliation(s)
- Jonas Aust
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Antonija Mitrovic
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Dirk Pons
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
|
20
|
Lu X, Mehta S, Brunyé TT, Weaver DL, Elmore JG, Shapiro LG. Analysis of Regions of Interest and Distractor Regions in Breast Biopsy Images. IEEE-EMBS Int Conf Biomed Health Inform 2021; 2021:10.1109/bhi50953.2021.9508513. [PMID: 36589620 PMCID: PMC9801511 DOI: 10.1109/bhi50953.2021.9508513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
This paper studies why pathologists can misdiagnose diagnostically challenging breast biopsy cases, using a data set of 240 whole slide images (WSIs). Three experienced pathologists agreed on a consensus reference ground-truth diagnosis for each slide and also a consensus region of interest (ROI) from which the diagnosis could best be made. A study group of 87 other pathologists then diagnosed test sets (60 slides each) and marked their own regions of interest. Diagnoses and ROIs were categorized such that, if a participant's ROI on a given slide differed from the consensus ROI and their diagnosis was incorrect, that ROI was called a distractor. We used the HATNet transformer-based deep learning classifier to evaluate the visual similarities and differences between the true (consensus) ROIs and the distractors. Results showed high accuracy for both the similarity and difference networks, showcasing the challenging nature of feature classification with breast biopsy images. This study is important in the potential use of its results for teaching pathologists how to diagnose breast biopsy slides.
Affiliation(s)
- Ximing Lu
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
- Sachin Mehta
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
- Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, School of Engineering, Tufts University, Medford
- Joann G. Elmore
- David Geffen School of Medicine, University of California, Los Angeles
- Linda G. Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
|
21
|
Mercan C, Aygunes B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Deep Feature Representations for Variable-Sized Regions of Interest in Breast Histopathology. IEEE J Biomed Health Inform 2021; 25:2041-2049. [PMID: 33166257 PMCID: PMC8274968 DOI: 10.1109/jbhi.2020.3036734] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of variable numbers of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. METHODS First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. RESULTS Experiments using a well-characterized data set of 240 slides containing 437 ROIs marked by experienced pathologists with variable sizes and shapes result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. CONCLUSION The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists. SIGNIFICANCE The proposed generic representation that can be extracted from any type of deep convolutional architecture combines the patch appearance information captured by the network activations and the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.
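A stripped-down version of the aggregation idea in this abstract, weighting each patch's feature vector by a class-prediction score and average-pooling across patches into an ROI-level vector, might look like this (names and toy data are illustrative only, not the authors' code):

```python
def roi_representation(patch_features, class_weights):
    """Build an ROI-level vector by scaling each patch feature vector
    with its class-prediction weight, then average-pooling over the
    variable number of patches."""
    dim = len(patch_features[0])
    n = len(patch_features)
    return [sum(w * f[d] for w, f in zip(class_weights, patch_features)) / n
            for d in range(dim)]

rep = roi_representation([[2.0, 4.0], [4.0, 0.0]], class_weights=[0.5, 1.0])
# -> [(0.5*2 + 1.0*4)/2, (0.5*4 + 1.0*0)/2] == [2.5, 1.0]
```

Because the pooling averages over however many patches an ROI contains, the ROI-level vector has a fixed dimension regardless of ROI size, which is what makes variable-sized ROIs tractable for a fixed-input classifier.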
|
22
|
Zhu C, Mei K, Peng T, Luo Y, Liu J, Wang Y, Jin M. Multi-level colonoscopy malignant tissue detection with adversarial CAC-UNet. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.154] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
23
|
Cherian Kurian N, Sethi A, Reddy Konduru A, Mahajan A, Rane SU. A 2021 update on cancer image analytics with deep learning. WIREs Data Mining and Knowledge Discovery 2021. [DOI: 10.1002/widm.1410] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Nikhil Cherian Kurian
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Amit Sethi
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Anil Reddy Konduru
- Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
- Abhishek Mahajan
- Department of Radiology, Tata Memorial Hospital, HBNI, Mumbai, India
- Swapnil Ulhas Rane
- Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
|
24
|
Zheng Y, Jiang Z, Xie F, Shi J, Zhang H, Huai J, Cao M, Yang X. Diagnostic Regions Attention Network (DRA-Net) for Histopathology WSI Recommendation and Retrieval. IEEE Trans Med Imaging 2021; 40:1090-1103. [PMID: 33351756 DOI: 10.1109/tmi.2020.3046636] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The development of whole slide imaging techniques and online digital pathology platforms has accelerated the popularization of telepathology for remote tumor diagnoses. During a diagnosis, the behavior of the pathologist can be recorded by the platform and then archived with the digital case. The browsing path of the pathologist on the WSI is a particularly valuable piece of information in the digital database, because the image content within the path is expected to be highly correlated with the pathologist's diagnosis report. In this article, we propose a novel approach for computer-assisted cancer diagnosis named session-based histopathology image recommendation (SHIR), based on the browsing paths on WSIs. To achieve SHIR, we developed a novel diagnostic regions attention network (DRA-Net) to learn pathology knowledge from the image content associated with the browsing paths. The DRA-Net does not rely on pixel-level or region-level annotations by pathologists. All the training data can be automatically collected by the digital pathology platform without interrupting the pathologists' diagnoses. The proposed approaches were evaluated on a gastric dataset containing 983 cases within 5 categories of gastric lesions. Quantitative and qualitative assessments on the dataset have demonstrated that the proposed SHIR framework with the novel DRA-Net is effective in recommending diagnostically relevant cases for auxiliary diagnosis. The MRR and MAP for the recommendation are 0.816 and 0.836, respectively, on the gastric dataset. The source code of the DRA-Net is available at https://github.com/zhengyushan/dpathnet.
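For reference, the MRR reported above is computed per query as the reciprocal rank of the first relevant recommendation, averaged over queries. A minimal sketch (our own helper, not taken from the DRA-Net code):

```python
def mean_reciprocal_rank(rankings, relevant_sets):
    """MRR: average over queries of 1/rank of the first relevant item
    in the ranked list (contributes 0 when nothing relevant appears)."""
    total = 0.0
    for ranking, relevant in zip(rankings, relevant_sets):
        for i, item in enumerate(ranking):
            if item in relevant:
                total += 1.0 / (i + 1)
                break
    return total / len(rankings)

# Query 1 hits at rank 1, query 2 hits at rank 2
mrr = mean_reciprocal_rank([["a", "b"], ["c", "d"]], [{"a"}, {"d"}])
# -> (1/1 + 1/2) / 2 == 0.75
```

MAP is computed analogously but averages precision over every relevant item in each ranking rather than only the first hit.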
|
25
|
Kumar A, Prateek M. Localization of Nuclei in Breast Cancer Using Whole Slide Imaging System Supported by Morphological Features and Shape Formulas. Cancer Manag Res 2020; 12:4573-4583. [PMID: 32606950 PMCID: PMC7305844 DOI: 10.2147/cmar.s248166] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Accepted: 05/25/2020] [Indexed: 11/23/2022] Open
Abstract
PURPOSE Cancer rates are increasing exponentially worldwide, and over 15 million new cases were expected in the year 2020 according to the World Cancer Report. To support clinical diagnosis of the disease, recent technical advancements in digital microscopy have reduced the cost and increased the efficiency of the process. The Food and Drug Administration (FDA) has issued guidelines, in particular on the development of digital whole slide image scanning systems, which is very helpful for computer-aided diagnosis of breast cancer. METHODS Whole slide imaging is supported by fluorescence, immunohistochemistry, and multispectral imaging concepts. Owing to the high dimensionality of WSI images and the computation involved, finding the region of interest (ROI) on a malignant sample image is a challenging task. Unsupervised machine learning and quantitative analysis of malignant sample images, supported by morphological features and shape formulas, are used to find the correct region of interest. Because of computational limitations, the method works on small patches, integrates the results, and automatically localizes or detects the ROI. The result is also compared to the handcrafted and automated regions of interest provided in the ICIAR2018 dataset. RESULTS A total of 10 hematoxylin and eosin (H&E)-stained malignant breast histology microscopy whole slide image samples were labeled and annotated by two medical experts who are team members of the ICIAR 2018 challenge. Applying the proposed methodology successfully localized the malignant patches of the WSI sample images, obtaining the ROI with an average accuracy of 85.5%. CONCLUSION With the help of the k-means clustering algorithm, morphological features, and shape formulas, it is possible to recognize the region of interest using the whole slide imaging concept.
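A toy version of the k-means step on a scalar per-patch statistic (e.g., a nuclei-density score), assigning each patch to a "background" or "candidate ROI" cluster, might look like this. It is a drastic simplification of the paper's pipeline; the function name and data are ours:

```python
def kmeans_roi_1d(values, iters=10):
    """Two-cluster Lloyd's k-means on a scalar per-patch statistic.
    Returns 0/1 assignments; cluster 1 gathers the high-valued
    (candidate ROI) patches."""
    lo, hi = min(values), max(values)            # simple initialization
    assign = [0] * len(values)
    for _ in range(iters):
        # assign each patch to the nearer of the two centers
        assign = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        low = [v for v, a in zip(values, assign) if a == 0]
        high = [v for v, a in zip(values, assign) if a == 1]
        if low:
            lo = sum(low) / len(low)             # update cluster centers
        if high:
            hi = sum(high) / len(high)
    return assign

flags = kmeans_roi_1d([0.10, 0.20, 0.15, 0.90, 0.85])
# -> [0, 0, 0, 1, 1]  (the two dense patches form the ROI cluster)
```

In practice each patch would carry a multi-dimensional morphology/shape feature vector and a library routine (e.g., scikit-learn's KMeans) would replace this hand-rolled loop.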
Affiliation(s)
- Anil Kumar
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India
- Manish Prateek
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India
|
26
|
Chang MC, Mrkonjic M. Review of the current state of digital image analysis in breast pathology. Breast J 2020; 26:1208-1212. [PMID: 32342590 DOI: 10.1111/tbj.13858] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2019] [Accepted: 11/05/2019] [Indexed: 01/10/2023]
Abstract
Advances in digital image analysis have the potential to transform the practice of breast pathology. In the near future, a move to a digital workflow offers improvements in efficiency. Coupled with artificial intelligence (AI), digital pathology can assist pathologist interpretation, automate time-consuming tasks, and discover novel morphologic patterns. Opportunities for digital enhancements abound in breast pathology, from increasing reproducibility in grading and biomarker interpretation to discovering features that correlate with patient outcome and treatment. Our objective is to review the most recent developments in digital pathology with clear impact on breast pathology practice. Although breast pathologists have so far adopted digital methods only to a limited extent, the field is rapidly evolving. Care is needed to validate emerging technologies for effective patient care.
Affiliation(s)
- Martin C Chang
- University of Vermont Cancer Center, Burlington, VT, USA
- Department of Pathology and Laboratory Medicine, Larner College of Medicine at the University of Vermont, Burlington, VT, USA
- Miralem Mrkonjic
- Sinai Health System, Toronto, ON, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada
|
27
|
Wu W, Li B, Mercan E, Mehta S, Bartlett J, Weaver DL, Elmore JG, Shapiro LG. MLCD: A Unified Software Package for Cancer Diagnosis. JCO Clin Cancer Inform 2020; 4:290-298. [PMID: 32216637 PMCID: PMC7113135 DOI: 10.1200/cci.19.00129] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/05/2020] [Indexed: 01/10/2023] Open
Abstract
PURPOSE Machine Learning Package for Cancer Diagnosis (MLCD) is the result of a National Institutes of Health/National Cancer Institute (NIH/NCI)-sponsored project for developing a unified software package from state-of-the-art breast cancer biopsy diagnosis and machine learning algorithms that can improve the quality of both clinical practice and ongoing research. METHODS Whole-slide images of 240 well-characterized breast biopsy cases, initially assembled under R01 CA140560, were used for developing the algorithms and training the machine learning models. This software package is based on the methodology developed and published under our recent NIH/NCI-sponsored research grant (R01 CA172343) for finding regions of interest (ROIs) in whole-slide breast biopsy images, for segmenting ROIs into histopathologic tissue types, and for using this segmentation in classifiers that can suggest final diagnoses. RESULTS The package provides an ROI detector for whole-slide images and modules for semantic segmentation into tissue classes and diagnostic classification of the ROIs into 4 classes (benign, atypia, ductal carcinoma in situ, invasive cancer). It is available through the GitHub repository under the Massachusetts Institute of Technology license and will later be distributed with the Pathology Image Informatics Platform system. A Web page provides instructions for use. CONCLUSION Our tools have the potential to provide help to other cancer researchers and, ultimately, to practicing physicians, and will motivate future research in this field. This article describes the methodology behind the software development and gives sample outputs to guide those interested in using this package.
Affiliation(s)
- Wenjun Wu
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA
- Beibin Li
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA
- Ezgi Mercan
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA
- Craniofacial Center, Seattle Children's Hospital, Seattle, WA
- Sachin Mehta
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA
- Donald L. Weaver
- Department of Pathology and University of Vermont Cancer Center, Larner College of Medicine, University of Vermont, Burlington, VT
- Joann G. Elmore
- Division of General Internal Medicine and Health Services Research, Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles, Los Angeles, CA
- Linda G. Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA
|
28
|
Jaber MI, Song B, Taylor C, Vaske CJ, Benz SC, Rabizadeh S, Soon-Shiong P, Szeto CW. A deep learning image-based intrinsic molecular subtype classifier of breast tumors reveals tumor heterogeneity that may affect survival. Breast Cancer Res 2020; 22:12. [PMID: 31992350 PMCID: PMC6988279 DOI: 10.1186/s13058-020-1248-3] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Accepted: 01/13/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Breast cancer intrinsic molecular subtype (IMS) as classified by the expression-based PAM50 assay is considered a strong prognostic feature, even when controlled for by standard clinicopathological features such as age, grade, and nodal status, yet the molecular testing required to elucidate these subtypes is not routinely performed. Furthermore, when such bulk assays as RNA sequencing are performed, intratumoral heterogeneity that may affect prognosis and therapeutic decision-making can be missed. METHODS As a more facile and readily available method for determining IMS in breast cancer, we developed a deep learning approach for approximating PAM50 intrinsic subtyping using only whole-slide images of H&E-stained breast biopsy tissue sections. This algorithm was trained on images from 443 tumors that had previously undergone PAM50 subtyping to classify small patches of the images into four major molecular subtypes-Basal-like, HER2-enriched, Luminal A, and Luminal B-as well as Basal vs. non-Basal. The algorithm was subsequently used for subtype classification of a held-out set of 222 tumors. RESULTS This deep learning image-based classifier correctly subtyped the majority of samples in the held-out set of tumors. However, in many cases, significant heterogeneity was observed in assigned subtypes across patches from within a single whole-slide image. We performed further analysis of heterogeneity, focusing on contrasting Luminal A and Basal-like subtypes because classifications from our deep learning algorithm-similar to PAM50-are associated with significant differences in survival between these two subtypes. Patients with tumors classified as heterogeneous were found to have survival intermediate between Luminal A and Basal patients, as well as more varied levels of hormone receptor expression patterns. 
CONCLUSIONS Here, we present a method for minimizing manual work required to identify cancer-rich patches among all multiscale patches in H&E-stained WSIs that can be generalized to any indication. These results suggest that advanced deep machine learning methods that use only routinely collected whole-slide images can approximate RNA-seq-based molecular tests such as PAM50 and, importantly, may increase detection of heterogeneous tumors that may require more detailed subtype analysis.
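A simple way to aggregate patch-level subtype calls into a slide-level call while flagging heterogeneous slides, in the spirit of the analysis above, is majority voting with a dominance threshold (the helper and the threshold value are our own assumptions, not the authors' method):

```python
from collections import Counter

def slide_subtype(patch_calls, het_threshold=0.5):
    """Majority-vote slide-level subtype; flag the slide as
    heterogeneous when the winning subtype's patch fraction falls
    below het_threshold (an arbitrary illustrative cutoff)."""
    label, count = Counter(patch_calls).most_common(1)[0]
    frac = count / len(patch_calls)
    return ("heterogeneous" if frac < het_threshold else label), frac

call, frac = slide_subtype(["LumA"] * 8 + ["Basal"] * 2)
# -> ("LumA", 0.8)
```

Slides returned as "heterogeneous" would be the candidates for the more detailed subtype analysis the abstract recommends.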
Affiliation(s)
- Bing Song
- ImmunityBio, 9920 Jefferson Blvd., Culver City, CA 90232, USA
- Clive Taylor
- Department of Pathology, Keck School of Medicine, University of Southern California, HMR 2011 Zonal Ave., Health Sciences Campus, Los Angeles, CA 90033, USA
- Stephen C. Benz
- ImmunityBio, 2901 Mission St. Ext., Santa Cruz, CA 95066, USA
- Shahrooz Rabizadeh
- NantOmics LLC, 9920 Jefferson Blvd., Culver City, CA 90232, USA
- ImmunityBio, 9920 Jefferson Blvd., Culver City, CA 90232, USA
|
29
|
Mercan E, Mehta S, Bartlett J, Shapiro LG, Weaver DL, Elmore JG. Assessment of Machine Learning of Breast Pathology Structures for Automated Differentiation of Breast Cancer and High-Risk Proliferative Lesions. JAMA Netw Open 2019; 2:e198777. [PMID: 31397859 PMCID: PMC6692690 DOI: 10.1001/jamanetworkopen.2019.8777] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
IMPORTANCE Following recent US Food and Drug Administration approval, adoption of whole slide imaging in clinical settings may be imminent, and diagnostic accuracy, particularly among challenging breast biopsy specimens, may benefit from computerized diagnostic support tools. OBJECTIVE To develop and evaluate computer vision methods to assist pathologists in diagnosing the full spectrum of breast biopsy samples, from benign to invasive cancer. DESIGN, SETTING, AND PARTICIPANTS In this diagnostic study, 240 breast biopsies from Breast Cancer Surveillance Consortium registries that varied by breast density, diagnosis, patient age, and biopsy type were selected, reviewed, and categorized by 3 expert pathologists as benign, atypia, ductal carcinoma in situ (DCIS), and invasive cancer. The atypia and DCIS cases were oversampled to increase statistical power. High-resolution digital slide images were obtained, and 2 automated image features (tissue distribution feature and structure feature) were developed and evaluated according to the consensus diagnosis of the expert panel. The performance of the automated image analysis methods was compared with independent interpretations from 87 practicing US pathologists. Data analysis was performed between February 2017 and February 2019. MAIN OUTCOMES AND MEASURES Diagnostic accuracy defined by consensus reference standard of 3 experienced breast pathologists. RESULTS The accuracy of machine learning tissue distribution features, structure features, and pathologists for classification of invasive cancer vs noninvasive cancer was 0.94, 0.91, and 0.98, respectively; the accuracy of classification of atypia and DCIS vs benign tissue was 0.70, 0.70, and 0.81, respectively; and the accuracy of classification of DCIS vs atypia was 0.83, 0.85, and 0.80, respectively. 
The sensitivity of both machine learning features was lower than that of the pathologists for the invasive vs noninvasive classification (tissue distribution feature, 0.70; structure feature, 0.49; pathologists, 0.84) but higher for the classification of atypia and DCIS vs benign cases (tissue distribution feature, 0.79; structure feature, 0.85; pathologists, 0.72) and the classification of DCIS vs atypia (tissue distribution feature, 0.88; structure feature, 0.89; pathologists, 0.70). For the DCIS vs atypia classification, the specificity of the machine learning feature classification was similar to that of the pathologists (tissue distribution feature, 0.78; structure feature, 0.80; pathologists, 0.82). CONCLUSION AND RELEVANCE The computer-based automated approach to interpreting breast pathology showed promise, especially as a diagnostic aid in differentiating DCIS from atypical hyperplasia.
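The accuracy, sensitivity, and specificity figures compared above all derive from standard confusion-matrix counts. A minimal sketch (hypothetical helper, not code from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from the four
    confusion-matrix counts of a binary classification task."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }
```

The trade-off the study reports, machine features with lower sensitivity but comparable specificity, corresponds to shifting counts between the `fn` and `tp` cells while the `tn`/`fp` cells stay similar.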
Affiliation(s)
- Ezgi Mercan
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
- now with Seattle Children’s Hospital, Seattle, Washington
- Sachin Mehta
- Department of Electrical and Computer Engineering, University of Washington, Seattle
- Jamen Bartlett
- University of Vermont Medical Center, Burlington
- now with Southern Ohio Pathology Consultants, Cincinnati, Ohio
- Linda G. Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
- Donald L. Weaver
- Department of Pathology and University of Vermont Cancer Center, Larner College of Medicine, University of Vermont, Burlington
- Joann G. Elmore
- Division of General Internal Medicine and Health Services Research, Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles
30
Brunyé TT, Drew T, Weaver DL, Elmore JG. A review of eye tracking for understanding and improving diagnostic interpretation. Cognitive Research: Principles and Implications 2019; 4:7. [PMID: 30796618] [PMCID: PMC6515770] [DOI: 10.1186/s41235-019-0159-2]
Abstract
Inspecting digital imaging for primary diagnosis introduces perceptual and cognitive demands for physicians tasked with interpreting visual medical information and arriving at appropriate diagnoses and treatment decisions. The process of medical interpretation and diagnosis involves a complex interplay between visual perception and multiple cognitive processes, including memory retrieval, problem-solving, and decision-making. Eye-tracking technologies are becoming increasingly available in the consumer and research markets and provide novel opportunities to learn more about the interpretive process, including differences between novices and experts, how heuristics and biases shape visual perception and decision-making, and the mechanisms underlying misinterpretation and misdiagnosis. The present review provides an overview of eye-tracking technology, the perceptual and cognitive processes involved in medical interpretation, how eye tracking has been employed to understand medical interpretation and promote medical education and training, and some of the promises and challenges for future applications of this technology.
Affiliation(s)
- Tad T Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, 200 Boston Ave., Suite 3000, Medford, MA, 02155, USA.
- Trafton Drew
- Department of Psychology, University of Utah, 380 1530 E, Salt Lake City, UT, 84112, USA
- Donald L Weaver
- Department of Pathology and University of Vermont Cancer Center, University of Vermont, 111 Colchester Ave., Burlington, VT, 05401, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, University of California at Los Angeles, 10833 Le Conte Ave., Los Angeles, CA, 90095, USA
31
Gecer B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognition 2018; 84:345-356. [PMID: 30679879] [PMCID: PMC6342566] [DOI: 10.1016/j.patcog.2018.07.022]
Abstract
Generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained from consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both saliency detector and classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.
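The final step above fuses the saliency map with the per-class probability maps for pixel-wise labeling. A hedged sketch of one simple fusion rule, salient pixels take their most probable diagnostic class, non-salient pixels a background label; the function, threshold, and label convention are assumptions, not the paper's exact rule:

```python
def fuse_maps(saliency, class_probs, thresh=0.5):
    """Pixel-wise fusion of a 2-D saliency map with per-pixel class
    probability lists: pixels whose saliency exceeds the threshold
    receive their argmax diagnostic class (1..C); others receive a
    background label (0). Illustrative sketch only."""
    labels = []
    for sal_row, prob_row in zip(saliency, class_probs):
        row = []
        for sal, probs in zip(sal_row, prob_row):
            if sal < thresh:
                row.append(0)  # not diagnostically relevant
            else:
                row.append(1 + probs.index(max(probs)))
        labels.append(row)
    return labels
```

Slide-level categorization can then be derived from the resulting label map, e.g. by reporting the most severe class present.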
Affiliation(s)
- Baris Gecer
- Department of Computer Engineering, Bilkent University, Ankara, 06800, Turkey
- Selim Aksoy
- Department of Computer Engineering, Bilkent University, Ankara, 06800, Turkey
- Ezgi Mercan
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Linda G. Shapiro
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Donald L. Weaver
- Department of Pathology, University of Vermont, Burlington, VT 05405, USA
- Joann G. Elmore
- Department of Medicine, University of Washington, Seattle, WA 98195, USA
32
Zheng Y, Jiang Z, Zhang H, Xie F, Ma Y, Shi H, Zhao Y. Histopathological Whole Slide Image Analysis Using Context-Based CBIR. IEEE Transactions on Medical Imaging 2018; 37:1641-1652. [PMID: 29969415] [DOI: 10.1109/tmi.2018.2796130]
Abstract
Histopathological image classification (HIC) and content-based histopathological image retrieval (CBHIR) are two promising applications for the histopathological whole slide image (WSI) analysis. HIC can efficiently predict the type of lesion involved in a histopathological image. In general, HIC can aid pathologists in locating high-risk cancer regions from a WSI by providing a cancerous probability map for the WSI. In contrast, CBHIR was developed to allow searches for regions with similar content for a region of interest (ROI) from a database consisting of historical cases. Sets of cases with similar content are accessible to pathologists, which can provide more valuable references for diagnosis. A drawback of the recent CBHIR framework is that a query ROI needs to be manually selected from a WSI. An automatic CBHIR approach for a WSI-wise analysis needs to be developed. In this paper, we propose a novel aided-diagnosis framework of breast cancer using whole slide images, which shares the advantages of both HIC and CBHIR. In our framework, CBHIR is automatically processed throughout the WSI, based on which a probability map regarding the malignancy of breast tumors is calculated. Through the probability map, the malignant regions in WSIs can be easily recognized. Furthermore, the retrieval results corresponding to each sub-region of the WSIs are recorded during the automatic analysis and are available to pathologists during their diagnosis. Our method was validated on fully annotated WSI data sets of breast tumors. The experimental results certify the effectiveness of the proposed method.
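The malignancy probability map described above is driven by retrieval: each WSI sub-region is scored by the cases retrieved for it. A minimal sketch of one plausible scoring rule (the function name and the fraction-of-neighbours estimator are assumptions, not the paper's formulation):

```python
def region_malignancy(retrieved_labels):
    """Estimate the malignancy probability of a WSI sub-region as the
    fraction of malignant cases among the historical cases retrieved
    for that region by CBHIR. Hypothetical helper."""
    malignant = sum(1 for lab in retrieved_labels if lab == "malignant")
    return malignant / len(retrieved_labels)
```

Scoring every sub-region this way yields the probability map over the WSI, while the retrieved cases themselves remain available to the pathologist as diagnostic references.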
33
Komura D, Ishikawa S. Machine Learning Methods for Histopathological Image Analysis. Comput Struct Biotechnol J 2018; 16:34-42. [PMID: 30275936] [PMCID: PMC6158771] [DOI: 10.1016/j.csbj.2018.01.001]
Abstract
Abundant accumulation of digital histopathological images has led to the increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and related tasks have some issues to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions.
Affiliation(s)
- Daisuke Komura
- Department of Genomic Pathology, Medical Research Institute, Tokyo Medical and Dental University, Tokyo, Japan
34
Mercan C, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Multi-Instance Multi-Label Learning for Multi-Class Classification of Whole Slide Breast Histopathology Images. IEEE Transactions on Medical Imaging 2018; 37:316-325. [PMID: 28981408] [PMCID: PMC5774338] [DOI: 10.1109/tmi.2017.2758580]
Abstract
Digital pathology has entered a new era with the availability of whole slide scanners that create the high-resolution images of full biopsy slides. Consequently, the uncertainty regarding the correspondence between the image areas and the diagnostic labels assigned by pathologists at the slide level, and the need for identifying regions that belong to multiple classes with different clinical significances have emerged as two new challenges. However, generalizability of the state-of-the-art algorithms, whose accuracies were reported on carefully selected regions of interest (ROIs) for the binary benign versus cancer classification, to these multi-class learning and localization problems is currently unknown. This paper presents our potential solutions to these challenges by exploiting the viewing records of pathologists and their slide-level annotations in weakly supervised learning scenarios. First, we extract candidate ROIs from the logs of pathologists' image screenings based on different behaviors, such as zooming, panning, and fixation. Then, we model each slide with a bag of instances represented by the candidate ROIs and a set of class labels extracted from the pathology forms. Finally, we use four different multi-instance multi-label learning algorithms for both slide-level and ROI-level predictions of diagnostic categories in whole slide breast histopathology images. Slide-level evaluation using 5-class and 14-class settings showed average precision values up to 81% and 69%, respectively, under different weakly labeled learning scenarios. ROI-level predictions showed that the classifier could successfully perform multi-class localization and classification within whole slide images that were selected to include the full range of challenging diagnostic categories.
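In the bag-of-instances model above, a slide is a bag of candidate ROIs and carries a set of diagnostic labels. Under the standard multi-instance multi-label assumption, a bag carries a label if at least one instance does; a hedged sketch of that aggregation step (the function name is an assumption, and the paper's four MIML algorithms are considerably more sophisticated):

```python
def bag_predict(instance_label_sets):
    """Slide-level multi-label prediction as the union of the label
    sets predicted for the slide's instances (candidate ROIs).
    Illustrative of the MIML bag assumption only."""
    slide_labels = set()
    for labels in instance_label_sets:
        slide_labels |= set(labels)
    return sorted(slide_labels)
```

Instances with empty predictions contribute nothing, so a slide's labels reflect only its diagnostically positive regions.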
35
Schaumberg AJ, Sirintrapun SJ, Al-Ahmadie HA, Schüffler PJ, Fuchs TJ. DeepScope: Nonintrusive Whole Slide Saliency Annotation and Prediction from Pathologists at the Microscope. Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB), Revised Selected Papers 2017; 10477:42-58. [PMID: 29601065] [PMCID: PMC5870882] [DOI: 10.1007/978-3-319-67834-4_4]
Abstract
Modern digital pathology departments have grown to produce whole-slide image data at petabyte scale, an unprecedented treasure chest for medical machine learning tasks. Unfortunately, most digital slides are not annotated at the image level, hindering large-scale application of supervised learning. Manual labeling is prohibitive, requiring pathologists with decades of training and outstanding clinical service responsibilities. This problem is further aggravated by the United States Food and Drug Administration's ruling that primary diagnosis must come from a glass slide rather than a digital image. We present the first end-to-end framework to overcome this problem, gathering annotations in a nonintrusive manner during a pathologist's routine clinical work: (i) microscope-specific 3D-printed commodity camera mounts are used to video record the glass-slide-based clinical diagnosis process; (ii) after routine scanning of the whole slide, the video frames are registered to the digital slide; (iii) motion and observation time are estimated to generate a spatial and temporal saliency map of the whole slide. Demonstrating the utility of these annotations, we train a convolutional neural network that detects diagnosis-relevant salient regions, then report accuracy of 85.15% in bladder and 91.40% in prostate, with 75.00% accuracy when training on prostate but predicting in bladder, despite different pathologists examining the different tissues. When training on one patient but testing on another, AUROC in bladder is 0.79±0.11 and in prostate is 0.96±0.04. Our tool is available at https://bitbucket.org/aschaumberg/deepscope.
Affiliation(s)
- Andrew J Schaumberg
- Memorial Sloan Kettering Cancer Center and the Tri-Institutional Training Program in Computational Biology and Medicine, New York, NY, USA
- Weill Cornell Graduate School of Medical Sciences, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- S Joseph Sirintrapun
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Hikmat A Al-Ahmadie
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Peter J Schüffler
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Thomas J Fuchs
- Weill Cornell Graduate School of Medical Sciences, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
36
Shin D, Kovalenko M, Ersoy I, Li Y, Doll D, Shyu CR, Hammer R. PathEdEx - Uncovering High-explanatory Visual Diagnostics Heuristics Using Digital Pathology and Multiscale Gaze Data. J Pathol Inform 2017; 8:29. [PMID: 28828200] [PMCID: PMC5545777] [DOI: 10.4103/jpi.jpi_29_17]
Abstract
Background: Visual heuristics of pathology diagnosis is a largely unexplored area where reported studies only provided a qualitative insight into the subject. Uncovering and quantifying pathology visual and nonvisual diagnostic patterns have great potential to improve clinical outcomes and avoid diagnostic pitfalls. Methods: Here, we present PathEdEx, an informatics computational framework that incorporates whole-slide digital pathology imaging with multiscale gaze-tracking technology to create web-based interactive pathology educational atlases and to datamine visual and nonvisual diagnostic heuristics. Results: We demonstrate the capabilities of PathEdEx for mining visual and nonvisual diagnostic heuristics using the first PathEdEx volume of a hematopathology atlas. We conducted a quantitative study on the time dynamics of zooming and panning operations utilized by experts and novices to come to the correct diagnosis. We then performed association rule mining to determine sets of diagnostic factors that consistently result in a correct diagnosis, and studied differences in diagnostic strategies across different levels of pathology expertise using Markov chain (MC) modeling and MC Monte Carlo simulations. To perform these studies, we translated raw gaze points to high-explanatory semantic labels that represent pathology diagnostic clues. Therefore, the outcome of these studies is readily transformed into narrative descriptors for direct use in pathology education and practice. Conclusion: PathEdEx framework can be used to capture best practices of pathology visual and nonvisual diagnostic heuristics that can be passed over to the next generation of pathologists and have potential to streamline implementation of precision diagnostics in precision medicine settings.
Affiliation(s)
- Dmitriy Shin
- Department of Pathology and Anatomical Sciences, University of Missouri, Columbia, Missouri, USA
- MU Informatics Institute, University of Missouri, Columbia, Missouri, USA
- Mikhail Kovalenko
- Department of Pathology and Anatomical Sciences, University of Missouri, Columbia, Missouri, USA
- MU Informatics Institute, University of Missouri, Columbia, Missouri, USA
- Ilker Ersoy
- Department of Pathology and Anatomical Sciences, University of Missouri, Columbia, Missouri, USA
- MU Informatics Institute, University of Missouri, Columbia, Missouri, USA
- Yu Li
- Department of Computer Science, University of Missouri, Columbia, Missouri, USA
- Donald Doll
- Department of Medicine, University of Missouri, Columbia, Missouri, USA
- Chi-Ren Shyu
- MU Informatics Institute, University of Missouri, Columbia, Missouri, USA
- Department of Computer Science, University of Missouri, Columbia, Missouri, USA
- Richard Hammer
- Department of Pathology and Anatomical Sciences, University of Missouri, Columbia, Missouri, USA
- MU Informatics Institute, University of Missouri, Columbia, Missouri, USA
37
Abstract
The development of whole-slide imaging has paved the way for digitizing of glass slides that are the basis for surgical pathology. This transformative technology has changed the landscape in research applications and education but despite its tremendous potential, its adoption for clinical use has been slow. We review the various niche applications that initiated awareness of this technology, provide examples of clinical use cases, and discuss the requirements and challenges for full adoption in clinical diagnosis. The opportunities for applications of image analysis tools in a workflow will be changed by integration of whole-slide imaging into routine diagnosis.
38
Christensen PA, Lee NE, Thrall MJ, Powell SZ, Chevez-Barrios P, Long SW. RecutClub.com: An Open Source, Whole Slide Image-based Pathology Education System. J Pathol Inform 2017; 8:10. [PMID: 28382224] [PMCID: PMC5364738] [DOI: 10.4103/jpi.jpi_72_16]
Abstract
BACKGROUND Our institution's pathology unknown conferences provide educational cases for our residents. However, the cases have not been previously available digitally, have not been collated for postconference review, and were not accessible to a wider audience. Our objective was to create an inexpensive whole slide image (WSI) education suite to address these limitations and improve the education of pathology trainees. MATERIALS AND METHODS We surveyed residents regarding their preference between four unique WSI systems. We then scanned weekly unknown conference cases and study set cases and uploaded them to our custom built WSI viewer located at RecutClub.com. We measured site utilization and conference participation. RESULTS Residents preferred our OpenLayers WSI implementation to Ventana Virtuoso, Google Maps API, and OpenSlide. Over 16 months, we uploaded 1366 cases from 77 conferences and ten study sets, occupying 793.5 GB of cloud storage. Based on resident evaluations, the interface was easy to use and demonstrated minimal latency. Residents are able to review cases from home and from their mobile devices. Worldwide, 955 unique IP addresses from 52 countries have viewed cases in our site. CONCLUSIONS We implemented a low-cost, publicly available repository of WSI slides for resident education. Our trainees are very satisfied with the freedom to preview either the glass slides or WSI and review the WSI postconference. Both local users and worldwide users actively and repeatedly view cases in our study set.
Affiliation(s)
- Paul A Christensen
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
- Nathan E Lee
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
- Michael J Thrall
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
- Suzanne Z Powell
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
- Patricia Chevez-Barrios
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
- S Wesley Long
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
39
Brunyé TT, Mercan E, Weaver DL, Elmore JG. Accuracy is in the eyes of the pathologist: The visual interpretive process and diagnostic accuracy with digital whole slide images. J Biomed Inform 2017; 66:171-179. [PMID: 28087402] [DOI: 10.1016/j.jbi.2017.01.004]
Abstract
Digital whole slide imaging is an increasingly common medium in pathology, with application to education, telemedicine, and rendering second opinions. It has also made it possible to use eye tracking devices to explore the dynamic visual inspection and interpretation of histopathological features of tissue while pathologists review cases. Using whole slide images, the present study examined how a pathologist's diagnosis is influenced by fixed case-level factors, their prior clinical experience, and their patterns of visual inspection. Participating pathologists interpreted one of two test sets, each containing 12 digital whole slide images of breast biopsy specimens. Cases represented four diagnostic categories as determined via expert consensus: benign without atypia, atypia, ductal carcinoma in situ (DCIS), and invasive cancer. Each case included one or more regions of interest (ROIs) previously determined as of critical diagnostic importance. During pathologist interpretation we tracked eye movements, viewer tool behavior (zooming, panning), and interpretation time. Models were built using logistic and linear regression with generalized estimating equations, testing whether variables at the level of the pathologists, cases, and visual interpretive behavior would independently and/or interactively predict diagnostic accuracy and efficiency. Diagnostic accuracy varied as a function of case consensus diagnosis, replicating earlier research. As would be expected, benign cases tended to elicit false positives, and atypia, DCIS, and invasive cases tended to elicit false negatives. Pathologist experience levels, case consensus diagnosis, case difficulty, eye fixation durations, and the extent to which pathologists' eyes fixated within versus outside of diagnostic ROIs, all independently or interactively predicted diagnostic accuracy. Higher zooming behavior predicted a tendency to over-interpret benign and atypia cases, but not DCIS cases. 
Efficiency was not predicted by pathologist- or visual search-level variables. Results provide new insights into the medical interpretive process and demonstrate the complex interactions between pathologists and cases that guide diagnostic decision-making. Implications for training, clinical practice, and computer-aided decision aids are considered.
Affiliation(s)
- Tad T Brunyé
- Center for Applied Brain & Cognitive Sciences, Tufts University, Medford, MA, United States.
- Ezgi Mercan
- Department of Computer Science and Engineering, University of Washington, Seattle, WA, United States
- Donald L Weaver
- Department of Pathology and UVM Cancer Center, University of Vermont, Burlington, VT, United States
- Joann G Elmore
- Department of Medicine, University of Washington School of Medicine, Seattle, WA, United States
40
Brunyé TT, Eddy MD, Mercan E, Allison KH, Weaver DL, Elmore JG. Pupil diameter changes reflect difficulty and diagnostic accuracy during medical image interpretation. BMC Med Inform Decis Mak 2016; 16:77. [PMID: 27378371] [PMCID: PMC4932753] [DOI: 10.1186/s12911-016-0322-3]
Abstract
Background No automated methods exist to objectively monitor and evaluate the diagnostic process while physicians review computerized medical images. The present study tested whether using eye tracking to monitor tonic and phasic pupil dynamics may prove valuable in tracking interpretive difficulty and predicting diagnostic accuracy. Methods Pathologists interpreted digitized breast biopsies varying in diagnosis and rated difficulty, while pupil diameter was monitored. Tonic diameter was recorded during the entire duration of interpretation, and phasic diameter was examined when the eyes fixated on a pre-determined diagnostic region during inspection. Results Tonic pupil diameter was higher with increasing rated difficulty levels of cases. Phasic diameter was interactively influenced by case difficulty and the eventual agreement with consensus diagnosis. More difficult cases produced increases in pupil diameter, but only when the pathologists’ diagnoses were ultimately correct. All results were robust after adjusting for the potential impact of screen brightness on pupil diameter. Conclusions Results contribute new understandings of the diagnostic process, theoretical positions regarding locus coeruleus-norepinephrine system function, and suggest novel approaches to monitoring, evaluating, and guiding medical image interpretation.
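The tonic/phasic distinction above maps onto two simple summaries of a pupil-diameter time series: a mean over the whole interpretation versus a mean restricted to samples when gaze was on a diagnostic region. A hedged sketch (function name and inputs are illustrative, not the study's analysis pipeline):

```python
def pupil_metrics(diameters, on_roi):
    """Tonic diameter: mean pupil diameter over the entire case
    interpretation. Phasic diameter: mean over only those samples
    where the eyes fixated a pre-determined diagnostic region
    (on_roi[i] is True). Illustrative helper only."""
    tonic = sum(diameters) / len(diameters)
    roi_samples = [d for d, fixated in zip(diameters, on_roi) if fixated]
    phasic = sum(roi_samples) / len(roi_samples)
    return tonic, phasic
```

In practice such summaries would also be adjusted for screen brightness, as the study notes, before relating them to case difficulty or diagnostic accuracy.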
Affiliation(s)
- Tad T Brunyé
- Center for Applied Brain and Cognitive Sciences, 200 Boston Ave, Suite 3000, Medford, 02155, MA, USA
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, 02155, MA, USA
- Marianna D Eddy
- Center for Applied Brain and Cognitive Sciences, 200 Boston Ave, Suite 3000, Medford, 02155, MA, USA
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, 02155, MA, USA
- Ezgi Mercan
- Department of Computer Science and Engineering, University of Washington, Seattle, 98104, WA, USA
- Kimberly H Allison
- Department of Pathology, Stanford University School of Medicine, Palo Alto, 94305, CA, USA
- Donald L Weaver
- Department of Pathology and UVM Cancer Center, University of Vermont, Burlington, 05401, VT, USA
- Joann G Elmore
- Department of Medicine, University of Washington, Seattle, 98104, WA, USA