1. Hörst F, Rempe M, Heine L, Seibold C, Keyl J, Baldini G, Ugurel S, Siveke J, Grünwald B, Egger J, Kleesiek J. CellViT: Vision Transformers for precise cell segmentation and classification. Med Image Anal 2024; 94:103143. [PMID: 38507894] [DOI: 10.1016/j.media.2024.103143]
Abstract
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, the task is challenging due to variation in nuclear staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. We therefore introduce CellViT, a new deep learning architecture based on Vision Transformers for automated instance segmentation of cell nuclei in digitized tissue samples. CellViT is trained and evaluated on PanNuke, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
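The F1 detection score reported above is computed by matching predicted nucleus centroids to ground-truth centroids. A minimal sketch follows; this is an illustration of the metric only, not the authors' evaluation code, and the 12-pixel matching radius is an assumption:

```python
import math

def f1_detection(pred, gt, radius=12.0):
    """Greedy one-to-one matching of predicted nucleus centroids to
    ground-truth centroids within `radius` pixels, then F1 over TP/FP/FN."""
    unmatched = list(gt)
    tp = 0
    for p in pred:
        best, best_d = None, radius
        for g in unmatched:                 # nearest still-unmatched ground truth
            d = math.dist(p, g)
            if d <= best_d:
                best, best_d = g, d
        if best is not None:
            unmatched.remove(best)
            tp += 1
    fp, fn = len(pred) - tp, len(gt) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# toy example: two predictions land near true nuclei, one is spurious
preds = [(10, 10), (50, 52), (200, 200)]
truth = [(11, 9), (49, 50), (120, 120)]
score = f1_detection(preds, truth)          # 2 TP, 1 FP, 1 FN
```

Published benchmarks typically use Hungarian rather than greedy matching; the greedy variant here just keeps the idea compact.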
Affiliation(s)
- Fabian Hörst
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany.
- Moritz Rempe
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Lukas Heine
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Constantin Seibold
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Clinic for Nuclear Medicine, University Hospital Essen (AöR), 45147 Essen, Germany
- Julius Keyl
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), 45147 Essen, Germany
- Giulia Baldini
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen (AöR), 45147 Essen, Germany
- Selma Ugurel
- Department of Dermatology, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany
- Jens Siveke
- West German Cancer Center, partner site Essen, a partnership between German Cancer Research Center (DKFZ) and University Hospital Essen, University Hospital Essen (AöR), 45147 Essen, Germany; Bridge Institute of Experimental Tumor Therapy (BIT) and Division of Solid Tumor Translational Oncology (DKTK), West German Cancer Center Essen, University Hospital Essen (AöR), University of Duisburg-Essen, 45147 Essen, Germany
- Barbara Grünwald
- Department of Urology, West German Cancer Center, University Hospital Essen (AöR), 45147 Essen, Germany; Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany; Department of Physics, TU Dortmund University, 44227 Dortmund, Germany
2. Fan J, Li X, Su X. Building Human Visual Attention Map for Construction Equipment Teleoperation. Front Neurosci 2022; 16:895126. [PMID: 35757532] [PMCID: PMC9226901] [DOI: 10.3389/fnins.2022.895126]
Abstract
Construction equipment teleoperation is a promising solution when the site environment is hazardous to operators. However, the operator's limited situational awareness remains one of the major bottlenecks for its implementation. Virtual annotations (VAs) can use symbols to convey information about operating clues, improving an operator's situational awareness without introducing an overwhelming cognitive load. It is therefore of primary importance to understand, from a human-centered perspective, how an operator's visual system responds to different VAs. This study investigates the effect of VAs on teleoperation performance in excavating tasks. A visual attention map is generated to describe how an operator's attention is allocated when VAs are presented during operation. The results of this study can improve the understanding of how human vision works in virtual or augmented reality and inform strategies for designing a user-friendly teleoperation system.
Affiliation(s)
- Jiamin Fan
- College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, China
- Xiaomeng Li
- College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, China
- Xing Su
- College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, China
3. Corredor G, Toro P, Koyuncu C, Lu C, Buzzy C, Bera K, Fu P, Mehrad M, Ely KA, Mokhtari M, Yang K, Chute D, Adelstein DJ, Thompson LDR, Bishop JA, Faraji F, Thorstad W, Castro P, Sandulache V, Koyfman SA, Lewis JS, Madabhushi A. An Imaging Biomarker of Tumor-Infiltrating Lymphocytes to Risk-Stratify Patients With HPV-Associated Oropharyngeal Cancer. J Natl Cancer Inst 2021; 114:609-617. [PMID: 34850048] [PMCID: PMC9002277] [DOI: 10.1093/jnci/djab215]
Abstract
BACKGROUND Human papillomavirus (HPV)-associated oropharyngeal squamous cell carcinoma (OPSCC) has excellent control rates compared to nonvirally associated OPSCC. Multiple trials are actively testing whether de-escalation of treatment intensity for these patients can maintain oncologic equipoise while reducing treatment-related toxicity. We have developed OP-TIL, a biomarker that characterizes the spatial interplay between tumor-infiltrating lymphocytes (TILs) and surrounding cells in histology images. Herein, we sought to test whether OP-TIL can segregate stage I HPV-associated OPSCC patients into low-risk and high-risk groups and aid in patient selection for de-escalation clinical trials. METHODS Association between OP-TIL and patient outcome was explored on whole slide hematoxylin and eosin images from 439 stage I HPV-associated OPSCC patients across 6 institutional cohorts. One institutional cohort (n = 94) was used to identify the most prognostic features and train a Cox regression model to predict risk of recurrence and death. Survival analysis was used to validate the algorithm as a biomarker of recurrence or death in the remaining 5 cohorts (n = 345). All statistical tests were 2-sided. RESULTS OP-TIL separated stage I HPV-associated OPSCC patients with 30 or less pack-year smoking history into low-risk (2-year disease-free survival [DFS] = 94.2%; 5-year DFS = 88.4%) and high-risk (2-year DFS = 82.5%; 5-year DFS = 74.2%) groups (hazard ratio = 2.56, 95% confidence interval = 1.52 to 4.32; P < .001), even after adjusting for age, smoking status, T and N classification, and treatment modality on multivariate analysis for DFS (hazard ratio = 2.27, 95% confidence interval = 1.32 to 3.94; P = .003). CONCLUSIONS OP-TIL can identify stage I HPV-associated OPSCC patients likely to be poor candidates for treatment de-escalation. Following validation on previously completed multi-institutional clinical trials, OP-TIL has the potential to be a biomarker, beyond clinical stage and HPV status, that can be used clinically to optimize patient selection for de-escalation.
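The 2-year and 5-year DFS figures above come from standard survival analysis. As a hedged illustration of the underlying estimator (a generic Kaplan-Meier sketch on toy data, not the study's code or cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of disease-free survival.
    times:  follow-up time per patient
    events: 1 = recurrence/death observed, 0 = censored
    Returns [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk, s, curve, i = len(data), 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        c = sum(1 for tt, _ in data if tt == t)   # all leaving the risk set at t
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= c
        i += c
    return curve

# toy cohort of 5 patients with mixed events and censorings
dfs = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

Risk-group comparison in the study additionally involves Cox regression for hazard ratios, which needs a dedicated library (e.g. lifelines) rather than a few lines of stdlib code.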
Affiliation(s)
- Germán Corredor
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA,Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
- Paula Toro
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Can Koyuncu
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Cheng Lu
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Christina Buzzy
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Kaustav Bera
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Mitra Mehrad
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Kim A Ely
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Mojgan Mokhtari
- Department of Biomedical Engineering, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Kailin Yang
- Department of Radiation Oncology, Cleveland Clinic, Cleveland, OH, USA
- Deborah Chute
- Department of Anatomic Pathology, Cleveland Clinic, Cleveland, OH, USA
- David J Adelstein
- Department of Medicine, School of Medicine, Case Western Reserve University, Cleveland, OH, USA
- Lester D R Thompson
- Department of Pathology, Southern California Permanente Medical Group, Woodland Hills, CA, USA
- Justin A Bishop
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Farhoud Faraji
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, UC San Diego Health, La Jolla, CA, USA
- Wade Thorstad
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO, USA
- Patricia Castro
- Department of Pathology and Immunology, Baylor College of Medicine, Houston, TX, USA
- Vlad Sandulache
- Department of Otolaryngology-Head and Neck Surgery, Baylor College of Medicine, Houston, TX, USA; ENT Section, Operative Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA; Center for Translational Research on Inflammatory Disease (CTRID), Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Shlomo A Koyfman
- Department of Radiation Oncology, Cleveland Clinic, Cleveland, OH, USA
- James S Lewis
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Anant Madabhushi
- Correspondence to: Anant Madabhushi, PhD, Center of Computational Imaging and Personalized Diagnostics, Case Western Reserve University, 2071 Martin Luther King Drive, Cleveland, OH 44106-7207, USA (e-mail: )
4. Barisoni L, Lafata KJ, Hewitt SM, Madabhushi A, Balis UGJ. Digital pathology and computational image analysis in nephropathology. Nat Rev Nephrol 2020; 16:669-685. [PMID: 32848206] [PMCID: PMC7447970] [DOI: 10.1038/s41581-020-0321-6]
Abstract
The emergence of digital pathology, an image-based environment for the acquisition, management and interpretation of pathology information supported by computational techniques for data extraction and analysis, is changing the pathology ecosystem. In particular, by virtue of our new-found ability to generate and curate digital libraries, the field of machine vision can now be effectively applied to histopathological subject matter by individuals who do not have deep expertise in machine vision techniques. Although these novel approaches have already advanced the detection, classification, and prognostication of diseases in the fields of radiology and oncology, renal pathology is just entering the digital era, with the establishment of consortia and digital pathology repositories for the collection, analysis and integration of pathology data with other domains. The development of machine-learning approaches for the extraction of information from image data allows for tissue interrogation in a way that was not previously possible. The application of these novel tools is placing pathology centre stage in the process of defining new, integrated, biologically and clinically homogeneous disease categories, identifying patients at risk of progression, and shifting current paradigms for the treatment and prevention of kidney diseases.
Affiliation(s)
- Laura Barisoni
- Department of Pathology, Duke University, Durham, NC, USA.
- Department of Medicine, Division of Nephrology, Duke University, Durham, NC, USA.
- Kyle J Lafata
- Department of Radiology, Duke University, Durham, NC, USA
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Stephen M Hewitt
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, NIH, Bethesda, MD, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Veterans Administration Medical Center, Cleveland, OH, USA
5. Tomita N, Abdollahi B, Wei J, Ren B, Suriawinata A, Hassanpour S. Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus Tissue on Histopathological Slides. JAMA Netw Open 2019; 2:e1914645. [PMID: 31693124] [PMCID: PMC6865275] [DOI: 10.1001/jamanetworkopen.2019.14645]
Abstract
Importance Deep learning-based methods for analyzing histological patterns in high-resolution microscopy images, such as the sliding window approach for cropped-image classification with heuristic aggregation for whole-slide inference, have shown promising results. These approaches, however, require a laborious annotation process and are fragmented. Objective To evaluate a novel deep learning method that uses tissue-level annotations for high-resolution histological image analysis for Barrett esophagus (BE) and esophageal adenocarcinoma detection. Design, Setting, and Participants This diagnostic study collected deidentified high-resolution histological images (N = 379) for training a new model composed of a convolutional neural network and a grid-based attention network. Histological images of patients who underwent endoscopic esophagus and gastroesophageal junction mucosal biopsy between January 1, 2016, and December 31, 2018, at Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire) were collected. Main Outcomes and Measures The model was evaluated on an independent testing set of 123 histological images with 4 classes: normal, BE-no-dysplasia, BE-with-dysplasia, and adenocarcinoma. Performance of this model was measured and compared with that of the current state-of-the-art sliding window approach using the following standard machine learning metrics: accuracy, recall, precision, and F1 score. Results Of the independent testing set of 123 histological images, 30 (24.4%) were in the BE-no-dysplasia class, 14 (11.4%) in the BE-with-dysplasia class, 21 (17.1%) in the adenocarcinoma class, and 58 (47.2%) in the normal class. Classification accuracies of the proposed model were 0.85 (95% CI, 0.81-0.90) for the BE-no-dysplasia class, 0.89 (95% CI, 0.84-0.92) for the BE-with-dysplasia class, and 0.88 (95% CI, 0.84-0.92) for the adenocarcinoma class. The proposed model achieved a mean accuracy of 0.83 (95% CI, 0.80-0.86) and marginally outperformed the sliding window approach on the same testing set. The F1 scores of the attention-based model were at least 8% higher for each class compared with the sliding window approach: 0.68 (95% CI, 0.61-0.75) vs 0.61 (95% CI, 0.53-0.68) for the normal class, 0.72 (95% CI, 0.63-0.80) vs 0.58 (95% CI, 0.45-0.69) for the BE-no-dysplasia class, 0.30 (95% CI, 0.11-0.48) vs 0.22 (95% CI, 0.11-0.33) for the BE-with-dysplasia class, and 0.67 (95% CI, 0.54-0.77) vs 0.58 (95% CI, 0.44-0.70) for the adenocarcinoma class. However, this outperformance was not statistically significant. Conclusions and Relevance Results of this study suggest that the proposed attention-based deep neural network framework for BE and esophageal adenocarcinoma detection is important because it is based solely on tissue-level annotations, unlike existing methods that are based on regions of interest. This new model is expected to open avenues for applying deep learning to digital pathology.
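The accuracy and per-class F1 comparisons above use the standard one-vs-rest definitions. A generic sketch on toy labels (not the study's data or code) shows how both metrics are computed:

```python
def accuracy(y_true, y_pred):
    """Fraction of correctly classified slides."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_f1(y_true, y_pred, classes):
    """One-vs-rest F1 score for each class."""
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores[c] = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return scores

# toy labels only, NOT the study's testing set
y_true = ["normal", "BE-no-dysplasia", "normal", "adenocarcinoma"]
y_pred = ["normal", "normal", "normal", "adenocarcinoma"]
f1 = per_class_f1(y_true, y_pred, ["normal", "BE-no-dysplasia", "adenocarcinoma"])
```

F1 is sensitive to class prevalence, which is why the rare BE-with-dysplasia class shows the widest confidence intervals in the results above.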
Affiliation(s)
- Naofumi Tomita
- Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire
- Behnaz Abdollahi
- Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire
- Jason Wei
- Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire
- Bing Ren
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Arief Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Saeed Hassanpour
- Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire
- Department of Epidemiology, Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire
6. St-Pierre C, Madore WJ, De Montigny E, Trudel D, Boudoux C, Godbout N, Mes-Masson AM, Rahimi K, Leblond F. Dimension reduction technique using a multilayered descriptor for high-precision classification of ovarian cancer tissue using optical coherence tomography: a feasibility study. J Med Imaging (Bellingham) 2017; 4:041306. [PMID: 29057287] [DOI: 10.1117/1.jmi.4.4.041306]
Abstract
Optical coherence tomography (OCT) yields microscopic volumetric images representing tissue structures based on the contrast provided by elastic light scattering. Multipatient studies using OCT for detection of tissue abnormalities can lead to large datasets, making quantitative and unbiased assessment of classification algorithms' performance difficult without automated analytical schemes. We present a mathematical descriptor that reduces the dimensionality of a classifier's input data while preserving essential volumetric features from reconstructed three-dimensional optical volumes. This descriptor is used as the input of classification algorithms, allowing a detailed exploration of the feature space and leading to optimal and reliable classification models based on support vector machine techniques. Using an imaging dataset of paraffin-embedded tissue samples from 38 ovarian cancer patients, we report accuracies for cancer detection [Formula: see text] for binary classification between healthy fallopian tube and ovarian samples containing cancer cells. Furthermore, multiple classes of statistical models are presented, demonstrating [Formula: see text] accuracy for the detection of high-grade serous, endometrioid, and clear cell cancers. The classification approach reduces the computational complexity and resources needed to achieve highly accurate classification, making it possible to contemplate other applications, including intraoperative surgical guidance, as well as other depth-sectioning techniques for fresh tissue imaging.
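The abstract does not specify the descriptor's exact form. As a loose, hypothetical illustration of the general idea only (per-layer summary statistics standing in for the paper's actual multilayered features), a volume can be reduced to a short feature vector suitable for an SVM:

```python
import statistics

def layered_descriptor(volume):
    """Reduce a 3-D volume (a list of 2-D depth layers) to a short feature
    vector: per-layer mean and population standard deviation of intensity.
    A stand-in sketch, not the descriptor from the paper."""
    features = []
    for layer in volume:
        pixels = [v for row in layer for v in row]
        features.append(statistics.fmean(pixels))
        features.append(statistics.pstdev(pixels))
    return features

# toy 2-layer, 2x2 volume: one flat layer, one high-contrast layer
volume = [[[1, 1], [1, 1]],
          [[0, 2], [0, 2]]]
vec = layered_descriptor(volume)  # [1.0, 0.0, 1.0, 1.0]
```

Whatever the exact features, the point of such a reduction is that an entire OCT volume collapses to a fixed-length vector, keeping SVM training tractable across a multipatient dataset.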
Affiliation(s)
- Catherine St-Pierre
- Polytechnique Montreal, Department of Engineering Physics, Montreal, Québec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada
- Wendy-Julie Madore
- Polytechnique Montreal, Department of Engineering Physics, Montreal, Québec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada; Institut du cancer de Montréal, Montreal, Canada
- Etienne De Montigny
- Polytechnique Montreal, Department of Engineering Physics, Montreal, Québec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada
- Dominique Trudel
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada; Institut du cancer de Montréal, Montreal, Canada
- Caroline Boudoux
- Polytechnique Montreal, Department of Engineering Physics, Montreal, Québec, Canada
- Nicolas Godbout
- Polytechnique Montreal, Department of Engineering Physics, Montreal, Québec, Canada
- Anne-Marie Mes-Masson
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada; Institut du cancer de Montréal, Montreal, Canada
- Kurosh Rahimi
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada; Institut du cancer de Montréal, Montreal, Canada
- Frédéric Leblond
- Polytechnique Montreal, Department of Engineering Physics, Montreal, Québec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, Québec, Canada