1. Zhu R, He H, Chen Y, Yi M, Ran S, Wang C, Wang Y. Deep learning for rapid virtual H&E staining of label-free glioma tissue from hyperspectral images. Comput Biol Med 2024; 180:108958. [PMID: 39094325] [DOI: 10.1016/j.compbiomed.2024.108958]
Abstract
Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow requires intricate processing, specialized laboratory infrastructure, and specialist pathologists, making it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning and hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E-stained images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing tissue composition information as comprehensive and detailed as that of real H&E staining. In comparison with various generator structures, the Unet exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference time. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, is developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E staining images of gliomas at a high speed of 3.81 mm²/s. This innovative approach will pave the way for a novel, expedited route in histological staining.
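The SSIM and PSNR figures quoted above are standard full-reference image-quality metrics for comparing a virtual stain against its chemically stained ground truth. As an illustrative sketch only (not the authors' code; published evaluations typically use a sliding-window SSIM as implemented in common imaging libraries), the two metrics can be computed as:

```python
import numpy as np

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio in dB (higher means closer to the reference)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM for illustration; production implementations
    (e.g., scikit-image) average SSIM over local sliding windows."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

# Synthetic reference image and a lightly corrupted copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0.0, 255.0)
print(psnr(ref, noisy), ssim_global(ref, noisy))
```

Scores such as the mean SSIM of 0.7731 and PSNR of 23.31 dB reported for the Unet generator are obtained in this spirit, averaged over held-out test tiles.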
Affiliation(s)
- Ruohua Zhu
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Haiyang He
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Yuzhe Chen
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Ming Yi
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Shengdong Ran
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Chengde Wang
- Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yi Wang
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China; Wenzhou Institute, University of Chinese Academy of Sciences, Jinlian Road 1, Wenzhou, 325001, China
2. Grignaffini F, Barbuto F, Troiano M, Piazzo L, Simeoni P, Mangini F, De Stefanis C, Onetti Muda A, Frezza F, Alisi A. The Use of Artificial Intelligence in the Liver Histopathology Field: A Systematic Review. Diagnostics (Basel) 2024; 14:388. [PMID: 38396427] [PMCID: PMC10887838] [DOI: 10.3390/diagnostics14040388]
Abstract
Digital pathology (DP) has begun to play a key role in the evaluation of liver specimens. Recent studies have shown that a workflow combining DP and artificial intelligence (AI) applied to histopathology has potential value in supporting the diagnosis, treatment evaluation, and prognosis prediction of liver diseases. Here, we provide a systematic review of the use of this workflow in the field of hepatology. Based on the PRISMA 2020 criteria, a search of the PubMed, SCOPUS, and Embase electronic databases was conducted, applying inclusion/exclusion filters. The articles were evaluated by two independent reviewers, who extracted the specifications and objectives of each study, the AI tools used, and the results obtained. From the 266 initial records identified, 25 eligible studies were selected, mainly conducted on human liver tissues. Most of the studies were performed using whole-slide imaging systems for image acquisition and applying different machine learning and deep learning methods for image pre-processing, segmentation, feature extraction, and classification. Of note, most of the studies selected demonstrated good performance as classifiers of liver histological images compared to pathologist annotations. Promising results to date bode well for the not-too-distant inclusion of these techniques in clinical practice.
Affiliation(s)
- Flavia Grignaffini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Francesco Barbuto
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Maurizio Troiano
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Patrizio Simeoni
- National Transport Authority (NTA), D02 WT20 Dublin, Ireland
- Faculty of Lifelong Learning, South East Technological University (SETU), R93 V960 Carlow, Ireland
- Fabio Mangini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Cristiano De Stefanis
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Fabrizio Frezza
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Anna Alisi
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
3. Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524] [PMCID: PMC10764333] [DOI: 10.1038/s41698-023-00477-7]
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor, either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs micrographic surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size, complexity, and specimen processing and assessment time during general anesthesia. We developed an artificial intelligence platform to reduce tissue preprocessing and histological assessment time through automated grossing recommendations and mapping and orientation of the tumor to the surgical specimen. Using basal cell carcinoma as a model system, results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Michael J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Lucy J Fu
- Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Tarushii Goel
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Akash Pamal
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Virginia, Charlottesville, VA, 22903, USA
- Irfan Nafi
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Stanford University, Palo Alto, CA, 94305, USA
- Abhinav Angirekula
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Illinois Urbana-Champaign, Champaign, IL, 61820, USA
- Anish Suvarna
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Ram Vempati
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Brock C Christensen
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew S Hayden
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Matthew R LeBoeuf
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
4. Fatemi MY, Lu Y, Diallo AB, Srinivasan G, Azher ZL, Christensen BC, Salas LA, Tsongalis GJ, Palisoul SM, Perreard L, Kolling FW, Vaickus LJ, Levy JJ. The Overlooked Role of Specimen Preparation in Bolstering Deep Learning-Enhanced Spatial Transcriptomics Workflows. medRxiv [Preprint] 2023:2023.10.09.23296700. [PMID: 37873287] [PMCID: PMC10593052] [DOI: 10.1101/2023.10.09.23296700]
Abstract
The application of deep learning methods to spatial transcriptomics has shown promise in unraveling the complex relationships between gene expression patterns and tissue architecture as they pertain to various pathological conditions. Deep learning methods that can infer gene expression patterns directly from tissue histomorphology can expand the capability to discern spatial molecular markers within tissue slides. However, current methods utilizing these techniques are plagued by substantial variability in tissue preparation and characteristics, which can hinder the broader adoption of these tools. Furthermore, training deep learning models using spatial transcriptomics on small study cohorts remains a costly endeavor, necessitating novel tissue preparation processes that enhance assay reliability, resolution, and scalability. This study investigated the impact of an enhanced specimen processing workflow for facilitating a deep learning-based spatial transcriptomics assessment. The enhanced workflow leveraged the flexibility of the Visium CytAssist assay to permit automated H&E staining (e.g., Leica Bond) of tissue slides, whole-slide imaging at 40× resolution, and multiplexing of tissue sections from multiple patients within individual capture areas for spatial transcriptomics profiling. Using a cohort of thirteen pT3 stage colorectal cancer (CRC) patients, we compared the efficacy of deep learning models trained on slides prepared using the enhanced workflow against the traditional workflow, which relies on manual tissue staining and standard imaging of tissue slides. Leveraging Inceptionv3 neural networks, we aimed to predict gene expression patterns across matched serial tissue sections, each stemming from a distinct workflow but aligned based on persistent histological structures. Findings indicate that the enhanced workflow considerably outperformed the traditional spatial transcriptomics workflow.
Gene expression profiles predicted from enhanced tissue slides also yielded expression patterns more topologically consistent with the ground truth. This led to enhanced statistical precision in pinpointing biomarkers associated with distinct spatial structures. These insights can potentially elevate diagnostic and prognostic biomarker detection by broadening the range of spatial molecular markers linked to metastasis and recurrence. Future endeavors will further explore these findings to enrich our comprehension of various diseases and uncover molecular pathways with greater nuance. Combining deep learning with spatial transcriptomics provides a compelling avenue to enrich our understanding of tumor biology and improve clinical outcomes. For results of the highest fidelity, however, effective specimen processing is crucial, and fostering collaboration between histotechnicians, pathologists, and genomics specialists is essential to herald this new era in spatial transcriptomics-driven cancer research.
5. Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. J Biomed Opt 2023; 28:102911. [PMID: 37867633] [PMCID: PMC10587695] [DOI: 10.1117/1.jbo.28.10.102911]
Abstract
Significance: Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to subwavelength scale. However, in clinical practice, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make accurate diagnoses. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties from different modalities more efficiently and stably.
Aim: In this work, we propose a computational image translation technique based on deep learning to enable bright-field microscopy contrast using snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images allows the translated bright-field images to be unaffected by variations of light source and samples.
Approach: We adopted CycleGAN as the translation model to avoid the requirement for co-registered image pairs in training. This method can generate images that are equivalent to bright-field images with different staining styles of the same region.
Results: Pathological slices of liver and breast tissues with hematoxylin and eosin staining and lung tissues with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The output results were evaluated by four image quality assessment methods.
Conclusions: By comparing the cross-modality translation performance with MM images, we found that Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be well translated to bright-field images.
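The snapshot Stokes images used as network input above are the standard description of a polarization state: Stokes parameters are reconstructed from intensity measurements taken behind a set of analyzers. A minimal sketch of the textbook reconstruction (variable names are illustrative, not from the paper):

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135, i_rcp=None, i_lcp=None):
    """Textbook Stokes-vector reconstruction from analyzer intensities.
    i0/i45/i90/i135: intensities behind linear polarizers at those angles;
    i_rcp/i_lcp (optional): right/left circular analyzer intensities."""
    s0 = i0 + i90                # total intensity
    s1 = i0 - i90                # horizontal vs. vertical linear component
    s2 = i45 - i135              # +45 deg vs. -45 deg linear component
    if i_rcp is not None and i_lcp is not None:
        s3 = i_rcp - i_lcp       # circular component
    else:
        s3 = 0.0 * s0            # no circular measurement available
    return np.stack([np.asarray(v, dtype=np.float64)
                     for v in (s0, s1, s2, s3)])

# Purely horizontally polarized light of unit intensity:
# S = (1, 1, 0, 0) up to normalization.
s = stokes_from_intensities(1.0, 0.5, 0.0, 0.5)
print(s)
```

The same arithmetic applies per pixel when the inputs are images, which is what makes snapshot polarization cameras a fast source of Stokes images compared to full Mueller matrix acquisition.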
Affiliation(s)
- Shilong Wei
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Lu Si
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tongyu Huang
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Shan Du
- University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
- Yue Yao
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Yang Dong
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Hui Ma
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Tsinghua University, Department of Physics, Beijing, China
6. Chen G, Zhao X, Dankovskyy M, Ansah-Zame A, Alghamdi U, Liu D, Wei R, Zhao J, Zhou A. A novel role of RNase L in the development of nonalcoholic steatohepatitis. FASEB J 2023; 37:e23158. [PMID: 37615181] [DOI: 10.1096/fj.202300621r]
Abstract
Nonalcoholic fatty liver disease (NAFLD) is the most common chronic liver disease and affects about 25% of the population globally. NAFLD has the potential to cause significant liver damage in many patients because it can progress to nonalcoholic steatohepatitis (NASH) and cirrhosis, which substantially increases disease morbidity and mortality. Despite the key role of innate immunity in the disease progression, the underlying molecular and pathogenic mechanisms remain to be elucidated. RNase L is a key enzyme in interferon action against viral infection and displays pleiotropic biological functions such as control of cell proliferation, apoptosis, and autophagy. Recent studies have demonstrated that RNase L is involved in innate immunity. In this study, we revealed that RNase L contributed to the development of NAFLD, which further progressed to NASH in a time-dependent fashion after RNase L wild-type (WT) and knockout mice were fed a high-fat and high-cholesterol diet. RNase L WT mice showed significantly more severe NASH, evidenced by widespread macro-vesicular steatosis, hepatocyte ballooning degeneration, inflammation, and fibrosis, although physiological and biochemical data indicated that both types of mice developed obesity, hyperglycemia, hypercholesterolemia, dysfunction of the liver, and systemic inflammation to different extents. Further investigation demonstrated that RNase L was responsible for the expression of some key genes in lipid metabolism, inflammation, and fibrosis signaling. Taken together, our results suggest that a novel therapeutic intervention for NAFLD may be developed based on regulating the expression and activity of RNase L.
Affiliation(s)
- Guanmin Chen
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Xiaotong Zhao
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Maksym Dankovskyy
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Abigail Ansah-Zame
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Uthman Alghamdi
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Danting Liu
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Ruhan Wei
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Jianjun Zhao
- Department of Cancer Biology, Cleveland Clinic, Cleveland, Ohio, USA
- Aimin Zhou
- Department of Chemistry, Cleveland State University, Cleveland, Ohio, USA
- Center for Gene Regulation in Health and Diseases, Cleveland State University, Cleveland, Ohio, USA
7. Yan R, He Q, Liu Y, Ye P, Zhu L, Shi S, Gou J, He Y, Guan T, Zhou G. Unpaired virtual histological staining using prior-guided generative adversarial networks. Comput Med Imaging Graph 2023; 105:102185. [PMID: 36764189] [DOI: 10.1016/j.compmedimag.2023.102185]
Abstract
Fibrosis is an inevitable stage in the development of chronic liver disease and has an irreplaceable role in characterizing its degree of progression. Histopathological diagnosis is the gold standard for the interpretation of fibrosis parameters. Conventional hematoxylin-eosin (H&E) staining can only reflect the gross structure of the tissue and the distribution of hepatocytes, while Masson trichrome can highlight specific types of collagen fiber structure, thus providing the structural information necessary for fibrosis scoring. However, the high costs in time, money, and patient specimens, as well as the non-uniform preparation and staining process, make converting existing H&E staining into virtual Masson trichrome staining an attractive solution for fibrosis evaluation. Existing translation approaches fail to extract fiber features accurately enough, and the staining decoder is unable to converge due to the inconsistent color of physical staining. In this work, we propose a prior-guided generative adversarial network, based on unpaired data, for effective generation of Masson trichrome-stained images from the corresponding H&E-stained images. Trained on a small training set, our method takes full advantage of prior knowledge to set up better constraints on both the encoder and the decoder. Experiments indicate the superior performance of our method, which surpasses previous approaches. For various liver diseases, our results demonstrate a high correlation between the staging of real and virtual stains (ρ=0.82; 95% CI: 0.73-0.89). In addition, our finetuning strategy is able to standardize the staining color and reduce the memory and computational burden, and can be employed in clinical assessment.
Affiliation(s)
- Renao Yan
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Qiming He
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Yiqing Liu
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Peng Ye
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Lianghui Zhu
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Shanshan Shi
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Jizhou Gou
- The Third People's Hospital of Shenzhen, Buji Buran Road 29, Shenzhen, 518112, Guangdong, China
- Yonghong He
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Tian Guan
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Guangde Zhou
- The Third People's Hospital of Shenzhen, Buji Buran Road 29, Shenzhen, 518112, Guangdong, China
8. Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. Light Sci Appl 2023; 12:57. [PMID: 36864032] [PMCID: PMC9981740] [DOI: 10.1038/s41377-023-01104-7]
Abstract
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Affiliation(s)
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
9. Unstained Tissue Imaging and Virtual Hematoxylin and Eosin Staining of Histologic Whole Slide Images. J Transl Med 2023; 103:100070. [PMID: 36801642] [DOI: 10.1016/j.labinv.2023.100070]
Abstract
Tissue structures, phenotypes, and pathology are routinely investigated based on histology. This includes chemically staining the transparent tissue sections to make them visible to the human eye. Although chemical staining is fast and routine, it permanently alters the tissue and often consumes hazardous reagents. On the other hand, when adjacent tissue sections are used for combined measurements, cell-wise resolution is lost because the sections represent different parts of the tissue. Hence, techniques that provide visual information of the basic tissue structure while enabling additional measurements from the exact same tissue section are required. Here we tested unstained tissue imaging for the development of computational hematoxylin and eosin (HE) staining. We used unsupervised deep learning (CycleGAN) and whole slide images of prostate tissue sections to compare the performance of imaging tissue in paraffin, as deparaffinized in air, and as deparaffinized in mounting medium, with section thicknesses varying between 3 and 20 μm. We showed that although thicker sections increase the information content of tissue structures in the images, thinner sections generally perform better in providing information that can be reproduced in virtual staining. According to our results, tissue imaged in paraffin and as deparaffinized provides a good overall representation of the tissue for virtually HE-stained images. Further, using a pix2pix model, we showed that the reproduction of overall tissue histology can be clearly improved with image-to-image translation using supervised learning and pixel-wise ground truth. We also showed that virtual HE staining can be used for various tissues and with both 20× and 40× imaging magnifications.
Although the performance and methods of virtual staining need further development, our study provides evidence of the feasibility of whole slide unstained microscopy as a fast, cheap, and feasible approach to producing virtual staining of tissue histology while sparing the exact same tissue section ready for subsequent utilization with follow-up methods at single-cell resolution.
10
Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. IEEE Winter Conference on Applications of Computer Vision (WACV) 2023; 2023:1918-1927. [PMID: 36865487 PMCID: PMC9977454 DOI: 10.1109/wacv56688.2023.00196] [Indexed: 02/09/2023]
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stainings. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
Affiliation(s)
- Beibin Li
- University of Washington; Microsoft Research
11
McAlpine E, Michelow P, Liebenberg E, Celik T. Are synthetic cytology images ready for prime time? A comparative assessment of real and synthetic urine cytology images. J Am Soc Cytopathol 2022; 12:126-135. [PMID: 37013344 DOI: 10.1016/j.jasc.2022.10.001] [Received: 08/20/2022] [Revised: 09/17/2022] [Accepted: 10/01/2022] [Indexed: 11/06/2022]
Abstract
INTRODUCTION The use of synthetic data in pathology has, to date, predominantly been augmenting existing pathology data to improve supervised machine learning algorithms. We present an alternative use case-using synthetic images to augment cytology training when the availability of real-world examples is limited. Moreover, we compare the assessment of real and synthetic urine cytology images by pathology personnel to explore the usefulness of this technology in a real-world setting. MATERIALS AND METHODS Synthetic urine cytology images were generated using a custom-trained conditional StyleGAN3 model. A morphologically balanced 60-image data set of real and synthetic urine cytology images was created for an online image survey system to allow for the assessment of the differences in visual perception between real and synthetic urine cytology images by pathology personnel. RESULTS A total of 12 participants were recruited to answer the 60-image survey. The study population had a median age of 36.5 years and a median of 5 years of pathology experience. There was no significant difference in diagnostic error rates between real and synthetic images, nor was there a significant difference between subjective image quality scores between real and synthetic images when assessed on an individual observer basis. CONCLUSIONS The ability of Generative Adversarial Networks technology to generate highly realistic urine cytology images was demonstrated. Furthermore, there was no difference in how pathology personnel perceived the subjective quality of synthetic images, nor was there a difference in diagnostic error rates between real and synthetic urine cytology images. This has important implications for the application of Generative Adversarial Networks technology to cytology teaching and learning.
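The reported comparison of diagnostic error rates between real and synthetic images can be sanity-checked with a standard two-proportion z-test. The sketch below uses only the Python standard library; the error and assessment counts are hypothetical placeholders, since the per-arm totals are not given in this abstract:

```python
import math

def two_proportion_z(err_a, n_a, err_b, n_b):
    """Two-proportion z-test for a difference in error rates.

    err_a/err_b: number of diagnostic errors on real vs. synthetic images;
    n_a/n_b: total assessments in each arm.
    Returns (z, two_sided_p) using the pooled-proportion normal approximation.
    """
    p_a, p_b = err_a / n_a, err_b / n_b
    pooled = (err_a + err_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical counts: 12 readers x 30 images per arm, similar error totals.
z, p = two_proportion_z(40, 360, 37, 360)
```

With similar error counts the statistic stays near zero and the p-value stays large, i.e. no evidence of a difference, which is the pattern the study reports.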
Affiliation(s)
- Ewen McAlpine
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Ampath National Laboratories, Johannesburg, South Africa.
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; National Health Laboratory Services, Johannesburg, South Africa
- Eric Liebenberg
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering and Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
12
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022]
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows the use of computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates the use of the most popular image data, hematoxylin-eosin-stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao (corresponding author)
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao (corresponding author)
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences
13
McAlpine E, Michelow P, Liebenberg E, Celik T. Is it real or not? Toward artificial intelligence-based realistic synthetic cytology image generation to augment teaching and quality assurance in pathology. J Am Soc Cytopathol 2022; 11:123-132. [PMID: 35249862 DOI: 10.1016/j.jasc.2022.02.001] [Received: 12/26/2021] [Revised: 01/20/2022] [Accepted: 02/03/2022] [Indexed: 06/14/2023]
Abstract
INTRODUCTION Urine cytology offers a rapid and relatively inexpensive method to diagnose urothelial neoplasia. In our setting of a public sector laboratory in South Africa, urothelial neoplasia is rare, compromising pathology training in this specific aspect of cytology. Artificial intelligence-based synthetic image generation, specifically the use of generative adversarial networks (GANs), offers a solution to this problem. MATERIALS AND METHODS A limited but morphologically diverse dataset of 1000 malignant urothelial cytology images was used to train a StyleGAN3 model to create completely novel, synthetic examples of malignant urine cytology using computer resources within reach of most pathology departments worldwide. RESULTS We present the results of our trained GAN model, which was able to generate realistic, morphologically diverse examples of malignant urine cytology images when trained on a modest dataset. Although the trained model is capable of generating realistic images, we also present examples for which unrealistic and artifactual images were generated, illustrating the need for manual curation when using this technology in a training context. CONCLUSIONS We present a proof-of-concept illustration of creating synthetic malignant urine cytology images using machine learning technology to augment cytology training when real-world examples are sparse. We have shown that despite significant morphologic diversity in terms of staining variations, slide background, variations in the diagnostic malignant cellular elements, the presence of other nondiagnostic cellular elements, and artifacts, visually acceptable and varied results are achievable using limited data and computing resources.
Affiliation(s)
- Ewen McAlpine
- Department of Anatomical Pathology, National Health Laboratory Service, Johannesburg, South Africa; Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa.
- Pamela Michelow
- Department of Anatomical Pathology, National Health Laboratory Service, Johannesburg, South Africa; Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Eric Liebenberg
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering and Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
14
Yoo TK, Kim BY, Jeong HK, Kim HK, Yang D, Ryu IH. Simple Code Implementation for Deep Learning-Based Segmentation to Evaluate Central Serous Chorioretinopathy in Fundus Photography. Transl Vis Sci Technol 2022; 11:22. [PMID: 35147661 PMCID: PMC8842634 DOI: 10.1167/tvst.11.2.22] [Indexed: 11/25/2022]
Abstract
Purpose Central serous chorioretinopathy (CSC) is a retinal disease that frequently shows resolution and recurrence with serous detachment of the neurosensory retina. Here, we present a deep learning analysis of subretinal fluid (SRF) lesion segmentation in fundus photographs to evaluate CSC. Methods We collected 194 fundus photographs of SRF lesions from patients with CSC. Three graders manually annotated the entire SRF area in the retinal images. The dataset was randomly separated into training (90%) and validation (10%) sets. We used a U-Net segmentation model based on conditional generative adversarial networks (pix2pix) to detect the SRF lesions. The algorithms were trained and validated using Google Colaboratory, so researchers needed no prior coding skills or dedicated computing resources to implement this code. Results The validation results showed that the Jaccard index and Dice coefficient scores were 0.619 and 0.763, respectively. In most cases, the segmentation results overlapped with most of the reference areas in the annotated images; however, predictions were less accurate for cases with atypical SRF. Using Colaboratory, the proposed segmentation task ran easily in a web-based environment without setup or personal computing resources. Conclusions The results suggest that the deep learning model based on U-Net from the pix2pix algorithm is suitable for the automatic segmentation of SRF lesions to evaluate CSC. Translational Relevance Our code implementation has the potential to facilitate ophthalmology research; in particular, deep learning-based segmentation can assist in the development of pathological lesion detection solutions.
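The two reported overlap metrics are straightforward to compute directly. A minimal sketch (not the authors' code) that treats binary masks as sets of pixel coordinates:

```python
def jaccard_dice(pred, truth):
    """Jaccard index and Dice coefficient for binary masks given as
    iterables of (row, col) pixel coordinates."""
    pred, truth = set(pred), set(truth)
    inter = len(pred & truth)
    union = len(pred | truth)
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return jaccard, dice

# Toy 'lesion' masks: the prediction recovers 3 of 4 ground-truth pixels
# and adds one false-positive pixel.
truth = [(0, 0), (0, 1), (1, 0), (1, 1)]
pred = [(0, 1), (1, 0), (1, 1), (2, 1)]
j, d = jaccard_dice(pred, truth)  # j = 3/5 = 0.6, d = 6/8 = 0.75
```

The two metrics are related by Dice = 2J/(1 + J), so the reported pair (0.619, 0.763) is internally consistent: 2(0.619)/1.619 ≈ 0.765.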
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Korea Air Force, Cheongju, South Korea; B&VIIT Eye Center, Seoul, South Korea
- Bo Yi Kim
- Department of Ophthalmology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Hyun Kyo Jeong
- Department of Ophthalmology, 10th Fighter Wing, Republic of Korea Air Force, Suwon, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
- Donghyun Yang
- Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea; Visuworks, Seoul, South Korea
15
McAlpine ED, Michelow P, Celik T. The Utility of Unsupervised Machine Learning in Anatomic Pathology. Am J Clin Pathol 2022; 157:5-14. [PMID: 34302331 DOI: 10.1093/ajcp/aqab085] [Received: 01/21/2021] [Accepted: 04/18/2021] [Indexed: 01/29/2023]
Abstract
OBJECTIVES Developing accurate supervised machine learning algorithms is hampered by the lack of representative annotated datasets. Most data in anatomic pathology are unlabeled, and creating large, annotated datasets is a time-consuming and laborious process. Unsupervised learning, which does not require annotated data, has the potential to assist with this challenge. This review aims to introduce the concept of unsupervised learning and illustrate how clustering, generative adversarial networks (GANs), and autoencoders have the potential to address the lack of annotated data in anatomic pathology. METHODS A review of unsupervised learning with examples from the literature was carried out. RESULTS Clustering can be used as part of semisupervised learning, where labels are propagated from a subset of annotated data points to the remaining unlabeled data points in a dataset. GANs may assist by generating large amounts of synthetic data and performing color normalization. Autoencoders allow training of a network on a large, unlabeled dataset and transferring learned representations to a classifier using a smaller, labeled subset (unsupervised pretraining). CONCLUSIONS Unsupervised machine learning techniques such as clustering, GANs, and autoencoders, used individually or in combination, may help address the lack of annotated data in pathology and improve the process of developing supervised learning models.
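The cluster-based label propagation described in the RESULTS section can be sketched in a few lines. This is a minimal pure-Python illustration (k-means seeded at the annotated examples, with each cluster taking the label of the nearest annotated example), not a production method; the group names are hypothetical:

```python
import math

def kmeans(points, centers, iters=20):
    """Plain k-means over tuples, seeded with explicit initial centers."""
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [
            tuple(sum(coords) / len(cl) for coords in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

def propagate_labels(points, labeled, iters=20):
    """Semisupervised labeling: cluster all points (seeded at the annotated
    examples), then give every point the label of the annotated example
    closest to its cluster center."""
    centers = kmeans(points, [p for p, _ in labeled], iters)
    center_label = [min(labeled, key=lambda pl: math.dist(pl[0], c))[1] for c in centers]
    return {
        p: center_label[min(range(len(centers)), key=lambda c: math.dist(p, centers[c]))]
        for p in points
    }

# Toy data: two well-separated groups of 2-D feature vectors, one annotated
# point per group (labels are illustrative only).
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labeled = [((0, 0), "benign"), ((10, 10), "tumor")]
labels = propagate_labels(points, labeled)
```

Two annotations thus label all six points; in practice one would cluster learned feature embeddings rather than raw coordinates.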
Affiliation(s)
- Ewen D McAlpine
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, South Africa
- Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
16
Mehrvar S, Himmel LE, Babburi P, Goldberg AL, Guffroy M, Janardhan K, Krempley AL, Bawa B. Deep Learning Approaches and Applications in Toxicologic Histopathology: Current Status and Future Perspectives. J Pathol Inform 2021; 12:42. [PMID: 34881097 PMCID: PMC8609289 DOI: 10.4103/jpi.jpi_36_21] [Received: 05/26/2021] [Accepted: 07/18/2021] [Indexed: 12/13/2022]
Abstract
Whole slide imaging enables the use of a wide array of digital image analysis tools that are revolutionizing pathology. Recent advances in digital pathology and deep convolutional neural networks have created an enormous opportunity to improve workflow efficiency, provide more quantitative, objective, and consistent assessments of pathology datasets, and develop decision support systems. Such innovations are already making their way into clinical practice. However, the progress of machine learning, in particular deep learning (DL), has been rather slower in nonclinical toxicology studies. Histopathology data from toxicology studies are critical during the drug development process, which is required by regulatory bodies to assess drug-related toxicity in laboratory animals and its impact on human safety in clinical trials. Due to the high volume of slides routinely evaluated, low-throughput or narrowly performing DL methods that may work well in small-scale diagnostic studies or for the identification of a single abnormality are tedious and impractical for toxicologic pathology. Furthermore, regulatory requirements around good laboratory practice are a major hurdle for the adoption of DL in toxicologic pathology. This paper reviews the major DL concepts, emerging applications, and examples of DL in toxicologic pathology image analysis. We end with a discussion of specific challenges and directions for future research.
Collapse
Affiliation(s)
- Shima Mehrvar
- Preclinical Safety, AbbVie Inc., North Chicago, IL, USA
- Pradeep Babburi
- Business Technology Solutions, AbbVie Inc., North Chicago, IL, USA
17
Levy J, Haudenschild C, Barwick C, Christensen B, Vaickus L. Topological Feature Extraction and Visualization of Whole Slide Images using Graph Neural Networks. Pacific Symposium on Biocomputing 2021; 26:285-296. [PMID: 33691025 PMCID: PMC7959046] [Indexed: 10/26/2022]
Abstract
Whole-slide images (WSI) are digitized representations of thin sections of stained tissue from various patient sources (biopsy, resection, exfoliation, fluid) and often exceed 100,000 pixels in any given spatial dimension. Deep learning approaches to digital pathology typically extract information from sub-images (patches) and treat the sub-images as independent entities, ignoring contributing information from vital large-scale architectural relationships. Modeling approaches that can capture higher-order dependencies between neighborhoods of tissue patches have demonstrated the potential to improve predictive accuracy while capturing the most essential slide-level information for prognosis, diagnosis and integration with other omics modalities. Here, we review two promising methods for capturing macro and micro architecture of histology images, Graph Neural Networks, which contextualize patch level information from their neighbors through message passing, and Topological Data Analysis, which distills contextual information into its essential components. We introduce a modeling framework, WSI-GTFE that integrates these two approaches in order to identify and quantify key pathogenic information pathways. To demonstrate a simple use case, we utilize these topological methods to develop a tumor invasion score to stage colon cancer.
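The message-passing step that lets a tissue patch contextualize information from its neighbors can be illustrated without any deep learning framework. This sketch performs unweighted mean aggregation over a patch-adjacency graph; a real GNN would interleave learned linear maps and nonlinearities between rounds:

```python
def message_pass(features, edges, rounds=1):
    """One or more rounds of mean-neighborhood message passing, the core
    aggregation step of a graph neural network, on patch-level features.

    features: {node: [f1, f2, ...]}; edges: list of undirected (u, v) pairs.
    Each round replaces a node's feature vector with the average of its own
    vector and its neighbors' vectors (no learned weights, illustration only).
    """
    neigh = {n: [] for n in features}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    cur = {n: list(f) for n, f in features.items()}
    for _ in range(rounds):
        nxt = {}
        for n, f in cur.items():
            group = [f] + [cur[m] for m in neigh[n]]
            nxt[n] = [sum(vals) / len(group) for vals in zip(*group)]
        cur = nxt
    return cur

# Three patches on a path graph 0-1-2; node 1 mixes information from both ends.
feats = {0: [1.0], 1: [0.0], 2: [1.0]}
out = message_pass(feats, [(0, 1), (1, 2)])
# out[1][0] = (0 + 1 + 1) / 3 ≈ 0.667; out[0][0] = (1 + 0) / 2 = 0.5
```

Stacking more rounds widens each patch's receptive field over the slide, which is how higher-order dependencies between patch neighborhoods are captured.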
Affiliation(s)
- Joshua Levy
- Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Lebanon, NH 03756, USA (corresponding author)