1. Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. PMID: 38838549. DOI: 10.1016/j.ceb.2024.102378.
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to enable rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology that would be a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration that in silico labeling can uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel.
2. Chen Z, Wong IHM, Dai W, Lo CTK, Wong TTW. Lung Cancer Diagnosis on Virtual Histologically Stained Tissue Using Weakly Supervised Learning. Mod Pathol 2024; 37:100487. PMID: 38588884. DOI: 10.1016/j.modpat.2024.100487.
Abstract
Lung adenocarcinoma (LUAD) is the most common primary lung cancer, accounting for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is the pathologist's interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or a digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire the H&E-stained images is labor-intensive and time-consuming, with tedious sample preparation steps, and the subsequent cancer diagnosis procedures require repetitive manual interpretation. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. The autofluorescence images of label-free tissue carrying histopathological information can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained an attention-based multiple-instance learning model with different settings on the open-source LUAD H&E-stained whole-slide image (WSI) dataset from The Cancer Genome Atlas (TCGA). The model was validated on 150 H&E-stained WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital, with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E-stained WSIs, comparable to the results on the 58 corresponding standard H&E-stained WSIs (average AUC of 0.977). The attention heatmaps of both virtual and ground-truth H&E-stained WSIs indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E-stained WSIs of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations. The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
Affiliation(s)
- Zhenghui Chen
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Ivy H M Wong
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Weixing Dai
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Claudia T K Lo
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Terence T W Wong
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China.
3. Lee S, Lee E, Yang H, Park K, Min E, Jung W. Digital histological staining of tissue slide images from optical coherence microscopy. Biomed Opt Express 2024; 15:3807-3816. PMID: 38867770. PMCID: PMC11166446. DOI: 10.1364/boe.520683.
Abstract
The convergence of staining-free optical imaging and digital staining technologies has become a central focus in digital pathology, offering significant advantages in streamlining specimen preparation and expediting the acquisition of histopathological information. Despite the inherent merits of optical coherence microscopy (OCM) as a staining-free technique, its application to histopathological slides has been limited. This study introduces a novel approach combining wide-field OCM with digital staining technology for imaging histopathological slides. By optimizing the histology slide preparation process to provide both a suitable basis for digital staining and pronounced contrast for OCM imaging, various mouse tissues were successfully imaged. Comparative analyses against conventional staining-based brightfield images were performed to evaluate the proposed methodology's efficacy. Moreover, the study investigates the generalization of digital staining color appearance to ensure consistent histopathology, considering tissue-specific and thickness-dependent variations.
Affiliation(s)
- Sangjin Lee
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Eunji Lee
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Hyunmo Yang
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Kibeom Park
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Eunjung Min
- Korea Photonics Technology Institute, Gwangju 61007, Republic of Korea
- Woonggyu Jung
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
4. Zhang Y, Lee RY, Tan CW, Guo X, Yim WWY, Lim JC, Wee FY, Yang WU, Kharbanda M, Lee JYJ, Ngo NT, Leow WQ, Loo LH, Lim TK, Sobota RM, Lau MC, Davis MJ, Yeong J. Spatial omics techniques and data analysis for cancer immunotherapy applications. Curr Opin Biotechnol 2024; 87:103111. PMID: 38520821. DOI: 10.1016/j.copbio.2024.103111.
Abstract
In-depth profiling of cancer cells/tissues is expanding our understanding of the genomic, epigenomic, transcriptomic, and proteomic landscape of cancer. However, the complexity of the cancer microenvironment, particularly its immune regulation, has made it difficult to exploit the potential of cancer immunotherapy. High-throughput spatial omics technologies and analysis pipelines have emerged as powerful tools for tackling this challenge. As a result, a potential revolution in cancer diagnosis, prognosis, and treatment is on the horizon. In this review, we discuss the technological advances in spatial profiling of cancer around and beyond the central dogma to harness the full benefits of immunotherapy. We also discuss the promise and challenges of spatial data analysis and interpretation and provide an outlook for the future.
Affiliation(s)
- Yue Zhang
- Duke-NUS Medical School, Singapore 169856, Singapore
- Ren Yuan Lee
- Yong Loo Lin School of Medicine, National University of Singapore, 169856 Singapore; Singapore Thong Chai Medical Institution, Singapore 169874, Singapore
- Chin Wee Tan
- Bioinformatics Division, The Walter and Eliza Hall Institute of Medical Research, Parkville, Melbourne, Victoria 3052, Australia; Department of Medical Biology, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, Victoria 3010, Australia; Frazer Institute, Faculty of Medicine, The University of Queensland, Brisbane, Queensland 4102, Australia
- Xue Guo
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore
- Willa W-Y Yim
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore
- Jeffrey CT Lim
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore
- Felicia YT Wee
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore
- Wu Yang
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore
- Malvika Kharbanda
- Bioinformatics Division, The Walter and Eliza Hall Institute of Medical Research, Parkville, Melbourne, Victoria 3052, Australia; Department of Medical Biology, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, Victoria 3010, Australia; immunoGENomics Cancer Institute (SAiGENCI), Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia 5005, Australia
- Jia-Ying J Lee
- Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
- Nye Thane Ngo
- Department of Anatomical Pathology, Singapore General Hospital, Singapore 169856, Singapore
- Wei Qiang Leow
- Department of Anatomical Pathology, Singapore General Hospital, Singapore 169856, Singapore
- Lit-Hsin Loo
- Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
- Tony KH Lim
- Department of Anatomical Pathology, Singapore General Hospital, Singapore 169856, Singapore
- Radoslaw M Sobota
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore
- Mai Chan Lau
- Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore; Singapore Immunology Network (SIgN), Agency for Science, Technology and Research (A*STAR), Singapore 138648, Singapore
- Melissa J Davis
- Bioinformatics Division, The Walter and Eliza Hall Institute of Medical Research, Parkville, Melbourne, Victoria 3052, Australia; Department of Medical Biology, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, Victoria 3010, Australia; Frazer Institute, Faculty of Medicine, The University of Queensland, Brisbane, Queensland 4102, Australia; immunoGENomics Cancer Institute (SAiGENCI), Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia 5005, Australia; Department of Clinical Pathology, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, Victoria 3010, Australia
- Joe Yeong
- Institute of Molecular Cell Biology (IMCB), Agency for Science, Technology and Research (A*STAR), Singapore 169856, Singapore; Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore.
5. Tweel JED, Ecclestone BR, Boktor M, Dinakaran D, Mackey JR, Reza PH. Automated Whole Slide Imaging for Label-Free Histology Using Photon Absorption Remote Sensing Microscopy. IEEE Trans Biomed Eng 2024; 71:1901-1912. PMID: 38231822. DOI: 10.1109/tbme.2024.3355296.
Abstract
OBJECTIVE: Pathologists rely on histochemical stains to impart contrast in thin translucent tissue samples, revealing tissue features necessary for identifying pathological conditions. However, the chemical labeling process is destructive and often irreversible or challenging to undo, imposing practical limits on the number of stains that can be applied to the same tissue section. Here we present an automated label-free whole slide scanner using a photon absorption remote sensing (PARS) microscope designed for imaging thin, transmissible samples. METHODS: Peak SNR and in-focus acquisitions are achieved across entire tissue sections by using the scattering signal from the PARS detection beam to measure the optimal focal plane. Whole slide images (WSIs) are seamlessly stitched together using a custom contrast-leveling algorithm. Identical tissue sections are subsequently H&E stained and brightfield imaged, and the one-to-one WSIs from both modalities are compared visually and quantitatively. RESULTS: PARS WSIs are presented at standard 40x magnification in malignant human breast and skin samples. We show correspondence of subcellular diagnostic details in both PARS and H&E WSIs and demonstrate virtual H&E staining of an entire PARS WSI. The one-to-one WSIs from both modalities show quantitative similarity in nuclear features and structural information. CONCLUSION: PARS WSIs are compatible with existing digital pathology tools, and samples remain suitable for histochemical, immunohistochemical, and other staining techniques. SIGNIFICANCE: This work is a critical advance toward integrating label-free optical methods into standard histopathology workflows.
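The focus-selection step in the Methods can be illustrated with a generic sharpness-maximizing autofocus. This is a sketch under stated assumptions: the paper derives the optimal focal plane from the scattering signal of the PARS detection beam, whereas this stand-in scores planes by a plain image-variance sharpness metric; `best_focal_plane` and the synthetic stack are illustrative inventions, not the authors' code.

```python
import numpy as np

def best_focal_plane(stack):
    """Return the index of the z-plane with the highest sharpness score.

    Sharpness is scored as the variance of pixel-to-pixel differences,
    which peaks for in-focus (high-frequency) content. Illustrative
    stand-in only; the paper uses the PARS detection beam's scattering
    signal rather than an image-derived metric.
    """
    scores = [float(np.var(np.diff(plane, axis=0))) for plane in stack]
    return int(np.argmax(scores))

# Synthetic 3-plane stack: plane 1 holds high-frequency detail, the
# other planes are flat (i.e., out of focus).
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = np.full((64, 64), 0.5)
stack = [blurred, sharp, blurred]
print(best_focal_plane(stack))  # -> 1
```

Running such a score over every tile lets a scanner pick the in-focus acquisition per region before stitching.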
6. Song AH, Williams M, Williamson DFK, Chow SSL, Jaume G, Gao G, Zhang A, Chen B, Baras AS, Serafin R, Colling R, Downes MR, Farré X, Humphrey P, Verrill C, True LD, Parwani AV, Liu JTC, Mahmood F. Analysis of 3D pathology samples using weakly supervised AI. Cell 2024; 187:2502-2520.e17. PMID: 38729110. PMCID: PMC11168832. DOI: 10.1016/j.cell.2024.03.035.
Abstract
Human tissue, which is inherently three-dimensional (3D), is traditionally examined through standard-of-care histopathology as limited two-dimensional (2D) cross-sections that can insufficiently represent the tissue due to sampling bias. To holistically characterize histomorphology, 3D imaging modalities have been developed, but clinical translation is hampered by complex manual evaluation and lack of computational platforms to distill clinical insights from large, high-resolution datasets. We present TriPath, a deep-learning platform for processing tissue volumes and efficiently predicting clinical outcomes based on 3D morphological features. Recurrence risk-stratification models were trained on prostate cancer specimens imaged with open-top light-sheet microscopy or microcomputed tomography. By comprehensively capturing 3D morphologies, 3D volume-based prognostication achieves superior performance to traditional 2D slice-based approaches, including clinical/histopathological baselines from six certified genitourinary pathologists. Incorporating greater tissue volume improves prognostic performance and mitigates risk prediction variability from sampling bias, further emphasizing the value of capturing larger extents of heterogeneous morphology.
Affiliation(s)
- Andrew H Song
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Mane Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Sarah S L Chow
- Department of Mechanical Engineering, Bioengineering, and Laboratory Medicine & Pathology, University of Washington, Seattle, WA, USA
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Gan Gao
- Department of Mechanical Engineering, Bioengineering, and Laboratory Medicine & Pathology, University of Washington, Seattle, WA, USA
- Andrew Zhang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Alexander S Baras
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Robert Serafin
- Department of Mechanical Engineering, Bioengineering, and Laboratory Medicine & Pathology, University of Washington, Seattle, WA, USA
- Richard Colling
- Nuffield Department of Surgical Sciences, University of Oxford, UK; Department of Cellular Pathology, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, UK
- Michelle R Downes
- Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
- Xavier Farré
- Public Health Agency of Catalonia, Lleida, Spain
- Peter Humphrey
- Department of Pathology, Yale School of Medicine, New Haven, CT, USA
- Clare Verrill
- Nuffield Department of Surgical Sciences, University of Oxford, UK; Department of Cellular Pathology, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Lawrence D True
- Department of Laboratory Medicine & Pathology, University of Washington School of Medicine, Seattle, WA, USA
- Anil V Parwani
- Department of Pathology, The Ohio State University, Columbus, OH, USA
- Jonathan T C Liu
- Department of Mechanical Engineering, Bioengineering, and Laboratory Medicine & Pathology, University of Washington, Seattle, WA, USA.
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
7. Savage N. AI's keen diagnostic eye. Nature 2024. PMID: 38637706. DOI: 10.1038/d41586-024-01132-2.
8. Jin L, Tang Y, Coole JB, Tan MT, Zhao X, Badaoui H, Robinson JT, Williams MD, Vigneswaran N, Gillenwater AM, Richards-Kortum RR, Veeraraghavan A. DeepDOF-SE: affordable deep-learning microscopy platform for slide-free histology. Nat Commun 2024; 15:2935. PMID: 38580633. PMCID: PMC10997797. DOI: 10.1038/s41467-024-47065-2.
Abstract
Histopathology plays a critical role in the diagnosis and surgical management of cancer. However, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. Here, we report a deep-learning-enabled microscope, named DeepDOF-SE, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. Three key features jointly make DeepDOF-SE practical. First, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultra-violet excitation that localizes fluorescent emission to a thin surface layer. Second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. Finally, a semi-supervised generative adversarial network virtually stains DeepDOF-SE fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. We developed the DeepDOF-SE platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. Our results show that DeepDOF-SE provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings.
Affiliation(s)
- Lingbo Jin
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Yubo Tang
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Jackson B Coole
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Melody T Tan
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Xuan Zhao
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Hawraa Badaoui
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Jacob T Robinson
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Michelle D Williams
- Department of Pathology, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Nadarajah Vigneswaran
- Department of Diagnostic and Biomedical Sciences, University of Texas Health Science Center at Houston School of Dentistry, 7500 Cambridge St, Houston, TX, USA
- Ann M Gillenwater
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA.
9. Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Trans Med Imaging 2024; 43:1388-1399. PMID: 38010933. DOI: 10.1109/tmi.2023.3337253.
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents, but it is time-consuming and makes simultaneous labeling of multiple targets difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks; however, their performance relies on large-scale pretraining, hindering their adoption in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens, so the pretraining time of our method is only 1/16 that of the original MAE. We also design a supervised proxy task that predicts stained images with multiple styles instead of reconstructing masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated with different metrics, making fair comparison difficult. We therefore develop a standard benchmark based on three public datasets and build a baseline for the convenience of future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies illustrate the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
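The 75% grid-sampling mask described in this abstract can be sketched as follows. This is a minimal illustration of the masking arithmetic only (keep one pixel per 2x2 block, hide the remaining three quarters, so the visible token set shrinks 4x); `grid_mask` is a hypothetical helper name, not taken from the paper's repository.

```python
import numpy as np

def grid_mask(image, stride=2):
    """Keep one pixel per stride x stride block; mask the rest.

    With stride=2 this masks 75% of pixels, shrinking the set of visible
    tokens an encoder must process by 4x. Minimal sketch of MAE-style
    grid-sampled masking, not the paper's actual code.
    """
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=bool)   # True = masked (hidden) pixel
    mask[::stride, ::stride] = False     # regular grid of visible pixels
    visible = image[~mask]               # flattened visible pixels
    return visible, mask

img = np.arange(64, dtype=float).reshape(8, 8)
visible, mask = grid_mask(img)
print(mask.mean())   # fraction of masked pixels -> 0.75
print(visible.size)  # 16 visible pixels out of 64
```

Because the visible pixels form a regular grid, they are equivalent to a downsampled image, which is what makes this masking cheaper than random masking with full-resolution tokens.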
10. Abraham TM, Levenson R. Current Landscape of Advanced Imaging Tools for Pathology Diagnostics. Mod Pathol 2024; 37:100443. PMID: 38311312. DOI: 10.1016/j.modpat.2024.100443.
Abstract
Histopathology relies on century-old workflows of formalin fixation, paraffin embedding, sectioning, and staining tissue specimens on glass slides. Despite being robust, this conventional process is slow, labor-intensive, and limited to providing two-dimensional views. Emerging technologies promise to enhance and accelerate histopathology. Slide-free microscopy allows rapid imaging of fresh, unsectioned specimens, overcoming slide-preparation delays. Methods such as fluorescence confocal microscopy and multiphoton microscopy, along with more recent innovations including microscopy with UV surface excitation and fluorescence-imitating brightfield imaging, can generate images resembling conventional histology directly from the surface of tissue specimens. Slide-free microscopy enables applications such as rapid intraoperative margin assessment and, with appropriate technology, three-dimensional histopathology. Multiomics profiling techniques, including imaging mass spectrometry and Raman spectroscopy, provide highly multiplexed molecular maps of tissues, although clinical translation remains challenging. Artificial intelligence is aiding the adoption of new imaging modalities via virtual staining, which converts the output of methods such as slide-free microscopy into synthetic brightfield-like or even molecularly informed images. Although not yet commonplace, these emerging technologies collectively demonstrate the potential to modernize histopathology, and artificial intelligence-assisted workflows will ease the transition to new imaging modalities. With further validation, these advances may transform the century-old conventional histopathology pipeline to better serve 21st-century medicine. This review provides an overview of these enabling technology platforms and discusses their potential impact.
Affiliation(s)
- Tanishq Mathew Abraham
- Department of Biomedical Engineering, University of California, Davis, Davis, California
- Richard Levenson
- Department of Pathology and Laboratory Medicine, UC Davis Health, Sacramento, California.
11. Joshi S, Forjaz A, Han KS, Shen Y, Queiroga V, Xenes D, Matelsk J, Wester B, Barrutia AM, Kiemen AL, Wu PH, Wirtz D. Generative interpolation and restoration of images using deep learning for improved 3D tissue mapping. bioRxiv [Preprint] 2024:2024.03.07.583909. PMID: 38496512. PMCID: PMC10942457. DOI: 10.1101/2024.03.07.583909.
Abstract
The development of novel imaging platforms has improved our ability to collect and analyze large three-dimensional (3D) biological imaging datasets. Advances in computing have made it possible to extract complex spatial information from these data, such as the composition, morphology, and interactions of multi-cellular structures, rare events, and the integration of multi-modal features combining anatomical, molecular, and transcriptomic (among other) information. Yet the accuracy of these quantitative results is intrinsically limited by the quality of the input images, which can contain missing or damaged regions or be of poor resolution due to mechanical, temporal, or financial constraints. In applications ranging from intact imaging (e.g., light-sheet microscopy and magnetic resonance imaging) to sectioning-based platforms (e.g., serial histology and serial section transmission electron microscopy), the quality and resolution of imaging data have become paramount. Here, we address these challenges by leveraging frame interpolation for large image motion (FILM), a generative AI model originally developed for temporal interpolation, for spatial interpolation of a range of 3D image types. Comparative analysis demonstrates the superiority of FILM over traditional linear interpolation in producing functional synthetic images, owing to its ability to better preserve biological information, including microanatomical features and cell counts, as well as image quality metrics such as contrast, variance, and luminance. FILM repairs tissue damage in images and reduces stitching artifacts, and we show that it can decrease imaging time by synthesizing skipped images. We demonstrate the versatility of our method across a wide range of imaging modalities (histology, tissue-clearing/light-sheet microscopy, magnetic resonance imaging, serial section transmission electron microscopy), species (human, mouse), healthy and diseased tissues (pancreas, lung, brain), staining techniques (IHC, H&E), and pixel resolutions (8 nm, 2 μm, 1 mm). Overall, we demonstrate the potential of generative AI to improve the resolution, throughput, and quality of biological image datasets, enabling improved 3D imaging.
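As a concrete point of reference, the linear-interpolation baseline that FILM is compared against can be sketched in a few lines of NumPy; the arrays and function name below are illustrative, not the authors' code:

```python
import numpy as np

def linear_interp_slice(slice_a, slice_b, t=0.5):
    """Pixel-wise linear blend between two registered adjacent sections.

    This is the traditional baseline FILM is compared against: it
    averages intensities, which blurs microanatomical detail instead of
    synthesising plausible intermediate structure.
    """
    if slice_a.shape != slice_b.shape:
        raise ValueError("adjacent sections must share the same shape")
    return (1.0 - t) * slice_a + t * slice_b

# Toy example: synthesise a mid-section between two 4x4 "slices".
a = np.zeros((4, 4))
b = np.ones((4, 4))
mid = linear_interp_slice(a, b, t=0.5)
```

Because the blend is purely intensity-based, any structure not shared by both neighbouring sections is averaged away, which is exactly the failure mode a learned interpolator aims to avoid.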
Affiliation(s)
- Saurabh Joshi
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
| | - André Forjaz
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
| | - Kyu Sang Han
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
| | - Yu Shen
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
- Departments of Pathology, The Sol Goldman Pancreatic Cancer Research Center, Johns Hopkins School of Medicine, Baltimore, MD
| | - Vasco Queiroga
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
| | - Daniel Xenes
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD
| | - Jordan Matelsk
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD
| | - Brock Wester
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD
| | - Arrate Munoz Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid and Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
| | - Ashley L. Kiemen
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
- Departments of Pathology, The Sol Goldman Pancreatic Cancer Research Center, Johns Hopkins School of Medicine, Baltimore, MD
| | - Pei-Hsun Wu
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
| | - Denis Wirtz
- Department of Chemical & Biomolecular Engineering, Johns Hopkins University, Baltimore MD
- The Johns Hopkins Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD
- Departments of Pathology, The Sol Goldman Pancreatic Cancer Research Center, Johns Hopkins School of Medicine, Baltimore, MD
- Department of Oncology, Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD
| |
12
Dolezal JM, Kochanny S, Dyer E, Ramesh S, Srisuwananukorn A, Sacco M, Howard FM, Li A, Mohan P, Pearson AT. Slideflow: deep learning for digital histopathology with real-time whole-slide visualization. BMC Bioinformatics 2024; 25:134. [PMID: 38539070 PMCID: PMC10967068 DOI: 10.1186/s12859-024-05758-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 03/20/2024] [Indexed: 05/04/2024] Open
Abstract
Deep learning methods have emerged as powerful tools for analyzing histopathological images, but current methods are often specialized for specific domains and software environments, and few open-source options exist for deploying models in an interactive interface. Experimenting with different deep learning approaches typically requires switching software libraries and reprocessing data, reducing the feasibility and practicality of experimenting with new architectures. We developed a flexible deep learning library for histopathology called Slideflow, a package which supports a broad array of deep learning methods for digital pathology and includes a fast whole-slide interface for deploying trained models. Slideflow includes unique tools for whole-slide image data processing, efficient stain normalization and augmentation, weakly supervised whole-slide classification, uncertainty quantification, feature generation, feature space analysis, and explainability. Whole-slide image processing is highly optimized, enabling whole-slide tile extraction at 40× magnification in 2.5 s per slide. The framework-agnostic data processing pipeline enables rapid experimentation with new methods built with either TensorFlow or PyTorch, and the graphical user interface supports real-time visualization of slides, predictions, heatmaps, and feature space characteristics on a variety of hardware devices, including ARM-based devices such as the Raspberry Pi.
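The tile-extraction step at the heart of such pipelines can be illustrated with a minimal grid-tiling sketch. This is not the Slideflow API, just the core idea applied to an in-memory array; real whole-slide tooling reads pyramidal slide formats lazily, handles magnification levels, and filters background tiles:

```python
import numpy as np

def extract_tiles(slide, tile_px, stride=None):
    """Yield square tiles from an in-memory whole-slide array (H, W, C).

    Illustrative grid tiling only: production WSI tooling such as
    Slideflow additionally handles pyramidal formats, magnification,
    and background filtering before training.
    """
    stride = stride or tile_px
    h, w = slide.shape[:2]
    for y in range(0, h - tile_px + 1, stride):
        for x in range(0, w - tile_px + 1, stride):
            yield slide[y:y + tile_px, x:x + tile_px]

slide = np.zeros((512, 512, 3))
tiles = list(extract_tiles(slide, tile_px=128))  # 4 x 4 non-overlapping grid
```

Passing a smaller `stride` than `tile_px` produces overlapping tiles, a common augmentation choice.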
Affiliation(s)
- James M Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA.
| | - Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Emma Dyer
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Siddhi Ramesh
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Andrew Srisuwananukorn
- Division of Hematology, Department of Internal Medicine, The Ohio State University Comprehensive Cancer Center, Columbus, OH, USA
| | - Matteo Sacco
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Frederick M Howard
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Anran Li
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Prajval Mohan
- Department of Computer Science, University of Chicago, Chicago, IL, USA
| | - Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA.
| |
13
Cheng S, Chang S, Li Y, Novoseltseva A, Lin S, Wu Y, Zhu J, McKee AC, Rosene DL, Wang H, Bigio IJ, Boas DA, Tian L. Enhanced Multiscale Human Brain Imaging by Semi-supervised Digital Staining and Serial Sectioning Optical Coherence Tomography. Research Square 2024:rs.3.rs-4014687. [PMID: 38562721 PMCID: PMC10984089 DOI: 10.21203/rs.3.rs-4014687/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
A major challenge in neuroscience is to visualize the structure of the human brain at different scales. Traditional histology reveals micro- and meso-scale brain features but suffers from staining variability, tissue damage, and distortion that impede accurate 3D reconstructions. Here, we present a new 3D imaging framework that combines serial sectioning optical coherence tomography (S-OCT) with a deep-learning digital staining (DS) model. We develop a novel semi-supervised learning technique to facilitate DS model training on weakly paired images. The DS model performs translation from S-OCT to Gallyas silver staining. We demonstrate DS on various human cerebral cortex samples with consistent staining quality. Additionally, we show that DS enhances contrast across cortical layer boundaries. Furthermore, we showcase geometry-preserving 3D DS on cubic-centimeter tissue blocks and visualization of meso-scale vessel networks in the white matter. We believe that our technique offers the potential for high-throughput, multiscale imaging of brain tissues and may facilitate studies of brain structures.
Affiliation(s)
- Shiyi Cheng
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
| | - Shuaibin Chang
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
| | - Yunzhe Li
- Department of Electrical Engineering and Computer Sciences, University of California, Cory Hall, Berkeley, California, 94720, USA
| | - Anna Novoseltseva
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston MA, 02215, USA
| | - Sunni Lin
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston MA, 02215, USA
| | - Yicun Wu
- Department of Computer Science, Boston University, 665 Commonwealth Ave, Boston, MA, 02215, USA
| | - Jiahui Zhu
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
| | - Ann C. McKee
- Boston University Alzheimer’s Disease Research Center and CTE Center, Boston University, Chobanian and Avedisian School of Medicine, Boston, MA, 02118, USA
- Department of Neurology, Boston University, Chobanian and Avedisian School of Medicine, Boston, MA, 02118, USA
- VA Boston Healthcare System, U.S. Department of Veteran Affairs, Jamaica Plain, MA, 02130, USA
- Department of Psychiatry and Ophthalmology, Boston University School of Medicine, Boston, MA, 02118, USA
- Department of Pathology and Laboratory Medicine, Boston University School of Medicine, Boston, MA, 02118, USA
| | - Douglas L. Rosene
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
| | - Hui Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, 02129, USA
| | - Irving J. Bigio
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston MA, 02215, USA
- Neurophotonics Center, Boston University, Boston, MA, 02215, USA
| | - David A. Boas
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston MA, 02215, USA
- Neurophotonics Center, Boston University, Boston, MA, 02215, USA
| | - Lei Tian
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston MA, 02215, USA
- Neurophotonics Center, Boston University, Boston, MA, 02215, USA
| |
14
Li X, Liu H, Song X, Marboe CC, Brott BC, Litovsky SH, Gan Y. Structurally constrained and pathology-aware convolutional transformer generative adversarial network for virtual histology staining of human coronary optical coherence tomography images. J Biomed Opt 2024; 29:036004. [PMID: 38532927 PMCID: PMC10964178 DOI: 10.1117/1.jbo.29.3.036004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Revised: 01/26/2024] [Accepted: 03/11/2024] [Indexed: 03/28/2024]
Abstract
Significance: There is a significant need to generate virtual histological information from coronary optical coherence tomography (OCT) images to better guide the treatment of coronary artery disease (CAD). However, existing methods either require a large pixel-wise paired training dataset or have limited capability to map pathological regions. Aim: The aim of this work is to generate virtual histological information from coronary OCT images without a pixel-wise paired training dataset, while remaining capable of depicting pathological patterns. Approach: We design a structurally constrained, pathology-aware convolutional transformer generative adversarial network (SCPAT-GAN) to generate virtually stained H&E histology from OCT images. We quantitatively evaluate the quality of the virtually stained histology images by measuring the Fréchet inception distance (FID) and perceptual hash value (PHV), invite experienced pathologists to evaluate the virtually stained images, and visually inspect the images generated by SCPAT-GAN. We also perform an ablation study to validate the design of the proposed SCPAT-GAN and demonstrate 3D virtually stained histology images. Results: Compared to previous research, the proposed SCPAT-GAN achieves better FID and PHV scores. Visual inspection suggests that the virtual histology images generated by SCPAT-GAN resemble both normal and pathological features without artifacts, and the pathologists confirmed that the virtually stained images have good quality compared to real histology images. The ablation study confirms the effectiveness of combining the proposed pathological-awareness and structural-constraining modules. Conclusions: The proposed SCPAT-GAN is the first to demonstrate the feasibility of generating both normal and pathological patterns without pixel-wise supervised training. We expect SCPAT-GAN to assist in the clinical evaluation and treatment of CAD by providing 2D and 3D histopathological visualizations.
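The FID metric used for evaluation is the Fréchet distance between Gaussian fits of two feature sets. A minimal NumPy sketch, assuming features have already been extracted (e.g. from an Inception network), is:

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)  # clamp tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two (n, d) feature arrays.

    Uses the identity Tr((S1 S2)^(1/2)) = Tr((S1^(1/2) S2 S1^(1/2))^(1/2))
    so that only symmetric PSD square roots are needed.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    sqrt_a = _sqrtm_psd(cov_a)
    covmean = _sqrtm_psd(sqrt_a @ cov_b @ sqrt_a)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 5))  # stand-in for Inception embeddings
```

Identical feature sets give a distance of essentially zero, while a mean shift between the two sets inflates it by the squared shift magnitude.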
Affiliation(s)
- Xueshen Li
- Stevens Institute of Technology, Department of Biomedical Engineering, Hoboken, New Jersey, United States
- Stevens Institute of Technology, Semcer Center for Healthcare Innovation, Hoboken, New Jersey, United States
| | - Hongshan Liu
- Stevens Institute of Technology, Department of Biomedical Engineering, Hoboken, New Jersey, United States
- Stevens Institute of Technology, Semcer Center for Healthcare Innovation, Hoboken, New Jersey, United States
| | - Xiaoyu Song
- Icahn School of Medicine at Mount Sinai, New York, New York, United States
| | - Charles C. Marboe
- Columbia University Medical Center, New York, New York, United States
| | - Brigitta C. Brott
- The University of Alabama at Birmingham, School of Medicine, Birmingham, Alabama, United States
| | - Silvio H. Litovsky
- The University of Alabama at Birmingham, School of Medicine, Birmingham, Alabama, United States
| | - Yu Gan
- Stevens Institute of Technology, Department of Biomedical Engineering, Hoboken, New Jersey, United States
- Stevens Institute of Technology, Semcer Center for Healthcare Innovation, Hoboken, New Jersey, United States
| |
15
Shen B, Li Z, Pan Y, Guo Y, Yin Z, Hu R, Qu J, Liu L. Noninvasive Nonlinear Optical Computational Histology. Adv Sci (Weinh) 2024; 11:e2308630. [PMID: 38095543 PMCID: PMC10916666 DOI: 10.1002/advs.202308630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Revised: 11/28/2023] [Indexed: 03/07/2024]
Abstract
Cancer remains a global health challenge, demanding early detection and accurate diagnosis for improved patient outcomes. An intelligent paradigm is introduced that elevates label-free nonlinear optical imaging with contrastive patch-wise learning, yielding stain-free nonlinear optical computational histology (NOCH). NOCH enables swift, precise diagnostic analysis of fresh tissues, reducing patient anxiety and healthcare costs. Nonlinear modalities are evaluated, including stimulated Raman scattering and multiphoton imaging, for their ability to enhance tumor microenvironment sensitivity, pathological analysis, and cancer examination. Quantitative analysis confirmed that NOCH images accurately reproduce nuclear morphometric features across different cancer stages. Key diagnostic features, such as nuclear morphology, size, and nuclear-cytoplasmic contrast, are well preserved. NOCH models also demonstrate promising generalization when applied to other pathological tissues. The study unites label-free nonlinear optical imaging with histopathology using contrastive learning to establish stain-free computational histology. NOCH provides a rapid, non-invasive, and precise approach to surgical pathology, holding immense potential for revolutionizing cancer diagnosis and surgical interventions.
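The contrastive patch-wise objective underlying this family of methods can be sketched as an InfoNCE loss over patch embeddings. This is a hedged illustration; the variable names and the single-query formulation are simplifications, not the NOCH implementation:

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE loss for a single query embedding.

    The query (e.g. a label-free patch embedding) is attracted to its
    positive key (the matching histology patch) and repelled from the
    negative keys (other patches). All vectors are L2-normalised so dot
    products act as cosine similarities; tau is the softmax temperature.
    """
    def l2(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    q = l2(np.asarray(query, dtype=float))
    keys = l2(np.vstack([positive, negatives]))  # positive key is row 0
    logits = keys @ q / tau
    # cross-entropy with the true class at index 0
    return float(np.log(np.exp(logits).sum()) - logits[0])

pos = np.array([1.0, 0.0])
negs = np.array([[0.0, 1.0], [0.0, -1.0]])
loss_aligned = info_nce(np.array([1.0, 0.0]), pos, negs)
loss_mismatched = info_nce(np.array([0.0, 1.0]), pos, negs)
```

Minimising this loss over many patches pulls corresponding label-free and stained representations together, which is the mechanism that lets a stain-free model learn histology-like contrast.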
Affiliation(s)
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Zhenglin Li
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Ying Pan
- China–Japan Union Hospital of Jilin University, Changchun 130033, China
| | - Yuan Guo
- Shaanxi Provincial Cancer Hospital, Xi'an 710065, China
| | - Zongyi Yin
- Shenzhen University General Hospital, Shenzhen 518055, China
| | - Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| |
16
Li Y, Pillar N, Li J, Liu T, Wu D, Sun S, Ma G, de Haan K, Huang L, Zhang Y, Hamidi S, Urisman A, Keidar Haran T, Wallace WD, Zuckerman JE, Ozcan A. Virtual histological staining of unlabeled autopsy tissue. Nat Commun 2024; 15:1684. [PMID: 38396004 PMCID: PMC10891155 DOI: 10.1038/s41467-024-46077-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 02/09/2024] [Indexed: 02/25/2024] Open
Abstract
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and such chemical staining procedures covering large tissue areas demand substantial labor, cost, and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield equivalent images, matching hematoxylin and eosin (H&E) stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic and extracellular features in new autopsy tissue samples that experienced severe autolysis, such as COVID-19 samples never seen before, where traditional histochemical staining fails to provide consistent staining quality. This virtual autopsy staining technique provides a rapid and resource-efficient solution to generate artifact-free H&E stains despite severe autolysis and cell death, while also reducing the labor, cost, and infrastructure requirements associated with standard histochemical staining.
Affiliation(s)
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Di Wu
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
| | - Songyu Sun
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
| | - Guangdong Ma
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- School of Physics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
| | - Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Sepehr Hamidi
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
| | - Anatoly Urisman
- Department of Pathology, University of California, San Francisco, CA, 94143, USA
| | - Tal Keidar Haran
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, 91120, Israel
| | - William Dean Wallace
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
| | - Jonathan E Zuckerman
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA.
- Department of Surgery, University of California, Los Angeles, CA, 90095, USA.
| |
17
Pirone D, Bianco V, Miccio L, Memmolo P, Psaltis D, Ferraro P. Beyond fluorescence: advances in computational label-free full specificity in 3D quantitative phase microscopy. Curr Opin Biotechnol 2024; 85:103054. [PMID: 38142647 DOI: 10.1016/j.copbio.2023.103054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 11/23/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Despite remarkable progress in quantitative phase imaging (QPI) microscopes, their wide acceptance is limited by a lack of specificity compared with well-established fluorescence microscopy (FM). The absence of fluorescent tags prevents the identification of subcellular structures in single cells, making the interpretation of label-free 2D and 3D phase-contrast data challenging. Great effort has been made by many groups worldwide to address and overcome this limitation. Different computational methods have been proposed, and many more are currently under investigation, to achieve label-free microscopic imaging at the single-cell level that recognizes and quantifies different subcellular compartments. This route promises to bridge the gap between QPI and FM for real-world applications.
Affiliation(s)
- Daniele Pirone
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
| | - Vittorio Bianco
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
| | - Lisa Miccio
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
| | - Pasquale Memmolo
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
| | - Demetri Psaltis
- EPFL, Ecole Polytechnique Fédérale de Lausanne, Optics Laboratory, CH-1015 Lausanne, Switzerland
| | - Pietro Ferraro
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy.
| |
18
McNeil C, Wong PF, Sridhar N, Wang Y, Santori C, Wu CH, Homyk A, Gutierrez M, Behrooz A, Tiniakos D, Burt AD, Pai RK, Tekiela K, Patel H, Cameron Chen PH, Fischer L, Martins EB, Seyedkazemi S, Freedman D, Kim CC, Cimermancic P. An End-to-End Platform for Digital Pathology Using Hyperspectral Autofluorescence Microscopy and Deep Learning-Based Virtual Histology. Mod Pathol 2024; 37:100377. [PMID: 37926422 DOI: 10.1016/j.modpat.2023.100377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Revised: 10/25/2023] [Accepted: 10/30/2023] [Indexed: 11/07/2023]
Abstract
Conventional histopathology involves expensive and labor-intensive processes that often consume tissue samples, rendering them unavailable for other analyses. We present a novel end-to-end workflow for pathology powered by hyperspectral microscopy and deep learning. First, we developed a custom hyperspectral microscope to nondestructively image the autofluorescence of unstained tissue sections. We then trained a deep learning model to use autofluorescence to generate virtual histologic stains, which avoids the cost and variability of chemical staining procedures and conserves tissue samples. We showed that the virtual images reproduce the histologic features present in the real-stained images using a randomized nonalcoholic steatohepatitis (NASH) scoring comparison study, where both real and virtual stains are scored by pathologists (D.T., A.D.B., R.K.P.). The test showed moderate-to-good concordance between pathologists' scoring on corresponding real and virtual stains. Finally, we developed deep learning-based models for automated NASH Clinical Research Network score prediction. We showed that the end-to-end automated pathology platform is comparable with an independent panel of pathologists for NASH Clinical Research Network scoring when evaluated against the expert pathologist consensus scores. This study provides proof of concept for this virtual staining strategy, which could improve cost, efficiency, and reliability in pathology and enable novel approaches to spatial biology research.
Affiliation(s)
- Carson McNeil
- Verily Life Sciences LLC, South San Francisco, California.
| | - Pok Fai Wong
- Verily Life Sciences LLC, South San Francisco, California
| | | | - Yang Wang
- Verily Life Sciences LLC, South San Francisco, California
| | | | - Cheng-Hsun Wu
- Verily Life Sciences LLC, South San Francisco, California
| | - Andrew Homyk
- Verily Life Sciences LLC, South San Francisco, California
| | | | - Ali Behrooz
- Verily Life Sciences LLC, South San Francisco, California
| | - Dina Tiniakos
- Newcastle University, Newcastle upon Tyne, United Kingdom; Medical School, National and Kapodistrian University of Athens, Athens, Greece
| | | | | | | | - Hardik Patel
- Verily Life Sciences LLC, South San Francisco, California
| | | | | | | | | | | | - Charles C Kim
- Verily Life Sciences LLC, South San Francisco, California
| | | |
19
Asaf MZ, Rao B, Akram MU, Khawaja SG, Khan S, Truong TM, Sekhon P, Khan IJ, Abbasi MS. Dual contrastive learning based image-to-image translation of unstained skin tissue into virtually stained H&E images. Sci Rep 2024; 14:2335. [PMID: 38282056 DOI: 10.1038/s41598-024-52833-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2023] [Accepted: 01/24/2024] [Indexed: 01/30/2024] Open
Abstract
Staining is a crucial step in histopathology that prepares tissue sections for microscopic examination. Hematoxylin and eosin (H&E) staining, also known as basic or routine staining, is used in 80% of histopathology slides worldwide. To enhance the histopathology workflow, recent research has focused on integrating generative artificial intelligence and deep learning models. These models have the potential to improve staining accuracy, reduce staining time, and minimize the use of hazardous chemicals, making histopathology a safer and more efficient field. In this study, we introduce a novel three-stage, dual contrastive learning-based, image-to-image generative (DCLGAN) model for virtually applying an "H&E stain" to unstained skin tissue images. The proposed model utilizes a unique learning setting comprising two pairs of generators and discriminators. By employing contrastive learning, our model maximizes the mutual information between traditional H&E-stained and virtually stained H&E patches. Our dataset consists of pairs of unstained and H&E-stained images, scanned with a brightfield microscope at 20× magnification, providing a comprehensive set of training and testing images for evaluating the efficacy of our proposed model. Two metrics, Fréchet inception distance (FID) and kernel inception distance (KID), were used to quantitatively evaluate the virtually stained slides. Our analysis revealed that the average FID score between virtually stained and H&E-stained images (80.47) was considerably lower than that between unstained and virtually stained slides (342.01) or between unstained and H&E-stained slides (320.4), indicating similarity between the virtual and H&E stains. Similarly, the mean KID score between H&E-stained and virtually stained images (0.022) was significantly lower than the mean KID score between unstained and H&E-stained (0.28) or unstained and virtually stained (0.31) images. In addition, a group of experienced dermatopathologists evaluated traditional and virtually stained images and demonstrated an average agreement of 78.8% and 90.2% for paired and single virtually stained image evaluations, respectively. Our study demonstrates that the proposed three-stage dual contrastive learning-based image-to-image generative model is effective in generating virtually stained images, as indicated by the quantitative metrics and grader evaluations. Our findings also suggest that GAN models have the potential to replace traditional H&E staining, reducing both time and environmental impact. This study highlights the promise of virtual staining as a viable alternative to traditional staining techniques in histopathology.
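The KID metric reported here is the unbiased squared maximum mean discrepancy with a cubic polynomial kernel. A minimal NumPy sketch, operating on pre-extracted feature arrays rather than the authors' code, is:

```python
import numpy as np

def kid(feats_a, feats_b):
    """Unbiased squared MMD with the polynomial kernel
    k(x, y) = (x.y / d + 1)**3, i.e. the KID estimator, applied to
    pre-extracted (n_samples, n_features) feature arrays.
    """
    d = feats_a.shape[1]
    kern = lambda x, y: (x @ y.T / d + 1.0) ** 3
    kaa, kbb, kab = kern(feats_a, feats_a), kern(feats_b, feats_b), kern(feats_a, feats_b)
    m, n = len(feats_a), len(feats_b)
    # drop diagonal self-similarities for the unbiased within-set terms
    return float((kaa.sum() - np.trace(kaa)) / (m * (m - 1))
                 + (kbb.sum() - np.trace(kbb)) / (n * (n - 1))
                 - 2.0 * kab.mean())

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(100, 8))
fake_good = rng.normal(0.0, 1.0, size=(100, 8))   # same distribution
fake_bad = rng.normal(3.0, 1.0, size=(100, 8))    # shifted distribution
```

Unlike FID, the unbiased MMD estimator makes KID less sensitive to the number of samples, which is why both metrics are often reported together.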
Collapse
Affiliation(s)
- Muhammad Zeeshan Asaf
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
| | - Babar Rao
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Department of Dermatology, Weill Cornell Medicine, New York, NY, 10021, USA
| | - Muhammad Usman Akram
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan.
| | - Sajid Gul Khawaja
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
| | - Samavia Khan
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
| | - Thu Minh Truong
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Department of Pathology, Immunology and Laboratory Medicine, New Jersey Medical School, 185 South Orange Ave, Newark, NJ, 07103, USA
| | - Palveen Sekhon
- EIV Diagnostics, Fresno, CA, USA
- University of California, San Francisco School of Medicine, San Francisco, USA
| | - Irfan J Khan
- Department of Pathology, St. Luke's University Health Network, Bethlehem, PA, 18015, USA
| | | |
Collapse
|
20
|
Sloboda T, Hudec L, Halinkovič M, Benesova W. Attention-Enhanced Unpaired xAI-GANs for Transformation of Histological Stain Images. J Imaging 2024; 10:32. [PMID: 38392081 PMCID: PMC10889577 DOI: 10.3390/jimaging10020032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 01/23/2024] [Accepted: 01/23/2024] [Indexed: 02/24/2024] Open
Abstract
Histological staining is the primary method for confirming cancer diagnoses, but certain types, such as p63 staining, can be expensive and potentially damaging to tissues. In our research, we innovate by generating p63-stained images from H&E-stained slides for metaplastic breast cancer. This is a crucial development, considering the high costs and tissue risks associated with direct p63 staining. Our approach employs an advanced CycleGAN architecture, xAI-CycleGAN, enhanced with context-based loss to maintain structural integrity. The inclusion of convolutional attention in our model distinguishes between structural and color details more effectively, thus significantly enhancing the visual quality of the results. This approach shows a marked improvement over the base xAI-CycleGAN and standard CycleGAN models, offering the benefits of a more compact network and faster training even with the inclusion of attention.
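The CycleGAN backbone referenced above couples adversarial losses with a cycle-consistency term that forces H&E → p63 → H&E round trips to reproduce the input. A minimal NumPy sketch of that term follows (illustrative only: the "generators" are toy invertible functions, not trained networks, and the weight `lam` mirrors the conventional CycleGAN default rather than this paper's setting).

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the norm CycleGAN uses for cycle consistency."""
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(x_he, x_p63, g_he_to_p63, g_p63_to_he, lam=10.0):
    """Cycle-consistency term of a CycleGAN-style objective.

    Translating H&E -> p63 -> H&E (and the reverse cycle) should
    reproduce the input image; lam weights this term against the
    adversarial losses, which are omitted here.
    """
    recon_he = g_p63_to_he(g_he_to_p63(x_he))
    recon_p63 = g_he_to_p63(g_p63_to_he(x_p63))
    return lam * (l1(x_he, recon_he) + l1(x_p63, recon_p63))

# Toy invertible "generators": a perfect round trip gives ~zero loss.
fwd = lambda img: img * 0.5 + 1.0
bwd = lambda img: (img - 1.0) * 2.0
x_he = np.linspace(0.0, 1.0, 16).reshape(4, 4)
x_p63 = np.linspace(0.2, 0.8, 16).reshape(4, 4)
loss = cycle_consistency_loss(x_he, x_p63, fwd, bwd)
```

Because no paired H&E/p63 slides are needed, this loss is what makes the unpaired training setting of the paper possible.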
Collapse
Affiliation(s)
- Tibor Sloboda
- Faculty of Informatics and Information Technology, Slovak Technical University, Ilkovičova 2, 842 16 Bratislava, Slovakia
| | - Lukáš Hudec
- Faculty of Informatics and Information Technology, Slovak Technical University, Ilkovičova 2, 842 16 Bratislava, Slovakia
| | - Matej Halinkovič
- Faculty of Informatics and Information Technology, Slovak Technical University, Ilkovičova 2, 842 16 Bratislava, Slovakia
| | - Wanda Benesova
- Faculty of Informatics and Information Technology, Slovak Technical University, Ilkovičova 2, 842 16 Bratislava, Slovakia
| |
Collapse
|
21
|
Boktor M, Tweel JED, Ecclestone BR, Ye JA, Fieguth P, Haji Reza P. Multi-channel feature extraction for virtual histological staining of photon absorption remote sensing images. Sci Rep 2024; 14:2009. [PMID: 38263394 PMCID: PMC10805725 DOI: 10.1038/s41598-024-52588-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 01/20/2024] [Indexed: 01/25/2024] Open
Abstract
Accurate and fast histological staining is crucial in histopathology, impacting diagnostic precision and reliability. Traditional staining methods are time-consuming and subjective, causing delays in diagnosis. Digital pathology plays a vital role in advancing and optimizing histology processes to improve efficiency and reduce turnaround times. This study introduces a novel deep learning-based framework for virtual histological staining using photon absorption remote sensing (PARS) images. By extracting features from PARS time-resolved signals using a variant of the K-means method, valuable multi-modal information is captured. The proposed multi-channel cycleGAN model expands on the traditional cycleGAN framework, allowing the inclusion of additional features. Experimental results reveal that specific combinations of features outperform the conventional channels by improving the labeling of tissue structures prior to model training. Applied to human skin and mouse brain tissue, the results underscore the significance of choosing the optimal combination of features, which yields substantial visual and quantitative agreement between the virtually stained images and the gold-standard chemically stained hematoxylin and eosin images, surpassing the performance of other feature combinations. Accurate virtual staining is valuable for reliable diagnostic information, aiding pathologists in disease classification, grading, and treatment planning. This study aims to advance label-free histological imaging and opens doors for intraoperative microscopy applications.
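The feature-extraction step clusters time-resolved PARS signals before they are fed to the multi-channel cycleGAN. The paper uses a K-means variant; a plain Lloyd's-algorithm sketch conveys the idea (function name, deterministic initialization, and the synthetic decay signals are illustrative assumptions, not the paper's pipeline).

```python
import numpy as np

def kmeans_labels(signals, k, iters=50):
    """Plain Lloyd's k-means, a stand-in for the paper's K-means variant.

    `signals` is (n_pixels, n_timepoints), one time-resolved PARS signal
    per row; the returned label map can serve as an extra input channel
    for a multi-channel cycleGAN.
    """
    # Deterministic initialization: k samples spread across the dataset.
    idx = np.linspace(0, len(signals) - 1, k).astype(int)
    centers = signals[idx].copy()
    labels = np.zeros(len(signals), dtype=int)
    for _ in range(iters):
        # Assign every signal to its nearest center (Euclidean distance).
        dists = np.linalg.norm(signals[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned signals.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = signals[labels == j].mean(axis=0)
    return labels, centers

# Synthetic example: fast- vs slow-decay transients separate cleanly.
t = np.linspace(0.0, 1.0, 32)
rng = np.random.default_rng(1)
fast = np.exp(-8.0 * t) + 0.02 * rng.normal(size=(100, 32))
slow = np.exp(-1.0 * t) + 0.02 * rng.normal(size=(100, 32))
labels, _ = kmeans_labels(np.vstack([fast, slow]), k=2)
```

The cluster labels act as a coarse, label-free segmentation of tissue structure, which is what the abstract credits for the improved training signal.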
Collapse
Affiliation(s)
- Marian Boktor
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
| | - James E D Tweel
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- illumiSonics Inc., 22 King Street South, Suite 300, Waterloo, ON, N2J 1N8, Canada
| | - Benjamin R Ecclestone
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- illumiSonics Inc., 22 King Street South, Suite 300, Waterloo, ON, N2J 1N8, Canada
| | - Jennifer Ai Ye
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
| | - Paul Fieguth
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
| | - Parsin Haji Reza
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada.
| |
Collapse
|
22
|
Sun J, Yang B, Koukourakis N, Guck J, Czarske JW. AI-driven projection tomography with multicore fibre-optic cell rotation. Nat Commun 2024; 15:147. [PMID: 38167247 PMCID: PMC10762230 DOI: 10.1038/s41467-023-44280-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 12/06/2023] [Indexed: 01/05/2024] Open
Abstract
Optical tomography has emerged as a non-invasive imaging method, providing three-dimensional insights into subcellular structures and thereby enabling a deeper understanding of cellular functions, interactions, and processes. Conventional optical tomography methods are constrained by a limited illumination scanning range, leading to anisotropic resolution and incomplete imaging of cellular structures. To overcome this problem, we employ a compact multi-core fibre-optic cell rotator system that facilitates precise optical manipulation of cells within a microfluidic chip, achieving full-angle projection tomography with isotropic resolution. Moreover, we demonstrate an AI-driven tomographic reconstruction workflow, marking a paradigm shift from conventional computational methods, which often demand manual processing, to a fully autonomous process. The performance of the proposed cell rotation tomography approach is validated through the three-dimensional reconstruction of cell phantoms and HL60 human cancer cells. The versatility of this learning-based tomographic reconstruction workflow paves the way for its broad application across diverse tomographic imaging modalities, including but not limited to flow cytometry tomography and acoustic rotation tomography. Therefore, this AI-driven approach can propel advancements in cell biology, aiding in the inception of pioneering therapeutics, and augmenting early-stage cancer diagnostics.
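For intuition on what the learned reconstruction replaces: classical parallel-beam projection tomography records 1-D projections of the sample at many rotation angles and recovers the image by back-projection. The sketch below is an unfiltered back-projection in NumPy/SciPy (illustrative only; real filtered back-projection adds a ramp filter, and the paper substitutes a trained network for this whole step).

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Parallel-beam projections: rotate the sample, integrate along rows."""
    return np.array([rotate(image, -ang, reshape=False, order=1).sum(axis=0)
                     for ang in angles_deg])

def back_project(sinogram, angles_deg, size):
    """Unfiltered back-projection: smear each 1-D projection across the
    image plane at its acquisition angle and average over angles."""
    recon = np.zeros((size, size))
    for proj, ang in zip(sinogram, angles_deg):
        smear = np.tile(proj, (size, 1))  # constant along the ray direction
        recon += rotate(smear, ang, reshape=False, order=1)
    return recon / len(angles_deg)

# Full-angle coverage (0-180 deg), as enabled by fibre-optic cell rotation.
size = 64
phantom = np.zeros((size, size))
yy, xx = np.mgrid[:size, :size]
phantom[(yy - 32) ** 2 + (xx - 40) ** 2 < 36] = 1.0  # off-centre "organelle"
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
recon = back_project(forward_project(phantom, angles), angles, size)
```

The full 0-180° angular coverage is what yields isotropic resolution; a restricted scanning range leaves the characteristic missing-wedge blur that the abstract describes as anisotropic resolution.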
Collapse
Affiliation(s)
- Jiawei Sun
- Shanghai Artificial Intelligence Laboratory, Longwen Road 129, Xuhui District, 200232, Shanghai, China.
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany.
| | - Bin Yang
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany
| | - Nektarios Koukourakis
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany
| | - Jochen Guck
- Max Planck Institute for the Science of Light & Max Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Juergen W Czarske
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany.
- Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany.
- Institute of Applied Physics, TU Dresden, Dresden, Germany.
| |
Collapse
|
23
|
Liu JTC, Chow SSL, Colling R, Downes MR, Farré X, Humphrey P, Janowczyk A, Mirtti T, Verrill C, Zlobec I, True LD. Engineering the future of 3D pathology. J Pathol Clin Res 2024; 10:e347. [PMID: 37919231 PMCID: PMC10807588 DOI: 10.1002/cjp2.347] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 10/06/2023] [Accepted: 10/15/2023] [Indexed: 11/04/2023]
Abstract
In recent years, technological advances in tissue preparation, high-throughput volumetric microscopy, and computational infrastructure have enabled rapid developments in nondestructive 3D pathology, in which high-resolution histologic datasets are obtained from thick tissue specimens, such as whole biopsies, without the need for physical sectioning onto glass slides. While 3D pathology generates massive datasets that are attractive for automated computational analysis, there is also a desire to use 3D pathology to improve the visual assessment of tissue histology. In this perspective, we discuss and provide examples of potential advantages of 3D pathology for the visual assessment of clinical specimens and the challenges of dealing with large 3D datasets (of individual or multiple specimens) that pathologists have not been trained to interpret. We discuss the need for artificial intelligence triaging algorithms and explainable analysis methods to assist pathologists or other domain experts in the interpretation of these novel, often complex, large datasets.
Collapse
Affiliation(s)
- Jonathan TC Liu
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Laboratory Medicine & Pathology, University of Washington School of Medicine, Seattle, USA
- Department of Bioengineering, University of Washington, Seattle, USA
| | - Sarah SL Chow
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
| | | | | | | | - Peter Humphrey
- Department of Urology, Yale School of Medicine, New Haven, CT, USA
| | - Andrew Janowczyk
- Wallace H Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Geneva University Hospitals, Geneva, Switzerland
| | - Tuomas Mirtti
- Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Emory University School of Medicine, Atlanta, GA, USA
| | - Clare Verrill
- John Radcliffe Hospital, University of Oxford, Oxford, UK
- NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
| | - Inti Zlobec
- Institute for Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
| | - Lawrence D True
- Department of Laboratory Medicine & Pathology, University of Washington School of Medicine, Seattle, USA
| |
Collapse
|
24
|
Ivanov IE, Hirata-Miyasaki E, Chandler T, Kovilakam RC, Liu Z, Liu C, Leonetti MD, Huang B, Mehta SB. Mantis: high-throughput 4D imaging and analysis of the molecular and physical architecture of cells. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.12.19.572435. [PMID: 38187521 PMCID: PMC10769231 DOI: 10.1101/2023.12.19.572435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2024]
Abstract
High-throughput dynamic imaging of cells and organelles is important for parsing complex cellular responses. We report a high-throughput 4D microscope, named Mantis, that combines two complementary, gentle, live-imaging technologies: remote-refocus label-free microscopy and oblique light-sheet fluorescence microscopy. We also report open-source software for automated acquisition, registration, and reconstruction, and virtual staining software for single-cell segmentation and phenotyping. Mantis enabled high-content correlative imaging of molecular components and the physical architecture of 20 cell lines every 15 minutes over 7.5 hours, and also detailed measurements of the impacts of viral infection on the architecture of host cells and host proteins. The Mantis platform can enable high-throughput profiling of intracellular dynamics, long-term imaging and analysis of cellular responses to stress, and live cell optical screens to dissect gene regulatory networks.
Collapse
Affiliation(s)
- Ivan E. Ivanov
- Chan Zuckerberg Biohub San Francisco, San Francisco, United States
| | | | - Talon Chandler
- Chan Zuckerberg Biohub San Francisco, San Francisco, United States
| | - Rasmi Cheloor Kovilakam
- Department of Pharmaceutical Chemistry, University of California San Francisco, San Francisco, United States
| | - Ziwen Liu
- Chan Zuckerberg Biohub San Francisco, San Francisco, United States
| | - Chad Liu
- Chan Zuckerberg Biohub San Francisco, San Francisco, United States
| | | | - Bo Huang
- Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Department of Pharmaceutical Chemistry, University of California San Francisco, San Francisco, United States
| | - Shalin B. Mehta
- Chan Zuckerberg Biohub San Francisco, San Francisco, United States
| |
Collapse
|
25
|
Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2023:S0962-8924(23)00228-3. [PMID: 38030542 DOI: 10.1016/j.tcb.2023.10.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 09/30/2023] [Accepted: 10/13/2023] [Indexed: 12/01/2023]
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Collapse
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
| | - Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
| | - Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
| | - Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK.
| |
Collapse
|
26
|
Kim J, Choi W, Yoo D, Kim M, Cho H, Sung HJ, Choi G, Uh J, Kim J, Go H, Choi KH. Solution-free and simplified H&E staining using a hydrogel-based stamping technology. Front Bioeng Biotechnol 2023; 11:1292785. [PMID: 38026905 PMCID: PMC10665566 DOI: 10.3389/fbioe.2023.1292785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Accepted: 10/30/2023] [Indexed: 12/01/2023] Open
Abstract
Hematoxylin and eosin (H&E) staining has been widely used as a fundamental and essential tool for diagnosing diseases and understanding biological phenomena by observing cellular arrangements and tissue morphological changes. However, conventional staining methods commonly involve solution-based, complex, multistep processes that are susceptible to user-handling errors. Moreover, inconsistent staining results owing to staining artifacts pose real challenges for accurate diagnosis. This study introduces a solution-free H&E staining method based on agarose hydrogel patches that is expected to represent a valuable tool to overcome the limitations of the solution-based approach. Using two agarose gel-based hydrogel patches containing hematoxylin and eosin dyes, H&E staining can be performed through serial stamping processes, minimizing color variation from handling errors. This method allows easy adjustments of the staining color by controlling the stamping time, effectively addressing variations in staining results caused by various artifacts, such as tissue processing and thickness. Moreover, the solution-free approach eliminates the need for water, making it applicable even in environmentally limited middle- and low-income countries, while still achieving a staining quality equivalent to that of the conventional method. In summary, this hydrogel-based H&E staining method can be used by researchers and medical professionals in resource-limited settings as a powerful tool to diagnose and understand biological phenomena.
Collapse
Affiliation(s)
- Jinho Kim
- Noul Co., Ltd., Yongin-si, Republic of Korea
| | | | - Dahyeon Yoo
- Noul Co., Ltd., Yongin-si, Republic of Korea
| | - Mijin Kim
- Noul Co., Ltd., Yongin-si, Republic of Korea
| | - Haeyon Cho
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Hyun-Jung Sung
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Gyuheon Choi
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jisu Uh
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jinseong Kim
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Heounjeong Go
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | | |
Collapse
|
27
|
Yilmaz A, Aydin T, Varol R. Virtual staining for pixel-wise and quantitative analysis of single cell images. Sci Rep 2023; 13:19178. [PMID: 37932315 PMCID: PMC10628122 DOI: 10.1038/s41598-023-45150-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2023] [Accepted: 10/16/2023] [Indexed: 11/08/2023] Open
Abstract
Immunocytochemical staining of microorganisms and cells has long been a popular method for examining their specific subcellular structures in greater detail. Recently, generative networks have emerged as an alternative to traditional immunostaining techniques. These networks infer fluorescence signatures from various imaging modalities and then virtually apply staining to the images in a digital environment. In numerous studies, virtual staining models have been trained on histopathology slides or intricate subcellular structures to enhance their accuracy and applicability. Despite the advancements in virtual staining technology, utilizing this method for quantitative analysis of microscopic images still poses a significant challenge. To address this issue, we propose a straightforward and automated approach for pixel-wise image-to-image translation. Our primary objective in this research is to leverage advanced virtual staining techniques to accurately measure the DNA fragmentation index in unstained sperm images. This not only offers a non-invasive approach to gauging sperm quality, but also paves the way for streamlined and efficient analyses without the constraints and potential biases introduced by traditional staining processes. This novel approach takes into account the limitations of conventional techniques and incorporates improvements to bolster the reliability of the virtual staining process. To further refine the results, we discuss various denoising techniques that can be employed to reduce the impact of background noise on the digital images. Additionally, we present a pixel-wise image matching algorithm designed to minimize the error caused by background noise and to prevent the introduction of bias into the analysis. By combining these approaches, we aim to develop a more effective and reliable method for quantitative analysis of virtually stained microscopic images, ultimately enhancing the study of microorganisms and cells at the subcellular level.
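The quantitative goal described above, measuring a DNA fragmentation index from virtually stained pixels, boils down to thresholding the predicted stain inside a cell mask after background correction. A hedged sketch of that pixel-wise computation follows (the function names, threshold, background-subtraction step, and toy image are illustrative assumptions, not the paper's exact pipeline).

```python
import numpy as np

def dna_fragmentation_index(stain_map, cell_mask, threshold=0.5):
    """Pixel-wise index from a virtual-stain intensity map.

    `stain_map` holds per-pixel fragmentation-stain intensities predicted
    by the virtual-staining network (values in [0, 1]); `cell_mask` marks
    pixels belonging to the cell. The index is the stained fraction of
    the cell area.
    """
    cell_pixels = stain_map[cell_mask]
    if cell_pixels.size == 0:
        return 0.0
    return float((cell_pixels > threshold).mean())

def subtract_background(image, mask):
    """Simple background-noise correction: subtract the median intensity
    of the non-cell pixels, clipping at zero."""
    background = np.median(image[~mask])
    return np.clip(image - background, 0.0, None)

# Toy example: a 10x10-pixel cell in which 30 of 100 pixels are stained.
stain = np.zeros((20, 20))
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True
stain[5:15, 5:8] = 0.9   # 10 rows x 3 cols = 30 "fragmented" pixels
stain += 0.1             # uniform background offset
corrected = subtract_background(stain, mask)
dfi = dna_fragmentation_index(corrected, mask)  # 30/100 stained pixels
```

Restricting the statistic to masked pixels and removing the background offset first is what keeps the index comparable across images, which is the bias issue the abstract raises for background noise.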
Collapse
Affiliation(s)
- Abdurrahim Yilmaz
- Universität der Bundeswehr München, 85579, Neubiberg, Germany
- Imperial College London, London, SW7 2BX, United Kingdom
| | - Tuelay Aydin
- Universität der Bundeswehr München, 85579, Neubiberg, Germany
| | | |
Collapse
|
28
|
Waqas A, Bui MM, Glassy EF, El Naqa I, Borkowski P, Borkowski AA, Rasool G. Revolutionizing Digital Pathology With the Power of Generative Artificial Intelligence and Foundation Models. J Transl Med 2023; 103:100255. [PMID: 37757969 DOI: 10.1016/j.labinv.2023.100255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 09/06/2023] [Accepted: 09/19/2023] [Indexed: 09/29/2023] Open
Abstract
Digital pathology has transformed the traditional pathology practice of analyzing tissue under a microscope into a computer vision workflow. Whole-slide imaging allows pathologists to view and analyze microscopic images on a computer monitor, enabling computational pathology. By leveraging artificial intelligence (AI) and machine learning (ML), computational pathology has emerged as a promising field in recent years. Recently, task-specific AI/ML (eg, convolutional neural networks) has risen to the forefront, achieving above-human performance in many image-processing and computer vision tasks. The performance of task-specific AI/ML models depends on the availability of many annotated training datasets, which presents a rate-limiting factor for AI/ML development in pathology. Task-specific AI/ML models cannot benefit from multimodal data and lack generalization, eg, the AI models often struggle to generalize to new datasets or unseen variations in image acquisition, staining techniques, or tissue types. The 2020s are witnessing the rise of foundation models and generative AI. A foundation model is a large AI model trained using sizable data, which is later adapted (or fine-tuned) to perform different tasks using a modest amount of task-specific annotated data. These AI models provide in-context learning, can self-correct mistakes, and promptly adjust to user feedback. In this review, we provide a brief overview of recent advances in computational pathology enabled by task-specific AI, their challenges and limitations, and then introduce various foundation models. We propose to create a pathology-specific generative AI based on multimodal foundation models and present its potentially transformative role in digital pathology. 
We describe different use cases, delineating how it could serve as an expert companion of pathologists and help them efficiently and objectively perform routine laboratory tasks, including quantifying image analysis, generating pathology reports, diagnosis, and prognosis. We also outline the potential role that foundation models and generative AI can play in standardizing the pathology laboratory workflow, education, and training.
Collapse
Affiliation(s)
- Asim Waqas
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Electrical Engineering, University of South Florida, Tampa, Florida.
| | - Marilyn M Bui
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Pathology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; University of South Florida, Morsani College of Medicine, Tampa, Florida
| | - Eric F Glassy
- Affiliated Pathologists Medical Group, Inc., Rancho Dominguez, California
| | - Issam El Naqa
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida
| | - Piotr Borkowski
- Quest Diagnostics/Ameripath, Tampa, Florida; Center of Excellence for Digital and AI-Empowered Pathology, Quest Diagnostics, Tampa, Florida
| | - Andrew A Borkowski
- University of South Florida, Morsani College of Medicine, Tampa, Florida; James A. Haley Veterans' Hospital, Tampa, Florida; National Artificial Intelligence Institute, Washington, District of Columbia
| | - Ghulam Rasool
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Electrical Engineering, University of South Florida, Tampa, Florida; University of South Florida, Morsani College of Medicine, Tampa, Florida; Department of Neuro-Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida
| |
Collapse
|
29
|
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. [PMID: 37704183 DOI: 10.1016/j.semcancer.2023.09.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 08/03/2023] [Accepted: 09/05/2023] [Indexed: 09/15/2023]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in segmentation, diagnosis, and prognosis of breast cancer. In this review, we overview the recent advancements of AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes such as metastasis, treatment response, and survival by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Collapse
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
| |
Collapse
|
30
|
Fanous MJ, Pillar N, Ozcan A. Digital staining facilitates biomedical microscopy. FRONTIERS IN BIOINFORMATICS 2023; 3:1243663. [PMID: 37564725 PMCID: PMC10411189 DOI: 10.3389/fbinf.2023.1243663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 07/17/2023] [Indexed: 08/12/2023] Open
Abstract
Traditional staining of biological specimens for microscopic imaging entails time-consuming, laborious, and costly procedures, in addition to producing inconsistent labeling and causing irreversible sample damage. In recent years, computational "virtual" staining using deep learning techniques has evolved into a robust and comprehensive application for streamlining the staining process without typical histochemical staining-related drawbacks. Such virtual staining techniques can also be combined with neural networks designed to correct various microscopy aberrations, such as out-of-focus or motion blur artifacts, and improve upon diffraction-limited resolution. Here, we highlight how such methods lead to a host of new opportunities that can significantly improve both sample preparation and imaging in biomedical microscopy.
Collapse
Affiliation(s)
- Michael John Fanous: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Nir Pillar: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States; Bioengineering Department, University of California, Los Angeles, CA, United States
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States; Bioengineering Department, University of California, Los Angeles, CA, United States; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, United States; Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
31
Prabhakaran S, Yapp C, Baker GJ, Beyer J, Chang YH, Creason AL, Krueger R, Muhlich J, Patterson NH, Sidak K, Sudar D, Taylor AJ, Ternes L, Troidl J, Xie Y, Sokolov A, Tyson DR. Addressing persistent challenges in digital image analysis of cancerous tissues. bioRxiv [Preprint] 2023:2023.07.21.548450. [PMID: 37547011 PMCID: PMC10401923 DOI: 10.1101/2023.07.21.548450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/08/2023]
Abstract
The National Cancer Institute (NCI) supports many research programs and consortia, many of which use imaging as a major modality for characterizing cancerous tissue. A trans-consortia Image Analysis Working Group (IAWG) was established in 2019 with a mission to disseminate imaging-related work and foster collaborations. In 2022, the IAWG held a virtual hackathon focused on the challenges of analyzing high-dimensional datasets from fixed cancerous tissues. Standard image processing techniques have automated feature extraction, but the next generation of imaging data requires more advanced methods to fully utilize the available information. In this perspective, we discuss the current limitations of automated analysis of multiplexed tissue images, the first steps toward a deeper understanding of these limitations, the solutions that have been developed, the new or refined approaches developed during the Image Analysis Hackathon 2022, and where further effort is required. The outstanding problems addressed in the hackathon fell into three main themes: 1) challenges to cell type classification and assessment, 2) translation and visual representation of spatial aspects of high-dimensional data, and 3) scaling digital image analyses to large (multi-TB) datasets. We describe the rationale for each specific challenge and the progress made toward addressing it during the hackathon. We also suggest areas that would benefit from more focus and offer insight into broader challenges that the community will need to address as new technologies are developed and integrated into the broad range of image-based modalities and analytical resources already in use within the cancer research community.
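The third hackathon theme — scaling image analyses to multi-TB datasets — is commonly tackled by tiling: processing fixed-size chunks independently and reassembling a summary map. A minimal sketch, assuming NumPy and a toy 4x4 image; the `tile_mean_map` helper is hypothetical, and real pipelines add tile overlap, lazy-I/O formats such as OME-TIFF or Zarr, and parallel workers, none of which is shown:

```python
import numpy as np

# Minimal sketch of tile-based processing for images too large to analyze
# whole: split the image into fixed-size tiles, compute a per-tile statistic,
# and reassemble a downsampled summary map.
def tile_mean_map(image: np.ndarray, tile: int) -> np.ndarray:
    h, w = image.shape
    out = np.empty((h // tile, w // tile))
    for i in range(h // tile):
        for j in range(w // tile):
            out[i, j] = image[i*tile:(i+1)*tile, j*tile:(j+1)*tile].mean()
    return out

image = np.arange(16.0).reshape(4, 4)   # stand-in for one image channel
summary = tile_mean_map(image, 2)
print(summary)  # 2x2 map of per-tile means
```

Because each tile is processed independently, the inner loop parallelizes trivially across workers or machines, which is what makes this pattern viable at the multi-TB scale the hackathon targeted.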
32
Stanciu SG, König K, Song YM, Wolf L, Charitidis CA, Bianchini P, Goetz M. Toward next-generation endoscopes integrating biomimetic video systems, nonlinear optical microscopy, and deep learning. Biophys Rev 2023; 4:021307. [PMID: 38510341 PMCID: PMC10903409 DOI: 10.1063/5.0133027] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 11/15/2022] [Accepted: 05/26/2023] [Indexed: 03/22/2024]
Abstract
According to the World Health Organization, the proportion of the world's population over 60 years will approximately double by 2050. This progressive increase in the elderly population will lead to a dramatic growth of age-related diseases, placing tremendous pressure on the sustainability of healthcare systems globally. In this context, finding more efficient ways to address cancers, a set of diseases whose incidence is correlated with age, is of utmost importance. Preventing cancers to decrease morbidity relies on identifying precursor lesions before the onset of the disease, or at least on diagnosis at an early stage. In this article, after briefly discussing some of the most prominent endoscopic approaches for gastric cancer diagnostics, we review relevant progress in three emerging technologies that have significant potential to play pivotal roles in next-generation endoscopy systems: biomimetic vision (with special focus on compound-eye cameras), nonlinear optical microscopies, and deep learning. Such systems are urgently needed to enhance the three major steps required for the successful diagnosis of gastrointestinal cancers: detection, characterization, and confirmation of suspicious lesions. In the final part, we discuss challenges that lie en route to translating these technologies into next-generation endoscopes that could enhance gastrointestinal imaging, and depict a possible configuration of a system capable of (i) biomimetic endoscopic vision enabling easier detection of lesions, (ii) label-free in vivo tissue characterization, and (iii) intelligently automated gastrointestinal cancer diagnostics.
Affiliation(s)
- Stefan G. Stanciu: Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
- Lior Wolf: School of Computer Science, Tel Aviv University, Tel-Aviv, Israel
- Costas A. Charitidis: Research Lab of Advanced, Composite, Nano-Materials and Nanotechnology, School of Chemical Engineering, National Technical University of Athens, Athens, Greece
- Paolo Bianchini: Nanoscopy and NIC@IIT, Italian Institute of Technology, Genoa, Italy
- Martin Goetz: Medizinische Klinik IV-Gastroenterologie/Onkologie, Kliniken Böblingen, Klinikverbund Südwest, Böblingen, Germany
33
Tsai HF, Podder S, Chen PY. Microsystem Advances through Integration with Artificial Intelligence. Micromachines 2023; 14:826. [PMID: 37421059 DOI: 10.3390/mi14040826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/09/2023] [Revised: 04/04/2023] [Accepted: 04/06/2023] [Indexed: 07/09/2023]
Abstract
Microfluidics is a rapidly growing discipline that studies and manipulates fluids at reduced length scales and volumes, typically micro- or nanoliters. At these scales, the larger surface-to-volume ratio yields clear advantages: low reagent consumption, faster reaction kinetics, and more compact systems. However, miniaturization of microfluidic chips and systems imposes stricter tolerances on their design and control for interdisciplinary applications. Recent advances in artificial intelligence (AI) have brought innovation to microfluidics, from design, simulation, automation, and optimization to bioanalysis and data analytics. The Navier-Stokes equations, the partial differential equations describing viscous fluid motion, have no known general analytical solution in their complete form; in microfluidics, low inertia and laminar flow allow them to be simplified and approximated numerically with fair accuracy. Neural networks trained with rules of physical knowledge introduce a new possibility for predicting physicochemical behavior. The combination of microfluidics and automation can produce large amounts of data, from which machine learning can extract features and patterns that are difficult for a human to discern. Integration with AI therefore has the potential to revolutionize the microfluidic workflow by enabling precision control and automated data analysis. Deployment of smart microfluidics may prove tremendously beneficial in future applications including high-throughput drug discovery, rapid point-of-care testing (POCT), and personalized medicine. In this review, we summarize key microfluidic advances integrated with AI and discuss the outlook and possibilities of combining AI and microfluidics.
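The abstract's premise that microfluidic flow is low-inertia and laminar can be verified with a back-of-envelope Reynolds number, Re = ρvL/μ, the dimensionless ratio of inertial to viscous forces; the parameter values below are illustrative and not taken from the paper:

```python
# Back-of-envelope check that microfluidic flows sit deep in the laminar
# regime, which is what permits the Navier-Stokes simplifications described
# above. Parameter values are illustrative, not from the reviewed work.
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / mu (dimensionless ratio of inertial to viscous forces)."""
    return density * velocity * length / viscosity

# Water in a 100-micron channel flowing at 1 mm/s, typical microfluidic conditions.
re = reynolds_number(density=1000.0,    # kg/m^3
                     velocity=1e-3,     # m/s
                     length=100e-6,     # m (characteristic channel dimension)
                     viscosity=1e-3)    # Pa*s
print(re)  # 0.1, far below the ~2000 threshold for turbulence in pipe flow
```

At Re ~ 0.1, inertial terms are negligible and flow is fully laminar, which is why numerical (and learned) approximations of the simplified equations perform well in this regime.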
Affiliation(s)
- Hsieh-Fu Tsai: Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan; Department of Neurosurgery, Chang Gung Memorial Hospital, Keelung, Keelung City 204, Taiwan; Center for Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
- Soumyajit Podder: Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
- Pin-Yuan Chen: Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan; Department of Neurosurgery, Chang Gung Memorial Hospital, Keelung, Keelung City 204, Taiwan