1. Zeng S, Li X, Liu Y, Huang Q, He Y. Automatic Annotation Diagnostic Framework for Nasopharyngeal Carcinoma via Pathology-Fidelity GAN and Prior-Driven Classification. Bioengineering (Basel) 2024; 11:739. PMID: 39061821; PMCID: PMC11273917; DOI: 10.3390/bioengineering11070739.
Abstract
Non-keratinizing carcinoma is the most common subtype of nasopharyngeal carcinoma (NPC). Its poorly differentiated tumor cells and complex microenvironment present challenges to pathological diagnosis. AI-based pathological models have demonstrated potential in diagnosing NPC, but their reliance on costly manual annotation hinders development. To address these challenges, this paper proposes a deep learning-based framework for diagnosing NPC without manual annotation. The framework comprises a novel unpaired generative network and a prior-driven image classification system. With pathology-fidelity constraints, the generative network achieves accurate digital staining from H&E to EBER images. The classification system leverages staining specificity and pathological prior knowledge to annotate training data automatically and to classify images for NPC diagnosis. This study used 232 cases. The experimental results show that the classification system reached 99.59% accuracy in classifying EBER images, closely matching the diagnostic results of pathologists. Using PF-GAN as the backbone of the framework, the system attained a specificity of 0.8826 in generating EBER images, markedly outperforming other GANs (0.6137, 0.5815). Furthermore, the F1-score of the framework for patch-level diagnosis was 0.9143, exceeding those of fully supervised models (0.9103, 0.8777). To further validate its clinical efficacy, the framework was compared with experienced pathologists at the WSI level, showing comparable NPC diagnosis performance. This low-cost, precise diagnostic framework improves early pathological diagnosis of NPC and offers an innovative strategic direction for AI-based cancer diagnosis.
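As a quick reference for the metrics quoted in this abstract (accuracy, specificity, F1-score), the standard confusion-matrix definitions can be sketched as follows; the counts used in the example are made up for illustration, not the study's data.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)              # true-negative rate
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                   # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, specificity, f1

# Illustrative counts only:
acc, spec, f1 = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```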
Affiliation(s)
- Siqi Zeng
- Medical Optical Technology R&D Center, Research Institute of Tsinghua, Pearl River Delta, Guangzhou 510700, China
- Xinwei Li
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Yiqing Liu
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Qiang Huang
- Shenzhen Shengqiang Technology Co., Ltd., Shenzhen 518055, China
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2. Iyer RR, Applegate CC, Arogundade OH, Bangru S, Berg IC, Emon B, Porras-Gomez M, Hsieh PH, Jeong Y, Kim Y, Knox HJ, Moghaddam AO, Renteria CA, Richard C, Santaliz-Casiano A, Sengupta S, Wang J, Zambuto SG, Zeballos MA, Pool M, Bhargava R, Gaskins HR. Inspiring a convergent engineering approach to measure and model the tissue microenvironment. Heliyon 2024; 10:e32546. PMID: 38975228; PMCID: PMC11226808; DOI: 10.1016/j.heliyon.2024.e32546.
Abstract
Understanding the molecular and physical complexity of the tissue microenvironment (TiME) in the context of its spatiotemporal organization has remained an enduring challenge. Recent advances in engineering and data science now promise the ability to study the structure, functions, and dynamics of the TiME in unprecedented detail; however, many advances still occur in silos that rarely integrate information to study the TiME in its full detail. This review provides an integrative overview of the engineering principles underlying chemical, optical, electrical, mechanical, and computational science to probe, sense, model, and fabricate the TiME. In individual sections, we first summarize the underlying principles, capabilities, and scope of each emerging technology; the breakthrough discoveries it has enabled; and recent, promising innovations. We provide perspectives on the potential of these advances for answering critical questions about the TiME and its role in various disease and developmental processes. Finally, we present an integrative view of the major scientific and educational aspects of studying the TiME.
Affiliation(s)
- Rishyashring R. Iyer
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Catherine C. Applegate
- Division of Nutritional Sciences, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Opeyemi H. Arogundade
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Sushant Bangru
- Department of Biochemistry, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Ian C. Berg
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Bashar Emon
- Department of Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Marilyn Porras-Gomez
- Department of Materials Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Pei-Hsuan Hsieh
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Yoon Jeong
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Yongdeok Kim
- Department of Materials Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Hailey J. Knox
- Department of Chemistry, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Amir Ostadi Moghaddam
- Department of Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Carlos A. Renteria
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Craig Richard
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Ashlie Santaliz-Casiano
- Division of Nutritional Sciences, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Sourya Sengupta
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Jason Wang
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Samantha G. Zambuto
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Maria A. Zeballos
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Marcia Pool
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Rohit Bhargava
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Chemistry, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Chemical and Biochemical Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- NIH/NIBIB P41 Center for Label-free Imaging and Multiscale Biophotonics (CLIMB), University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- H. Rex Gaskins
- Division of Nutritional Sciences, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Animal Sciences, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Biomedical and Translational Sciences, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Pathobiology, University of Illinois Urbana-Champaign, Urbana, IL, 61801, USA
3. Azam AB, Wee F, Väyrynen JP, Yim WWY, Xue YZ, Chua BL, Lim JCT, Somasundaram AC, Tan DSW, Takano A, Chow CY, Khor LY, Lim TKH, Yeong J, Lau MC, Cai Y. Training immunophenotyping deep learning models with the same-section ground truth cell label derivation method improves virtual staining accuracy. Front Immunol 2024; 15:1404640. PMID: 39007128; PMCID: PMC11239356; DOI: 10.3389/fimmu.2024.1404640.
Abstract
Introduction: Deep learning (DL) models predicting biomarker expression in images of hematoxylin and eosin (H&E)-stained tissues can improve access to multi-marker immunophenotyping, crucial for therapeutic monitoring, biomarker discovery, and personalized treatment development. Conventionally, these models are trained on ground truth cell labels derived from IHC-stained tissue sections adjacent to H&E-stained ones, which might be less accurate than labels from the same section. Although many such DL models have been developed, the impact of ground truth cell label derivation methods on their performance has not been studied.
Methodology: In this study, we assess the impact of cell label derivation on H&E model performance, with CD3+ T-cells in lung cancer tissues as a proof of concept. We compare two Pix2Pix generative adversarial network (P2P-GAN)-based virtual staining models: one trained with cell labels obtained from the same tissue section as the H&E-stained section (the 'same-section' model) and one trained on cell labels from an adjacent tissue section (the 'serial-section' model).
Results: The same-section model exhibited significantly improved prediction performance compared to the serial-section model. Furthermore, the same-section model outperformed the serial-section model in stratifying patients within a public lung cancer cohort based on survival outcomes, demonstrating its potential clinical utility.
Discussion: Collectively, our findings suggest that employing ground truth cell labels obtained through the same-section approach boosts immunophenotyping DL solutions.
Affiliation(s)
- Abu Bakr Azam
- School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
- Felicia Wee
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Juha P. Väyrynen
- Translational Medicine Research Unit, Medical Research Center Oulu, Oulu University Hospital, and University of Oulu, Oulu, Finland
- Willa Wen-You Yim
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Yue Zhen Xue
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Bok Leong Chua
- School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
- Jeffrey Chun Tatt Lim
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Angela Takano
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Chun Yuen Chow
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Li Yan Khor
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Tony Kiat Hon Lim
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Joe Yeong
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Mai Chan Lau
- Bioinformatics Institute, Agency for Science, Technology and Research, Matrix, Singapore, Singapore
- Singapore Immunology Network, Agency for Science, Technology and Research, Immunos, Singapore, Singapore
- Yiyu Cai
- School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
4. Tweel JED, Ecclestone BR, Boktor M, Dinakaran D, Mackey JR, Reza PH. Automated Whole Slide Imaging for Label-Free Histology Using Photon Absorption Remote Sensing Microscopy. IEEE Trans Biomed Eng 2024; 71:1901-1912. PMID: 38231822; DOI: 10.1109/tbme.2024.3355296.
Abstract
Objective: Pathologists rely on histochemical stains to impart contrast in thin translucent tissue samples, revealing tissue features necessary for identifying pathological conditions. However, the chemical labeling process is destructive and often irreversible or challenging to undo, imposing practical limits on the number of stains that can be applied to the same tissue section. Here we present an automated label-free whole slide scanner using a photon absorption remote sensing (PARS) microscope designed for imaging thin, transmissible samples.
Methods: Peak SNR and in-focus acquisitions are achieved across entire tissue sections by using the scattering signal from the PARS detection beam to measure the optimal focal plane. Whole slide images (WSIs) are seamlessly stitched together using a custom contrast leveling algorithm. Identical tissue sections are subsequently H&E stained and brightfield imaged, and the one-to-one WSIs from both modalities are visually and quantitatively compared.
Results: PARS WSIs are presented at standard 40x magnification in malignant human breast and skin samples. We show correspondence of subcellular diagnostic details in both PARS and H&E WSIs and demonstrate virtual H&E staining of an entire PARS WSI. The one-to-one WSIs from both modalities show quantitative similarity in nuclear features and structural information.
Conclusion: PARS WSIs are compatible with existing digital pathology tools, and samples remain suitable for histochemical, immunohistochemical, and other staining techniques.
Significance: This work is a critical advance for integrating label-free optical methods into standard histopathology workflows.
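The focal-plane selection step described in this abstract (choosing the in-focus plane from a measured signal across a z-stack) can be sketched generically. The variance-of-Laplacian focus metric and the toy z-stack below are illustrative assumptions, not the authors' PARS implementation, which derives focus from the detection beam's scattering signal.

```python
import numpy as np

def focus_score(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher for sharper, in-focus slices."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def best_focal_plane(z_stack: np.ndarray) -> int:
    """Return the index of the z-slice that maximizes the focus score."""
    return int(np.argmax([focus_score(s) for s in z_stack]))

# Toy stack: flat (defocused) slices around one slice with sharp edges.
z_stack = np.stack([np.zeros((8, 8)), np.eye(8), np.zeros((8, 8))])
sharpest = best_focal_plane(z_stack)  # index of the edge-rich slice
```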
5. Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Trans Med Imaging 2024; 43:1388-1399. PMID: 38010933; DOI: 10.1109/tmi.2023.3337253.
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents, but it is time-consuming and makes simultaneous labeling difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks, but their performance relies on large-scale pretraining, hindering their adoption in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens; the pretraining time of our method is only 1/16 that of the original MAE. We also design a supervised proxy task that predicts stained images in multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated with different metrics, making fair comparison difficult. We therefore develop a standard benchmark based on three public datasets and build a baseline for future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies confirm the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
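The 75% pixel masking via grid sampling mentioned in this abstract can be illustrated on a toy array. The stride-2 grid below is an assumption consistent with the stated 75% figure, not necessarily the authors' exact sampling scheme.

```python
import numpy as np

def grid_sample_visible(image: np.ndarray, stride: int = 2) -> np.ndarray:
    """Keep one pixel per stride x stride grid cell; with stride=2 this
    retains 25% of pixels, i.e. masks 75%, shrinking the token count."""
    return image[::stride, ::stride].copy()

img = np.arange(64, dtype=float).reshape(8, 8)
visible = grid_sample_visible(img)             # shape (4, 4): 16 of 64 pixels kept
masked_fraction = 1 - visible.size / img.size  # 0.75
```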
6. Luan S, Ji Y, Liu Y, Zhu L, Zhou H, Ouyang J, Yang X, Zhao H, Zhu B. Real-Time Reconstruction of HIFU Focal Temperature Field Based on Deep Learning. BME Front 2024; 5:0037. PMID: 38515637; PMCID: PMC10956737; DOI: 10.34133/bmef.0037.
Abstract
Objective and Impact Statement: High-intensity focused ultrasound (HIFU) therapy is a promising noninvasive method that induces coagulative necrosis in diseased tissues through thermal and cavitation effects while avoiding damage to surrounding normal tissues.
Introduction: Accurate, real-time acquisition of the focal-region temperature field during HIFU treatment markedly enhances therapeutic efficacy, holding paramount scientific and practical value in clinical cancer therapy.
Methods: We designed and assembled an integrated HIFU system incorporating diagnostic, therapeutic, and temperature measurement functionalities to collect ultrasound echo signals and temperature variations during HIFU therapy. We then introduced a novel multimodal teacher-student model approach, which uses shared self-expressive coefficients and a deep canonical correlation analysis layer to aggregate data from each modality, and transfers knowledge from the teacher model to the student model through knowledge distillation strategies.
Results: By investigating the relationship between ultrasound echo signals and temperatures in phantom, in vitro, and in vivo experiments, we achieved real-time reconstruction of the 2D temperature field in the HIFU focal region with a maximum temperature error of less than 2.5 °C.
Conclusion: Our method effectively monitors the distribution of the HIFU temperature field in real time, providing precise predictive schemes for HIFU therapy, laying a theoretical foundation for subsequent personalized treatment dose planning, and offering efficient guidance for noninvasive, nonionizing cancer treatment.
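The teacher-to-student knowledge transfer mentioned in this abstract can be illustrated with the standard distillation objective. This is a generic sketch only; the paper's full loss also involves shared self-expressive coefficients and a deep canonical correlation analysis layer, which are not reproduced here.

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Temperature-softened softmax; higher T yields softer distributions."""
    z = logits / temperature
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0) -> float:
    """KL divergence from the softened teacher distribution to the softened
    student distribution, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(np.asarray(teacher_logits, float), temperature)
    q = softmax(np.asarray(student_logits, float), temperature)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) * temperature**2)
```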
Affiliation(s)
- Shunyao Luan
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Yongshuo Ji
- HIFU Center of Oncology Department, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Yumei Liu
- HIFU Center of Oncology Department, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Linling Zhu
- HIFU Center of Oncology Department, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Haoyu Zhou
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Jun Ouyang
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Xiaofei Yang
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Hong Zhao
- HIFU Center of Oncology Department, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Benpeng Zhu
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
7. Ayana G, Lee E, Choe SW. Vision Transformers for Breast Cancer Human Epidermal Growth Factor Receptor 2 Expression Staging without Immunohistochemical Staining. Am J Pathol 2024; 194:402-414. PMID: 38096984; DOI: 10.1016/j.ajpath.2023.11.015.
Abstract
Accurate staging of human epidermal growth factor receptor 2 (HER2) expression is vital for evaluating breast cancer treatment efficacy, but it typically involves costly and complex immunohistochemical staining alongside hematoxylin and eosin staining. This work presents customized vision transformers for staging HER2 expression in breast cancer using only hematoxylin and eosin-stained images. The proposed algorithm comprises three modules: a localization module that weakly localizes critical image features using spatial transformers, an attention module for global learning via vision transformers, and a loss module that calculates an ordinal loss to score an input image's proximity to each HER2 expression level. Results, reported with 95% CIs over fivefold cross-validation, show the approach's success in HER2 expression staging: area under the receiver operating characteristic curve, 0.9202 ± 0.01; precision, 0.922 ± 0.01; sensitivity, 0.876 ± 0.01; and specificity, 0.959 ± 0.02. The approach significantly outperformed conventional vision transformer models and state-of-the-art convolutional neural network models (P < 0.001) and surpassed existing methods on an independent test data set. By circumventing the costly and time-consuming immunohistochemical staining procedure, this work can aid HER2 expression staging in breast cancer treatment and help address diagnostic disparities in low-resource settings and low-income countries.
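The ordinal loss mentioned in this abstract can be illustrated with a common cumulative-threshold formulation. The four-level HER2 encoding (0/1+/2+/3+) and the function names below are illustrative assumptions, since the abstract does not specify the exact loss.

```python
import numpy as np

def ordinal_targets(level: int, num_levels: int = 4) -> np.ndarray:
    """Encode ordinal class k among K levels as K-1 cumulative binary targets,
    e.g. level 2 of 4 -> [1, 1, 0] ("exceeds threshold 0 and 1, not 2")."""
    return (np.arange(num_levels - 1) < level).astype(float)

def ordinal_loss(threshold_logits: np.ndarray, level: int) -> float:
    """Mean binary cross-entropy over the K-1 threshold outputs, so predictions
    farther from the true level on the ordinal scale incur a larger penalty."""
    p = 1.0 / (1.0 + np.exp(-threshold_logits))   # sigmoid per threshold
    t = ordinal_targets(level, threshold_logits.size + 1)
    eps = 1e-12
    return float(-np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)))
```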
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Eonjin Lee
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
8. Ko J, Song J, Choi N, Kim HN. Patient-Derived Microphysiological Systems for Precision Medicine. Adv Healthc Mater 2024; 13:e2303161. PMID: 38010253; DOI: 10.1002/adhm.202303161.
Abstract
Patient-derived microphysiological systems (P-MPSs) have emerged as powerful tools in precision medicine that provide valuable insight into individual patient characteristics. This review discusses the development of P-MPSs, which integrate patient-derived samples, including patient-derived cells, organoids, and induced pluripotent stem cells, into well-defined MPSs. It emphasizes the necessity of P-MPS development and highlights its significance as a nonclinical assessment approach that bridges the gap between traditional in vitro models and clinical outcomes. Additionally, guidance is provided on engineering approaches to developing microfluidic devices and high-content analysis for P-MPSs, enabling high biological relevance and high-throughput experimentation. The practical implications of P-MPSs are further examined through the clinically relevant outcomes obtained from various types of patient-derived samples. The construction and analysis of these diverse samples within P-MPSs have yielded physiologically relevant data, paving the way for personalized treatment strategies. This review thus describes the significance of P-MPSs in precision medicine and their unique capacity to offer valuable insights into individual patient characteristics.
Affiliation(s)
- Jihoon Ko
- Department of BioNano Technology, Gachon University, Seongnam-si, Gyeonggi-do, 13120, Republic of Korea
- Jiyoung Song
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
- Nakwon Choi
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
- Division of Bio-Medical Science & Technology, KIST School, Seoul, 02792, Republic of Korea
- KU-KIST Graduate School of Converging Science and Technology, Korea University, Seoul, 02841, Republic of Korea
- Hong Nam Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
- Division of Bio-Medical Science & Technology, KIST School, Seoul, 02792, Republic of Korea
- School of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Yonsei-KIST Convergence Research Institute, Yonsei University, Seoul, 03722, Republic of Korea
9. Li Y, Pillar N, Li J, Liu T, Wu D, Sun S, Ma G, de Haan K, Huang L, Zhang Y, Hamidi S, Urisman A, Keidar Haran T, Wallace WD, Zuckerman JE, Ozcan A. Virtual histological staining of unlabeled autopsy tissue. Nat Commun 2024; 15:1684. PMID: 38396004; PMCID: PMC10891155; DOI: 10.1038/s41467-024-46077-2.
Abstract
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and such chemical staining procedures covering large tissue areas demand substantial labor, cost, and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield-equivalent images, matching hematoxylin and eosin (H&E)-stained versions of the same samples. The trained model effectively accentuates nuclear, cytoplasmic, and extracellular features in new autopsy tissue samples that experienced severe autolysis, such as previously unseen COVID-19 samples, for which traditional histochemical staining fails to provide consistent quality. This virtual autopsy staining technique provides a rapid and resource-efficient way to generate artifact-free H&E stains despite severe autolysis and cell death, while reducing the labor, cost, and infrastructure requirements associated with standard histochemical staining.
Affiliation(s)
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Di Wu
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
- Songyu Sun
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
- Guangdong Ma
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- School of Physics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Sepehr Hamidi
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
- Anatoly Urisman
- Department of Pathology, University of California, San Francisco, CA, 94143, USA
- Tal Keidar Haran
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, 91120, Israel
- William Dean Wallace
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Jonathan E Zuckerman
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Department of Surgery, University of California, Los Angeles, CA, 90095, USA
10. Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. PMID: 37704183; DOI: 10.1016/j.semcancer.2023.09.001.
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in the segmentation, diagnosis, and prognosis of breast cancer. In this review, we survey recent advancements in AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
11.
Sounart H, Lázár E, Masarapu Y, Wu J, Várkonyi T, Glasz T, Kiss A, Borgström E, Hill A, Rezene S, Gupta S, Jurek A, Niesnerová A, Druid H, Bergmann O, Giacomello S. Dual spatially resolved transcriptomics for human host-pathogen colocalization studies in FFPE tissue sections. Genome Biol 2023; 24:237. PMID: 37858234; PMCID: PMC10588020; DOI: 10.1186/s13059-023-03080-y.
Abstract
Technologies to study localized host-pathogen interactions are urgently needed. Here, we present a spatial transcriptomics approach to simultaneously capture host and pathogen transcriptome-wide spatial gene expression information from human formalin-fixed paraffin-embedded (FFPE) tissue sections at a near single-cell resolution. We demonstrate this methodology in lung samples from COVID-19 patients and validate our spatial detection of SARS-CoV-2 against RNAScope and in situ sequencing. Host-pathogen colocalization analysis identified putative modulators of SARS-CoV-2 infection in human lung cells. Our approach provides new insights into host response to pathogen infection through the simultaneous, unbiased detection of two transcriptomes in FFPE samples.
Affiliation(s)
- Hailey Sounart: Department of Gene Technology, KTH Royal Institute of Technology, SciLifeLab, Stockholm, Sweden
- Enikő Lázár: Department of Gene Technology, KTH Royal Institute of Technology, SciLifeLab, Stockholm, Sweden; Department of Cell and Molecular Biology, Karolinska Institutet, Stockholm, Sweden
- Yuvarani Masarapu: Department of Gene Technology, KTH Royal Institute of Technology, SciLifeLab, Stockholm, Sweden
- Jian Wu: Department of Gene Technology, KTH Royal Institute of Technology, SciLifeLab, Stockholm, Sweden
- Tibor Várkonyi: 2nd Department of Pathology, Semmelweis University, Budapest, Hungary
- Tibor Glasz: 2nd Department of Pathology, Semmelweis University, Budapest, Hungary
- András Kiss: 2nd Department of Pathology, Semmelweis University, Budapest, Hungary
- Sefanit Rezene: Department of Laboratory Medicine, Karolinska Institutet, Stockholm, Sweden
- Soham Gupta: Department of Laboratory Medicine, Karolinska Institutet, Stockholm, Sweden
- Henrik Druid: Department of Oncology-Pathology, Karolinska Institutet, 17177, Stockholm, Sweden
- Olaf Bergmann: Department of Cell and Molecular Biology, Karolinska Institutet, Stockholm, Sweden; Center for Regenerative Therapies Dresden (CRTD), TU Dresden, Dresden, Germany; Institute of Pharmacology and Toxicology, Universitätsmedizin Göttingen, Göttingen, Germany
- Stefania Giacomello: Department of Gene Technology, KTH Royal Institute of Technology, SciLifeLab, Stockholm, Sweden
12.
Fanous MJ, Pillar N, Ozcan A. Digital staining facilitates biomedical microscopy. Front Bioinform 2023; 3:1243663. PMID: 37564725; PMCID: PMC10411189; DOI: 10.3389/fbinf.2023.1243663.
Abstract
Traditional staining of biological specimens for microscopic imaging entails time-consuming, laborious, and costly procedures, in addition to producing inconsistent labeling and causing irreversible sample damage. In recent years, computational "virtual" staining using deep learning techniques has evolved into a robust and comprehensive application for streamlining the staining process without the drawbacks typical of histochemical staining. Such virtual staining techniques can also be combined with neural networks designed to correct various microscopy aberrations, such as out-of-focus or motion blur artifacts, and to improve upon diffraction-limited resolution. Here, we highlight how such methods lead to a host of new opportunities that can significantly improve both sample preparation and imaging in biomedical microscopy.
Affiliation(s)
- Michael John Fanous: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Nir Pillar: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States; Bioengineering Department, University of California, Los Angeles, CA, United States
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States; Bioengineering Department, University of California, Los Angeles, CA, United States; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, United States; Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
13.
Salido J, Vallez N, González-López L, Deniz O, Bueno G. Comparison of deep learning models for digital H&E staining from unpaired label-free multispectral microscopy images. Comput Methods Programs Biomed 2023; 235:107528. PMID: 37040684; DOI: 10.1016/j.cmpb.2023.107528.
Abstract
BACKGROUND AND OBJECTIVE: This paper presents a quantitative comparison of three generative models for digital staining, also known as virtual staining, in the H&E (Hematoxylin and Eosin) modality, applied to five types of breast tissue, together with a qualitative evaluation of the results achieved with the best model. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensional reduction to three channels in the RGB range. METHODS: The models compared are a conditional GAN (pix2pix), which requires aligned image pairs with/without staining, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive learning-based model (CUT). The models are compared on the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. Correspondence between images is obtained by subjecting the chemically stained images to digital unstaining with a model trained to guarantee the cyclic consistency of the generative models. RESULTS: The comparison of the three models corroborates the visual evaluation of the results, showing the superiority of cycleGAN both in its larger structural similarity with respect to chemical staining (mean SSIM ~0.95) and in its lower chromatic discrepancy (10%), the latter measured by quantization and computation of the Earth Mover's Distance (EMD) between clusters. In addition, subjective psychophysical tests with three experts were carried out to evaluate the quality of the results of the best model (cycleGAN). CONCLUSIONS: The results can be satisfactorily evaluated by metrics that take as reference a chemically stained sample and the digitally stained images of that sample after prior digital unstaining. These metrics demonstrate that generative staining models that guarantee cyclic consistency provide the results closest to chemical H&E staining, which is also consistent with the qualitative evaluation by experts.
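The SSIM comparison described in this abstract can be illustrated with a short sketch. The snippet below computes a minimal, single-window SSIM in plain NumPy using the standard stability constants; it is an illustration only, not the authors' implementation (published pipelines typically use a windowed SSIM such as scikit-image's `structural_similarity`), and the two image arrays are synthetic stand-ins for a chemically stained patch and its digitally stained counterpart.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two grayscale images.

    Uses the standard constants C1 = (0.01*L)^2, C2 = (0.03*L)^2,
    where L is the dynamic range of the pixel values.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Synthetic stand-ins: a "chemical" patch and a lightly perturbed "digital" one.
rng = np.random.default_rng(0)
chemical = rng.random((64, 64))
digital = np.clip(chemical + rng.normal(0.0, 0.02, chemical.shape), 0.0, 1.0)

score = global_ssim(chemical, digital)      # high, but below 1.0
identical = global_ssim(chemical, chemical) # 1.0 for identical images
```

A mean SSIM near 0.95, as reported for cycleGAN in the abstract, would correspond to averaging such per-patch scores over a test set; the chromatic-discrepancy side of the evaluation (EMD between color clusters) is a separate computation not shown here.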
Affiliation(s)
- Jesus Salido: IEEAC Dept. (ESI-UCLM), P de la Universidad 4, Ciudad Real, 13071, Spain
- Noelia Vallez: IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Lucía González-López: Hospital Gral. Universitario de C.Real (HGUCR), C. Obispo Rafael Torija s/n, Ciudad Real, 13005, Spain
- Oscar Deniz: IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Gloria Bueno: IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
14.
Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. Light Sci Appl 2023; 12:57. PMID: 36864032; PMCID: PMC9981740; DOI: 10.1038/s41377-023-01104-7.
Abstract
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Affiliation(s)
- Bijie Bai: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Xilin Yang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuzhu Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Nir Pillar: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
15.
Pillar N, Ozcan A. Virtual tissue staining in pathology using machine learning. Expert Rev Mol Diagn 2022; 22:987-989. PMID: 36440487; DOI: 10.1080/14737159.2022.2153040.
Affiliation(s)
- Nir Pillar: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA; Bioengineering Department, University of California, Los Angeles, CA, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA; Bioengineering Department, University of California, Los Angeles, CA, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA; Department of Surgery, University of California, Los Angeles, CA, USA
16.
Bai B, Wang H, Li Y, de Haan K, Colonnese F, Wan Y, Zuo J, Doan NB, Zhang X, Zhang Y, Li J, Yang X, Dong W, Darrow MA, Kamangar E, Lee HS, Rivenson Y, Ozcan A. Label-Free Virtual HER2 Immunohistochemical Staining of Breast Tissue using Deep Learning. BME Front 2022; 2022:9786242. PMID: 37850170; PMCID: PMC10521710; DOI: 10.34133/2022/9786242.
Abstract
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated in a quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
Affiliation(s)
- Bijie Bai: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Hongda Wang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuzhu Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Kevin de Haan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yujie Wan: Physics and Astronomy Department, University of California, Los Angeles, CA 90095, USA
- Jingyi Zuo: Computer Science Department, University of California, Los Angeles, CA, USA
- Ngan B. Doan: Translational Pathology Core Laboratory, University of California, Los Angeles, CA 90095, USA
- Xiaoran Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Jingxi Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Xilin Yang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Wenjie Dong: Statistics Department, University of California, Los Angeles, CA 90095, USA
- Morgan Angus Darrow: Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Elham Kamangar: Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Han Sung Lee: Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Yair Rivenson: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA; Department of Surgery, University of California, Los Angeles, CA 90095, USA