1. Zhang Z, Zhou X, Fang Y, Xiong Z, Zhang T. AI-driven 3D bioprinting for regenerative medicine: From bench to bedside. Bioact Mater 2025; 45:201-230. [PMID: 39651398; PMCID: PMC11625302; DOI: 10.1016/j.bioactmat.2024.11.021]
Abstract
In recent decades, 3D bioprinting has garnered significant research attention due to its ability to manipulate biomaterials and cells to create complex structures precisely. However, due to technological and cost constraints, the clinical translation of 3D bioprinted products (BPPs) from bench to bedside has been hindered by challenges in terms of personalization of design and scaling up of production. Recently, the emerging applications of artificial intelligence (AI) technologies have significantly improved the performance of 3D bioprinting. However, the existing literature remains deficient in a methodological exploration of AI technologies' potential to overcome these challenges in advancing 3D bioprinting toward clinical application. This paper aims to present a systematic methodology for AI-driven 3D bioprinting, structured within the theoretical framework of Quality by Design (QbD). This paper commences by introducing the QbD theory into 3D bioprinting, followed by summarizing the technology roadmap of AI integration in 3D bioprinting, including multi-scale and multi-modal sensing, data-driven design, and in-line process control. This paper further describes specific AI applications in 3D bioprinting's key elements, including bioink formulation, model structure, printing process, and function regulation. Finally, the paper discusses current prospects and challenges associated with AI technologies to further advance the clinical translation of 3D bioprinting.
Affiliation(s)
- Zhenrui Zhang
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Xianhao Zhou
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Yongcong Fang
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
  - State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
- Zhuo Xiong
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Ting Zhang
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
  - State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
2. Julian DR, Bahramy A, Neal M, Pearce TM, Kofler J. Current Advancements in Digital Neuropathology and Machine Learning for the Study of Neurodegenerative Diseases. Am J Pathol 2025:S0002-9440(25)00046-X. [PMID: 39954963; DOI: 10.1016/j.ajpath.2024.12.018]
Abstract
Computational neurodegenerative neuropathology represents a transformative approach in the analysis and understanding of neurodegenerative diseases through the utilization of whole slide images (WSI) and advanced machine learning/artificial intelligence (ML/AI) techniques. This review explores the emerging field of computational neurodegenerative neuropathology, emphasizing its potential to enhance neuropathological assessment, diagnosis, and research. Recent advancements in ML/AI technologies have significantly impacted image-based medical fields, including anatomic pathology, by automating disease staging, identifying novel morphological biomarkers, and uncovering new clinical insights via multi-modal AI approaches. Despite its promise, the field faces several challenges, including limited expert annotations, slide scanning inaccessibility, inter-institutional variability, and the complexities of sharing large WSI datasets. This review discusses the importance of improving deep learning model accuracy and efficiency for better interpretation of neuropathological data. It highlights the potential of unsupervised learning to identify patterns in unannotated data. Furthermore, the development of explainable AI models is crucial for experimental neuropathology. Through addressing these challenges and leveraging cutting-edge AI techniques, computational neurodegenerative neuropathology has the potential to revolutionize the field and significantly advance our understanding of disease.
Affiliation(s)
- Dana R Julian
  - Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, PA
- Afshin Bahramy
  - Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  - Department of Human Genetics, University of Pittsburgh School of Public Health, Pittsburgh, PA
- Makayla Neal
  - Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, PA
- Thomas M Pearce
  - Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, PA
- Julia Kofler
  - Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  - Clinical and Translational Science Institute, University of Pittsburgh School of Medicine, Pittsburgh, PA
3. Guang Z, Jacobs A, Costa PC, Li Z, Robles FE. Acetic acid enabled nuclear contrast enhancement in epi-mode quantitative phase imaging. J Biomed Opt 2025; 30:026501. [PMID: 39906483; PMCID: PMC11792252; DOI: 10.1117/1.jbo.30.2.026501]
Abstract
Significance: The acetowhitening effect of acetic acid (AA) enhances light scattering of cell nuclei, an effect that has been widely leveraged to facilitate tissue inspection for (pre)cancerous lesions. Here, we show that a concomitant effect of acetowhitening (changes in refractive index composition) yields nuclear contrast enhancement in quantitative phase imaging (QPI) of thick tissue samples.
Aim: We aim to explore how changes in refractive index composition during acetowhitening can be captured through a novel epi-mode 3D QPI technique called quantitative oblique back-illumination microscopy (qOBM). We also aim to demonstrate the potential of using a machine learning-based approach to convert qOBM images of fresh tissues into virtually AA-stained images.
Approach: We implemented qOBM, an imaging technique that allows for epi-mode 3D QPI, to observe phase changes induced by AA in thick tissue samples. We focus on detecting nuclear contrast changes caused by AA in mouse brain samples. As a proof of concept, we also applied a Cycle-GAN algorithm to convert the acquired qOBM images into virtually AA-stained images, simulating the effect of AA staining.
Results: Our findings demonstrate that AA-induced acetowhitening leads to significant nuclear contrast enhancement in qOBM images of thick tissue samples. In addition, the Cycle-GAN algorithm successfully converted qOBM images into virtually AA-stained images, further facilitating the nuclear enhancement process without any physical stains.
Conclusions: We show that the acetowhitening effect of acetic acid induces changes in refractive index composition that significantly enhance nuclear contrast in QPI. The application of qOBM with AA, along with the use of a Cycle-GAN algorithm to virtually stain tissues, highlights the potential of this approach for advancing label-free and slide-free, ex vivo, and in vivo histology.
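The Cycle-GAN used for virtual AA staining learns unpaired image-to-image translation by enforcing cycle consistency: an image translated to the target domain and back should reconstruct the original. A minimal, hypothetical sketch of that consistency term (toy invertible functions stand in for the trained generator networks):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss mean|F(G(x)) - x|: a patch translated to the stained
    domain and mapped back should reconstruct the original patch."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Toy stand-ins for the two generators (the real ones are trained CNNs):
# G: label-free qOBM patch -> virtually AA-stained patch; F: the inverse.
G = lambda img: img * 1.5 + 0.1    # hypothetical forward translation
F = lambda img: (img - 0.1) / 1.5  # hypothetical inverse translation

x = np.random.rand(64, 64)              # mock single-channel qOBM patch
loss = cycle_consistency_loss(x, G, F)  # near zero: F exactly inverts G here
```

In actual training this term is added to the adversarial losses of both generators; it is what lets the model learn from unpaired qOBM and AA-stained image sets.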
Affiliation(s)
- Zhe Guang
  - Emory University, Georgia Institute of Technology, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
- Amunet Jacobs
  - Emory University, Georgia Institute of Technology, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
  - University of Kentucky, College of Medicine, Lexington, Kentucky, United States
- Paloma Casteleiro Costa
  - Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, Georgia, United States
- Zhenmin Li
  - Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, Georgia, United States
- Francisco E. Robles
  - Emory University, Georgia Institute of Technology, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
  - Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, Georgia, United States
4. Nazir A, Hussain A, Singh M, Assad A. A novel approach in cancer diagnosis: integrating holography microscopic medical imaging and deep learning techniques-challenges and future trends. Biomed Phys Eng Express 2025; 11:022002. [PMID: 39671712; DOI: 10.1088/2057-1976/ad9eb7]
Abstract
Medical imaging is pivotal in early disease diagnosis, providing essential insights that enable timely and accurate detection of health anomalies. Traditional imaging techniques, such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound, and Positron Emission Tomography (PET), offer vital insights into three-dimensional structures but frequently fall short of delivering a comprehensive and detailed anatomical analysis, capturing only amplitude details. Three-dimensional holography microscopic medical imaging provides a promising solution by capturing both the amplitude (brightness) and phase (structural information) details of biological structures. In this study, we investigate the novel collaborative potential of Deep Learning (DL) and holography microscopic phase imaging for cancer diagnosis. The study comprehensively examines existing literature, analyzes advancements, identifies research gaps, and proposes future research directions in cancer diagnosis through the integrated Quantitative Phase Imaging (QPI) and DL methodology. This novel approach addresses a critical limitation of traditional imaging by capturing detailed structural information, paving the way for more accurate diagnostics. The proposed approach comprises tissue sample collection, holographic image scanning, preprocessing in the case of imbalanced datasets, and training on annotated datasets using DL architectures like U-Net and Vision Transformers (ViTs). Furthermore, sophisticated concepts in DL, like the incorporation of Explainable AI (XAI) techniques, are suggested for comprehensive disease diagnosis and identification. The study thoroughly investigates the advantages of integrating holography imaging and DL for precise cancer diagnosis. Additionally, meticulous insights are presented by identifying the challenges associated with this integration methodology.
Affiliation(s)
- Asifa Nazir
  - Department of Computer Science and Engineering, Islamic University of Science and Technology, Awantipora, Pulwama, 192122, J&K, India
- Ahsan Hussain
  - Department of Computer Science and Engineering, Islamic University of Science and Technology, Awantipora, Pulwama, 192122, J&K, India
- Mandeep Singh
  - Department of Physics, Islamic University of Science and Technology, Awantipora, Kashmir, 192122, J&K, India
- Assif Assad
  - Department of Computer Science and Engineering, Islamic University of Science and Technology, Awantipora, Pulwama, 192122, J&K, India
5. Virdi A, Joglekar AP. Cell-APP: A generalizable method for microscopic cell annotation, segmentation, and classification. bioRxiv [Preprint] 2025:2025.01.23.634498. [PMID: 39896521; PMCID: PMC11785174; DOI: 10.1101/2025.01.23.634498]
Abstract
High throughput fluorescence microscopy is an essential tool in systems biological studies of eukaryotic cells. Its power can be fully realized when all cells in a field of view and the entire time series can be accurately localized and quantified. These tasks can be mapped to the common paradigm in computer vision: instance segmentation. Recently, supervised deep learning-based methods have become state-of-the-art for cellular instance segmentation. However, these methods require large amounts of high-quality training data. This requirement challenges our ability to train increasingly performant object detectors due to the limited availability of annotated training data, which is typically assembled via laborious hand annotation. Here, we present a generalizable method for generating large instance segmentation training datasets for tissue-culture cells in transmitted light microscopy images. We use datasets created by this method to train vision transformer (ViT) based Mask-RCNNs (Region-based Convolutional Neural Networks) that produce instance segmentations wherein cells are classified as "m-phase" (dividing) or "interphase" (non-dividing). While training these models, we also address the dataset class imbalance between m-phase and interphase cell annotations, which arises for biological reasons, using probabilistically weighted loss functions and partisan training data collection methods. We demonstrate the validity of these approaches by producing highly accurate object detectors that can serve as general tools for the segmentation and classification of morphologically diverse cells. Since the methodology depends only on generic cellular features, we hypothesize that it can be further generalized to most adherent tissue culture cell lines.
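The class-imbalance remedy mentioned above, a probabilistically weighted loss, can be illustrated with a generic sketch (not the authors' implementation): class weights are set inversely proportional to class frequency and folded into cross-entropy, so the rarer m-phase annotations contribute more per sample:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_counts):
    """Cross-entropy with per-class weights inversely proportional to
    class frequency, up-weighting the rare class (here, m-phase cells)."""
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.sum() / (len(counts) * counts)  # rare class -> large weight
    w = weights[labels]                              # one weight per sample
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.sum(w * nll) / np.sum(w))

# Mock softmax outputs for 4 cells: columns = P(interphase), P(m-phase)
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.3, 0.7],
                  [0.6, 0.4]])
labels = np.array([0, 0, 1, 1])   # 1 = m-phase, the rare class in practice
loss = weighted_cross_entropy(probs, labels, class_counts=[900, 100])
```

With a 9:1 imbalance, an m-phase sample here is weighted nine times more heavily than an interphase sample, which is the same effect the `weight` argument of standard deep learning loss functions provides.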
Affiliation(s)
- Anish Virdi
  - Department of Biophysics, University of Michigan
- Ajit P. Joglekar
  - Department of Biophysics, University of Michigan
  - Cell & Developmental Biology, University of Michigan Medical School
6. Cheng S, Chang S, Li Y, Novoseltseva A, Lin S, Wu Y, Zhu J, McKee AC, Rosene DL, Wang H, Bigio IJ, Boas DA, Tian L. Enhanced multiscale human brain imaging by semi-supervised digital staining and serial sectioning optical coherence tomography. Light Sci Appl 2025; 14:57. [PMID: 39833166; PMCID: PMC11746934; DOI: 10.1038/s41377-024-01658-0]
Abstract
A major challenge in neuroscience is visualizing the structure of the human brain at different scales. Traditional histology reveals micro- and meso-scale brain features but suffers from staining variability, tissue damage, and distortion, which impedes accurate 3D reconstructions. The emerging label-free serial sectioning optical coherence tomography (S-OCT) technique offers uniform 3D imaging capability across samples but has poor histological interpretability despite its sensitivity to cortical features. Here, we present a novel 3D imaging framework that combines S-OCT with a deep-learning digital staining (DS) model. This enhanced imaging modality integrates high-throughput 3D imaging, low sample variability and high interpretability, making it suitable for 3D histology studies. We develop a novel semi-supervised learning technique to facilitate DS model training on weakly paired images for translating S-OCT to Gallyas silver staining. We demonstrate DS on various human cerebral cortex samples, achieving consistent staining quality and enhancing contrast across cortical layer boundaries. Additionally, we show that DS preserves geometry in 3D on cubic-centimeter tissue blocks, allowing for visualization of meso-scale vessel networks in the white matter. We believe that our technique has the potential for high-throughput, multiscale imaging of brain tissues and may facilitate studies of brain structures.
Affiliation(s)
- Shiyi Cheng
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Shuaibin Chang
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Yunzhe Li
  - Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, 94720, USA
- Anna Novoseltseva
  - Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Sunni Lin
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Yicun Wu
  - Department of Computer Science, Boston University, Boston, MA, 02215, USA
- Jiahui Zhu
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Ann C McKee
  - Boston University Alzheimer's Disease Research Center and CTE Center, Boston University School of Medicine, Boston, MA, 02118, USA
  - Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
  - VA Boston Healthcare System, U.S. Department of Veteran Affairs, Boston, MA, 02130, USA
  - Department of Pathology and Laboratory Medicine, Boston University School of Medicine, Boston, MA, 02118, USA
- Douglas L Rosene
  - Department of Anatomy & Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Hui Wang
  - Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, 02129, USA
- Irving J Bigio
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
  - Neurophotonics Center, Boston University, Boston, MA, 02215, USA
- David A Boas
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
  - Neurophotonics Center, Boston University, Boston, MA, 02215, USA
- Lei Tian
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
  - Neurophotonics Center, Boston University, Boston, MA, 02215, USA
7. Lin S, Zhou H, Watson M, Govindan R, Cote RJ, Yang C. Impact of stain variation and color normalization for prognostic predictions in pathology. Sci Rep 2025; 15:2369. [PMID: 39827151; PMCID: PMC11742970; DOI: 10.1038/s41598-024-83267-w]
Abstract
In recent years, deep neural networks (DNNs) have demonstrated remarkable performance in pathology applications, potentially even outperforming expert pathologists due to their ability to learn subtle features from large datasets. One complication in preparing digital pathology datasets for DNN tasks is the variation in tinctorial qualities. A common way to address this is to perform stain normalization on the images. In this study, we show that a well-trained DNN model trained on one batch of histological slides failed to generalize to another batch prepared at a different time from the same tissue blocks, even when stain normalization methods were applied. This study used sample data from a previously reported DNN that was able to identify, with high accuracy, patients with early-stage non-small cell lung cancer (NSCLC) whose tumors did and did not metastasize, based on training and then testing of digital images from H&E stained primary tumor tissue sections processed at the same time. In this study, we obtained a new series of histologic slides from the adjacent recuts of the same tissue blocks processed in the same lab but at a different time. We found that the DNN trained on either batch of slides/images was unable to generalize and failed to predict progression in the other batch of slides/images (cross-batch AUC = 0.52-0.53, compared with same-batch AUC = 0.74-0.81). The failure to generalize did not improve even when the tinctorial difference corrections were made through either traditional color-tuning or stain normalization with the help of a Cycle Generative Adversarial Network (CycleGAN) process. This highlights the need to develop an entirely new way to process and collect consistent microscopy images from histologic slides that can be used to both train and allow for the general application of predictive DNN algorithms.
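For context on the traditional color-tuning baseline that this study evaluates, a common stain-normalization approach matches per-channel image statistics to a reference slide. A simplified sketch is below; note it operates directly in RGB for brevity, whereas Reinhard-style methods typically work in a decorrelated color space, and the mock arrays stand in for real slide patches:

```python
import numpy as np

def reinhard_like_normalize(source, reference):
    """Match the per-channel mean/std of `source` to `reference`.
    Simplified RGB-space variant of Reinhard-style stain normalization."""
    src = source.astype(float)
    ref = reference.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Standardize the source channel, then rescale to reference stats.
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
source = rng.integers(60, 120, size=(32, 32, 3))      # mock darker-batch patch
reference = rng.integers(120, 200, size=(32, 32, 3))  # mock target-batch patch
normalized = reinhard_like_normalize(source, reference)
```

As the study's cross-batch AUC results show, such global color matching can equalize appearance without recovering the predictive features a DNN learned from the original batch, which is precisely the limitation the authors highlight.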
Affiliation(s)
- Siyu Lin
  - Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
- Haowen Zhou
  - Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
- Mark Watson
  - Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Ramaswamy Govindan
  - Department of Medicine, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Richard J Cote
  - Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Changhuei Yang
  - Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
8. Işıl Ç, Koydemir HC, Eryilmaz M, de Haan K, Pillar N, Mentesoglu K, Unal AF, Rivenson Y, Chandrasekaran S, Garner OB, Ozcan A. Virtual Gram staining of label-free bacteria using dark-field microscopy and deep learning. Sci Adv 2025; 11:eads2757. [PMID: 39772690; PMCID: PMC11803577; DOI: 10.1126/sciadv.ads2757]
Abstract
Gram staining has been a frequently used staining protocol in microbiology. It is vulnerable to staining artifacts due to, e.g., operator errors and chemical variations. Here, we introduce virtual Gram staining of label-free bacteria using a trained neural network that digitally transforms dark-field images of unstained bacteria into their Gram-stained equivalents matching bright-field image contrast. After a one-time training, the virtual Gram staining model processes an axial stack of dark-field microscopy images of label-free bacteria (never seen before) to rapidly generate Gram staining, bypassing several chemical steps involved in the conventional staining process. We demonstrated the success of virtual Gram staining on label-free bacteria samples containing Escherichia coli and Listeria innocua by quantifying the staining accuracy of the model and comparing the chromatic and morphological features of the virtually stained bacteria against their chemically stained counterparts. This virtual bacterial staining framework bypasses the traditional Gram staining protocol and its challenges, including stain standardization, operator errors, and sensitivity to chemical variations.
Affiliation(s)
- Çağatay Işıl
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Hatice Ceylan Koydemir
  - Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
  - Center for Remote Health Technologies and Systems, Texas A&M Engineering Experiment Station, College Station, TX 77843, USA
- Merve Eryilmaz
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Kevin de Haan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Nir Pillar
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Koray Mentesoglu
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Aras Firat Unal
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Sukantha Chandrasekaran
  - Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Omai B. Garner
  - Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
9. Dai B, You S, Wang K, Long Y, Chen J, Upreti N, Peng J, Zheng L, Chang C, Huang TJ, Guan Y, Zhuang S, Zhang D. Deep learning-enabled filter-free fluorescence microscope. Sci Adv 2025; 11:eadq2494. [PMID: 39742468; DOI: 10.1126/sciadv.adq2494]
Abstract
Optical filtering is an indispensable part of fluorescence microscopy for selectively highlighting molecules labeled with a specific fluorophore and suppressing background noise. However, the utilization of optical filtering sets increases the complexity, size, and cost of microscopic systems, making them less suitable for multifluorescence channel, high-speed imaging. Here, we present filter-free fluorescence microscopic imaging enabled with deep learning-based digital spectral filtering. This approach allows for automatic fluorescence channel selection after image acquisition and accurate prediction of fluorescence by computing color changes due to spectral shifts with the presence of excitation scattering. Fluorescence prediction for cells and tissues labeled with various fluorophores was demonstrated under different magnification powers. The technique offers accurate identification of labeling with robust sensitivity and specificity, achieving consistent results with the reference standard. Beyond fluorescence microscopy, the deep learning-enabled spectral filtering strategy has the potential to drive the development of other biomedical applications, including cytometry and endoscopy.
Collapse
Affiliation(s)
- Bo Dai
- Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
| | - Shaojie You
- Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kan Wang: Department of Neurology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200127, China
- Yan Long: Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Junyi Chen: Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Neil Upreti: Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27709, USA
- Jing Peng: Department of Neurology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200127, China
- Lulu Zheng: Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chenliang Chang: Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Tony Jun Huang: Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27709, USA
- Yangtai Guan: Department of Neurology, Punan Branch of Renji Hospital, School of Medicine, Shanghai Jiaotong University (Punan Hospital in Pudong New District, Shanghai), Shanghai 200125, China
- Songlin Zhuang: Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Dawei Zhang: Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
10. Chang S, Wintergerst GA, Carlson C, Yin H, Scarpato KR, Luckenbaugh AN, Chang SS, Kolouri S, Bowden AK. Low-cost and label-free blue light cystoscopy through digital staining of white light cystoscopy videos. Commun Med 2024; 4:269. PMID: 39695331. DOI: 10.1038/s43856-024-00705-6.
Abstract
BACKGROUND: Bladder cancer is the 10th most common malignancy and carries the highest treatment cost among all cancers. The elevated cost stems from its high recurrence rate, which necessitates frequent surveillance. White light cystoscopy (WLC), the standard-of-care surveillance tool for examining the bladder for lesions, has limited sensitivity for early-stage bladder cancer. Blue light cystoscopy (BLC) uses a fluorescent dye to induce contrast in cancerous regions, improving detection sensitivity by 43%. Nevertheless, the added equipment cost and the lengthy dwell time of the dye limit the availability of BLC.
METHODS: Here, we report the first demonstration of digital staining as a promising strategy to convert WLC images collected with standard-of-care clinical equipment into accurate BLC-like images, providing enhanced sensitivity for WLC without the associated labor or equipment cost.
RESULTS: By introducing key pre-processing steps to circumvent the color and brightness variations in clinical datasets that would otherwise hinder model performance, we achieve a staining accuracy of 80.58% and show excellent qualitative and quantitative agreement of the digitally stained WLC (dsWLC) images with ground-truth BLC images, including color consistency.
CONCLUSIONS: In short, dsWLC can affordably provide the fluorescent contrast needed to improve the detection sensitivity of bladder cancer, thereby increasing the accessibility of BLC contrast for bladder cancer surveillance. More broadly, this work suggests that digital staining is a cost-effective alternative to contrast-based endoscopy in clinical scenarios outside urology, helping democratize access to better healthcare.
Affiliation(s)
- Shuang Chang: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37232, USA
- Camella Carlson: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37232, USA
- Haoli Yin: Vanderbilt University, Department of Computer Science, Nashville, TN 37232, USA
- Kristen R Scarpato: Vanderbilt University Medical Center, Department of Urology, Nashville, TN 37232, USA
- Amy N Luckenbaugh: Vanderbilt University Medical Center, Department of Urology, Nashville, TN 37232, USA
- Sam S Chang: Vanderbilt University Medical Center, Department of Urology, Nashville, TN 37232, USA
- Soheil Kolouri: Vanderbilt University, Department of Computer Science, Nashville, TN 37232, USA
- Audrey K Bowden: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37232, USA; Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, TN 37232, USA
11. Chen M, Liu YT, Khan FS, Fox MC, Reichenberg JS, Lopes FCPS, Sebastian KR, Markey MK, Tunnell JW. Single color digital H&E staining with In-and-Out Net. Comput Med Imaging Graph 2024; 118:102468. PMID: 39579455. DOI: 10.1016/j.compmedimag.2024.102468.
Abstract
Digital staining streamlines traditional staining procedures by digitally generating stained images from unstained or differently stained images. While conventional staining methods involve time-consuming chemical processes, digital staining offers an efficient and low-infrastructure alternative. Researchers can expedite tissue analysis without physical sectioning by leveraging microscopy-based techniques, such as confocal microscopy. However, interpreting grayscale or pseudo-color microscopic images remains challenging for pathologists and surgeons accustomed to traditional histologically stained images. To fill this gap, various studies explore digitally simulating staining to mimic targeted histological stains. This paper introduces a novel network, In-and-Out Net, designed explicitly for digital staining tasks. Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin (H&E) stained images. Using aluminum chloride preprocessing for skin tissue, we enhance nuclei contrast in RCM images. We trained the model with digital H&E labels featuring two fluorescence channels, eliminating the need for image registration and providing pixel-level ground truth. Our contributions include proposing an optimal training strategy, conducting a comparative analysis demonstrating state-of-the-art performance, validating the model through an ablation study, and collecting perfectly matched input and ground truth images without registration. In-and-Out Net showcases promising results, offering a valuable tool for digital staining tasks and advancing the field of histological image analysis.
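GAN-based digital staining of this kind typically trains an image-to-image generator against a combined adversarial and pixel-reconstruction objective. As an illustrative sketch only (this is the common pix2pix-style recipe, not the In-and-Out Net architecture or training strategy reported in the paper), the generator loss can be written as:

```python
import numpy as np

def generator_loss(disc_fake_logits, fake, target, lam=100.0):
    """Pix2pix-style generator objective: a non-saturating adversarial term
    plus a lambda-weighted L1 reconstruction term against the ground truth.
    lam=100 follows the common pix2pix default, not this paper."""
    # -log(sigmoid(logit)) written stably as log(1 + exp(-logit))
    adversarial = np.mean(np.log1p(np.exp(-np.asarray(disc_fake_logits, float))))
    reconstruction = np.mean(np.abs(np.asarray(fake, float) - np.asarray(target, float)))
    return adversarial + lam * reconstruction
```

When the discriminator is fully fooled (large positive logits) and the generated image matches the ground truth, both terms vanish; the L1 term is what anchors the synthetic stain to the pixel-registered label, which is why the paper's registration-free ground truth matters.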
Affiliation(s)
- Mengkun Chen: University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, TX 78712, United States
- Yen-Tung Liu: University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, TX 78712, United States
- Fadeel Sher Khan: University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, TX 78712, United States
- Matthew C Fox: The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, TX 78732, United States
- Jason S Reichenberg: The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, TX 78732, United States
- Fabiana C P S Lopes: The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, TX 78732, United States
- Katherine R Sebastian: The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, TX 78732, United States
- Mia K Markey: University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, TX 78712, United States; The University of Texas MD Anderson Cancer Center, Department of Imaging Physics, 1400 Pressler Street, Houston, TX 77030, United States
- James W Tunnell: University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, TX 78712, United States
12. van Staalduine SE, Bianco V, Ferraro P, Menzel M. Deciphering Structural Complexity of Brain, Joint, and Muscle Tissues Using Fourier Ptychographic Scattered Light Microscopy. bioRxiv [Preprint] 2024:2024.11.28.625428. PMID: 39651271. PMCID: PMC11623658. DOI: 10.1101/2024.11.28.625428.
Abstract
Fourier Ptychographic Microscopy (FPM) provides high-resolution imaging and morphological information over large fields of view, while Computational Scattered Light Imaging (ComSLI) excels at mapping interwoven fiber organization in unstained tissue sections. This study introduces Fourier Ptychographic Scattered Light Microscopy (FP-SLM), a new multi-modal approach that combines FPM and ComSLI analyses to create both high-resolution phase-contrast images and fiber orientation maps from a single measurement. The method is demonstrated on brain sections (frog, monkey) and on thigh muscle and knee sections (mouse). FP-SLM delivers high-resolution images while revealing fiber organization in nerve, muscle, tendon, cartilage, and bone tissues. The approach is validated by comparing the computed fiber orientations with those derived from structure tensor analysis of the high-resolution images. The comparison shows that FPM and ComSLI are mutually compatible and yield fully consistent results. Remarkably, the combination surpasses the sum of its parts: applying ComSLI analysis to FPM recordings, and vice versa, outperforms either method alone. This cross-analysis can be applied retrospectively to any existing FPM or ComSLI dataset (acquired with an LED array and low numerical aperture), significantly expanding the application range of both techniques and enhancing the study of complex tissue architectures in biomedical research.
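The structure tensor analysis used above for validation has a compact form: fiber orientation follows from the eigen-structure of locally averaged gradient outer products. A minimal single-orientation sketch (averaging over the whole field of view; real ComSLI/FPM pipelines compute this per pixel with Gaussian windows, which this does not attempt):

```python
import numpy as np

def dominant_fiber_orientation(img):
    """Estimate the dominant in-plane fiber orientation (radians, in [0, pi))
    from the 2x2 structure tensor of image gradients, averaged globally."""
    gy, gx = np.gradient(img.astype(float))  # np.gradient returns (d/drow, d/dcol)
    jxx = np.mean(gx * gx)
    jyy = np.mean(gy * gy)
    jxy = np.mean(gx * gy)
    # Orientation of the dominant gradient direction (principal eigenvector)
    grad_theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    # Fibers run perpendicular to the mean gradient direction
    return (grad_theta + np.pi / 2.0) % np.pi
```

For a synthetic stripe pattern varying along x (i.e., vertically running fibers), the estimate returns an angle close to pi/2, matching the intuition that gradients point across, not along, fibers.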
13. Xie J, Xie H, Kong CZ, Ling T. Quadri-wave lateral shearing interferometry: a versatile tool for quantitative phase imaging. J Opt Soc Am A Opt Image Sci Vis 2024; 41:C137-C156. PMID: 39889087. DOI: 10.1364/josaa.534348.
Abstract
Quantitative phase imaging (QPI) has emerged as a powerful tool in label-free bioimaging, in situ microstructure characterization for advanced manufacturing, and high-speed imaging of material property changes. Among various QPI methods, quadri-wave lateral shearing interferometry (QWLSI) stands out for its unique advantages in compactness, robustness, and high temporal resolution, making it an ideal choice for a wide range of applications. The compact design of QWLSI allows for easy integration with existing microscopy systems, while its robustness is manifested in the ability to maintain precise interferometric sensitivity even in high-vibration environments. Moreover, QWLSI also enables single-shot measurements that facilitate the capture of fast dynamic processes. This paper provides an in-depth exploration of the technical aspects of QWLSI, focusing on the evolution of its optical system and the primary algorithms used in wavefront reconstruction. The review also showcases significant applications of QWLSI, with a particular emphasis on its contributions to biomedical imaging. By discussing the advantages, limitations, and potential future developments of QWLSI, this paper aims to provide a comprehensive overview of this powerful QPI technique and its impact on various research fields.
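A core step in the wavefront reconstruction algorithms the review surveys is integrating the two measured wavefront gradients into a single phase map. One widely used approach, sketched here under a periodic-boundary assumption, is Fourier-domain least-squares integration (actual QWLSI pipelines first demodulate the interferogram harmonics to obtain the gradient maps, a step omitted here):

```python
import numpy as np

def integrate_gradients(gx, gy, dx=1.0):
    """Least-squares wavefront from its x/y gradient maps via FFT.
    Assumes periodic boundaries; the unrecoverable piston term is set to zero."""
    ny, nx = gx.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = 2j * np.pi * (fx**2 + fy**2)
    denom[0, 0] = 1.0          # avoid division by zero at DC
    W = (fx * Gx + fy * Gy) / denom
    W[0, 0] = 0.0              # piston (mean value) is not recoverable
    return np.real(np.fft.ifft2(W))
```

For a periodic test wavefront with analytically known gradients, the reconstruction matches the true surface up to a constant offset, which is why wavefronts recovered this way are compared after mean subtraction.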
14. Chen J, Chen R, Chen L, Zhang L, Wang W, Zeng X. Kidney medicine meets computer vision: a bibliometric analysis. Int Urol Nephrol 2024; 56:3361-3380. PMID: 38814370. DOI: 10.1007/s11255-024-04082-w.
Abstract
BACKGROUND AND OBJECTIVE: Rapid advances in computer vision (CV) have the potential to facilitate the examination, diagnosis, and treatment of diseases of the kidney. This bibliometric study explores the research landscape and evolving focus of CV applications in kidney medicine research.
METHODS: The Web of Science Core Collection was used to identify publications related to the research or application of CV technology in kidney medicine from January 1, 1900, to December 31, 2022. We analyzed emerging research trends, highly influential publications and journals, prolific researchers, countries/regions, research institutions, co-authorship networks, and co-occurrence networks. Bibliographic information was analyzed and visualized using Python, Matplotlib, Seaborn, HistCite, and VOSviewer.
RESULTS: The number of publications on CV-based kidney medicine research shows an increasing trend. These publications mainly focus on medical image processing, surgical procedures, medical image analysis/diagnosis, and the application and innovation of CV technology in medical imaging. The United States currently leads in both published articles and international collaborations, followed by China. Deep learning-based segmentation and machine learning-based texture analysis are the most commonly used techniques in this field. Regarding research hotspots, CV algorithms are shifting toward artificial intelligence, research objects are expanding to a wider range of kidney-related targets, and the data used are transitioning from 2D to 3D while incorporating more diverse modalities.
CONCLUSION: This study provides a scientometric overview of current progress in the research and application of CV technology in kidney medicine. Through bibliometric analysis and network visualization, we elucidate emerging trends, key sources, leading institutions, and popular topics. Our findings are expected to provide valuable insights for future research on the use of CV in kidney medicine.
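The co-occurrence networks that tools like VOSviewer visualize reduce, at their core, to counting how often two keywords appear on the same publication. A minimal Python sketch with hypothetical keyword records (illustrative only; the study's actual pipeline and data are not reproduced):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(records):
    """Count keyword pair co-occurrences across publications.
    `records` is a list of keyword lists, one per publication."""
    pairs = Counter()
    for keywords in records:
        # sort so each unordered pair is counted under one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs
```

The resulting edge weights are what network-visualization tools lay out as a keyword map, with co-occurrence count driving link strength.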
Affiliation(s)
- Junren Chen: Department of Nephrology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China; School of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China; Med-X Center for Informatics, Sichuan University, Chengdu 610041, Sichuan, China
- Rui Chen: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Liangyin Chen: School of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China
- Lei Zhang: School of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China
- Wei Wang: School of Automation, Chengdu University of Information Technology, Chengdu 610225, Sichuan, China; School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
- Xiaoxi Zeng: Department of Nephrology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China; Med-X Center for Informatics, Sichuan University, Chengdu 610041, Sichuan, China
15. Chechekhina E, Voloshin N, Kulebyakin K, Tyurin-Kuzmin P. Code-Free Machine Learning Solutions for Microscopy Image Processing: Deep Learning. Tissue Eng Part A 2024; 30:627-639. PMID: 38556835. DOI: 10.1089/ten.tea.2024.0014.
Abstract
In recent years, the realm of microscopy image processing has expanded significantly thanks to the advent of machine learning techniques, which offer diverse applications for image processing. Numerous methods are currently used to process microscopy images in biology, ranging from conventional machine learning algorithms to sophisticated deep learning artificial neural networks with millions of parameters. However, a comprehensive grasp of the intricacies of these methods usually requires proficiency in programming and advanced mathematics. In this review, we explore widely used deep learning approaches tailored to the processing of microscopy images, emphasizing algorithms that have gained popularity in biology and have been adapted for users without programming expertise. In essence, our target audience comprises biologists interested in exploring the potential of deep learning algorithms even without programming skills. Throughout the review, we explain each algorithm's fundamental concepts and capabilities without delving into mathematical and programming complexities. Crucially, all the highlighted algorithms are accessible on open platforms without requiring code, and we provide detailed descriptions and links within the review. It is essential to recognize that each specific problem demands an individualized approach; consequently, our focus is not on comparing algorithms but on delineating the problems they are adept at solving. In practice, researchers typically select several algorithms suited to their task and experimentally determine the most effective one. It is also worth noting that microscopy extends beyond biology: its applications span diverse fields such as geology and materials science. Although this review centers predominantly on biomedical applications, the algorithms and principles outlined here are equally applicable to other scientific domains, and a number of the proposed solutions can be modified for use in entirely distinct computer vision cases.
Affiliation(s)
- Elizaveta Chechekhina: Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
- Nikita Voloshin: Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
- Konstantin Kulebyakin: Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
- Pyotr Tyurin-Kuzmin: Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
16. Lin S, Zhou H, Cote RJ, Watson M, Govindan R, Yang C. Impact of Stain Variation and Color Normalization for Prognostic Predictions in Pathology. arXiv [Preprint] 2024:arXiv:2409.08338v1. PMID: 39314500. PMCID: PMC11419173.
Abstract
In recent years, deep neural networks (DNNs) have demonstrated remarkable performance in pathology applications, potentially even outperforming expert pathologists due to their ability to learn subtle features from large datasets. One complication in preparing digital pathology datasets for DNN tasks is variation in tinctorial qualities. A common way to address this is to perform stain normalization on the images. In this study, we show that a well-trained DNN model trained on one batch of histological slides failed to generalize to another batch prepared at a different time from the same tissue blocks, even when stain normalization methods were applied. We used sample data from a previously reported DNN that identified, with high accuracy, patients with early-stage non-small cell lung cancer (NSCLC) whose tumors did and did not metastasize, based on training and testing with digital images of H&E-stained primary tumor tissue sections processed at the same time. We then obtained a new series of histologic slides from adjacent recuts of the same tissue blocks, processed in the same lab but at a different time. We found that a DNN trained on either batch of slides/images was unable to generalize and failed to predict progression in the other batch (cross-batch AUC = 0.52-0.53, versus same-batch AUC = 0.74-0.81). The failure to generalize did not improve even when tinctorial differences were corrected through either traditional color-tuning or stain normalization using a Cycle Generative Adversarial Network (CycleGAN). This highlights the need for an entirely new way to process and collect consistent microscopy images from histologic slides that can be used both to train and to allow general application of predictive DNN algorithms.
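The "traditional color-tuning" baseline referenced above is typified by Reinhard-style statistics matching: shifting each channel of a test image to the mean and standard deviation of a reference image. A simplified sketch (the original Reinhard method operates in a decorrelated lαβ/LAB space rather than directly in RGB, and the paper's exact correction procedure is not reproduced here):

```python
import numpy as np

def match_color_stats(src, ref):
    """Match per-channel mean and std of `src` to `ref` (Reinhard-style,
    applied directly in RGB for simplicity). Arrays are HxWx3."""
    src = src.astype(float)
    ref = ref.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s, r = src[..., c], ref[..., c]
        # standardize the source channel, then rescale to reference statistics
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(out, 0.0, 255.0)
```

Matching first- and second-order color statistics in this way equalizes overall tinctorial appearance between batches, but, as the study's results indicate, it does not guarantee that a DNN's learned features will transfer.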
Affiliation(s)
- Siyu (Steven) Lin: California Institute of Technology, Department of Electrical Engineering, Pasadena, CA 91125, USA
- Haowen Zhou: California Institute of Technology, Department of Electrical Engineering, Pasadena, CA 91125, USA
- Richard J. Cote: Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, MO 63110, USA
- Mark Watson: Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, MO 63110, USA
- Ramaswamy Govindan: Washington University School of Medicine, Department of Medicine, St. Louis, MO 63110, USA
- Changhuei Yang: California Institute of Technology, Department of Electrical Engineering, Pasadena, CA 91125, USA
17. Yang X, Bai B, Zhang Y, Aydin M, Li Y, Selcuk SY, Casteleiro Costa P, Guo Z, Fishbein GA, Atlan K, Wallace WD, Pillar N, Ozcan A. Virtual birefringence imaging and histological staining of amyloid deposits in label-free tissue using autofluorescence microscopy and deep learning. Nat Commun 2024; 15:7978. PMID: 39266547. PMCID: PMC11393327. DOI: 10.1038/s41467-024-52263-z.
Abstract
Systemic amyloidosis involves the deposition of misfolded proteins in organs/tissues, leading to progressive organ dysfunction and failure. Congo red is the gold-standard chemical stain for visualizing amyloid deposits in tissue, showing birefringence under polarization microscopy. However, Congo red staining is tedious and costly to perform, and prone to false diagnoses due to variations in amyloid amount, staining quality and manual examination of tissue under a polarization microscope. We report virtual birefringence imaging and virtual Congo red staining of label-free human tissue to show that a single neural network can transform autofluorescence images of label-free tissue into brightfield and polarized microscopy images, matching their histochemically stained versions. Blind testing with quantitative metrics and pathologist evaluations on cardiac tissue showed that our virtually stained polarization and brightfield images highlight amyloid patterns in a consistent manner, mitigating challenges due to variations in chemical staining quality and manual imaging processes in the clinical workflow.
Affiliation(s)
- Xilin Yang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Bijie Bai: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Musa Aydin: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Computer Engineering, Fatih Sultan Mehmet Vakif University, Istanbul 34038, Turkey
- Yuzhu Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Sahan Yoruc Selcuk: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Paloma Casteleiro Costa: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Zhen Guo: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Gregory A Fishbein: Department of Pathology and Laboratory Medicine, David Geffen School of Medicine at the University of California, Los Angeles, CA 90095, USA
- Karine Atlan: Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem 91120, Israel
- William Dean Wallace: Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Nir Pillar: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA; Department of Surgery, University of California, Los Angeles, CA 90095, USA
18. Pati P, Karkampouna S, Bonollo F, Compérat E, Radić M, Spahn M, Martinelli A, Wartenberg M, Kruithof-de Julio M, Rapsomaniki M. Accelerating histopathology workflows with generative AI-based virtually multiplexed tumour profiling. Nat Mach Intell 2024; 6:1077-1093. PMID: 39309216. PMCID: PMC11415301. DOI: 10.1038/s42256-024-00889-5.
Abstract
Understanding the spatial heterogeneity of tumours and its links to disease initiation and progression is a cornerstone of cancer biology. Presently, histopathology workflows heavily rely on hematoxylin and eosin and serial immunohistochemistry staining, a cumbersome, tissue-exhaustive process that results in non-aligned tissue images. We propose the VirtualMultiplexer, a generative artificial intelligence toolkit that effectively synthesizes multiplexed immunohistochemistry images for several antibody markers (namely AR, NKX3.1, CD44, CD146, p53 and ERG) from only an input hematoxylin and eosin image. The VirtualMultiplexer captures biologically relevant staining patterns across tissue scales without requiring consecutive tissue sections, image registration or extensive expert annotations. Thorough qualitative and quantitative assessment indicates that the VirtualMultiplexer achieves rapid, robust and precise generation of virtually multiplexed imaging datasets of high staining quality that are indistinguishable from the real ones. The VirtualMultiplexer is successfully transferred across tissue scales and patient cohorts with no need for model fine-tuning. Crucially, the virtually multiplexed images enabled training a graph transformer that simultaneously learns from the joint spatial distribution of several proteins to predict clinically relevant endpoints. We observe that this multiplexed learning scheme was able to greatly improve clinical prediction, as corroborated across several downstream tasks, independent patient cohorts and cancer types. Our results showcase the clinical relevance of artificial intelligence-assisted multiplexed tumour imaging, accelerating histopathology workflows and cancer biology.
Affiliation(s)
- Sofia Karkampouna: Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland; Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Francesco Bonollo: Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Eva Compérat: Department of Pathology, Medical University of Vienna, Vienna, Austria
- Martina Radić: Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Martin Spahn: Department of Urology, Lindenhofspital Bern, Bern, Switzerland; Department of Urology, University Duisburg-Essen, Essen, Germany
- Adriano Martinelli: IBM Research Europe, Rüschlikon, Switzerland; ETH Zürich, Zürich, Switzerland; Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
- Martin Wartenberg: Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Marianna Kruithof-de Julio: Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland; Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Translational Organoid Resource, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Marianna Rapsomaniki: IBM Research Europe, Rüschlikon, Switzerland; Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
19. Latonen L, Koivukoski S, Khan U, Ruusuvuori P. Virtual staining for histology by deep learning. Trends Biotechnol 2024; 42:1177-1191. PMID: 38480025. DOI: 10.1016/j.tibtech.2024.02.009.
Abstract
In pathology and biomedical research, histology is the cornerstone method for tissue analysis. The current histological workflow consumes substantial amounts of chemicals, water, and time for staining procedures. Deep learning now enables digital replacement of parts of the histological staining procedure. In virtual staining, histological stains are created by training neural networks to produce stained images from an unstained tissue image, or by transferring information from one stain to another. These technical innovations provide more sustainable, rapid, and cost-effective alternatives to traditional histological pipelines, but their development is at an early phase and requires rigorous validation. In this review we cover the basic concepts of virtual staining for histology and provide future insights into the utilization of artificial intelligence (AI)-enabled virtual histology.
Affiliation(s)
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland.
| | - Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Umair Khan
- Institute of Biomedicine, University of Turku, Turku, Finland
20
Biswas S, Barma S. Feature fusion GAN-based virtual staining on plant microscopy images. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:1264-1273. [PMID: 38517710 DOI: 10.1109/tcbb.2024.3380634] [Indexed: 03/24/2024]
Abstract
Virtual staining of microscopy specimens using GAN-based methods could resolve critical concerns of the manual staining process, as recent studies on histopathology images have shown. However, most of these works use a basic GAN framework that ignores the characteristics of microscopy images, and their performance is evaluated only with structural and error statistics (SSIM and PSNR) between synthetic and ground-truth images, without considering any color space, even though virtual staining is fundamentally a color transformation. Major aspects of staining, such as color, contrast, focus, and image realness, are likewise ignored. Adapting the GAN architecture to incorporate microscopy image features may therefore be beneficial, but its successful implementation must be examined against the various aspects of the staining process. Accordingly, we designed a new feature-fusion GAN for virtual staining and assessed its performance within a multi-metric evaluation framework covering qualitative aspects (histogram correlation of color and brightness), quantitative aspects (SSIM and PSNR), focus (Brenner metrics and spectral moments), and perceptual influence (semantic perceptual influence score). For experimental validation, cell boundaries in plant microscopy images of potato tuber were highlighted with two staining reagents, Safranin-O and Toluidine Blue-O. We evaluated virtually stained image quality against the ground truth in the RGB and YCbCr color spaces using the defined metrics and found the results to be highly consistent; the impact of feature fusion was also demonstrated. Collectively, this study can serve as a baseline for guiding architectural upgrades of deep pipelines for virtual staining of diverse microscopy modalities, and for future benchmark methodologies and protocols.
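The SSIM and PSNR statistics that recur throughout these virtual-staining evaluations are simple to compute. The following is a minimal, illustrative numpy sketch (using a single global window for SSIM rather than the standard locally windowed variant, and not the authors' evaluation code):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=1.0):
    """SSIM from global image statistics (one window covering the whole
    image; the standard metric averages this over local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give PSNR of infinity and SSIM of 1.0; a uniform brightness shift of 0.1 on a unit-range image gives a PSNR of exactly 20 dB, which makes the functions easy to sanity-check.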
21
Rosen J, Alford S, Allan B, Anand V, Arnon S, Arockiaraj FG, Art J, Bai B, Balasubramaniam GM, Birnbaum T, Bisht NS, Blinder D, Cao L, Chen Q, Chen Z, Dubey V, Egiazarian K, Ercan M, Forbes A, Gopakumar G, Gao Y, Gigan S, Gocłowski P, Gopinath S, Greenbaum A, Horisaki R, Ierodiaconou D, Juodkazis S, Karmakar T, Katkovnik V, Khonina SN, Kner P, Kravets V, Kumar R, Lai Y, Li C, Li J, Li S, Li Y, Liang J, Manavalan G, Mandal AC, Manisha M, Mann C, Marzejon MJ, Moodley C, Morikawa J, Muniraj I, Narbutis D, Ng SH, Nothlawala F, Oh J, Ozcan A, Park Y, Porfirev AP, Potcoava M, Prabhakar S, Pu J, Rai MR, Rogalski M, Ryu M, Choudhary S, Salla GR, Schelkens P, Şener SF, Shevkunov I, Shimobaba T, Singh RK, Singh RP, Stern A, Sun J, Zhou S, Zuo C, Zurawski Z, Tahara T, Tiwari V, Trusiak M, Vinu RV, Volotovskiy SG, Yılmaz H, De Aguiar HB, Ahluwalia BS, Ahmad A. Roadmap on computational methods in optical imaging and holography [invited]. Appl Phys B 2024; 130:166. [PMID: 39220178 PMCID: PMC11362238 DOI: 10.1007/s00340-024-08280-3] [Received: 01/30/2024] [Accepted: 07/10/2024] [Indexed: 09/04/2024]
Abstract
Computational methods have become cornerstones of optical imaging and holography in recent years. The dependence of optical imaging and holography on computational methods grows significantly every year, to the point that optical methods and components are being replaced, completely and efficiently, by low-cost computational methods. This roadmap reviews the current scenario in four major areas, namely incoherent digital holography, quantitative phase imaging, imaging through scattering layers, and super-resolution imaging. In addition to registering the perspectives of the modern-day architects of these research areas, the roadmap also reports some of the latest studies on the topic. Computational codes and pseudocodes are presented in a plug-and-play fashion so that readers can not only read and understand the latest algorithms but also practice them on their own data. We believe that this roadmap will be a valuable tool for analyzing current trends in computational methods and for predicting and preparing the future of computational methods in optical imaging and holography. Supplementary Information: The online version contains supplementary material available at 10.1007/s00340-024-08280-3.
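To give a flavor of the plug-and-play numerical methods such a roadmap collects, the angular spectrum method, a standard workhorse for propagating optical fields in digital holography, can be sketched in a few lines of numpy. This is an illustrative implementation with a hypothetical signature, not code from the roadmap's supplementary material:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field over a distance z.

    field: square (n, n) complex array sampled with pixel pitch dx
    (same units as wavelength and z). Evanescent components, for which
    the axial wavenumber would be imaginary, are suppressed.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)           # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A convenient self-check: propagating forward and then backward by the same distance recovers the original field (as long as no evanescent content is clipped), and the propagated field conserves energy because the transfer function has unit modulus.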
Affiliation(s)
- Joseph Rosen
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Simon Alford
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Blake Allan
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Vijayakumar Anand
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Optical Sciences Center, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Shlomi Arnon
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Francis Gracy Arockiaraj
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Jonathan Art
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Bijie Bai
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Ganesh M. Balasubramaniam
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Tobias Birnbaum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- Swave BV, Gaston Geenslaan 2, 3001 Leuven, Belgium
- Nandan S. Bisht
- Applied Optics and Spectroscopy Laboratory, Department of Physics, Soban Singh Jeena University Campus Almora, Almora, Uttarakhand 263601 India
- David Blinder
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Liangcai Cao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Qian Chen
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Ziyang Chen
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Vishesh Dubey
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Karen Egiazarian
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Mert Ercan
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Andrew Forbes
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- G. Gopakumar
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala India
- Yunhui Gao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Sylvain Gigan
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Paweł Gocłowski
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Alon Greenbaum
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695 USA
- Ryoichi Horisaki
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan
- Daniel Ierodiaconou
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Saulius Juodkazis
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Optical Sciences Center, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Tanushree Karmakar
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Vladimir Katkovnik
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Svetlana N. Khonina
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Samara National Research University, 443086 Samara, Russia
- Peter Kner
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Vladislav Kravets
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Ravi Kumar
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Yingming Lai
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X1Pd7 Canada
- Chen Li
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Jiaji Li
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shaoheng Li
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Yuzhu Li
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X1Pd7 Canada
- Gokul Manavalan
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Aditya Chandra Mandal
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Manisha Manisha
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Christopher Mann
- Department of Applied Physics and Materials Science, Northern Arizona University, Flagstaff, AZ 86011 USA
- Center for Materials Interfaces in Research and Development, Northern Arizona University, Flagstaff, AZ 86011 USA
- Marcin J. Marzejon
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Chané Moodley
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Junko Morikawa
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Inbarasan Muniraj
- LiFE Lab, Department of Electronics and Communication Engineering, Alliance School of Applied Engineering, Alliance University, Bangalore, Karnataka 562106 India
- Donatas Narbutis
- Institute of Theoretical Physics and Astronomy, Faculty of Physics, Vilnius University, Sauletekio 9, 10222 Vilnius, Lithuania
- Soon Hock Ng
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Optical Sciences Center, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Fazilah Nothlawala
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Jeonghun Oh
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Aydogan Ozcan
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Tomocube Inc., Daejeon, 34051 South Korea
- Alexey P. Porfirev
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Mariana Potcoava
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Shashi Prabhakar
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Jixiong Pu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Mani Ratnam Rai
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Mikołaj Rogalski
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Meguya Ryu
- Research Institute for Material and Chemical Measurement, National Metrology Institute of Japan (AIST), 1-1-1 Umezono, Tsukuba, 305-8563 Japan
- Sakshi Choudhary
- Department of Chemical Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Gangi Reddy Salla
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Peter Schelkens
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Sarp Feykun Şener
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Igor Shevkunov
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Tomoyoshi Shimobaba
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Rakesh K. Singh
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Ravindra P. Singh
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Adrian Stern
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Jiasong Sun
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shun Zhou
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Chao Zuo
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Zack Zurawski
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Tatsuki Tahara
- Applied Electromagnetic Research Center, Radio Research Institute, National Institute of Information and Communications Technology (NICT), 4-2-1 Nukuikitamachi, Koganei, Tokyo 184-8795 Japan
- Vipin Tiwari
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Maciej Trusiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- R. V. Vinu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Sergey G. Volotovskiy
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Hasan Yılmaz
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Hilton Barbosa De Aguiar
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Balpreet S. Ahluwalia
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
22
Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. [PMID: 38838549 DOI: 10.1016/j.ceb.2024.102378] [Received: 12/17/2023] [Revised: 05/16/2024] [Accepted: 05/16/2024] [Indexed: 06/07/2024]
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology that would enable a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of in silico labeling being used to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how these limitations can be overcome to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel.
23
Azam AB, Wee F, Väyrynen JP, Yim WWY, Xue YZ, Chua BL, Lim JCT, Somasundaram AC, Tan DSW, Takano A, Chow CY, Khor LY, Lim TKH, Yeong J, Lau MC, Cai Y. Training immunophenotyping deep learning models with the same-section ground truth cell label derivation method improves virtual staining accuracy. Front Immunol 2024; 15:1404640. [PMID: 39007128 PMCID: PMC11239356 DOI: 10.3389/fimmu.2024.1404640] [Received: 03/21/2024] [Accepted: 06/14/2024] [Indexed: 07/16/2024]
Abstract
Introduction: Deep learning (DL) models predicting biomarker expression in images of hematoxylin and eosin (H&E)-stained tissues can improve access to multi-marker immunophenotyping, which is crucial for therapeutic monitoring, biomarker discovery, and the development of personalized treatments. Conventionally, these models are trained on ground-truth cell labels derived from IHC-stained tissue sections adjacent to the H&E-stained ones, which might be less accurate than labels from the same section. Although many such DL models have been developed, the impact of the ground-truth cell label derivation method on their performance has not been studied. Methodology: In this study, we assess the impact of cell label derivation on H&E model performance, with CD3+ T-cells in lung cancer tissues as a proof of concept. We compare two Pix2Pix generative adversarial network (P2P-GAN)-based virtual staining models: one trained with cell labels obtained from the same tissue section as the H&E-stained section (the 'same-section' model) and one trained on cell labels from an adjacent tissue section (the 'serial-section' model). Results: We show that the same-section model exhibited significantly improved prediction performance compared to the serial-section model. Furthermore, the same-section model outperformed the serial-section model in stratifying patients within a public lung cancer cohort based on survival outcomes, demonstrating its potential clinical utility. Discussion: Collectively, our findings suggest that employing ground-truth cell labels obtained through the same-section approach boosts immunophenotyping DL solutions.
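For intuition on how agreement between predicted and reference cell labels might be scored in comparisons like this one, the Dice overlap between two binary masks is a common generic measure (a hedged sketch only; the study's actual performance metrics are described in the paper itself):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks (1.0 = identical, 0.0 = disjoint)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

For example, masks [1, 1, 0, 0] and [1, 0, 1, 0] share one positive pixel out of four total positives, giving a Dice score of 0.5.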
Affiliation(s)
- Abu Bakr Azam
- School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
- Felicia Wee
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Juha P. Väyrynen
- Translational Medicine Research Unit, Medical Research Center Oulu, Oulu University Hospital, and University of Oulu, Oulu, Finland
- Willa Wen-You Yim
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Yue Zhen Xue
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Bok Leong Chua
- School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
- Jeffrey Chun Tatt Lim
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Angela Takano
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Chun Yuen Chow
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Li Yan Khor
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Tony Kiat Hon Lim
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Joe Yeong
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
- Department of Anatomical Pathology, Division of Pathology, Singapore General Hospital, Singapore, Singapore
- Mai Chan Lau
- Bioinformatics Institute, Agency for Science, Technology and Research, Matrix, Singapore, Singapore
- Singapore Immunology Network, Agency for Science, Technology and Research, Immunos, Singapore, Singapore
- Yiyu Cai
- School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
24
Wang Q, Akram AR, Dorward DA, Talas S, Monks B, Thum C, Hopgood JR, Javidi M, Vallejo M. Deep learning-based virtual H&E staining from label-free autofluorescence lifetime images. npj Imaging 2024; 2:17. [PMID: 38948152 PMCID: PMC11213708 DOI: 10.1038/s44303-024-00021-7] [Received: 01/10/2024] [Accepted: 06/11/2024] [Indexed: 07/02/2024]
Abstract
Label-free autofluorescence lifetime is a unique feature of the inherent fluorescence signals emitted by natural fluorophores in biological samples. Fluorescence lifetime imaging microscopy (FLIM) can capture these signals enabling comprehensive analyses of biological samples. Despite the fundamental importance and wide application of FLIM in biomedical and clinical sciences, existing methods for analysing FLIM images often struggle to provide rapid and precise interpretations without reliable references, such as histology images, which are usually unavailable alongside FLIM images. To address this issue, we propose a deep learning (DL)-based approach for generating virtual Hematoxylin and Eosin (H&E) staining. By combining an advanced DL model with a contemporary image quality metric, we can generate clinical-grade virtual H&E-stained images from label-free FLIM images acquired on unstained tissue samples. Our experiments also show that the inclusion of lifetime information, an extra dimension beyond intensity, results in more accurate reconstructions of virtual staining when compared to using intensity-only images. This advancement allows for the instant and accurate interpretation of FLIM images at the cellular level without the complexities associated with co-registering FLIM and histology images. Consequently, we are able to identify distinct lifetime signatures of seven different cell types commonly found in the tumour microenvironment, opening up new opportunities towards biomarker-free tissue histology using FLIM across multiple cancer types.
Affiliation(s)
- Qiang Wang
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Translational Healthcare Technologies Group, Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Ahsan R. Akram
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Translational Healthcare Technologies Group, Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- David A. Dorward
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Sophie Talas
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Basil Monks
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Chee Thum
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- James R. Hopgood
- School of Engineering, The University of Edinburgh, Edinburgh, UK
- Malihe Javidi
- School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
- Department of Computer Engineering, Quchan University of Technology, Quchan, Iran
- Marta Vallejo
- School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
25
Huang Z, Cao L. Quantitative phase imaging based on holography: trends and new perspectives. Light Sci Appl 2024; 13:145. [PMID: 38937443 PMCID: PMC11211409 DOI: 10.1038/s41377-024-01453-x] [Received: 09/17/2023] [Revised: 04/07/2024] [Accepted: 04/10/2024] [Indexed: 06/29/2024]
Abstract
In 1948, Dennis Gabor proposed the concept of holography, providing a pioneering solution for the quantitative description of the optical wavefront. After 75 years of development, holographic imaging has become a powerful tool for optical wavefront measurement and quantitative phase imaging, and the technology has given fresh energy to physics, biology, and materials science. Digital holography (DH) offers the quantitative advantages of wide-field, non-contact, precise, and dynamic measurement of complex waves. By measuring light scattering with phase information, DH has unique capabilities for analyzing the propagation of optical fields. It provides quantitative visualization of the refractive index and thickness distribution of weakly absorbing samples, which plays a vital role in the pathophysiology of various diseases and in the characterization of various materials, and it offers a way to bridge the gap between the imaging and scattering disciplines. The propagation of the wavefront is described by its complex amplitude, and this complex value in the complex domain is reconstructed from the intensity values measured by the camera in the real domain. Here, we regard the process of holographic recording and reconstruction as a transformation between the complex domain and the real domain, and we discuss the mathematics and physical principles of reconstruction. We review the underlying principles, technical approaches, and breadth of applications of DH. We conclude with emerging challenges and opportunities for combining holographic imaging with other methodologies that expand its scope and utility even further; this multidisciplinary nature brings technology and application experts together in label-free cell biology, analytical chemistry, clinical sciences, wavefront sensing, and semiconductor production.
Affiliation(s)
- Zhengzhong Huang
- Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
- Liangcai Cao
- Department of Precision Instrument, Tsinghua University, Beijing, 100084, China

26
Guo X, Che W. Improvement of gram staining effect by ethanol pretreatment and quantization of staining image by unsupervised machine learning. Arch Microbiol 2024; 206:318. [PMID: 38904719 DOI: 10.1007/s00203-024-04045-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2024] [Revised: 06/08/2024] [Accepted: 06/11/2024] [Indexed: 06/22/2024]
Abstract
In this study, we propose an ethanol-pretreatment Gram staining method that significantly enhances the color contrast of the stain, thereby improving the accuracy of judgement, and we demonstrate the effectiveness of the modification by eliminating unaided-eye observational errors with unsupervised machine learning image analysis. Comparing the traditional Gram staining method with the improved method on various bacterial samples showed that the improved method offers distinct color contrast. Using multimodal assessment strategies, including unaided-eye observation, manual image segmentation, and advanced unsupervised machine learning automatic image segmentation, the practicality of ethanol pretreatment for Gram staining was comprehensively validated. In our quantitative analysis, the application of the CIEDE2000 and CMC color difference standards confirmed the significant effect of the method in enhancing the discrimination of Gram staining. This study not only improved the efficacy of Gram staining but also provided a more accurate and standardized strategy for analyzing Gram staining results, which might serve as a useful analytical tool in microbiological diagnostics.
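The CIEDE2000 and CMC standards the abstract cites quantify perceptual color difference between stained-pixel colors in CIELAB space. A minimal sketch of the idea using the older, simpler CIE76 metric (plain Euclidean distance in Lab; the Lab triples below are invented for illustration, not measured stain colors):

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB triples."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))

# Hypothetical Lab values for the two Gram staining outcomes:
crystal_violet = (30.0, 25.0, -40.0)   # Gram-positive: dark purple (illustrative)
safranin_pink = (55.0, 50.0, 20.0)     # Gram-negative: pink/red (illustrative)

contrast = delta_e76(crystal_violet, safranin_pink)   # larger value = easier discrimination
```

CIEDE2000 and CMC refine this distance with lightness, chroma, and hue weightings, which is why they track perceived stain contrast more closely than the plain Euclidean form.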
Affiliation(s)
- Xuan Guo
- Guangzhou Higher Education Mega Centre, School of Biology and Biological Engineering, South China University of Technology, Guangzhou, 510006, Guangdong, China
- Wenming Che
- Guangzhou Higher Education Mega Centre, School of Biology and Biological Engineering, South China University of Technology, Guangzhou, 510006, Guangdong, China

27
Mürer FK, Tekseth KR, Chattopadhyay B, Olstad K, Akram MN, Breiby DW. Multimodal 2D and 3D microscopic mapping of growth cartilage by computational imaging techniques - a short review including new research. Biomed Phys Eng Express 2024; 10:045041. [PMID: 38744257 DOI: 10.1088/2057-1976/ad4b1f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 05/14/2024] [Indexed: 05/16/2024]
Abstract
Being able to image the microstructure of growth cartilage is important for understanding the onset and progression of diseases such as osteochondrosis and osteoarthritis, as well as for developing new treatments and implants. Studies of cartilage using conventional optical brightfield microscopy rely heavily on histological staining, where the added chemicals provide tissue-specific colours. Other microscopy contrast mechanisms include polarization, phase and scattering contrast, enabling non-stained or 'label-free' imaging that significantly simplifies the sample preparation, thereby also reducing the risk of artefacts. Traditional high-performance microscopes tend to be both bulky and expensive. Computational imaging denotes a range of techniques where computers with dedicated algorithms are used as an integral part of the image formation process. Computational imaging offers many advantages, such as 3D measurements, aberration correction and quantitative phase contrast, often combined with comparably cheap and compact hardware. X-ray microscopy is also progressing rapidly, in certain ways trailing the development of optical microscopy. In this study, we first briefly review the structures of growth cartilage and relevant microscopy characterization techniques, with an emphasis on Fourier ptychographic microscopy (FPM) and advanced X-ray microscopies. We next demonstrate computational imaging through FPM with our own results and compare the images with hematoxylin, eosin and saffron (HES)-stained histology. Zernike phase contrast and the nonlinear optical microscopy techniques of second harmonic generation (SHG) and two-photon excitation fluorescence (TPEF) are explored. Furthermore, X-ray attenuation-, phase- and diffraction-contrast computed tomography (CT) images of the very same sample are presented for comparison. Future perspectives on the links to artificial intelligence, dynamic studies and in vivo possibilities conclude the article.
Affiliation(s)
- Fredrik K Mürer
- Department of Physics, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, 7491 Trondheim, Norway
- SINTEF Helgeland AS, Halvor Heyerdahls vei 33, 8626 Mo i Rana, Norway
- Kim R Tekseth
- Department of Physics, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, 7491 Trondheim, Norway
- Basab Chattopadhyay
- Department of Physics, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, 7491 Trondheim, Norway
- Kristin Olstad
- Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences (NMBU), Equine section, PO Box 5003, 1432 Ås, Norway
- Muhammad Nadeem Akram
- Department of Microsystems, University of South-Eastern Norway (USN), 3184 Borre, Norway
- Dag W Breiby
- Department of Physics, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, 7491 Trondheim, Norway
- Department of Microsystems, University of South-Eastern Norway (USN), 3184 Borre, Norway

28
Chen Z, Wong IHM, Dai W, Lo CTK, Wong TTW. Lung Cancer Diagnosis on Virtual Histologically Stained Tissue Using Weakly Supervised Learning. Mod Pathol 2024; 37:100487. [PMID: 38588884 DOI: 10.1016/j.modpat.2024.100487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2023] [Revised: 03/05/2024] [Accepted: 03/30/2024] [Indexed: 04/10/2024]
Abstract
Lung adenocarcinoma (LUAD) is the most common primary lung cancer and accounts for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is based on the pathologists' interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or a digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire the H&E-stained images and the subsequent cancer diagnosis procedures are labor-intensive and time-consuming, with tedious sample preparation steps and repetitive manual interpretation, respectively. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. The autofluorescence images of label-free tissue with histopathological information can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained the attention-based multiple-instance learning model with different settings on the open-source LUAD H&E-stained whole-slide images (WSIs) dataset from The Cancer Genome Atlas (TCGA). The model was validated on 150 H&E-stained WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E-stained WSIs, comparable to the results on 58 standard H&E-stained WSIs with an average AUC of 0.977. The attention heatmaps of virtual H&E-stained WSIs and ground-truth H&E-stained WSIs can indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E-stained WSIs of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations. The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
Affiliation(s)
- Zhenghui Chen
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Ivy H M Wong
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Weixing Dai
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Claudia T K Lo
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Terence T W Wong
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China

29
Lee S, Lee E, Yang H, Park K, Min E, Jung W. Digital histological staining of tissue slide images from optical coherence microscopy. Biomed Opt Express 2024; 15:3807-3816. [PMID: 38867770 PMCID: PMC11166446 DOI: 10.1364/boe.520683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/06/2024] [Revised: 04/29/2024] [Accepted: 04/30/2024] [Indexed: 06/14/2024]
Abstract
The convergence of staining-free optical imaging and digital staining technologies has become a central focus in digital pathology, presenting significant advantages in streamlining specimen preparation and expediting the acquisition of histopathological information. Despite the inherent merits of optical coherence microscopy (OCM) as a staining-free technique, its widespread application to observing histopathological slides has been constrained. This study introduces a novel approach that combines wide-field OCM with digital staining technology for the imaging of histopathological slides. By optimizing the histology slide production process to satisfy the ground-truth requirements for digital staining as well as to provide pronounced contrast for OCM imaging, successful imaging of various mouse tissues was achieved. Comparative analyses with conventional staining-based brightfield images were performed to evaluate the proposed methodology's efficacy. Moreover, the study investigates the generalization of digital staining color appearance to ensure consistent histopathology, considering tissue-specific and thickness-dependent variations.
Affiliation(s)
- Sangjin Lee
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Eunji Lee
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Hyunmo Yang
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Kibeom Park
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Eunjung Min
- Korea Photonics Technology Institute, Gwangju 61007, Republic of Korea
- Woonggyu Jung
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea

30
Liu Y, Uttam S. Perspective on quantitative phase imaging to improve precision cancer medicine. J Biomed Opt 2024; 29:S22705. [PMID: 38584967 PMCID: PMC10996848 DOI: 10.1117/1.jbo.29.s2.s22705] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Revised: 03/03/2024] [Accepted: 03/15/2024] [Indexed: 04/09/2024]
Abstract
Significance Quantitative phase imaging (QPI) offers a label-free approach to non-invasively characterize cellular processes by exploiting their refractive-index-based intrinsic contrast. QPI captures this contrast by translating refractive-index-associated phase shifts into intensity-based quantifiable data with nanoscale sensitivity. It holds significant potential for advancing precision cancer medicine by providing quantitative characterization of the biophysical properties of cells and tissue in their natural states. Aim This perspective aims to discuss the potential of QPI to increase our understanding of cancer development and its response to therapeutics. It also explores new developments in QPI methods towards advancing personalized cancer therapy and early detection. Approach We begin by detailing the technical advancements of QPI, examining its implementations across transmission and reflection geometries and phase retrieval methods, both interferometric and non-interferometric. The focus then shifts to QPI's applications in cancer research, including dynamic cell mass imaging for drug response assessment, cancer risk stratification, and in-vivo tissue imaging. Results QPI has emerged as a crucial tool in precision cancer medicine, offering insights into tumor biology and treatment efficacy. Its sensitivity to nanoscale changes holds promise for enhancing cancer diagnostics, risk assessment, and prognostication. The future of QPI is envisioned in its integration with artificial intelligence, morpho-dynamics, and spatial biology, broadening its impact in cancer research. Conclusions QPI presents significant potential in advancing precision cancer medicine and redefining our approach to cancer diagnosis, monitoring, and treatment. Future directions include harnessing high-throughput dynamic imaging, 3D QPI for realistic tumor models, and combining artificial intelligence with multi-omics data to extend QPI's capabilities. As a result, QPI stands at the forefront of cancer research and clinical application in cancer care.
Affiliation(s)
- Yang Liu
- University of Illinois Urbana-Champaign, Beckman Institute for Advanced Science and Technology, Cancer Center at Illinois, Department of Bioengineering, Department of Electrical and Computer Engineering, Urbana, Illinois, United States
- University of Pittsburgh, Departments of Medicine and Bioengineering, Pittsburgh, Pennsylvania, United States
- Shikhar Uttam
- University of Pittsburgh, Department of Computational and Systems Biology, Pittsburgh, Pennsylvania, United States

31
Tweel JED, Ecclestone BR, Boktor M, Dinakaran D, Mackey JR, Reza PH. Automated Whole Slide Imaging for Label-Free Histology Using Photon Absorption Remote Sensing Microscopy. IEEE Trans Biomed Eng 2024; 71:1901-1912. [PMID: 38231822 DOI: 10.1109/tbme.2024.3355296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2024]
Abstract
OBJECTIVE Pathologists rely on histochemical stains to impart contrast in thin translucent tissue samples, revealing tissue features necessary for identifying pathological conditions. However, the chemical labeling process is destructive and often irreversible or challenging to undo, imposing practical limits on the number of stains that can be applied to the same tissue section. Here we present an automated label-free whole slide scanner using a photon absorption remote sensing (PARS) microscope designed for imaging thin, transmissible samples. METHODS Peak-SNR and in-focus acquisitions are achieved across entire tissue sections by using the scattering signal from the PARS detection beam to measure the optimal focal plane. Whole slide images (WSIs) are seamlessly stitched together using a custom contrast leveling algorithm. Identical tissue sections are subsequently H&E stained and brightfield imaged. The one-to-one WSIs from both modalities are visually and quantitatively compared. RESULTS PARS WSIs are presented at standard 40x magnification in malignant human breast and skin samples. We show correspondence of subcellular diagnostic details in both PARS and H&E WSIs and demonstrate virtual H&E staining of an entire PARS WSI. The one-to-one WSIs from both modalities show quantitative similarity in nuclear features and structural information. CONCLUSION PARS WSIs are compatible with existing digital pathology tools, and samples remain suitable for histochemical, immunohistochemical, and other staining techniques. SIGNIFICANCE This work is a critical advance for integrating label-free optical methods into standard histopathology workflows.
32
Goswami N, Anastasio MA, Popescu G. Quantitative phase imaging techniques for measuring scattering properties of cells and tissues: a review-part I. J Biomed Opt 2024; 29:S22713. [PMID: 39026612 PMCID: PMC11257415 DOI: 10.1117/1.jbo.29.s2.s22713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/29/2024] [Revised: 04/30/2024] [Accepted: 05/20/2024] [Indexed: 07/20/2024]
Abstract
Significance Quantitative phase imaging (QPI) techniques offer intrinsic information about the sample of interest in a label-free, noninvasive manner and have an enormous potential for wide biomedical applications with negligible perturbations to the natural state of the sample in vitro. Aim We aim to present an in-depth review of the scattering formulation of light-matter interactions as applied to biological samples such as cells and tissues, discuss the relevant quantitative phase measurement techniques, and present a summary of various reported applications. Approach We start with scattering theory and scattering properties of biological samples followed by an exploration of various microscopy configurations for 2D QPI for measurement of structure and dynamics. Results We reviewed 157 publications and presented a range of QPI techniques and discussed suitable applications for each. We also presented the theoretical frameworks for phase reconstruction associated with the discussed techniques and highlighted their domains of validity. Conclusions We provide detailed theoretical as well as system-level information for a wide range of QPI techniques. Our study can serve as a guideline for new researchers looking for an exhaustive literature review of QPI methods and relevant applications.
Affiliation(s)
- Neha Goswami
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
- Mark A. Anastasio
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
- University of Illinois Urbana-Champaign, Department of Electrical and Computer Engineering, Urbana, Illinois, United States
- Gabriel Popescu
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
- University of Illinois Urbana-Champaign, Department of Electrical and Computer Engineering, Urbana, Illinois, United States

33
Zhang L, Li S, Wang H, Jia X, Guo B, Yang Z, Fan C, Zhao H, Zhao Z, Zhang Z, Yuan L. The virtual staining method by quantitative phase imaging for label free lymphocytes based on self-supervised iteration cycle-consistent adversarial networks. Rev Sci Instrum 2024; 95:045103. [PMID: 38557883 DOI: 10.1063/5.0159400] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Accepted: 03/12/2024] [Indexed: 04/04/2024]
Abstract
Quantitative phase imaging (QPI) provides 3D structural and morphological information for label-free living cells. Unfortunately, this quantitative phase information cannot meet doctors' diagnostic requirements of the clinical "gold standard," which displays stained cells' pathological states based on 2D color features. To make QPI results satisfy the clinical "gold standard," a virtual staining method by QPI for label-free lymphocytes based on self-supervised iteration Cycle-Consistent Adversarial Networks (CycleGANs) is proposed herein. The 3D phase information of QPI is thereby trained and transferred to a kind of 2D "virtual staining" image that agrees well with "gold standard" results. To address the problem that unstained QPI and stained "gold standard" results cannot be obtained for the same label-free living cell, a self-supervised iteration for the CycleGAN deep learning algorithm is designed to obtain a trained stained result as the ground truth for error evaluation. The structural similarity index of our virtual staining experimental results for 8756 lymphocytes is 0.86. Lymphocytes' area errors after conversion from 3D phase information to 2D virtual-stained results are less than 3.59%. The mean error of the nuclear-to-cytoplasmic ratio is 2.69%, and the color deviation from the "gold standard" is less than 6.67%.
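The structural similarity index reported in this abstract compares a virtual-stained image against its ground-truth counterpart. A minimal sketch of the SSIM formula in its simplified single-window form (standard constants; production implementations average over local Gaussian windows, and the arrays here are random stand-ins, not the paper's data):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images (simplified; real SSIM averages local windows)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                                    # stand-in "ground truth"
noisy = np.clip(img + 0.1 * rng.standard_normal((64, 64)), 0, 1)  # degraded copy
```

Identical images score exactly 1; the score drops as the virtual stain's luminance, contrast, or structure diverges from the ground truth.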
Affiliation(s)
- Lu Zhang
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Shengjie Li
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Huijun Wang
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Xinhu Jia
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Bohuan Guo
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Zewen Yang
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Chen Fan
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Hong Zhao
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Zixin Zhao
- School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Zhenxi Zhang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Li Yuan
- Clinical Lab, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China

34
Cheng S, Chang S, Li Y, Novoseltseva A, Lin S, Wu Y, Zhu J, McKee AC, Rosene DL, Wang H, Bigio IJ, Boas DA, Tian L. Enhanced Multiscale Human Brain Imaging by Semi-supervised Digital Staining and Serial Sectioning Optical Coherence Tomography. Res Sq 2024:rs.3.rs-4014687. [PMID: 38562721 PMCID: PMC10984089 DOI: 10.21203/rs.3.rs-4014687/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
A major challenge in neuroscience is to visualize the structure of the human brain at different scales. Traditional histology reveals micro- and meso-scale brain features, but suffers from staining variability, tissue damage, and distortion, which impede accurate 3D reconstructions. Here, we present a new 3D imaging framework that combines serial sectioning optical coherence tomography (S-OCT) with a deep-learning digital staining (DS) model. We develop a novel semi-supervised learning technique to facilitate DS model training on weakly paired images. The DS model performs translation from S-OCT to Gallyas silver staining. We demonstrate DS on various human cerebral cortex samples with consistent staining quality. Additionally, we show that DS enhances contrast across cortical layer boundaries. Furthermore, we showcase geometry-preserving 3D DS on cubic-centimeter tissue blocks and visualization of meso-scale vessel networks in the white matter. We believe that our technique offers the potential for high-throughput, multiscale imaging of brain tissues and may facilitate studies of brain structures.
Affiliation(s)
- Shiyi Cheng
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Shuaibin Chang
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Yunzhe Li
- Department of Electrical Engineering and Computer Sciences, University of California, Cory Hall, Berkeley, California, 94720, USA
- Anna Novoseltseva
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA
- Sunni Lin
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA
- Yicun Wu
- Department of Computer Science, Boston University, 665 Commonwealth Ave, Boston, MA, 02215, USA
- Jiahui Zhu
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Ann C. McKee
- Boston University Alzheimer’s Disease Research Center and CTE Center, Boston University, Chobanian and Avedisian School of Medicine, Boston, MA, 02118, USA
- Department of Neurology, Boston University, Chobanian and Avedisian School of Medicine, Boston, MA, 02118, USA
- VA Boston Healthcare System, U.S. Department of Veteran Affairs, Jamaica Plain, MA, 02130, USA
- Department of Psychiatry and Ophthalmology, Boston University School of Medicine, Boston, MA, 02118, USA
- Department of Pathology and Laboratory Medicine, Boston University School of Medicine, Boston, MA, 02118, USA
- Douglas L. Rosene
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
- Hui Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, 02129, USA
- Irving J. Bigio
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA
- Neurophotonics Center, Boston University, Boston, MA, 02215, USA
- David A. Boas
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA
- Neurophotonics Center, Boston University, Boston, MA, 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, 8 St Mary’s St, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA
- Neurophotonics Center, Boston University, Boston, MA, 02215, USA

35
Nolte DD. Coherent light scattering from cellular dynamics in living tissues. Rep Prog Phys 2024; 87:036601. [PMID: 38433567 DOI: 10.1088/1361-6633/ad2229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 01/24/2024] [Indexed: 03/05/2024]
Abstract
This review examines the biological physics of intracellular transport probed by the coherent optics of dynamic light scattering from optically thick living tissues. Cells and their constituents are in constant motion, composed of a broad range of speeds spanning many orders of magnitude that reflect the wide array of functions and mechanisms that maintain cellular health. From the organelle scale of tens of nanometers and upward in size, the motion inside living tissue is actively driven rather than thermal, propelled by the hydrolysis of bioenergetic molecules and the forces of molecular motors. Active transport can mimic the random walks of thermal Brownian motion, but mean-squared displacements are far from thermal equilibrium and can display anomalous diffusion through Lévy or fractional Brownian walks. Despite the average isotropic three-dimensional environment of cells and tissues, active cellular or intracellular transport of single light-scattering objects is often pseudo-one-dimensional, for instance as organelle displacement persists along cytoskeletal tracks or as membranes displace along the normal to cell surfaces, albeit isotropically oriented in three dimensions. Coherent light scattering is a natural tool to characterize such tissue dynamics because persistent directed transport induces Doppler shifts in the scattered light. The many frequency-shifted partial waves from the complex and dynamic media interfere to produce dynamic speckle that reveals tissue-scale processes through speckle contrast imaging and fluctuation spectroscopy. Low-coherence interferometry, dynamic optical coherence tomography, diffusing-wave spectroscopy, diffuse-correlation spectroscopy, differential dynamic microscopy and digital holography offer coherent detection methods that shed light on intracellular processes. 
In health-care applications, altered states of cellular health and disease display altered cellular motions that imprint on the statistical fluctuations of the scattered light. For instance, the efficacy of medical therapeutics can be monitored by measuring the changes they induce in the Doppler spectra of living ex vivo cancer biopsies.
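The Doppler shift underlying these spectra follows directly from the backscattering geometry: a scatterer moving at speed v along the optical axis shifts backscattered light by f = 2nv/λ. A quick numerical sketch (the speed, wavelength, and refractive index below are typical illustrative values, not figures from the review):

```python
def doppler_shift_hz(speed_m_per_s, wavelength_m, n_medium=1.38, cos_theta=1.0):
    """Doppler frequency shift for backscattered light: f = 2 n v cos(theta) / lambda."""
    return 2.0 * n_medium * speed_m_per_s * cos_theta / wavelength_m

# An organelle driven at 1 um/s, probed at 840 nm in tissue (n ~ 1.38),
# shifts the backscattered light by only a few hertz:
shift = doppler_shift_hz(1e-6, 840e-9)
```

Shifts of a few hertz are why interferometric and holographic detection matter here: such beat frequencies are far too small to resolve spectroscopically, but they appear directly in intensity fluctuation spectra of dynamic speckle.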
Affiliation(s)
- David D Nolte
- Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, United States of America

36
Haputhanthri U, Herath K, Hettiarachchi R, Kariyawasam H, Ahmad A, Ahluwalia BS, Acharya G, Edussooriya CUS, Wadduwage DN. Towards ultrafast quantitative phase imaging via differentiable microscopy [Invited]. Biomed Opt Express 2024; 15:1798-1812. [PMID: 38495703 PMCID: PMC10942716 DOI: 10.1364/boe.504954] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 12/15/2023] [Accepted: 02/09/2024] [Indexed: 03/19/2024]
Abstract
With applications ranging from metabolomics to histopathology, quantitative phase microscopy (QPM) is a powerful label-free imaging modality. Despite significant advances in fast multiplexed imaging sensors and deep-learning-based inverse solvers, the throughput of QPM is currently limited by the pixel rate of the image sensors. To improve throughput further, here we propose to acquire images in a compressed form so that more information can be transferred beyond the existing hardware bottleneck of the image sensor. To this end, we present a numerical simulation of a learnable optical compression-decompression framework that learns content-specific features. The proposed differentiable quantitative phase microscopy (∂-QPM) first uses learnable optical processors as image compressors. The intensity representations produced by these optical processors are then captured by the imaging sensor. Finally, a reconstruction network running on a computer decompresses the QPM images post-acquisition. In numerical experiments, the proposed system achieves ×64 compression while maintaining an SSIM of ∼0.90 and a PSNR of ∼30 dB on cell images. These results open up a new pathway to QPM systems that may provide unprecedented throughput improvements.
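The PSNR figure quoted in this abstract follows the standard definition, 10·log10(range²/MSE). A minimal sketch (the arrays are illustrative stand-ins, not the authors' reconstructions):

```python
import numpy as np

def psnr_db(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((np.asarray(reference, dtype=float) - np.asarray(estimate, dtype=float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
phase_map = rng.random((32, 32))        # stand-in for a reconstructed phase image
# A uniform error of 0.1 on a unit range gives MSE = 0.01, i.e. 20 dB:
value = psnr_db(phase_map, phase_map + 0.1)
```

A ∼30 dB reconstruction thus corresponds to an RMS error of roughly 3% of the signal range, which gives a feel for what the reported ×64 compression preserves.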
Affiliation(s)
- Udith Haputhanthri
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA 02138, USA
- Department of Electronic and Telecommunication Engineering, University of Moratuwa, Sri Lanka
- Kithmini Herath
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA 02138, USA
- Department of Electronic and Telecommunication Engineering, University of Moratuwa, Sri Lanka
- Ramith Hettiarachchi
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA 02138, USA
- Department of Electronic and Telecommunication Engineering, University of Moratuwa, Sri Lanka
- Hasindu Kariyawasam
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA 02138, USA
- Department of Electronic and Telecommunication Engineering, University of Moratuwa, Sri Lanka
- Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, 9037, Norway
- Balpreet S. Ahluwalia
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, 9037, Norway
- Ganesh Acharya
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden
- Dushan N. Wadduwage
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA 02138, USA
37
Shen B, Li Z, Pan Y, Guo Y, Yin Z, Hu R, Qu J, Liu L. Noninvasive Nonlinear Optical Computational Histology. Adv Sci (Weinh) 2024; 11:e2308630. [PMID: 38095543 PMCID: PMC10916666 DOI: 10.1002/advs.202308630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Revised: 11/28/2023] [Indexed: 03/07/2024]
Abstract
Cancer remains a global health challenge, demanding early detection and accurate diagnosis for improved patient outcomes. An intelligent paradigm is introduced that elevates label-free nonlinear optical imaging with contrastive patch-wise learning, yielding stain-free nonlinear optical computational histology (NOCH). NOCH enables swift, precise diagnostic analysis of fresh tissues, reducing patient anxiety and healthcare costs. Nonlinear modalities are evaluated, including stimulated Raman scattering and multiphoton imaging, for their ability to enhance tumor microenvironment sensitivity, pathological analysis, and cancer examination. Quantitative analysis confirmed that NOCH images accurately reproduce nuclear morphometric features across different cancer stages. Key diagnostic features, such as nuclear morphology, size, and nuclear-cytoplasmic contrast, are well preserved. NOCH models also demonstrate promising generalization when applied to other pathological tissues. The study unites label-free nonlinear optical imaging with histopathology using contrastive learning to establish stain-free computational histology. NOCH provides a rapid, non-invasive, and precise approach to surgical pathology, holding immense potential for revolutionizing cancer diagnosis and surgical interventions.
Affiliation(s)
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Zhenglin Li
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Ying Pan
- China–Japan Union Hospital of Jilin University, Changchun 130033, China
- Yuan Guo
- Shaanxi Provincial Cancer Hospital, Xi'an 710065, China
- Zongyi Yin
- Shenzhen University General Hospital, Shenzhen 518055, China
- Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
38
Li Y, Pillar N, Li J, Liu T, Wu D, Sun S, Ma G, de Haan K, Huang L, Zhang Y, Hamidi S, Urisman A, Keidar Haran T, Wallace WD, Zuckerman JE, Ozcan A. Virtual histological staining of unlabeled autopsy tissue. Nat Commun 2024; 15:1684. [PMID: 38396004 PMCID: PMC10891155 DOI: 10.1038/s41467-024-46077-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 02/09/2024] [Indexed: 02/25/2024] Open
Abstract
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and such chemical staining procedures covering large tissue areas demand substantial labor, cost and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield equivalent images, matching hematoxylin and eosin (H&E) stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic and extracellular features in new autopsy tissue samples that experienced severe autolysis, such as COVID-19 samples never seen before, where traditional histochemical staining fails to provide consistent quality. This virtual autopsy staining technique provides a rapid and resource-efficient solution to generate artifact-free H&E stains despite severe autolysis and cell death, also reducing the labor, cost and infrastructure requirements associated with standard histochemical staining.
Affiliation(s)
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Di Wu
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
- Songyu Sun
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
- Guangdong Ma
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- School of Physics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Sepehr Hamidi
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
- Anatoly Urisman
- Department of Pathology, University of California, San Francisco, CA, 94143, USA
- Tal Keidar Haran
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, 91120, Israel
- William Dean Wallace
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Jonathan E Zuckerman
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA.
- Department of Surgery, University of California, Los Angeles, CA, 90095, USA.
39
Kim S, Lee J, Ko J, Park S, Lee SR, Kim Y, Lee T, Choi S, Kim J, Kim W, Chung Y, Kwon OH, Jeon NL. Angio-Net: deep learning-based label-free detection and morphometric analysis of in vitro angiogenesis. Lab Chip 2024; 24:751-763. [PMID: 38193617 DOI: 10.1039/d3lc00935a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/10/2024]
Abstract
Despite significant advancements in three-dimensional (3D) cell culture technology and the acquisition of extensive data, there is an ongoing need for more effective and dependable data analysis methods. These concerns arise from the continued reliance on manual quantification techniques. In this study, we introduce a microphysiological system (MPS) that seamlessly integrates 3D cell culture to acquire large-scale imaging data and employs deep learning-based virtual staining for quantitative angiogenesis analysis. We utilize a standardized microfluidic device to obtain comprehensive angiogenesis data. Introducing Angio-Net, a novel solution that replaces conventional immunocytochemistry, we convert brightfield images into label-free virtual fluorescence images through the fusion of SegNet and cGAN. Moreover, we develop a tool capable of extracting morphological blood vessel features and automating their measurement, facilitating precise quantitative analysis. This integrated system proves to be invaluable for evaluating drug efficacy, including the assessment of anticancer drugs on targets such as the tumor microenvironment. Additionally, its unique ability to enable live cell imaging without the need for cell fixation promises to broaden the horizons of pharmaceutical and biological research. Our study pioneers a powerful approach to high-throughput angiogenesis analysis, marking a significant advancement in MPS.
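The morphometric feature extraction described above can be illustrated with one of the simplest vessel readouts, the vessel area fraction of a binary segmentation mask. This is a generic sketch on a hypothetical mask, not the actual Angio-Net measurement tool:

```python
def vessel_area_fraction(mask):
    """Fraction of pixels labeled as vessel (1) in a binary segmentation
    mask; a common scalar readout in angiogenesis assays."""
    total = sum(len(row) for row in mask)
    vessel = sum(sum(row) for row in mask)
    return vessel / total

# Hypothetical 4x4 mask with a vessel running down one column
mask = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 0],
]
print(vessel_area_fraction(mask))  # → 0.3125
```

In practice such masks would come from the virtual-staining network's output, and richer features (branch points, vessel length, diameter) would be derived from a skeletonized version of the same mask.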
Affiliation(s)
- Suryong Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Jungseub Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Jihoon Ko
- Department of BioNano Technology, Gachon University, Gyeonggi, 13120, Republic of Korea
- Seonghyuk Park
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Seung-Ryeol Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Youngtaek Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Taeseung Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Sunbeen Choi
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Jiho Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Wonbae Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Yoojin Chung
- Division of Computer Engineering, Hankuk University of Foreign Studies, Yongin, 17035, Republic of Korea
- Oh-Heum Kwon
- Department of IT convergence and Applications Engineering, Pukyong National University, Busan, 48513, Republic of Korea
- Noo Li Jeon
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Institute of Advanced Machines and Design, Seoul National University, Seoul, 08826, Republic of Korea
40
Pirone D, Bianco V, Miccio L, Memmolo P, Psaltis D, Ferraro P. Beyond fluorescence: advances in computational label-free full specificity in 3D quantitative phase microscopy. Curr Opin Biotechnol 2024; 85:103054. [PMID: 38142647 DOI: 10.1016/j.copbio.2023.103054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 11/23/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Despite remarkable progress in quantitative phase imaging (QPI) microscopes, their wide acceptance is limited by a lack of specificity compared with well-established fluorescence microscopy (FM). The absence of fluorescent tags prevents the identification of subcellular structures in single cells, making the interpretation of label-free 2D and 3D phase-contrast data challenging. Great effort has been made by many groups worldwide to address and overcome this limitation. Different computational methods have been proposed, and many more are currently under investigation, to achieve label-free microscopic imaging at the single-cell level that can recognize and quantify different subcellular compartments. This route promises to bridge the gap between QPI and FM for real-world applications.
Affiliation(s)
- Daniele Pirone
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Vittorio Bianco
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Lisa Miccio
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pasquale Memmolo
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Demetri Psaltis
- EPFL, Ecole Polytechnique Fédérale de Lausanne, Optics Laboratory, CH-1015 Lausanne, Switzerland
- Pietro Ferraro
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy.
41
Asaf MZ, Rao B, Akram MU, Khawaja SG, Khan S, Truong TM, Sekhon P, Khan IJ, Abbasi MS. Dual contrastive learning based image-to-image translation of unstained skin tissue into virtually stained H&E images. Sci Rep 2024; 14:2335. [PMID: 38282056 PMCID: PMC11269663 DOI: 10.1038/s41598-024-52833-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2023] [Accepted: 01/24/2024] [Indexed: 01/30/2024] Open
Abstract
Staining is a crucial step in histopathology that prepares tissue sections for microscopic examination. Hematoxylin and eosin (H&E) staining, also known as basic or routine staining, is used in 80% of histopathology slides worldwide. To enhance the histopathology workflow, recent research has focused on integrating generative artificial intelligence and deep learning models. These models have the potential to improve staining accuracy, reduce staining time, and minimize the use of hazardous chemicals, making histopathology a safer and more efficient field. In this study, we introduce a novel three-stage, dual contrastive learning-based, image-to-image generative (DCLGAN) model for virtually applying an "H&E stain" to unstained skin tissue images. The proposed model utilizes a unique learning setting comprising two pairs of generators and discriminators. By employing contrastive learning, our model maximizes the mutual information between traditional H&E-stained and virtually stained H&E patches. Our dataset consists of pairs of unstained and H&E-stained images, scanned with a brightfield microscope at 20× magnification, providing a comprehensive set of training and testing images for evaluating the efficacy of our proposed model. Two metrics, Fréchet Inception Distance (FID) and Kernel Inception Distance (KID), were used to quantitatively evaluate the virtually stained slides. Our analysis revealed that the average FID score between virtually stained and H&E-stained images (80.47) was considerably lower than that between unstained and virtually stained slides (342.01) or between unstained and H&E-stained slides (320.4), indicating similarity between the virtual and H&E stains. Similarly, the mean KID score between H&E-stained and virtually stained images (0.022) was significantly lower than the mean KID score between unstained and H&E-stained (0.28) or unstained and virtually stained (0.31) images.
In addition, a group of experienced dermatopathologists evaluated traditional and virtually stained images and demonstrated an average agreement of 78.8% and 90.2% for paired and single virtual stained image evaluations, respectively. Our study demonstrates that the proposed three-stage dual contrastive learning-based image-to-image generative model is effective in generating virtual stained images, as indicated by quantified parameters and grader evaluations. In addition, our findings suggest that GAN models have the potential to replace traditional H&E staining, which can reduce both time and environmental impact. This study highlights the promise of virtual staining as a viable alternative to traditional staining techniques in histopathology.
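The FID scores quoted in this abstract compare Gaussian fits of deep-feature distributions between image sets. As a hedged illustration of the underlying distance (real FID uses multivariate Inception-v3 embeddings and full covariance matrices, and the feature values below are made up), here is the scalar case of the Fréchet formula:

```python
import statistics

def frechet_gaussian_1d(x, y):
    """Frechet distance between 1-D Gaussians fitted to samples x and y:
    d^2 = (mu_x - mu_y)^2 + (sd_x - sd_y)^2, the scalar case of the FID
    formula ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 S2)^(1/2))."""
    mu_x, mu_y = statistics.fmean(x), statistics.fmean(y)
    sd_x, sd_y = statistics.pstdev(x), statistics.pstdev(y)
    return (mu_x - mu_y) ** 2 + (sd_x - sd_y) ** 2

# Hypothetical scalar features: virtual-stain features should sit
# closer to the H&E distribution than unstained features do
h_and_e   = [0.90, 1.10, 1.00, 0.80, 1.20]
virtual   = [0.85, 1.15, 1.05, 0.80, 1.15]
unstained = [2.00, 2.20, 1.90, 2.10, 2.30]
assert frechet_gaussian_1d(h_and_e, virtual) < frechet_gaussian_1d(h_and_e, unstained)
```

The same ordering is what the reported numbers show: virtual-vs-H&E (80.47) is far below unstained-vs-H&E (320.4).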
Affiliation(s)
- Muhammad Zeeshan Asaf
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Babar Rao
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Department of Dermatology, Weill Cornell Medicine, New York, NY, 10021, USA
- Muhammad Usman Akram
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan.
- Sajid Gul Khawaja
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Samavia Khan
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Thu Minh Truong
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Department of Pathology, Immunology and Laboratory Medicine, New Jersey Medical School, 185 South Orange Ave, Newark, NJ, 07103, USA
- Palveen Sekhon
- EIV Diagnostics, Fresno, CA, USA
- University of California, San Francisco School of Medicine, San Francisco, USA
- Irfan J Khan
- Department of Pathology, St. Luke's University Health Network, Bethlehem, PA, 18015, USA
42
He H, Cao M, Gao Y, Zheng P, Yan S, Zhong JH, Wang L, Jin D, Ren B. Noise learning of instruments for high-contrast, high-resolution and fast hyperspectral microscopy and nanoscopy. Nat Commun 2024; 15:754. [PMID: 38272927 PMCID: PMC10810791 DOI: 10.1038/s41467-024-44864-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 01/05/2024] [Indexed: 01/27/2024] Open
Abstract
The low scattering efficiency of Raman scattering makes it challenging to simultaneously achieve a good signal-to-noise ratio (SNR), high imaging speed, and adequate spatial and spectral resolutions. Here, we report a noise learning (NL) approach that estimates the intrinsic noise distribution of each instrument by statistically learning the noise in the pixel-spatial frequency domain. The estimated noise is then removed from the noisy spectra. This enhances the SNR by ca. 10-fold and suppresses the mean-square error by almost 150-fold. NL allows us to improve the positioning accuracy and spatial resolution and largely eliminates the impact of thermal drift on tip-enhanced Raman spectroscopic nanoimaging. NL is also applicable to enhancing SNR in fluorescence and photoluminescence imaging. Our method manages the ground truth spectra and the instrumental noise simultaneously within the training dataset, which bypasses the tedious labelling of huge datasets required in conventional deep learning, potentially shifting deep learning from sample-dependent to instrument-dependent.
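The idea of estimating an instrument's own noise and subtracting it from measured spectra can be sketched with a much simpler stand-in than the paper's statistical learning in the pixel-spatial frequency domain: averaging blank acquisitions to estimate a fixed instrument background. All signal shapes and noise levels below are invented for illustration:

```python
import random

random.seed(42)

def rms(v):
    """Root-mean-square of a vector."""
    return (sum(x * x for x in v) / len(v)) ** 0.5

n_pix = 200
signal = [1.0 if 80 <= i < 120 else 0.0 for i in range(n_pix)]  # one spectral band
background = [0.5 + 0.3 * i / n_pix for i in range(n_pix)]      # fixed instrument offset

def acquire(src):
    """One noisy acquisition: source + instrument background + random noise."""
    return [s + b + random.gauss(0, 0.05) for s, b in zip(src, background)]

# "Learn" the instrument background from blank acquisitions (no sample present)
blanks = [acquire([0.0] * n_pix) for _ in range(100)]
estimate = [sum(f[i] for f in blanks) / len(blanks) for i in range(n_pix)]

raw = acquire(signal)
corrected = [m - e for m, e in zip(raw, estimate)]

err_raw = rms([m - s for m, s in zip(raw, signal)])
err_cor = rms([c - s for c, s in zip(corrected, signal)])
assert err_cor < err_raw  # subtracting the estimated background shrinks the error
```

The paper's NL approach is considerably more general (it models the noise distribution itself, not just a mean background), but the payoff is the same: once the instrument's contribution is characterized, it can be removed from every subsequent spectrum.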
Affiliation(s)
- Hao He
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Maofeng Cao
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
- Yun Gao
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
- Peng Zheng
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
- Sen Yan
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
- Jin-Hui Zhong
- Department of Materials Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China.
- Lei Wang
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China.
- Dayong Jin
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Institute for Biomedical Materials & Devices (IBMD), University of Technology Sydney, Sydney, NSW, 2007, Australia
- Bin Ren
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China.
- Tan Kah Kee Innovation Laboratory, Xiamen, 361104, China.
43
Boktor M, Tweel JED, Ecclestone BR, Ye JA, Fieguth P, Haji Reza P. Multi-channel feature extraction for virtual histological staining of photon absorption remote sensing images. Sci Rep 2024; 14:2009. [PMID: 38263394 PMCID: PMC10805725 DOI: 10.1038/s41598-024-52588-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 01/20/2024] [Indexed: 01/25/2024] Open
Abstract
Accurate and fast histological staining is crucial in histopathology, impacting diagnostic precision and reliability. Traditional staining methods are time-consuming and subjective, causing delays in diagnosis. Digital pathology plays a vital role in advancing and optimizing histology processes to improve efficiency and reduce turnaround times. This study introduces a novel deep learning-based framework for virtual histological staining using photon absorption remote sensing (PARS) images. By extracting features from PARS time-resolved signals using a variant of the K-means method, valuable multi-modal information is captured. The proposed multi-channel cycleGAN model expands on the traditional cycleGAN framework, allowing the inclusion of additional features. Experimental results reveal that specific combinations of features outperform the conventional channels by improving the labeling of tissue structures prior to model training. When applied to human skin and mouse brain tissue, the results underscore the significance of choosing the optimal combination of features, which yields a substantial visual and quantitative concurrence between the virtually stained images and the gold-standard chemically stained hematoxylin and eosin images, surpassing the performance of other feature combinations. Accurate virtual staining is valuable for reliable diagnostic information, aiding pathologists in disease classification, grading, and treatment planning. This study aims to advance label-free histological imaging and opens doors for intraoperative microscopy applications.
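The feature-extraction step described above groups time-resolved signal characteristics with a K-means variant. As a minimal sketch of the vanilla algorithm on scalar features (not the paper's variant, and with invented amplitude values), Lloyd's k-means in one dimension looks like this:

```python
import statistics

def kmeans_1d(values, k=2, iters=20):
    """Minimal Lloyd's k-means on scalars; returns sorted centroids."""
    step = max(1, len(values) // k)
    centroids = sorted(values)[::step][:k]  # evenly spread initial guesses
    for _ in range(iters):
        # Assign each value to its nearest centroid
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster
        centroids = [statistics.fmean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical per-pixel signal amplitudes: weak scatterers vs strong absorbers
amplitudes = [0.10, 0.12, 0.09, 0.11, 0.95, 1.02, 0.98, 1.05]
low, high = kmeans_1d(amplitudes, k=2)
assert abs(low - 0.105) < 1e-9 and abs(high - 1.0) < 1e-9
```

In the PARS pipeline, each cluster label becomes an extra input channel for the multi-channel cycleGAN rather than a final classification.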
Affiliation(s)
- Marian Boktor
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- James E D Tweel
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- illumiSonics Inc., 22 King Street South, Suite 300, Waterloo, ON, N2J 1N8, Canada
- Benjamin R Ecclestone
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- illumiSonics Inc., 22 King Street South, Suite 300, Waterloo, ON, N2J 1N8, Canada
- Jennifer Ai Ye
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- Paul Fieguth
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- Parsin Haji Reza
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada.
44
Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524 PMCID: PMC10764333 DOI: 10.1038/s41698-023-00477-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Accepted: 11/14/2023] [Indexed: 01/05/2024] Open
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs Micrographic Surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size, complexity, and specimen processing / assessment time during general anesthesia. We developed an artificial intelligence platform to reduce the tissue preprocessing and histological assessment time through automated grossing recommendations, mapping and orientation of tumor to the surgical specimen. Using basal cell carcinoma as a model system, results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA.
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Matthew J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Michael J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Lucy J Fu
- Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Tarushii Goel
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Akash Pamal
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Virginia, Charlottesville, VA, 22903, USA
- Irfan Nafi
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Stanford University, Palo Alto, CA, 94305, USA
- Abhinav Angirekula
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Illinois Urbana-Champaign, Champaign, IL, 61820, USA
- Anish Suvarna
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Ram Vempati
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Brock C Christensen
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew S Hayden
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Matthew R LeBoeuf
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
| |
45
Cazzaniga G, Rossi M, Eccher A, Girolami I, L'Imperio V, Van Nguyen H, Becker JU, Bueno García MG, Sbaraglia M, Dei Tos AP, Gambaro G, Pagni F. Time for a full digital approach in nephropathology: a systematic review of current artificial intelligence applications and future directions. J Nephrol 2024; 37:65-76. PMID: 37768550; PMCID: PMC10920416; DOI: 10.1007/s40620-023-01775-w. (Received 05/10/2023; accepted 08/22/2023)
Abstract
INTRODUCTION: Artificial intelligence (AI) integration in nephropathology has been growing rapidly in recent years, facing several challenges including the wide range of histological techniques used, the low occurrence of certain diseases, and the need for data sharing. This narrative review retraces the history of AI in nephropathology and provides insights into potential future developments.
METHODS: Electronic searches in PubMed-MEDLINE and Embase were made to extract pertinent articles from the literature. Works about automated image analysis or the application of an AI algorithm on non-neoplastic kidney histological samples were included and analyzed to extract information such as publication year, AI task, and learning type. Prepublication servers and reviews were not included.
RESULTS: Seventy-six original research articles were selected. Most of the studies were conducted in the United States in the last 7 years. To date, research has mainly addressed relatively easy tasks, like single-stain glomerular segmentation. However, there is a trend towards developing more complex tasks such as glomerular multi-stain classification.
CONCLUSION: Deep learning has been used to identify patterns in complex histopathology data and looks promising for the comprehensive assessment of renal biopsy through the use of multiple stains and virtual staining techniques. Hybrid and collaborative learning approaches have also been explored to utilize large amounts of unlabeled data. A diverse team of experts, including nephropathologists, computer scientists, and clinicians, is crucial for the development of AI systems for nephropathology; collaborative efforts among multidisciplinary experts result in clinically relevant and effective AI tools.
Affiliation(s)
- Giorgio Cazzaniga: Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy
- Mattia Rossi: Division of Nephrology, Department of Medicine, University of Verona, Piazzale Aristide Stefani, 1, 37126, Verona, Italy
- Albino Eccher: Department of Pathology and Diagnostics, University and Hospital Trust of Verona, P.le Stefani n. 1, 37126, Verona, Italy; Department of Medical and Surgical Sciences for Children and Adults, University of Modena and Reggio Emilia, University Hospital of Modena, Modena, Italy
- Ilaria Girolami: Department of Pathology and Diagnostics, University and Hospital Trust of Verona, P.le Stefani n. 1, 37126, Verona, Italy
- Vincenzo L'Imperio: Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy
- Hien Van Nguyen: Department of Electrical and Computer Engineering, University of Houston, Houston, TX, 77004, USA
- Jan Ulrich Becker: Institute of Pathology, University Hospital of Cologne, Cologne, Germany
- María Gloria Bueno García: VISILAB Research Group, E.T.S. Ingenieros Industriales, University of Castilla-La Mancha, Ciudad Real, Spain
- Marta Sbaraglia: Department of Pathology, Azienda Ospedale-Università Padova, Padua, Italy; Department of Medicine, University of Padua School of Medicine, Padua, Italy
- Angelo Paolo Dei Tos: Department of Pathology, Azienda Ospedale-Università Padova, Padua, Italy; Department of Medicine, University of Padua School of Medicine, Padua, Italy
- Giovanni Gambaro: Division of Nephrology, Department of Medicine, University of Verona, Piazzale Aristide Stefani, 1, 37126, Verona, Italy
- Fabio Pagni: Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy
46
Thapa V, Galande AS, Ram GHP, John R. TIE-GANs: single-shot quantitative phase imaging using transport of intensity equation with integration of GANs. J Biomed Opt 2024; 29:016010. PMID: 38293292; PMCID: PMC10826717; DOI: 10.1117/1.jbo.29.1.016010. (Received 09/12/2023; accepted 01/09/2024)
Abstract
Significance: Artificial intelligence (AI) has become a prominent technology in computational imaging over the past decade. The rapid, label-free nature of quantitative phase imaging (QPI) makes it a promising candidate for AI investigation. Although interferometric methods are effective, they require complex experimental platforms and computationally intensive reconstruction; non-interferometric methods such as the transport of intensity equation (TIE) are therefore preferred.
Aim: The TIE method, despite its effectiveness, is tedious because it requires acquiring many images at varying defocus planes. We present TIE-GANs, a method that generates a phase image from a single intensity image using generative adversarial networks (GANs), overcoming the multi-shot scheme of conventional TIE.
Approach: TIE serves both as the QPI method and for dataset preparation: images captured at different defocus planes are used for training. The approach is based on GANs and uses image-to-image translation to produce phase maps. The main contribution of this work is the introduction of TIE-GANs, which yields better phase reconstruction with shorter computation times; this is the first time GANs have been proposed for TIE phase retrieval.
Results: Characterization with 4 μm microbeads gave a structural similarity index (SSIM) of 0.98. Applied to oral cells, the method yielded a maximum SSIM of 0.95. Mean squared error and peak signal-to-noise ratio were 140 and 26.42 dB for oral cells, and 100 and 28.10 dB for microbeads.
Conclusions: The proposed method can generate a phase image from a single intensity image and, given the high SSIM values reported, is feasible for digital cytology. It can accept an intensity image from any defocus plane within the trained range and still generate a phase map.
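The conventional multi-shot TIE pipeline that TIE-GANs replaces can be made concrete with a minimal numerical sketch. This is an illustrative uniform-intensity Fourier solver, not the authors' implementation; the function name and parameters are hypothetical:

```python
import numpy as np

def tie_phase(I_minus, I0, I_plus, dz, wavelength, pixel):
    """Multi-shot TIE phase retrieval from a three-plane focal stack.

    Solves k dI/dz = -div(I grad(phi)); for near-uniform intensity this
    reduces to a Poisson equation, inverted here with an FFT-based solver.
    (Illustrative sketch only, not the TIE-GANs authors' code.)
    """
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2 * dz)         # central-difference axial derivative
    ny, nx = I0.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pixel),
                         np.fft.fftfreq(ny, d=pixel))
    lap = 4 * np.pi ** 2 * (FX ** 2 + FY ** 2)   # -Laplacian symbol in Fourier space
    lap[0, 0] = 1.0                              # avoid dividing by zero at DC
    rhs = -k * dIdz / np.clip(I0, 1e-6, None)    # Poisson right-hand side
    F_phi = np.fft.fft2(rhs) / (-lap)
    F_phi[0, 0] = 0.0                            # phase is defined up to a constant
    return np.real(np.fft.ifft2(F_phi))
```

The GAN in TIE-GANs learns to map a single defocused intensity image to the phase map that a multi-shot solver of this kind would otherwise require a whole focal stack to produce.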
Affiliation(s)
- Vikas Thapa: Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
- Ashwini Subhash Galande: Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
- Gurram Hanu Phani Ram: Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
- Renu John: Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
47
Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024; 13:4. PMID: 38161203; PMCID: PMC10758000; DOI: 10.1038/s41377-023-01340-x. (Received 07/31/2023; accepted 11/16/2023)
Abstract
Phase recovery (PR) refers to calculating the phase of a light field from intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages, namely pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about PR.
Affiliation(s)
- Kaiqiang Wang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China; Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Li Song: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Chutian Wang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Zhenbo Ren: School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Guangyuan Zhao: Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jiazhen Dou: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianglei Di: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Renjie Zhou: Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jianlin Zhao: School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Edmund Y Lam: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
48
Zhao J, Wang X, Zhu J, Chukwudi C, Finebaum A, Zhang J, Yang S, He S, Saeidi N. PhaseFIT: live-organoid phase-fluorescent image transformation via generative AI. Light Sci Appl 2023; 12:297. PMID: 38097545; PMCID: PMC10721831; DOI: 10.1038/s41377-023-01296-y. (Received 04/10/2023; accepted 09/24/2023)
Abstract
Organoid models have provided a powerful platform for mechanistic investigations into fundamental biological processes involved in the development and function of organs. Despite the potential for image-based phenotypic quantification of organoids, their complex 3D structure, and the time-consuming and labor-intensive nature of immunofluorescent staining present significant challenges. In this work, we developed a virtual painting system, PhaseFIT (phase-fluorescent image transformation) utilizing customized and morphologically rich 2.5D intestinal organoids, which generate virtual fluorescent images for phenotypic quantification via accessible and low-cost organoid phase images. This system is driven by a novel segmentation-informed deep generative model that specializes in segmenting overlap and proximity between objects. The model enables an annotation-free digital transformation from phase-contrast to multi-channel fluorescent images. The virtual painting results of nuclei, secretory cell markers, and stem cells demonstrate that PhaseFIT outperforms the existing deep learning-based stain transformation models by generating fine-grained visual content. We further validated the efficiency and accuracy of PhaseFIT to quantify the impacts of three compounds on crypt formation, cell population, and cell stemness. PhaseFIT is the first deep learning-enabled virtual painting system focused on live organoids, enabling large-scale, informative, and efficient organoid phenotypic quantification. PhaseFIT would enable the use of organoids in high-throughput drug screening applications.
Affiliation(s)
- Junhan Zhao: Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, 02115, USA
- Xiyue Wang: College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Junyou Zhu: Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA; Shriners Hospital for Children-Boston, Boston, MA, 02114, USA
- Chijioke Chukwudi: Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA; Shriners Hospital for Children-Boston, Boston, MA, 02114, USA
- Andrew Finebaum: Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Jun Zhang: Tencent AI Lab, Shenzhen, Guangdong, 518057, China
- Sen Yang: Tencent AI Lab, Shenzhen, Guangdong, 518057, China
- Shijie He: Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA; Shriners Hospital for Children-Boston, Boston, MA, 02114, USA
- Nima Saeidi: Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA; Shriners Hospital for Children-Boston, Boston, MA, 02114, USA; Harvard Stem Cell Institute, Cambridge, MA, 02138, USA
49
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. Laser Photonics Rev 2023; 17:2200029. PMID: 38883699; PMCID: PMC11178318; DOI: 10.1002/lpor.202200029. (Received 01/17/2022)
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects without the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and state of the art in this field, and to discuss the resolution boundaries and hurdles that must be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, where diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information-science problem, on novel structured illumination, near-field scanning, and nonlinear optics approaches, and on superlenses designed around nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future development of LFSR imaging based on its physical mechanisms and to open the way for a series of articles in this field.
Affiliation(s)
- Vasily N. Astratov: Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
- Yair Ben Sahel: Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Yonina C. Eldar: Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Luzhe Huang: Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA; Bioengineering Department, University of California, Los Angeles, California 90095, USA; California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA; Bioengineering Department, University of California, Los Angeles, California 90095, USA; California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA; David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
- Nikolay Zheludev: Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK; Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
- Junxiang Zhao: Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Zachary Burns: Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Zhaowei Liu: Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA; Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Evgenii Narimanov: School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
- Neha Goswami: Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
- Gabriel Popescu: Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
- Emanuel Pfitzner: Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
- Philipp Kukura: Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
- Yi-Teng Hsiao: Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
- Chia-Lung Hsieh: Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
- Brian Abbey: Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia; Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
- Alberto Diaspro: Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy; DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
- Aymeric LeGratiet: Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy; Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
- Paolo Bianchini: Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy; DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
- Natan T. Shaked: Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
- Bertrand Simon: LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence France
- Nicolas Verrier: IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
- Olivier Haeberlé: IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
- Sheng Wang: School of Physics and Technology, Wuhan University, China; Wuhan Institute of Quantum Technology, China
- Mengkun Liu: Department of Physics and Astronomy, Stony Brook University, USA; National Synchrotron Light Source II, Brookhaven National Laboratory, USA
- Yeran Bai: Boston University Photonics Center, Boston, MA 02215, USA
- Ji-Xin Cheng: Boston University Photonics Center, Boston, MA 02215, USA
- Behjat S. Kariman: Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy; DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
- Katsumasa Fujita: Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
- Moshe Sinvani: Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
- Zeev Zalevsky: Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
- Xiangping Li: Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
- Guan-Jie Huang: Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
- Shi-Wei Chu: Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
- Omer Tzang: School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
- Dror Hershkovitz: School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
- Ori Cheshnovsky: School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
- Mikko J. Huttunen: Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
- Stefan G. Stanciu: Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- Vera N. Smolyaninova: Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
- Igor I. Smolyaninov: Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Ulf Leonhardt: Weizmann Institute of Science, Rehovot 7610001, Israel
- Sahar Sahebdivan: EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
- Zengbo Wang: School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
- Boris Luk’yanchuk: Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
- Limin Wu: Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
- Alexey V. Maslov: Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
- Boya Jin: Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
- Constantin R. Simovski: Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland; Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
- Stephane Perrin: ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
- Paul Montgomery: ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
- Sylvain Lecler: ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
50
Liu CH, Fu LW, Chen HH, Huang SL. Toward cell nuclei precision between OCT and H&E images translation using signal-to-noise ratio cycle-consistency. Comput Methods Programs Biomed 2023; 242:107824. PMID: 37832427; DOI: 10.1016/j.cmpb.2023.107824. (Received 05/02/2023; accepted 09/19/2023)
Abstract
Medical image-to-image translation is often difficult and of limited effectiveness due to differences in image acquisition mechanisms and the diverse structure of biological tissues. This work presents an unpaired image translation model between in-vivo optical coherence tomography (OCT) and ex-vivo hematoxylin and eosin (H&E) stained images without the need for image stacking, registration, post-processing, or annotation. The model generates high-quality, highly accurate virtual medical images, and is robust and bidirectional. Our framework introduces random noise to (1) blur redundant features, (2) defend against self-adversarial attacks, (3) stabilize inverse conversion, and (4) mitigate the impact of OCT speckle. We also demonstrate that our model can be pre-trained and then fine-tuned on images from different OCT systems in just a few epochs. Qualitative and quantitative comparisons with traditional image-to-image translation models show the robustness of our proposed signal-to-noise ratio (SNR) cycle-consistency method.
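The noise-injection idea behind the abstract's cycle-consistency can be illustrated with a minimal loss term in which Gaussian noise is added to the translated image before the inverse mapping. This is a conceptual sketch under assumed generator interfaces, not the authors' implementation; the function name and signature are hypothetical:

```python
import numpy as np

def noisy_cycle_l1(G_ab, G_ba, x_a, sigma, rng):
    """One direction of a cycle-consistency loss with noise injected
    before back-translation (conceptual sketch: the added noise blurs
    redundant features and hinders self-adversarial 'hidden' signals)."""
    fake_b = G_ab(x_a)                                   # translate A -> B
    noisy_b = fake_b + sigma * rng.standard_normal(fake_b.shape)
    recon_a = G_ba(noisy_b)                              # back-translate B -> A
    return float(np.mean(np.abs(recon_a - x_a)))         # L1 reconstruction error
```

With identity generators and sigma = 0 the term vanishes; any sigma > 0 forces the inverse mapping to be robust to perturbations rather than relying on imperceptibly encoded detail in the translated image.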
Affiliation(s)
- Chih-Hao Liu: Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
- Li-Wei Fu: Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
- Homer H Chen: Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Electrical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Graduate Institute of Networking and Multimedia, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
- Sheng-Lung Huang: Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Electrical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; All Vista Healthcare Center, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan