1
Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. PMID: 38838549. DOI: 10.1016/j.ceb.2024.102378. Received 17 Dec 2023; revised 16 May 2024; accepted 16 May 2024.
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted-light images. In principle, in silico labeling has the potential to enable rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology that would mark a major leap toward understanding the cell as an integrated complex system. However, five years after feasibility was first demonstrated, in silico labeling has yet to be used to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how these limitations can be overcome to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel.
2
Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024; 11:15. PMID: 38833195. DOI: 10.1186/s40708-024-00228-9. Received 29 Mar 2024; accepted 8 May 2024. Open access.
Abstract
Mapping neural connections within the brain has been a fundamental goal of neuroscience, aimed at better understanding brain function and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution, brain-wide imaging, making image processing and analysis ever more crucial. However, despite the wealth of neural images generated, access to an integrated image-processing and analysis pipeline remains challenging because information on available tools and methods is scattered. Mapping neural connections requires registration to atlases and feature extraction through segmentation and signal detection. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review aims to contribute to a deeper grasp of connecto-informatics, paving the way for a better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea.
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea.
- KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea.
3
Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. PMID: 38378991. DOI: 10.1038/s41580-024-00702-6. Accepted 10 Jan 2024.
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intracellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what can be captured in living systems. In this Review, we discuss these computational methods, focusing on artificial-intelligence-based approaches that can be layered on top of commonly used existing microscopies, as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
- Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland.
4
Lu C, Chen K, Qiu H, Chen X, Chen G, Qi X, Jiang H. Diffusion-based deep learning method for augmenting ultrastructural imaging and volume electron microscopy. Nat Commun 2024; 15:4677. PMID: 38824146. PMCID: PMC11144272. DOI: 10.1038/s41467-024-49125-z. Received 18 Aug 2023; accepted 20 May 2024. Open access.
Abstract
Electron microscopy (EM) has revolutionized the visualization of cellular ultrastructure, and volume EM (vEM) has further extended this capacity to three-dimensional nanoscale imaging. However, intrinsic trade-offs between imaging speed and quality restrict the attainable imaging area and volume, and isotropic vEM imaging of large biological volumes remains out of reach. Here, we developed EMDiffuse, a suite of algorithms designed to enhance EM and vEM capabilities by leveraging the image-generation diffusion model. EMDiffuse generates realistic predictions with high-resolution ultrastructural detail and exhibits robust transferability, requiring only a single pair of 3-megapixel images for fine-tuning in denoising and super-resolution tasks. EMDiffuse also performs isotropic vEM reconstruction, generating isotropic volumes even in the absence of isotropic training data. We demonstrated its robustness by generating isotropic volumes from seven public datasets obtained with different vEM techniques and instruments; the generated volumes enable accurate three-dimensional nanoscale ultrastructural analysis. EMDiffuse additionally provides self-assessment of the reliability of its predictions. We envision EMDiffuse paving the way for investigations of the intricate subcellular nanoscale ultrastructure within large volumes of biological systems.
Affiliation(s)
- Chixiang Lu
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
- Kai Chen
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
- School of Molecular Sciences, The University of Western Australia, Perth, WA, Australia
- Heng Qiu
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
- Xiaojun Chen
- School of Molecular Sciences, The University of Western Australia, Perth, WA, Australia
- Gu Chen
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
- Xiaojuan Qi
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
- Haibo Jiang
- Department of Chemistry, The University of Hong Kong, Hong Kong, China.
5
Gaire SK, Daneshkhah A, Flowerday E, Gong R, Frederick J, Backman V. Deep learning-based spectroscopic single-molecule localization microscopy. J Biomed Opt 2024; 29:066501. PMID: 38799979. PMCID: PMC11122423. DOI: 10.1117/1.jbo.29.6.066501. Received 17 Jan 2024; revised 3 May 2024; accepted 9 May 2024.
Abstract
Significance: Spectroscopic single-molecule localization microscopy (sSMLM) combines nanoscopy and spectroscopy, enabling sub-10 nm resolution as well as simultaneous multicolor imaging of multi-labeled samples. Reconstruction of raw sSMLM data using deep learning is a promising approach for visualizing subcellular structures at the nanoscale.
Aim: To develop a computational approach that leverages deep learning to reconstruct both label-free and fluorescence-labeled sSMLM imaging data.
Approach: We developed a two-network deep learning algorithm, termed DsSMLM, to reconstruct sSMLM data. Its effectiveness was assessed by imaging diverse samples, including label-free single-stranded DNA (ssDNA) fibers, fluorescence-labeled histone markers in COS-7 and U2OS cells, and a synthetic DNA origami nanoruler for simultaneous multicolor imaging.
Results: For label-free imaging, a spatial resolution of 6.22 nm was achieved on ssDNA fiber. For fluorescence-labeled imaging, DsSMLM revealed the distribution of chromatin-rich and chromatin-poor regions defined by histone markers in the cell nucleus, and it enabled simultaneous multicolor imaging of nanoruler samples, distinguishing two dyes across three emitting points separated by 40 nm. DsSMLM produced enhanced spectral profiles, detecting 8.8% more localizations in single-color imaging and up to 5.05% more in simultaneous two-color imaging.
Conclusions: Deep learning-based reconstruction is feasible for both label-free and fluorescence-labeled sSMLM imaging data. We anticipate that this technique will be a valuable tool for high-quality super-resolution imaging, deepening the understanding of the photophysics of DNA molecules and facilitating the investigation of multiple nanoscopic cellular structures and their interactions.
Affiliation(s)
- Sunil Kumar Gaire
- North Carolina Agricultural and Technical State University, Department of Electrical and Computer Engineering, Greensboro, North Carolina, United States
- Ali Daneshkhah
- Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
- Ethan Flowerday
- University of Tulsa, Department of Computer Science and Cyber Security, Tulsa, Oklahoma, United States
- Ruyi Gong
- Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
- Jane Frederick
- Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
- Vadim Backman
- Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
6
Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. PMID: 38755148. PMCID: PMC11099110. DOI: 10.1038/s41467-024-48575-9. Received 7 Oct 2023; accepted 7 May 2024. Open access.
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which are laborious and often impractical to acquire given the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit, with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner that requires neither ground truths nor additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet across multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, enabling multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Yunmin Zeng
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Quan Meng
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Xingye Chen
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
- Haoyu Chen
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Tao Jiang
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Rongfei Wei
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Jiabao Guo
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Wenfeng Fu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Huaide Lu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Di Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
- Hui Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Dong Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China.
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China.
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China.
7
Chia Y, Liao W, Vyas S, Chu CH, Yamaguchi T, Liu X, Tanaka T, Huang Y, Chen MK, Chen W, Tsai DP, Luo Y. In Vivo Intelligent Fluorescence Endo-Microscopy by Varifocal Meta-Device and Deep Learning. Adv Sci (Weinh) 2024; 11:e2307837. PMID: 38488694. PMCID: PMC11132035. DOI: 10.1002/advs.202307837. Received 18 Oct 2023; revised 30 Dec 2023.
Abstract
Endo-microscopy is crucial for real-time 3D visualization of internal tissues and subcellular structures. Conventional methods rely on axial movement of optical components for precise focus adjustment, which limits miniaturization and complicates procedures. A meta-device, composed of artificial nanostructures, is an emerging flat optical element that can freely manipulate the phase and amplitude of light. Here, an intelligent fluorescence endo-microscope is developed based on a varifocal meta-lens and deep learning (DL), enabling in vivo 3D imaging of mouse brains; the focal length of the varifocal meta-lens is adjusted through the relative rotation angle of its elements. The system offers key advantages such as invariant magnification, a large field of view, and optical sectioning over a maximum focal-length tuning range of ≈2 mm with 3 µm lateral resolution. Using a DL network, image acquisition time and system complexity are significantly reduced, and in vivo high-resolution brain images showing detailed vessels and the surrounding perivascular space are obtained within 0.1 s (≈50 times faster). The approach will benefit various surgical procedures, such as gastrointestinal biopsies, neural imaging, and brain surgery.
Grants
- NSTC 112-2221-E-002-055-MY3 National Science and Technology Council, Taiwan
- NSTC 112-2221-E-002-212-MY3 National Science and Technology Council, Taiwan
- MOST-108-2221-E-002-168-MY4 National Science and Technology Council, Taiwan
- NTU-CC-113L891102 National Taiwan University
- NTU-113L8507 National Taiwan University
- NTU-CC-112L892902 National Taiwan University
- NTU-107L7728 National Taiwan University
- NTU-107L7807 National Taiwan University
- NTU-YIH-08HZT49001 National Taiwan University
- AoE/P-502/20 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- C1015-21E University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- C5031-22G University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU15303521 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU11310522 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU11305223 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU11300123 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- 2020B1515120073 Department of Science and Technology of Guangdong Province
- 9380131 City University of Hong Kong
- 9610628 City University of Hong Kong
- 7005867 City University of Hong Kong
- JPMJCR1904 JST CREST
- NHRI-EX113-11327EI National Health Research Institutes
Affiliation(s)
- Yu-Hsin Chia
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Wei-Hao Liao
- Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital & National Taiwan University College of Medicine, Taipei 10051, Taiwan
- Sunil Vyas
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Cheng Hung Chu
- YongLin Institute of Health, National Taiwan University, Taipei 10087, Taiwan
- Takeshi Yamaguchi
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
- Xiaoyuan Liu
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Takuo Tanaka
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
- Yi-You Huang
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Department of Biomedical Engineering, National Taiwan University Hospital, Taipei 10051, Taiwan
- Mu Ku Chen
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Centre for Biosystems, Neuroscience and Nanotechnology, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- The State Key Laboratory of Terahertz and Millimeter Waves, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Wen-Shiang Chen
- Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital & National Taiwan University College of Medicine, Taipei 10051, Taiwan
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli 35053, Taiwan
- Din Ping Tsai
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Centre for Biosystems, Neuroscience and Nanotechnology, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- The State Key Laboratory of Terahertz and Millimeter Waves, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Yuan Luo
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- YongLin Institute of Health, National Taiwan University, Taipei 10087, Taiwan
- Molecular Imaging Center, National Taiwan University, Taipei 10672, Taiwan
- Program for Precision Health and Intelligent Medicine, National Taiwan University, Taipei 106319, Taiwan
8
Yang K, Zhang H, Qiu Y, Zhai T, Zhang Z. Self-Supervised Joint Learning for pCLE Image Denoising. Sensors (Basel) 2024; 24:2853. PMID: 38732957. PMCID: PMC11086271. DOI: 10.3390/s24092853. Received 2 Apr 2024; revised 26 Apr 2024; accepted 28 Apr 2024.
Abstract
Probe-based confocal laser endomicroscopy (pCLE) has emerged as a powerful tool for disease diagnosis, yet it faces challenges such as hexagonal patterns in images arising from the inherent structure of fiber bundles. Recent advances in deep learning offer promise for image denoising, but acquiring clean-noisy image pairs to train networks across all potential scenarios can be prohibitively costly, and few studies have explored training denoising networks on such pairs. Here, we propose an innovative self-supervised denoising method that integrates noise prediction networks, image quality assessment networks, and denoising networks in a collaborative, jointly trained manner. Compared with prior self-supervised denoising methods, our approach yields superior results on both pCLE and fluorescence microscopy images, enhancing image quality in pCLE diagnosis.
Affiliation(s)
- Haojie Zhang
- State Key Lab of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China; (K.Y.); (Y.Q.); (T.Z.); (Z.Z.)
9
Wang W, Yang L, Sun H, Peng X, Yuan J, Zhong W, Chen J, He X, Ye L, Zeng Y, Gao Z, Li Y, Qu X. Cellular nucleus image-based smarter microscope system for single cell analysis. Biosens Bioelectron 2024; 250:116052. PMID: 38266616. DOI: 10.1016/j.bios.2024.116052. Received 15 Oct 2023; revised 31 Dec 2023; accepted 18 Jan 2024.
Abstract
Cell imaging technology is undoubtedly a powerful tool for studying single-cell heterogeneity, thanks to its non-invasive and visual advantages. It spans microscope hardware, software, and image-analysis techniques, which are hindered by low throughput owing to the extensive hands-on time and expertise required. Here, a cellular nucleus image-based smarter microscope system for single-cell analysis is reported, achieving high-throughput, high-content analysis of cells. By combining an automatic fluorescence microscope with multi-object recognition/acquisition software, we achieve advanced process automation with the assistance of Robotic Process Automation (RPA), realizing high-throughput collection of single-cell images. Automated acquisition of single-cell images has benefits beyond ease and throughput, yielding more uniform and higher-quality images. We further constructed a convolutional neural network (Efficient Convolutional Neural Network, E-CNN) trained on a database of more than 20,618 single-cell nucleus images. Computational analysis of such large, complex datasets enhances the content and efficiency of single-cell analysis with the assistance of artificial intelligence (AI), sidestepping the hardware requirements of super-resolution microscopy, such as specialized light sources with specific wavelengths, advanced optical components, and high-performance graphics cards. Our system identifies single-cell nucleus images that cannot be distinguished by eye with an accuracy of 95.3%. Overall, we turn an ordinary microscope into a high-throughput, high-content smarter microscope system, making it a candidate tool for imaging cytology.
Affiliation(s)
- Wentao Wang
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Lin Yang
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Hang Sun
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Xiaohong Peng
- YueYang Central Hospital, YueYang, Hunan Province, 414000, China
- Junjie Yuan
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Wenhao Zhong
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Jinqi Chen
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Xin He
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Lingzhi Ye
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
- Yi Zeng
- College of Chemistry and Chemical Engineering, Huanggang Normal University, Huanggang, 438000, China
- Zhifan Gao
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China.
- Yunhui Li
- Department of Laboratory Medical Center, General Hospital of Northern Theater Command, No.83, Wenhua Road, Shenhe District, Shenyang, Liaoning Province, 110016, China.
- Xiangmeng Qu
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China.
| |
Collapse
|
10
|
Paveliev M, Egorchev AA, Musin F, Lipachev N, Melnikova A, Gimadutdinov RM, Kashipov AR, Molotkov D, Chickrin DE, Aganov AV. Perineuronal Net Microscopy: From Brain Pathology to Artificial Intelligence. Int J Mol Sci 2024; 25:4227. [PMID: 38673819 PMCID: PMC11049984 DOI: 10.3390/ijms25084227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2024] [Revised: 03/31/2024] [Accepted: 04/05/2024] [Indexed: 04/28/2024] Open
Abstract
Perineuronal nets (PNN) are a special, highly structured type of extracellular matrix encapsulating synapses on large populations of CNS neurons. PNN undergo structural changes in schizophrenia, epilepsy, Alzheimer's disease, stroke, post-traumatic conditions, and some other brain disorders. The functional role of the PNN microstructure in brain pathologies remained largely unstudied until recently. Here, we review recent research implicating PNN microstructural changes in schizophrenia and other disorders. We further concentrate on high-resolution studies of the PNN mesh units surrounding synaptic boutons to elucidate the fine structural details behind the mutual functional regulation between the ECM and the synaptic terminal. We also review some updates regarding PNN as a potential pharmacological target. Artificial intelligence (AI)-based methods are now arriving as a new tool that may have the potential to grasp the brain's complexity across a wide range of organization levels, from synaptic molecular events to large-scale tissue rearrangements and whole-brain connectome function. This scope exactly matches the complex role of PNN in brain physiology and pathology, and the first AI-assisted PNN microscopy studies have been reported. To that end, we report here on a machine learning-assisted tool for PNN mesh contour tracing.
Collapse
Affiliation(s)
- Mikhail Paveliev
- Neuroscience Center, University of Helsinki, Haartmaninkatu 8, 00290 Helsinki, Finland
| | - Anton A. Egorchev
- Institute of Computational Mathematics and Information Technologies, Kazan Federal University, Kremlyovskaya 35, Kazan 420008, Tatarstan, Russia; (A.A.E.); (F.M.); (R.M.G.)
| | - Foat Musin
- Institute of Computational Mathematics and Information Technologies, Kazan Federal University, Kremlyovskaya 35, Kazan 420008, Tatarstan, Russia; (A.A.E.); (F.M.); (R.M.G.)
| | - Nikita Lipachev
- Institute of Physics, Kazan Federal University, Kremlyovskaya 16a, Kazan 420008, Tatarstan, Russia; (N.L.); (A.V.A.)
| | - Anastasiia Melnikova
- Institute of Fundamental Medicine and Biology, Kazan Federal University, Karl Marx 74, Kazan 420015, Tatarstan, Russia;
| | - Rustem M. Gimadutdinov
- Institute of Computational Mathematics and Information Technologies, Kazan Federal University, Kremlyovskaya 35, Kazan 420008, Tatarstan, Russia; (A.A.E.); (F.M.); (R.M.G.)
| | - Aidar R. Kashipov
- Institute of Artificial Intelligence, Robotics and Systems Engineering, Kazan Federal University, Kremlyovskaya 18, Kazan 420008, Tatarstan, Russia; (A.R.K.); (D.E.C.)
| | - Dmitry Molotkov
- Biomedicum Imaging Unit, University of Helsinki, Haartmaninkatu 8, 00014 Helsinki, Finland;
| | - Dmitry E. Chickrin
- Institute of Artificial Intelligence, Robotics and Systems Engineering, Kazan Federal University, Kremlyovskaya 18, Kazan 420008, Tatarstan, Russia; (A.R.K.); (D.E.C.)
| | - Albert V. Aganov
- Institute of Physics, Kazan Federal University, Kremlyovskaya 16a, Kazan 420008, Tatarstan, Russia; (N.L.); (A.V.A.)
| |
Collapse
|
11
|
Das V, Zhang F, Bower AJ, Li J, Liu T, Aguilera N, Alvisio B, Liu Z, Hammer DX, Tam J. Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical coherence tomography. COMMUNICATIONS MEDICINE 2024; 4:68. [PMID: 38600290 PMCID: PMC11006674 DOI: 10.1038/s43856-024-00483-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Accepted: 03/13/2024] [Indexed: 04/12/2024] Open
Abstract
BACKGROUND In vivo imaging of the human retina using adaptive optics optical coherence tomography (AO-OCT) has transformed medical imaging by enabling visualization of 3D retinal structures at cellular-scale resolution, including the retinal pigment epithelial (RPE) cells, which are essential for maintaining visual function. However, because noise inherent to the imaging process (e.g., speckle) makes it difficult to visualize RPE cells from a single volume acquisition, a large number of 3D volumes are typically averaged to improve contrast, substantially increasing the acquisition duration and reducing the overall imaging throughput. METHODS Here, we introduce the parallel discriminator generative adversarial network (P-GAN), an artificial intelligence (AI) method designed to recover speckle-obscured cellular features from a single AO-OCT volume, circumventing the need to acquire a large number of volumes for averaging. The combination of two parallel discriminators in P-GAN provides additional feedback to the generator to more faithfully recover both local and global cellular structures. Imaging data from 8 eyes of 7 participants were used in this study. RESULTS We show that P-GAN not only improves RPE cell contrast by 3.5-fold but also shortens the end-to-end time required to visualize RPE cells by 99-fold, thereby enabling large-scale imaging of cells in the living human eye. RPE cell spacing measured across a large set of AI-recovered images from 3 participants was in agreement with expected normative ranges. CONCLUSIONS The results demonstrate the potential of AI-assisted imaging in overcoming a key limitation of RPE imaging and making it more accessible in a routine clinical setting.
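The two-discriminator feedback described above can be reduced to a weighted sum of per-discriminator adversarial losses on the generator side. The sketch below is a minimal illustration: the function names, the non-saturating loss form, and the weighting are assumptions for exposition, not the published P-GAN architecture.

```python
import math

def nonsaturating_g_loss(d_score):
    """Generator's non-saturating loss for a single discriminator score
    in (0, 1): small when the discriminator is fooled (score near 1)."""
    return -math.log(max(d_score, 1e-12))

def parallel_discriminator_g_loss(local_patch_scores, global_score,
                                  w_local=1.0, w_global=1.0):
    """Combine feedback from a patch-level discriminator (local cellular
    structure) and an image-level discriminator (global structure).
    The weights are illustrative hyperparameters."""
    local = sum(nonsaturating_g_loss(s)
                for s in local_patch_scores) / len(local_patch_scores)
    return w_local * local + w_global * nonsaturating_g_loss(global_score)
```

A generator trained against both terms is penalized whenever either the local-patch or the whole-image discriminator detects the output as synthetic.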
Collapse
Affiliation(s)
- Vineeta Das
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Furu Zhang
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Andrew J Bower
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Joanne Li
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Bruno Alvisio
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Zhuolin Liu
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Daniel X Hammer
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA.
| |
Collapse
|
12
|
Geng Z, Sun Z, Chen Y, Lu X, Tian T, Cheng G, Li X. Multi-input mutual supervision network for single-pixel computational imaging. OPTICS EXPRESS 2024; 32:13224-13234. [PMID: 38859298 DOI: 10.1364/oe.510683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 02/02/2024] [Indexed: 06/12/2024]
Abstract
In this study, we propose a single-pixel computational imaging method based on a multi-input mutual supervision network (MIMSN). We input one-dimensional (1D) light intensity signals and a two-dimensional (2D) random image signal into the MIMSN, enabling the network to learn the correlation between the two signals and achieve information complementarity. The 2D signal provides spatial information to the reconstruction process, reducing the uncertainty of the reconstructed image. The mutual supervision of the reconstruction results for these two signals brings the reconstruction objective closer to the ground truth image. The 2D images generated by the MIMSN can be used as inputs for subsequent iterations, continuously merging prior information to ensure high-quality imaging at low sampling rates. The reconstruction network does not require pretraining, and the 1D signals collected by a single-pixel detector serve as labels for the network, enabling high-quality image reconstruction in unfamiliar environments. It holds significant potential for applications, especially in scattering environments.
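The mutual-supervision idea, one branch's reconstruction constraining the other's while the 1D detector signals act as the only labels, can be reduced to a loss of roughly this shape. This is a schematic sketch: the function names, squared-error terms, and weights are assumptions, not the MIMSN architecture.

```python
def bucket_signals(image, patterns):
    """Forward model of single-pixel imaging: each 1D measurement is the
    inner product of the scene with one illumination pattern."""
    return [sum(p * x for p, x in zip(pat, image)) for pat in patterns]

def mutual_supervision_loss(recon_from_1d, recon_from_2d, measured, patterns,
                            w_consistency=1.0, w_fidelity=1.0):
    """Two reconstruction branches supervise each other (consistency term)
    while the measured 1D bucket signals act as labels (fidelity term)."""
    consistency = sum((a - b) ** 2 for a, b in
                      zip(recon_from_1d, recon_from_2d)) / len(recon_from_1d)
    predicted = bucket_signals(recon_from_1d, patterns)
    fidelity = sum((m - p) ** 2 for m, p in
                   zip(measured, predicted)) / len(measured)
    return w_consistency * consistency + w_fidelity * fidelity
```

The loss vanishes only when both branches agree and the reconstruction reproduces the detector measurements, which is what removes the need for pretraining on labeled images.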
Collapse
|
13
|
Ketabchi AM, Morova B, Uysalli Y, Aydin M, Eren F, Bavili N, Pysz D, Buczynski R, Kiraz A. Enhancing resolution and contrast in fibre bundle-based fluorescence microscopy using generative adversarial network. J Microsc 2024. [PMID: 38563195 DOI: 10.1111/jmi.13296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2023] [Revised: 03/19/2024] [Accepted: 03/21/2024] [Indexed: 04/04/2024]
Abstract
Fibre bundle (FB)-based endoscopes are indispensable in biology and medical science due to their minimally invasive nature. However, resolution and contrast for fluorescence imaging are limited by characteristic features of the FBs, such as low numerical aperture (NA) and individual fibre core sizes. In this study, we improved the resolution and contrast of sample fluorescence images acquired using in-house fabricated high-NA FBs by utilising generative adversarial networks (GANs). To train our deep learning model, we built an FB-based multifocal structured illumination microscope (MSIM) based on a digital micromirror device (DMD), which improves the resolution and contrast substantially compared to basic FB-based fluorescence microscopes. After network training, the GAN model, employing image-to-image translation techniques, effectively transformed wide-field images into high-resolution MSIM images without the need for any additional optical hardware. The results demonstrated that the GAN-generated outputs significantly enhanced both contrast and resolution compared to the original wide-field images. These findings highlight the potential of GAN-based models trained on MSIM data to enhance resolution and contrast in wide-field imaging for fibre bundle-based fluorescence microscopy. Lay Description: Fibre bundle (FB) endoscopes are essential in biology and medicine but suffer from limited resolution and contrast for fluorescence imaging. Here we addressed these limitations using high-NA FBs and generative adversarial networks (GANs). We trained a GAN model with data from an FB-based multifocal structured illumination microscope (MSIM) to enhance resolution and contrast without additional optical hardware. The results showed significant enhancements in contrast and resolution, showcasing the potential of GAN-based models for fibre bundle-based fluorescence microscopy.
Collapse
Affiliation(s)
| | - Berna Morova
- Department of Physics Engineering, Istanbul Technical University, Istanbul, Türkiye
| | - Yiğit Uysalli
- Optofil, Inc., Istanbul, Türkiye
- Department of Physics, Koç University, Istanbul, Türkiye
| | - Musa Aydin
- Department of Computer Engineering, Fatih Sultan Mehmet Vakif University, Istanbul, Türkiye
| | | | - Nima Bavili
- Department of Physics, Koç University, Istanbul, Türkiye
| | - Dariusz Pysz
- Department of Glass, Institute of Electronic Materials Technology, Warsaw, Poland
| | - Ryszard Buczynski
- Department of Glass, Institute of Electronic Materials Technology, Warsaw, Poland
- Faculty of Physics, University of Warsaw, Warsaw, Poland
| | - Alper Kiraz
- Department of Electrical and Electronics Engineering, Koç University, Istanbul, Türkiye
- Optofil, Inc., Istanbul, Türkiye
- Department of Physics, Koç University, Istanbul, Türkiye
- KUTTAM-Koç University Research Center for Translational Medicine, Istanbul, Türkiye
| |
Collapse
|
14
|
Dai W, Wong IHM, Wong TTW. Exceeding the limit for microscopic image translation with a deep learning-based unified framework. PNAS NEXUS 2024; 3:pgae133. [PMID: 38601859 PMCID: PMC11004937 DOI: 10.1093/pnasnexus/pgae133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Accepted: 03/19/2024] [Indexed: 04/12/2024]
Abstract
Deep learning algorithms have been widely used in microscopic image translation. The corresponding data-driven models can be trained by supervised or unsupervised learning, depending on the availability of paired data. In general, however, the data are only roughly paired, such that supervised learning can fail due to misalignment, while unsupervised learning is less than ideal because the rough pairing information goes unused. In this work, we propose a unified framework (U-Frame) that unifies supervised and unsupervised learning by introducing a tolerance size that can be adjusted automatically according to the degree of data misalignment. Together with the implementation of a global sampling rule, we demonstrate that U-Frame consistently outperforms both supervised and unsupervised learning at all levels of data misalignment (even for perfectly aligned image pairs) in a myriad of image translation applications, including pseudo-optical sectioning, virtual histological staining (with clinical evaluations for cancer diagnosis), improvement of signal-to-noise ratio or resolution, and prediction of fluorescent labels, potentially serving as a new standard for image translation.
Collapse
Affiliation(s)
- Weixing Dai
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
| | - Ivy H M Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
| | - Terence T W Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
| |
Collapse
|
15
|
Chen R, Xu J, Wang B, Ding Y, Abdulla A, Li Y, Jiang L, Ding X. SpiDe-Sr: blind super-resolution network for precise cell segmentation and clustering in spatial proteomics imaging. Nat Commun 2024; 15:2708. [PMID: 38548720 PMCID: PMC10978886 DOI: 10.1038/s41467-024-46989-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Accepted: 03/15/2024] [Indexed: 04/01/2024] Open
Abstract
Spatial proteomics elucidates cellular biochemical changes at an unprecedented topological level. Imaging mass cytometry (IMC) is a high-dimensional, single-cell-resolution platform for targeted spatial proteomics. However, the precision of subsequent clinical analysis is constrained by imaging noise and resolution. Here, we propose SpiDe-Sr, a super-resolution network embedded with a denoising module for IMC spatial resolution enhancement. SpiDe-Sr effectively resists noise and improves resolution by 4 times. We demonstrate SpiDe-Sr on cells, mouse tissues, and human tissues, respectively, resulting in 18.95%/27.27%/21.16% increases in peak signal-to-noise ratio and 15.95%/31.63%/15.52% increases in cell extraction accuracy. We further apply SpiDe-Sr to study the tumor microenvironment of a 20-patient clinical breast cancer cohort with 269,556 single cells and discover that the invasion of Gram-negative bacteria is positively correlated with carcinogenesis markers and negatively correlated with immunological markers. Additionally, SpiDe-Sr is also compatible with fluorescence microscopy imaging, suggesting SpiDe-Sr as an alternative tool for microscopy image super-resolution.
Collapse
Grants
- This work was supported by the National Key R&D Program of China (2022YFC2601700, 2022YFF0710202), NSFC Projects (T2122002, 22077079, 81871448), the Shanghai Municipal Science and Technology Project (22Z510202478), the Shanghai Municipal Education Commission Project (21SG10), Shanghai Jiao Tong University Projects (YG2021ZD19, Agri-X20200101, 2020 SJTU-HUJI), and the Shanghai Municipal Health Commission Project (2019CXJQ03). We thank AEMD SJTU and the Shanghai Jiao Tong University Laboratory Animal Center for their support.
Collapse
Affiliation(s)
- Rui Chen
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Jiasu Xu
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Boqian Wang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yi Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Aynur Abdulla
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yiyang Li
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Lai Jiang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Xianting Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China.
| |
Collapse
|
16
|
Kim DD, Chandra RS, Yang L, Wu J, Feng X, Atalay M, Bettegowda C, Jones C, Sair H, Liao WH, Zhu C, Zou B, Kazerooni AF, Nabavizadeh A, Jiao Z, Peng J, Bai HX. Active Learning in Brain Tumor Segmentation with Uncertainty Sampling and Annotation Redundancy Restriction. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01037-6. [PMID: 38514595 DOI: 10.1007/s10278-024-01037-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Revised: 01/30/2024] [Accepted: 02/01/2024] [Indexed: 03/23/2024]
Abstract
Deep learning models have demonstrated great potential in medical imaging but are limited by the large volume of expensive annotations required. To address this, we compared different active learning strategies by training models on subsets of the most informative images from real-world clinical brain tumor segmentation datasets, and we propose a framework that minimizes the data needed while maintaining performance. A total of 638 multi-institutional brain tumor magnetic resonance imaging scans were used to train three-dimensional U-Net models and compare active learning strategies. Uncertainty estimation techniques, including Bayesian estimation with dropout, bootstrapping, and margin sampling, were compared to random query. Strategies to avoid annotating similar images were also considered. We determined the minimum data necessary to achieve performance equivalent to the model trained on the full dataset (α = 0.05). Bayesian approximation with dropout at training and testing showed results equivalent to those of the full-data model (target) using around 30% of the training data that random query needed to achieve target performance (p = 0.018). Annotation redundancy restriction techniques can further reduce the training data needed by random query to achieve target performance by 20%. We investigated various active learning strategies to minimize the annotation burden for three-dimensional brain tumor segmentation; dropout uncertainty estimation achieved target performance with the least annotated data.
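The dropout-based uncertainty sampling strategy can be sketched as follows: run the trained model several times with dropout left active, score each unlabeled scan by the variance of its predictions, and send the highest-variance scans for annotation first. Everything below (the names, the scalar per-image prediction, the toy model) is an illustrative reduction of the idea, not the authors' 3D U-Net pipeline.

```python
import random
import statistics

def mc_dropout_predictions(model, image, n_passes=10):
    """Run the model repeatedly with dropout active at test time,
    yielding one stochastic prediction per pass."""
    return [model(image) for _ in range(n_passes)]

def uncertainty_score(predictions):
    """Variance across stochastic passes: high variance = high uncertainty."""
    return statistics.pvariance(predictions)

def select_most_informative(model, pool, budget, n_passes=10):
    """Rank an unlabeled pool by MC-dropout uncertainty and return the
    indices of the `budget` images most worth annotating next."""
    scored = sorted(
        ((uncertainty_score(mc_dropout_predictions(model, img, n_passes)), i)
         for i, img in enumerate(pool)),
        reverse=True)
    return [i for _, i in scored[:budget]]

# Toy stand-in for a dropout segmentation model: each "image" is just a
# number controlling how noisy (ambiguous) the model's prediction is.
def toy_model(img):
    return 0.5 + random.gauss(0.0, img)
```

In a real pipeline the per-image score would aggregate voxel-wise uncertainty over a segmentation volume rather than a single scalar prediction.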
Collapse
Affiliation(s)
- Daniel D Kim
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
| | - Rajat S Chandra
- Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
| | - Li Yang
- Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China
- Clinical Medical Research Center for Stroke Prevention and Treatment of Hunan Province, Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China
| | - Jing Wu
- Department of Radiology, Second Xiangya Hospital, Central South University, Changsha, China
| | - Xue Feng
- Biomedical Engineering, University of Virginia, Charlottesville, VA, USA
| | - Michael Atalay
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
| | - Chetan Bettegowda
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
| | - Craig Jones
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Haris Sair
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
| | - Wei-Hua Liao
- Department of Radiology, Xiangya Hospital, Central South University, Changsha, China
| | - Chengzhang Zhu
- College of Literature and Journalism, Central South University, Changsha, China
| | - Beiji Zou
- School of Computer Science and Engineering, Central South University, Changsha, China
| | - Anahita Fathi Kazerooni
- Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Ali Nabavizadeh
- Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Zhicheng Jiao
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
| | - Jian Peng
- Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China.
- Clinical Medical Research Center for Stroke Prevention and Treatment of Hunan Province, Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China.
| | - Harrison X Bai
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
| |
Collapse
|
17
|
Tang WH, Sim SR, Aik DYK, Nelanuthala AVS, Athilingam T, Röllin A, Wohland T. Deep learning reduces data requirements and allows real-time measurements in imaging FCS. Biophys J 2024; 123:655-666. [PMID: 38050354 PMCID: PMC10995408 DOI: 10.1016/j.bpj.2023.11.3403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 11/18/2023] [Accepted: 11/30/2023] [Indexed: 12/06/2023] Open
Abstract
Imaging fluorescence correlation spectroscopy (FCS) is a powerful tool to extract information on molecular mobilities, actions, and interactions in live cells, tissues, and organisms. Nevertheless, several limitations restrict its applicability. First, FCS is data-hungry, requiring 50,000 frames at 1-ms time resolution to obtain accurate parameter estimates. Second, the data size makes evaluation slow. Third, as FCS evaluation is model-dependent, data evaluation is significantly slowed unless analytic models are available. Here, we introduce two convolutional neural networks, FCSNet and ImFCSNet, for correlation and intensity-trace analysis, respectively. FCSNet robustly predicts parameters in 2D and 3D live samples. ImFCSNet reduces the amount of data required for accurate parameter retrieval by at least one order of magnitude and makes correct estimates even in moderately defocused samples. Both convolutional neural networks are trained on simulated data, are model agnostic, and allow autonomous, real-time evaluation of imaging FCS measurements.
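For context, the correlation curves that a network like FCSNet ingests follow the standard FCS definition G(τ) = ⟨δI(t)·δI(t+τ)⟩ / ⟨I⟩² with δI = I − ⟨I⟩. Below is a minimal per-pixel sketch of that estimator; the function name and plain-list representation are illustrative, and production imaging-FCS code uses multi-tau or FFT-based estimators for speed.

```python
def fcs_autocorrelation(intensity, max_lag):
    """Normalized FCS autocorrelation of one intensity trace:
    G(tau) = <dI(t) * dI(t + tau)> / <I>^2, for tau = 1..max_lag."""
    n = len(intensity)
    mean_i = sum(intensity) / n
    delta = [i - mean_i for i in intensity]
    g = []
    for tau in range(1, max_lag + 1):
        cov = sum(delta[t] * delta[t + tau] for t in range(n - tau)) / (n - tau)
        g.append(cov / mean_i ** 2)
    return g
```

A constant trace yields G(τ) = 0 for all lags, while correlated fluctuations (e.g., a fluorophore dwelling in the observation volume) produce positive correlation at short lags; fitting such curves to diffusion models is the step the networks replace.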
Collapse
Affiliation(s)
- Wai Hoh Tang
- Department of Biological Sciences, National University of Singapore, Singapore, Singapore; NUS Centre for Bio-Imaging Sciences, National University of Singapore, Singapore, Singapore; Department of Statistics and Data Science, National University of Singapore, Singapore, Singapore; Institute of Digital Molecular Analytics and Science, National University of Singapore, Singapore, Singapore
| | - Shao Ren Sim
- Department of Biological Sciences, National University of Singapore, Singapore, Singapore; NUS Centre for Bio-Imaging Sciences, National University of Singapore, Singapore, Singapore
| | - Daniel Ying Kia Aik
- Department of Biological Sciences, National University of Singapore, Singapore, Singapore; NUS Centre for Bio-Imaging Sciences, National University of Singapore, Singapore, Singapore; Institute of Digital Molecular Analytics and Science, National University of Singapore, Singapore, Singapore; Department of Chemistry, National University of Singapore, Singapore, Singapore
| | - Ashwin Venkata Subba Nelanuthala
- Department of Biological Sciences, National University of Singapore, Singapore, Singapore; NUS Centre for Bio-Imaging Sciences, National University of Singapore, Singapore, Singapore
| | | | - Adrian Röllin
- Department of Statistics and Data Science, National University of Singapore, Singapore, Singapore
| | - Thorsten Wohland
- Department of Biological Sciences, National University of Singapore, Singapore, Singapore; NUS Centre for Bio-Imaging Sciences, National University of Singapore, Singapore, Singapore; Institute of Digital Molecular Analytics and Science, National University of Singapore, Singapore, Singapore; Department of Chemistry, National University of Singapore, Singapore, Singapore.
| |
Collapse
|
18
|
Gao X, Huang T, Tang P, Di J, Zhong L, Zhang W. Enhancing scanning electron microscopy imaging quality of weakly conductive samples through unsupervised learning. Sci Rep 2024; 14:6439. [PMID: 38499623 PMCID: PMC10948821 DOI: 10.1038/s41598-024-57056-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2024] [Accepted: 03/13/2024] [Indexed: 03/20/2024] Open
Abstract
Scanning electron microscopy (SEM) is a crucial tool for analyzing submicron-scale structures. However, the attainment of high-quality SEM images is contingent upon the high conductivity of the material, owing to constraints imposed by its imaging principles. For materials or structures rendered weakly conductive by intrinsic properties or organic doping, SEM imaging quality is significantly compromised, impeding the accuracy of subsequent structure-related analyses. Moreover, the unavailability of paired high- and low-quality images in this context renders supervised image processing methods ineffective for addressing this challenge. Here, an unsupervised method based on the Cycle-consistent Generative Adversarial Network (CycleGAN) is proposed to enhance the quality of SEM images of weakly conductive samples. The unsupervised model performs end-to-end learning using unpaired blurred and clear SEM images from weakly and well-conductive samples, respectively. To address the requirements of material structure analysis, an edge loss function is further introduced to recover finer details in the network-generated images. Various quantitative evaluations substantiate the efficacy of the proposed method in improving SEM image quality, with better performance than traditional methods. Our framework broadens the application of artificial intelligence in materials analysis, with significant implications in fields such as materials science and image restoration.
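An edge loss of the kind mentioned above is typically a mean-absolute penalty between gradient (edge) maps of the generated image and a reference. The sketch below uses 3×3 Sobel filters in plain Python; the Sobel choice, function names, and L1 formulation are illustrative assumptions, not the authors' exact formulation.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv2d_valid(img, kernel):
    """3x3 'valid' convolution on a list-of-lists grayscale image."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + j][x + i] * kernel[j][i]
                 for j in range(3) for i in range(3))
             for x in range(w - 2)]
            for y in range(h - 2)]

def edge_map(img):
    """L1 magnitude of the Sobel gradient: strong where edges are."""
    gx, gy = conv2d_valid(img, SOBEL_X), conv2d_valid(img, SOBEL_Y)
    return [[abs(a) + abs(b) for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]

def edge_loss(generated, target):
    """Mean absolute difference between edge maps; adding this to the
    CycleGAN objective pushes the generator to preserve fine detail."""
    eg, et = edge_map(generated), edge_map(target)
    n = len(eg) * len(eg[0])
    return sum(abs(a - b) for rg, rt in zip(eg, et)
               for a, b in zip(rg, rt)) / n
```

In training, this term would be computed on network outputs (tensors rather than lists) and weighted against the adversarial and cycle-consistency losses.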
Collapse
Affiliation(s)
- Xin Gao
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou, 510006, China
| | - Tao Huang
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou, 510006, China
| | - Ping Tang
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou, 510006, China
| | - Jianglei Di
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou, 510006, China
| | - Liyun Zhong
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou, 510006, China
| | - Weina Zhang
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou, 510006, China.
| |
Collapse
|
19
|
Trieu Q, Nehmetallah G. Deep learning based coherence holography reconstruction of 3D objects. APPLIED OPTICS 2024; 63:B1-B15. [PMID: 38437250 DOI: 10.1364/ao.503034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 10/12/2023] [Indexed: 03/06/2024]
Abstract
We propose a reconstruction method for coherence holography using deep neural networks. cGAN and U-Net models were developed to reconstruct 3D complex objects from recorded interferograms. Our proposed methods, dubbed deep coherence holography (DCH), predict the non-diffracted fields or the sub-objects included in the 3D object from the captured interferograms, yielding better reconstructed objects than traditional analytical imaging methods in terms of accuracy, resolution, and time. The DCH needs one image per sub-object, as opposed to N images for the traditional sin-fit algorithm, so the total reconstruction time is reduced N-fold. Furthermore, with noisy interferograms, the DCH amplitude mean-square reconstruction error (MSE) is 5×10⁴ and 10⁴ times better, and the phase MSE is 10² and 3×10³ times better, than the Fourier fringe and sin-fit algorithms, respectively. The amplitude peak signal-to-noise ratio (PSNR) is 3 and 2 times better, and the phase PSNR is 5 and 3 times better, than the Fourier fringe and sin-fit algorithms, respectively. The reconstruction resolution is the same as sin-fit but 2 times better than the Fourier fringe analysis technique.
Collapse
|
20
|
Lee XL, Chang JC, Ye XY, Chang CY. Field-programmable gate array and deep neural network-accelerated spatial-spectral interferometry for rapid optical dispersion analysis. OPTICS LETTERS 2024; 49:1289-1292. [PMID: 38426995 DOI: 10.1364/ol.510618] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 01/24/2024] [Indexed: 03/02/2024]
Abstract
Spatial-spectral interferometry (SSI) is a technique used to reconstruct the electric field of an ultrafast laser. By analyzing the spectral phase distribution, SSI provides valuable information about the optical dispersion affecting the spectral phase, which is related to the energy distribution of the laser pulses. SSI is a single-shot measurement process and has a low laser power requirement. However, the reconstruction algorithm involves numerous Fourier transform and filtering operations, which limits the applicability of SSI for real-time dispersion analysis. To address this issue, this Letter proposes a field-programmable gate array (FPGA)-based deep neural network to accelerate the spectral phase reconstruction and dispersion estimation process. The results show that the analysis time is improved from 124 to 9.27 ms, a 13.4-fold improvement over the standard Fourier transform-based reconstruction algorithm.
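The "numerous Fourier transform and filtering operations" being accelerated here are classic Takeda-style fringe analysis. A minimal numpy sketch of that conventional pipeline (FFT, isolate the carrier side-band, inverse FFT, unwrap the argument); the window width and function name are illustrative assumptions:

```python
import numpy as np

def fourier_fringe_phase(interferogram, carrier_bin):
    """Classic Fourier fringe analysis of a 1D interferogram:
    FFT, band-pass a window around the carrier side-band,
    inverse-FFT, then unwrap the argument to get the phase.
    Assumes the carrier is well separated from DC.
    """
    n = interferogram.shape[-1]
    spec = np.fft.fft(interferogram, axis=-1)
    # Band-pass filter: keep a small window around the carrier frequency only
    mask = np.zeros(n)
    mask[carrier_bin - 2:carrier_bin + 3] = 1.0
    analytic = np.fft.ifft(spec * mask, axis=-1)
    return np.unwrap(np.angle(analytic), axis=-1)
```

For a pure cosine fringe with an integer-cycle carrier, the recovered phase is linear in position with slope 2π·carrier/N plus the encoded phase offset.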
Collapse
|
21
|
Lei M, Zhao J, Zhou J, Lee H, Wu Q, Burns Z, Chen G, Liu Z. Super resolution label-free dark-field microscopy by deep learning. NANOSCALE 2024; 16:4703-4709. [PMID: 38268454 DOI: 10.1039/d3nr04294d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Dark-field microscopy (DFM) is a powerful label-free and high-contrast imaging technique due to its ability to reveal features of transparent specimens with inhomogeneities. However, owing to Abbe's diffraction limit, fine structures at the sub-wavelength scale are difficult to resolve. In this work, we report a single-image super-resolution DFM scheme using a convolutional neural network (CNN). A U-net based CNN is trained with a dataset which is numerically simulated based on the forward physical model of the DFM. The forward physical model, described by the parameters of the imaging setup, connects the object ground truths and dark-field images. With the trained network, we demonstrate super-resolution dark-field imaging of various test samples with a twofold resolution improvement. Our technique illustrates a promising deep learning approach to double the resolution of DFM without any hardware modification.
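The training-data strategy above — simulate (ground truth, diffraction-limited) pairs from a forward model — can be sketched with a toy forward model. A Gaussian OTF stands in for the paper's band-limited dark-field PSF; all shapes, counts, and the PSF choice are illustrative assumptions:

```python
import numpy as np

def simulate_darkfield_pair(shape=(64, 64), psf_sigma=2.0, seed=0):
    """Generate one (ground-truth, diffraction-limited) training pair
    from a toy forward model: random point scatterers blurred by a
    Gaussian stand-in for the band-limited dark-field PSF.
    """
    rng = np.random.default_rng(seed)
    truth = np.zeros(shape)
    ys, xs = rng.integers(0, shape[0], 20), rng.integers(0, shape[1], 20)
    truth[ys, xs] = rng.uniform(0.5, 1.0, 20)
    # Gaussian OTF applied in the frequency domain (circular convolution);
    # the zero-frequency gain is 1, so total intensity is preserved.
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    otf = np.exp(-2 * (np.pi * psf_sigma) ** 2 * (fy ** 2 + fx ** 2))
    blurred = np.fft.ifft2(np.fft.fft2(truth) * otf).real
    return truth, blurred
```

A network like the paper's U-net would then be trained to map `blurred` back to `truth` over many such pairs.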
Collapse
Affiliation(s)
- Ming Lei
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Junxiao Zhou
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Hongki Lee
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Qianyi Wu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Guanghao Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Materials Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| |
Collapse
|
22
|
Bender SWB, Dreisler MW, Zhang M, Kæstel-Hansen J, Hatzakis NS. SEMORE: SEgmentation and MORphological fingErprinting by machine learning automates super-resolution data analysis. Nat Commun 2024; 15:1763. [PMID: 38409214 PMCID: PMC10897458 DOI: 10.1038/s41467-024-46106-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 02/13/2024] [Indexed: 02/28/2024] Open
Abstract
The morphology of protein assemblies impacts their behaviour and contributes to beneficial and aberrant cellular responses. While single-molecule localization microscopy provides the required spatial resolution to investigate these assemblies, the lack of universal robust analytical tools to extract and quantify underlying structures limits this powerful technique. Here we present SEMORE, a semi-automatic machine learning framework for universal, system- and input-dependent analysis of super-resolution data. SEMORE implements a multi-layered density-based clustering module to dissect biological assemblies and a morphology fingerprinting module for quantification by multiple geometric and kinetics-based descriptors. We demonstrate SEMORE on simulations and diverse raw super-resolution data: time-resolved insulin aggregates, and published data of dSTORM imaging of nuclear pore complexes, fibroblast growth factor receptor 1, sptPALM of Syntaxin 1a and dynamic live-cell PALM of ryanodine receptors. SEMORE extracts and quantifies all protein assemblies, their temporal morphology evolution and provides quantitative insights, e.g. classification of heterogeneous insulin aggregation pathways and NPC geometry in minutes. SEMORE is a general analysis platform for super-resolution data and, being a time-aware framework, can also support the rise of 4D super-resolution data.
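The two stages named above — density-based clustering of localizations, then morphology fingerprinting by geometric descriptors — can be sketched crudely in numpy. This is a stand-in, not SEMORE's actual multi-layered algorithm: a neighbour-graph clustering replaces the density-based module, and the descriptors (count, radius of gyration, covariance anisotropy) are illustrative choices:

```python
import numpy as np

def cluster_and_fingerprint(points, radius=1.0):
    """Group 2D localizations whose neighbour graph (edges shorter
    than `radius`) is connected, then summarize each cluster by
    simple geometric descriptors."""
    n = len(points)
    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i, j in zip(*np.where(d < radius)):
        parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(n)])
    fingerprints = {}
    for lab in np.unique(labels):
        pts = points[labels == lab]
        centred = pts - pts.mean(axis=0)
        rg = np.sqrt((centred ** 2).sum(axis=1).mean())
        eig = np.linalg.eigvalsh(np.cov(centred.T)) if len(pts) > 1 else np.zeros(2)
        fingerprints[lab] = dict(n=len(pts), radius_of_gyration=rg,
                                 anisotropy=eig[-1] / max(eig[0], 1e-12))
    return labels, fingerprints
```

Two well-separated blobs of localizations come out as two clusters, each with its own descriptor fingerprint.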
Collapse
Affiliation(s)
- Steen W B Bender
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark
| | - Marcus W Dreisler
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark
| | - Min Zhang
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark
| | - Jacob Kæstel-Hansen
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark.
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark.
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark.
| | - Nikos S Hatzakis
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark.
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark.
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark.
- Novo Nordisk Center for Protein Research, University of Copenhagen, Copenhagen, Denmark.
| |
Collapse
|
23
|
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results, particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in the role of AI is needed: AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Collapse
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
| |
Collapse
|
24
|
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
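The "motion context" claim above can be illustrated against the naive baseline. A minimal numpy sketch contrasts linear frame blending (which ghosts moving structures) with a rigid motion-compensated midpoint, using phase correlation as a stand-in for the learned motion model — CAFI's networks generalize this idea to local, non-rigid motion; all names here are illustrative:

```python
import numpy as np

def midpoint_frames(frame_a, frame_b):
    """Return two midpoint estimates between consecutive frames:
    a naive linear blend, and a motion-compensated average that
    first estimates the global shift by phase correlation and
    moves each frame halfway toward the other."""
    blend = 0.5 * (frame_a + frame_b)
    # Global shift between the frames via phase correlation
    cross = np.fft.ifft2(np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    dy = peak[0] - frame_a.shape[0] * (peak[0] > frame_a.shape[0] // 2)
    dx = peak[1] - frame_a.shape[1] * (peak[1] > frame_a.shape[1] // 2)
    # Move each frame halfway toward the other and average
    half_a = np.roll(frame_a, (-dy // 2, -dx // 2), axis=(0, 1))
    half_b = np.roll(frame_b, (dy - dy // 2, dx - dx // 2), axis=(0, 1))
    return blend, 0.5 * (half_a + half_b)
```

For a particle that hops four pixels between frames, the blend splits it into two half-intensity ghosts, while the motion-compensated frame places a single full-intensity particle at the true midpoint.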
Collapse
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK.
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK.
| | - David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
| | - Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
| | - Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
| | - Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
| | - Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
| | - Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
| | - Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
| | - Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK.
| |
Collapse
|
25
|
Chang GH, Wu MY, Yen LH, Huang DY, Lin YH, Luo YR, Liu YD, Xu B, Leong KW, Lai WS, Chiang AS, Wang KC, Lin CH, Wang SL, Chu LA. Isotropic multi-scale neuronal reconstruction from high-ratio expansion microscopy with contrastive unsupervised deep generative models. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 244:107991. [PMID: 38185040 DOI: 10.1016/j.cmpb.2023.107991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2023] [Revised: 12/10/2023] [Accepted: 12/19/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND AND OBJECTIVE Current methods for imaging reconstruction from high-ratio expansion microscopy (ExM) data are limited by anisotropic optical resolution and the requirement for extensive manual annotation, creating a significant bottleneck in the analysis of complex neuronal structures. METHODS We devised an innovative approach called the IsoGAN model, which utilizes a contrastive unsupervised generative adversarial network to sidestep these constraints. This model leverages multi-scale and isotropic neuron/protein/blood vessel morphology data to generate high-fidelity 3D representations of these structures, eliminating the need for rigorous manual annotation and supervision. The IsoGAN model introduces simplified structures with idealized morphologies as shape priors to ensure high consistency in the generated neuronal profiles across all points in space and scalability for arbitrarily large volumes. RESULTS The efficacy of the IsoGAN model was quantitatively assessed by examining the consistency between the axial and lateral views and the reduction in erroneous imaging artifacts; on both measures, IsoGAN accurately reconstructed complex neuronal structures, and it can be further applied to various biological samples. CONCLUSION With its ability to generate detailed 3D neurons/proteins/blood vessel structures using significantly fewer axial view images, IsoGAN can streamline the process of imaging reconstruction while maintaining the necessary detail, offering a transformative solution to the existing limitations in high-throughput morphology analysis across different structures.
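The axial/lateral consistency assessed in the RESULTS can be quantified in many ways; one simple spectral proxy (not the paper's metric) is the per-axis bandwidth: the normalized frequency below which most of the power lies, compared between the two axes of a slice. Anisotropic data shows a lower axial cutoff; an isotropic reconstruction should give similar cutoffs for both axes. The fraction and function name are illustrative:

```python
import numpy as np

def band_cutoff(img, axis, frac=0.9):
    """Normalized frequency below which `frac` of the spectral power
    of `img` lies, measured along one axis. Comparing the two axes
    of an axial slice exposes resolution anisotropy."""
    img = img - img.mean()  # remove DC so the cutoff reflects structure
    power = (np.abs(np.fft.rfft(img, axis=axis)) ** 2).mean(axis=1 - axis)
    cum = np.cumsum(power) / power.sum()
    return np.searchsorted(cum, frac) / len(power)
```

A test pattern that varies slowly along the axial direction but quickly along the lateral one yields a clearly smaller axial cutoff.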
Collapse
Affiliation(s)
- Gary Han Chang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC; Graduate School of Advanced Technology, National Taiwan University, Taipei, Taiwan, ROC.
| | - Meng-Yun Wu
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
| | - Ling-Hui Yen
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Da-Yu Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
| | - Ya-Hui Lin
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Yi-Ru Luo
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Ya-Ding Liu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Bin Xu
- Department of Psychiatry, Columbia University, New York, NY 10032, USA
| | - Kam W Leong
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
| | - Wen-Sung Lai
- Department of Psychology, National Taiwan University, Taipei, Taiwan, ROC
| | - Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC; Institute of System Neuroscience, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Kuo-Chuan Wang
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
| | - Chin-Hsien Lin
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
| | - Shih-Luen Wang
- Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
| | - Li-An Chu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC.
| |
Collapse
|
26
|
Chen MM, Kopittke PM, Zhao FJ, Wang P. Applications and opportunities of click chemistry in plant science. TRENDS IN PLANT SCIENCE 2024; 29:167-178. [PMID: 37612212 DOI: 10.1016/j.tplants.2023.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 06/29/2023] [Accepted: 07/19/2023] [Indexed: 08/25/2023]
Abstract
The Nobel Prize in Chemistry for 2022 was awarded to the pioneers of Lego-like 'click chemistry': combinatorial chemistry with remarkable modularity and diversity. It has been applied to a wide variety of biological systems, from microorganisms to plants and animals, including humans. Although click chemistry is a powerful chemical biology tool, comparatively few studies have examined its potential in plant science. Here, we review click chemistry reactions and their applications in plant systems, highlighting the activity-based probes and metabolic labeling strategies combined with bioorthogonal click chemistry to visualize plant biological processes. These applications offer new opportunities to explore and understand the underlying molecular mechanisms regulating plant composition, growth, metabolism, defense, and immune responses.
Collapse
Affiliation(s)
- Ming-Ming Chen
- Centre of Agriculture and Health, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, 210095, China
| | - Peter M Kopittke
- School of Agriculture and Food Sciences, The University of Queensland, St Lucia, Queensland, 4072, Australia
| | - Fang-Jie Zhao
- State Key Laboratory of Crop Genetics and Germplasm Enhancement, College of Resources and Environmental Sciences, Nanjing Agricultural University, Nanjing, 210095, China
| | - Peng Wang
- Centre of Agriculture and Health, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, 210095, China; State Key Laboratory of Crop Genetics and Germplasm Enhancement, College of Resources and Environmental Sciences, Nanjing Agricultural University, Nanjing, 210095, China.
| |
Collapse
|
27
|
Gohari G, Jiang M, Manganaris GA, Zhou J, Fotopoulos V. Next generation chemical priming: with a little help from our nanocarrier friends. TRENDS IN PLANT SCIENCE 2024; 29:150-166. [PMID: 38233253 DOI: 10.1016/j.tplants.2023.11.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 11/28/2023] [Accepted: 11/30/2023] [Indexed: 01/19/2024]
Abstract
Plants are exposed to multiple threats linked to climate change, which can cause critical yield losses. Therefore, designing novel crop management tools is crucial. Chemical priming has recently emerged as an effective technology for improving tolerance to stress factors. Several compounds such as phytohormones, reactive species, and synthetic chimeras have been identified as promising priming agents. Following remarkable developments in nanotechnology, several unique nanocarriers (NCs) have been engineered that can act as smart delivery systems. These provide an eco-friendly, next-generation method for chemical priming, leading to increased efficiency and reduced overall chemical usage. We review novel engineered NCs (NENCs) as vehicles for chemical agents in advanced priming strategies, and address the challenges and opportunities that must be met to achieve sustainable agriculture.
Collapse
Affiliation(s)
- Gholamreza Gohari
- Department of Agricultural Sciences Biotechnology and Food Science, Cyprus University of Technology, Lemesos, Cyprus; Department of Horticulture, Faculty of Horticulture, University of Maragheh, Maragheh, Iran
| | - Meng Jiang
- Hainan Institute, Zhejiang University, Yazhou Bay Sci-Tech City, Sanya, PR China
| | - George A Manganaris
- Department of Agricultural Sciences Biotechnology and Food Science, Cyprus University of Technology, Lemesos, Cyprus
| | - Jie Zhou
- Hainan Institute, Zhejiang University, Yazhou Bay Sci-Tech City, Sanya, PR China; Department of Horticulture, Zhejiang Provincial Key Laboratory of Horticultural Plant Integrative Biology, Zhejiang University, Hangzhou, PR China
| | - Vasileios Fotopoulos
- Department of Agricultural Sciences Biotechnology and Food Science, Cyprus University of Technology, Lemesos, Cyprus.
| |
Collapse
|
28
|
Wang Q, Li Z, Zhang S, Chi N, Dai Q. A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration. Neural Netw 2024; 170:227-241. [PMID: 37992510 DOI: 10.1016/j.neunet.2023.11.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 11/06/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023]
Abstract
Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, the presence of optical component limitations, coupled with the maximum photon budget that the specimen can tolerate, inevitably leads to a decline in imaging quality and a lack of useful signals. Therefore, image restoration becomes essential for ensuring high-quality and accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically for the purpose of reducing noise in microscopy images and attaining super-resolution. Unlike traditional approaches, WECT integrates wavelet transform and inverse-transform for multi-resolution image decomposition and reconstruction, resulting in an expanded receptive field for the network without compromising information integrity. Subsequently, multiple consecutive parallel CNN-Transformer modules are utilized to collaboratively model local and global dependencies, thus facilitating the extraction of more comprehensive and diversified deep features. In addition, the incorporation of generative adversarial networks (GANs) into WECT enhances its capacity to generate high perceptual quality microscopic images. Extensive experiments have demonstrated that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of quantitative and qualitative analysis.
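The key property WECT relies on — wavelet split and merge lose no information, so multi-resolution decomposition can widen the receptive field "without compromising information integrity" — can be demonstrated with a one-level 2D Haar transform and its exact inverse (a minimal dependency-free sketch; WECT's actual wavelet choice is not specified here):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform: split an image with
    even dimensions into approximation (LL) and detail (LH, HL, HH)
    sub-bands at half resolution."""
    a = (img[0::2] + img[1::2]) / 2   # row-pair averages
    d = (img[0::2] - img[1::2]) / 2   # row-pair differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d: the decomposition is lossless."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out
```

A network can thus process the four half-resolution sub-bands (seeing a 2× larger context per pixel) and merge them back without discarding content.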
Collapse
Affiliation(s)
- Qinghua Wang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Ziwei Li
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Pujiang Laboratory, Shanghai, China.
| | - Shuqi Zhang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Nan Chi
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Shanghai Collaborative Innovation Center of Low-Earth-Orbit Satellite Communication Technology, Shanghai, 200433, China.
| | - Qionghai Dai
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Department of Automation, Tsinghua University, Beijing, 100084, China.
| |
Collapse
|
29
|
He H, Cao M, Gao Y, Zheng P, Yan S, Zhong JH, Wang L, Jin D, Ren B. Noise learning of instruments for high-contrast, high-resolution and fast hyperspectral microscopy and nanoscopy. Nat Commun 2024; 15:754. [PMID: 38272927 PMCID: PMC10810791 DOI: 10.1038/s41467-024-44864-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 01/05/2024] [Indexed: 01/27/2024] Open
Abstract
The low scattering efficiency of Raman scattering makes it challenging to simultaneously achieve good signal-to-noise ratio (SNR), high imaging speed, and adequate spatial and spectral resolutions. Here, we report a noise learning (NL) approach that estimates the intrinsic noise distribution of each instrument by statistically learning the noise in the pixel-spatial frequency domain. The estimated noise is then removed from the noisy spectra. This enhances the SNR by ca. 10-fold, and suppresses the mean-square error by almost 150-fold. NL allows us to improve the positioning accuracy and spatial resolution and largely eliminates the impact of thermal drift on tip-enhanced Raman spectroscopic nanoimaging. NL is also applicable to enhance SNR in fluorescence and photoluminescence imaging. Our method manages the ground truth spectra and the instrumental noise simultaneously within the training dataset, which bypasses the tedious labelling of huge datasets required in conventional deep learning, potentially shifting deep learning from sample-dependent to instrument-dependent.
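The estimate-then-subtract idea — learn the instrument's noise statistics, then remove that noise from each spectrum — can be sketched with a crude classical stand-in: estimate per-frequency noise power from blank (signal-free) acquisitions and apply a Wiener-style gain. The paper's NL learns this distribution statistically rather than from blanks; the names and parameters below are illustrative:

```python
import numpy as np

def snr_db(clean, measured):
    """SNR of a measurement against a known clean reference, in dB."""
    noise = measured - clean
    return 10 * np.log10((clean ** 2).sum() / (noise ** 2).sum())

def spectral_denoise(noisy, blank_stack):
    """Estimate per-frequency noise power from a stack of blank
    acquisitions, then Wiener-attenuate each frequency of the noisy
    spectrum: bins dominated by noise are suppressed."""
    noise_power = np.mean(np.abs(np.fft.rfft(blank_stack, axis=-1)) ** 2, axis=0)
    spec = np.fft.rfft(noisy)
    sig_power = np.maximum(np.abs(spec) ** 2 - noise_power, 0.0)
    gain = sig_power / (sig_power + noise_power)
    return np.fft.irfft(spec * gain, n=len(noisy))
```

On a synthetic Raman-like band buried in white noise, the denoised spectrum has a measurably higher SNR than the raw measurement.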
Collapse
Affiliation(s)
- Hao He
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
| | - Maofeng Cao
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
| | - Yun Gao
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
| | - Peng Zheng
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
| | - Sen Yan
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
| | - Jin-Hui Zhong
- Department of Materials Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China.
| | - Lei Wang
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China.
| | - Dayong Jin
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Institute for Biomedical Materials & Devices (IBMD), University of Technology Sydney, Sydney, NSW, 2007, Australia
| | - Bin Ren
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China.
- Tan Kah Kee Innovation Laboratory, Xiamen, 361104, China.
| |
Collapse
|
30
|
Sun J, Yang B, Koukourakis N, Guck J, Czarske JW. AI-driven projection tomography with multicore fibre-optic cell rotation. Nat Commun 2024; 15:147. [PMID: 38167247 PMCID: PMC10762230 DOI: 10.1038/s41467-023-44280-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 12/06/2023] [Indexed: 01/05/2024] Open
Abstract
Optical tomography has emerged as a non-invasive imaging method, providing three-dimensional insights into subcellular structures and thereby enabling a deeper understanding of cellular functions, interactions, and processes. Conventional optical tomography methods are constrained by a limited illumination scanning range, leading to anisotropic resolution and incomplete imaging of cellular structures. To overcome this problem, we employ a compact multi-core fibre-optic cell rotator system that facilitates precise optical manipulation of cells within a microfluidic chip, achieving full-angle projection tomography with isotropic resolution. Moreover, we demonstrate an AI-driven tomographic reconstruction workflow, which marks a shift from conventional computational methods, which often demand manual processing, to a fully autonomous process. The performance of the proposed cell rotation tomography approach is validated through the three-dimensional reconstruction of cell phantoms and HL60 human cancer cells. The versatility of this learning-based tomographic reconstruction workflow paves the way for its broad application across diverse tomographic imaging modalities, including but not limited to flow cytometry tomography and acoustic rotation tomography. Therefore, this AI-driven approach can propel advancements in cell biology, aiding in the inception of pioneering therapeutics, and augmenting early-stage cancer diagnostics.
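The conventional computational baseline that such learned workflows replace is filtered back-projection. A dependency-free 2D sketch (nearest-neighbour rotation for the forward projection, Ram-Lak filtering for the inverse); all names, the angle set, and the nearest-neighbour shortcut are illustrative assumptions, not the paper's method:

```python
import numpy as np

def radon(image, angles):
    """Parallel-beam forward projection of a square image by
    nearest-neighbour rotation and column summation."""
    n = image.shape[0]
    c = (n - 1) / 2
    yy, xx = np.mgrid[0:n, 0:n] - c
    sino = []
    for th in angles:
        xr = np.clip(np.round(xx * np.cos(th) + yy * np.sin(th) + c), 0, n - 1).astype(int)
        yr = np.clip(np.round(-xx * np.sin(th) + yy * np.cos(th) + c), 0, n - 1).astype(int)
        sino.append(image[yr, xr].sum(axis=0))
    return np.array(sino)

def fbp(sino, angles):
    """Filtered back-projection with a Ram-Lak (ramp) filter."""
    n = sino.shape[1]
    ramp = 2 * np.abs(np.fft.rfftfreq(n))
    filtered = np.fft.irfft(np.fft.rfft(sino, axis=1) * ramp, n=n, axis=1)
    c = (n - 1) / 2
    yy, xx = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for proj, th in zip(filtered, angles):
        t = np.clip(np.round(xx * np.cos(th) - yy * np.sin(th) + c), 0, n - 1).astype(int)
        recon += proj[t]
    return recon * np.pi / len(angles)
```

Reconstructing a disk phantom from 60 full-angle projections recovers its shape; restricting the angular range (the problem the fibre-optic rotator solves physically) is what degrades this baseline into anisotropic, incomplete reconstructions.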
Collapse
Affiliation(s)
- Jiawei Sun
- Shanghai Artificial Intelligence Laboratory, Longwen Road 129, Xuhui District, 200232, Shanghai, China.
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany.
| | - Bin Yang
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany
| | - Nektarios Koukourakis
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany
| | - Jochen Guck
- Max Planck Institute for the Science of Light & Max Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Juergen W Czarske
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Dresden, Germany.
- Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany.
- Institute of Applied Physics, TU Dresden, Dresden, Germany.
| |
|
31
|
Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024; 13:4. [PMID: 38161203] [PMCID: PMC10758000] [DOI: 10.1038/s41377-023-01340-x]
Abstract
Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. As exemplified from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR from the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
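As a concrete illustration of the conventional PR methods the review covers before turning to deep learning, the classic Gerchberg-Saxton algorithm alternates between the object plane and the Fourier plane, enforcing the measured amplitude in each while keeping the current phase estimate. A minimal NumPy sketch (function and parameter names are ours, not from the paper):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200, seed=0):
    """Estimate the object-plane phase linking two intensity measurements.

    source_amp, target_amp: measured amplitudes (sqrt of intensity) in the
    object plane and the Fourier plane, respectively.
    Returns the recovered object-plane phase.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, source_amp.shape)  # random start
    for _ in range(n_iter):
        field = source_amp * np.exp(1j * phase)        # enforce object amplitude
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))  # enforce Fourier amplitude
        phase = np.angle(np.fft.ifft2(far))            # keep only the phase
    return phase
```

Each iteration is an alternating projection onto the two amplitude constraints, so the Fourier-magnitude mismatch is non-increasing; deep-learning PR methods replace or initialize exactly this kind of loop.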
Affiliation(s)
- Kaiqiang Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China.
| | - Li Song
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Chutian Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Zhenbo Ren
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
| | - Guangyuan Zhao
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jiazhen Dou
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
| | - Jianglei Di
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
| | - George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Renjie Zhou
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jianlin Zhao
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
| | - Edmund Y Lam
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
| |
|
32
|
Alajaji SA, Khoury ZH, Elgharib M, Saeed M, Ahmed ARH, Khan MB, Tavares T, Jessri M, Puche AC, Hoorfar H, Stojanov I, Sciubba JJ, Sultan AS. Generative Adversarial Networks in Digital Histopathology: Current Applications, Limitations, Ethical Considerations, and Future Directions. Mod Pathol 2024; 37:100369. [PMID: 37890670] [DOI: 10.1016/j.modpat.2023.100369]
Abstract
Generative adversarial networks (GANs) have gained significant attention in the field of image synthesis, particularly in computer vision. GANs consist of a generative model and a discriminative model trained in an adversarial setting to generate realistic and novel data. In the context of image synthesis, the generator produces synthetic images, whereas the discriminator determines their authenticity by comparing them with real examples. Through iterative training, the generator learns to create images that are indistinguishable from real ones, leading to high-quality image generation. Considering their success in computer vision, GANs hold great potential for medical diagnostic applications. In the medical field, GANs can generate images of rare diseases, aid in learning, and serve as visualization tools. GANs can also leverage unlabeled medical images, which are large, numerous, and challenging to annotate manually. GANs have demonstrated remarkable capabilities in image synthesis and have the potential to significantly impact digital histopathology. This review article focuses on the emerging use of GANs in digital histopathology, examining their applications and potential challenges. Histopathology plays a crucial role in disease diagnosis, and GANs can contribute by generating realistic microscopic images. However, ethical considerations arise because of the reliance on synthetic or pseudogenerated images; the manuscript therefore also explores the current limitations and highlights the ethical considerations associated with the use of this technology. In conclusion, digital histopathology has seen an emerging use of GANs for image enhancement, such as color (stain) normalization, virtual staining, and ink/marker removal. GANs offer significant potential in transforming digital pathology when applied to specific and narrow tasks (preprocessing enhancements). Evaluating data quality, addressing biases, protecting privacy, ensuring accountability and transparency, and developing regulation are imperative to ensure the ethical application of GANs.
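The adversarial setup described above can be stated compactly as two competing objectives. A toy sketch of the standard (non-saturating) GAN losses, for illustration only and not the formulation of any specific work reviewed:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: push D(x) -> 1 on real images, D(G(z)) -> 0 on fakes."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator objective: push D(G(z)) -> 1."""
    return -np.mean(np.log(d_fake))
```

During training the two losses are minimized alternately: the discriminator's loss falls as it separates real from synthetic stains, and the generator's loss falls exactly when its outputs fool the discriminator.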
Affiliation(s)
- Shahd A Alajaji
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, Maryland; Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia; Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, Maryland
| | - Zaid H Khoury
- Department of Oral Diagnostic Sciences and Research, School of Dentistry, Meharry Medical College, Nashville, Tennessee
| | | | | | | | | | - Tiffany Tavares
- Department of Comprehensive Dentistry, UT Health San Antonio, School of Dentistry, San Antonio, Texas
| | - Maryam Jessri
- Oral Medicine and Pathology Department, School of Dentistry, University of Queensland, Herston, Queensland, Australia; Oral Medicine Department, Metro North Hospital and Health Services, Queensland Health, Queensland, Australia
| | - Adam C Puche
- Department of Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland
| | - Hamid Hoorfar
- Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore, Maryland
| | - Ivan Stojanov
- Department of Pathology, Robert J. Tomsich Pathology and Laboratory Medicine Institute, Cleveland Clinic, Cleveland, Ohio
| | - James J Sciubba
- Department of Otolaryngology, Head and Neck Surgery, The Johns Hopkins University, Baltimore, Maryland
| | - Ahmed S Sultan
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, Maryland; Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, Maryland; University of Maryland Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore, Maryland.
| |
|
33
|
Bi X, Lin L, Chen Z, Ye J. Artificial Intelligence for Surface-Enhanced Raman Spectroscopy. Small Methods 2024; 8:e2301243. [PMID: 37888799] [DOI: 10.1002/smtd.202301243]
Abstract
Surface-enhanced Raman spectroscopy (SERS), well established as a fingerprinting and highly sensitive analytical technique, has proven valuable in a broad range of fields including biomedicine, environmental protection, and food safety, among others. In the pursuit of ever more sensitive, robust, and comprehensive sensing and imaging, advances keep emerging across the whole SERS pipeline, from the design of SERS substrates and reporter molecules, synthetic route planning, and instrument refinement to data preprocessing and analysis methods. Artificial intelligence (AI), created to imitate and eventually exceed human capabilities, has exhibited its power in learning high-level representations and recognizing complicated patterns with exceptional automaticity. Faced with intertwined influential factors and explosive data sizes, AI has therefore been increasingly leveraged in all the above-mentioned aspects of SERS, accelerating systematic optimization and deepening understanding of the underlying physics and spectral data far beyond what manual effort and conventional computation can achieve. In this review, recent progress in SERS through the integration of AI is summarized, and new insights into the challenges and perspectives are provided, with the aim of accelerating the development of AI-assisted SERS.
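As a minimal example of the kind of spectral pattern recognition AI brings to SERS data analysis, a nearest-centroid classifier over spectra; this is purely illustrative and far simpler than the deep models the review surveys:

```python
import numpy as np

def fit_centroids(spectra, labels):
    """Average the training spectra of each analyte class."""
    labels = np.asarray(labels)
    return {c: spectra[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(centroids, spectrum):
    """Assign a spectrum to the class whose mean spectrum is closest."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))
```

Real SERS pipelines replace the Euclidean distance with learned representations, but the workflow (fit on labeled spectra, predict on new ones) is the same.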
Affiliation(s)
- Xinyuan Bi
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
| | - Li Lin
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
| | - Zhou Chen
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
| | - Jian Ye
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
| |
|
34
|
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. Biophys Rep 2023; 3:100133. [PMID: 38026685] [PMCID: PMC10663640] [DOI: 10.1016/j.bpr.2023.100133]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
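The core of the ensemble idea is easy to state: run several independently trained translators on the same input and use the spread of their outputs as a per-pixel uncertainty map. A schematic sketch (the models here are placeholder callables, not the paper's networks):

```python
import numpy as np

def ensemble_predict(models, x):
    """Return (mean prediction, per-pixel uncertainty) for an ensemble.

    `models` is any list of callables mapping an input image to a predicted
    fluorescence image; the standard deviation across members serves as the
    uncertainty estimate.
    """
    preds = np.stack([m(x) for m in models])   # shape (n_models, H, W)
    return preds.mean(axis=0), preds.std(axis=0)
```

An out-of-distribution input would typically drive the ensemble members apart, raising the uncertainty map even where no ground-truth fluorescence is available.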
Affiliation(s)
- Sara Imboden
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
| | - Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
| | - Marie C. Payne
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
| | - Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
| | - Neil Y.C. Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, California
- Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, Los Angeles, California
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California
- Jonsson Comprehensive Cancer Center, University of California, Los Angeles, Los Angeles, California
- Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
| |
|
35
|
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
| | - Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China.
| | - Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
| | - Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
| | - Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
| |
|
36
|
Wang L, Chen S, Liu L, Yin X, Shi G, Mo J. Axial super-resolution optical coherence tomography via complex-valued network. Phys Med Biol 2023; 68:235016. [PMID: 37922558] [DOI: 10.1088/1361-6560/ad0997]
Abstract
Optical coherence tomography (OCT) is a fast and non-invasive optical interferometric imaging technique that can provide high-resolution cross-sectional images of biological tissues. OCT's key strength is its depth-resolving capability, which remains invariant along the imaging depth and is determined by the axial resolution. The axial resolution is inversely proportional to the bandwidth of the OCT light source; thus, using broadband light sources can effectively improve the axial resolution, but at an increased cost. In recent years, real-valued deep learning techniques have been introduced to achieve super-resolution optical imaging. In this study, we proposed a complex-valued super-resolution network (CVSR-Net) to achieve axial super-resolution for OCT by fully utilizing the amplitude and phase of the OCT signal. The method was evaluated on three OCT datasets. The results show that the CVSR-Net outperforms its real-valued counterpart with a better depth-resolving capability. Furthermore, comparisons were made between our network, six prevailing real-valued networks, and their complex-valued counterparts. The results demonstrate that the complex-valued networks exhibited better super-resolution performance than their real-valued counterparts, and our proposed CVSR-Net achieved the best performance. In addition, the CVSR-Net was tested on out-of-distribution datasets and its super-resolution performance was well maintained compared to that on source-domain datasets, indicating a good generalization capability.
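The building block that distinguishes a complex-valued network from its real-valued counterpart is a layer whose weights, activations, and nonlinearity all operate on complex numbers, so the amplitude and phase of the OCT signal are processed jointly. A generic minimal sketch using the split-complex CReLU activation (the abstract does not disclose CVSR-Net's actual architecture, so none of this is the authors' design):

```python
import numpy as np

def crelu(z):
    """Split-complex ReLU: rectify real and imaginary parts independently."""
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

class ComplexDense:
    """A single fully connected layer with complex-valued weights."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_in)
        self.W = scale * (rng.standard_normal((n_out, n_in))
                          + 1j * rng.standard_normal((n_out, n_in)))
        self.b = np.zeros(n_out, dtype=complex)

    def __call__(self, z):
        # Complex matrix-vector product followed by the complex nonlinearity.
        return crelu(self.W @ z + self.b)
```

A real-valued network fed only |signal| discards the phase at the input; here the phase propagates through every layer, which is the property the paper credits for the improved depth resolving.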
Affiliation(s)
- Lingyun Wang
- School of Electronics and Information Engineering, Soochow University, Suzhou, People's Republic of China
| | - Si Chen
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
| | - Linbo Liu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
| | - Xue Yin
- The First Affiliated Hospital of Soochow University, Suzhou, People's Republic of China
| | - Guohua Shi
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Suzhou, People's Republic of China
| | - Jianhua Mo
- School of Electronics and Information Engineering, Soochow University, Suzhou, People's Republic of China
| |
|
37
|
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. Laser Photonics Rev 2023; 17:2200029. [PMID: 38883699] [PMCID: PMC11178318] [DOI: 10.1002/lpor.202200029]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects without the need for the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state-of-the-art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, where diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information-science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future developments of LFSR imaging based on its physical mechanisms and to open the way for a series of articles in this field.
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
|
38
|
de Jong-Bolm D, Sadeghi M, Bogaciu CA, Bao G, Klaehn G, Hoff M, Mittelmeier L, Basmanav FB, Opazo F, Noé F, Rizzoli SO. Protein nanobarcodes enable single-step multiplexed fluorescence imaging. PLoS Biol 2023; 21:e3002427. [PMID: 38079451] [PMCID: PMC10735187] [DOI: 10.1371/journal.pbio.3002427]
Abstract
Multiplexed cellular imaging typically relies on the sequential application of detection probes, such as antibodies or DNA barcodes, which is complex and time-consuming. To address this, we developed protein nanobarcodes, composed of combinations of epitopes recognized by specific sets of nanobodies. The nanobarcodes are read in a single imaging step, relying on nanobodies conjugated to distinct fluorophores, which enables a precise analysis of large numbers of protein combinations. Fluorescence images of nanobarcodes were used as input to a deep neural network, which was able to identify proteins with high precision. We thus present an efficient and straightforward protein identification method that is applicable to relatively complex biological assays. We demonstrate this with a multicell competition assay, in which we successfully used our nanobarcoded proteins together with neurexin and neuroligin isoforms, thereby testing the preferred binding combinations of multiple isoforms in parallel.
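The multiplexing capacity of such barcodes grows combinatorially with the epitope panel: k epitopes read out as present/absent give 2^k − 1 distinct non-empty codes. A small enumeration sketch (the tag names below are hypothetical placeholders, not the panel used in the paper):

```python
from itertools import combinations

def barcode_space(epitopes):
    """Enumerate every non-empty epitope combination, i.e. the set of
    distinct nanobarcodes available from a given epitope panel."""
    codes = []
    for r in range(1, len(epitopes) + 1):
        codes.extend(combinations(epitopes, r))
    return codes

panel = ["ALFA", "Myc", "HA", "Spot"]   # hypothetical 4-tag panel
codes = barcode_space(panel)            # 2**4 - 1 = 15 presence/absence codes
```

With each epitope reported by a spectrally distinct fluorophore, all 15 codes are distinguishable in a single imaging step, which is what makes the deep-network readout tractable.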
Affiliation(s)
- Daniëlle de Jong-Bolm
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
- Mohsen Sadeghi
  - Department of Mathematics and Computer Science, Free University of Berlin, Berlin, Germany
- Cristian A. Bogaciu
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
- Guobin Bao
  - Institute of Pharmacology and Toxicology, University Medical Center, Georg-August-University, Göttingen, Germany
- Gabriele Klaehn
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
- Merle Hoff
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
- Lucas Mittelmeier
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
- F. Buket Basmanav
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
  - Campus Laboratory for Advanced Imaging, Microscopy and Spectroscopy, University of Göttingen, Göttingen, Germany
- Felipe Opazo
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
  - Center for Biostructural Imaging of Neurodegeneration (BIN), University of Göttingen Medical Center, Göttingen, Germany
  - NanoTag Biotechnologies GmbH, Göttingen, Germany
- Frank Noé
  - Department of Mathematics and Computer Science, Free University of Berlin, Berlin, Germany
  - Department of Physics, Free University of Technology, Berlin, Germany
  - Department of Chemistry, Rice University, Houston, Texas, United States of America
  - Microsoft Research AI4Science, Berlin, Germany
- Silvio O. Rizzoli
  - Department of Neuro- and Sensory Physiology, University of Göttingen Medical Center, Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), Göttingen, Germany
  - NanoTag Biotechnologies GmbH, Göttingen, Germany
39
Guo X, Zhao F, Zhu J, Zhu D, Zhao Y, Fei P. Rapid 3D isotropic imaging of whole organ with double-ring light-sheet microscopy and self-learning side-lobe elimination. Biomed Opt Express 2023; 14:6206-6221. [PMID: 38420327] [PMCID: PMC10898557] [DOI: 10.1364/boe.505217] [Received: 09/06/2023] [Revised: 10/26/2023] [Accepted: 10/30/2023] [Indexed: 03/02/2024]
Abstract
Bessel-like plane illumination forms a new type of light-sheet microscopy with an ultra-long optical sectioning distance that enables rapid 3D imaging of fine cellular structures across an entire large tissue. However, the side-lobe excitation of conventional Bessel light sheets severely impairs the quality of the reconstructed 3D image. Here, we propose a self-supervised deep learning (DL) approach that can completely eliminate the residual side lobes for a double-ring-modulated non-diffraction light-sheet microscope, thereby substantially improving the axial resolution of the 3D image. This lightweight DL model uses the microscope's own point spread function (PSF) as prior information, without the need for external high-resolution microscopy data. After a quick training process on a small number of datasets, the trained model can restore side-lobe-free 3D images with near-isotropic resolution for diverse samples. Using an advanced double-ring light-sheet microscope in conjunction with this efficient restoration approach, we demonstrate 5-minute rapid imaging of an entire mouse brain with a size of ∼12 mm × 8 mm × 6 mm and achieve a uniform isotropic resolution of ∼4 µm (1.6-µm voxel), capable of discerning single neurons and vessels across the whole brain.
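The self-supervised strategy in this abstract can be pictured concretely: because the double-ring PSF is known a priori, side-lobe-corrupted inputs can be synthesized from cleaner data by convolution with that PSF, yielding degraded/clean training pairs without any external high-resolution dataset. A minimal 1D sketch with invented numbers (the PSF values below are illustrative, not the instrument's measured PSF):

```python
import numpy as np

def degrade_with_psf(profile, psf):
    """Synthesize a side-lobe-corrupted axial profile by convolving a clean
    profile with the known PSF; a network is then trained to invert this."""
    return np.convolve(profile, psf, mode='same')

clean = np.zeros(11)
clean[5] = 1.0                               # ideal point emitter on the axis
psf = np.array([0.2, 0.1, 1.0, 0.1, 0.2])    # main lobe flanked by side lobes (made up)
blurred = degrade_with_psf(clean, psf)
# (blurred, clean) now forms one self-supervised training pair
assert blurred[5] == 1.0 and abs(blurred[3] - 0.2) < 1e-12
```

In the paper the same idea operates on 3D stacks with the microscope's double-ring PSF; the 1D delta here only makes the side-lobe structure easy to see.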
Affiliation(s)
- Xinyi Guo
  - School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Fang Zhao
  - School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Jingtan Zhu
  - MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Dan Zhu
  - MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
  - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, China
- Yuxuan Zhao
  - School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Peng Fei
  - School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
  - MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
40
Zhou H, Li Y, Chen B, Yang H, Zou M, Wen W, Ma Y, Chen M. Registration-free 3D super-resolution generative deep-learning network for fluorescence microscopy imaging. Opt Lett 2023; 48:6300-6303. [PMID: 38039252] [DOI: 10.1364/ol.503238] [Received: 08/15/2023] [Accepted: 11/02/2023] [Indexed: 12/03/2023]
Abstract
Volumetric fluorescence microscopy has a great demand for high-resolution (HR) imaging, which typically comes at the cost of sophisticated imaging solutions. Image super-resolution (SR) methods offer an effective way to recover HR images from low-resolution (LR) images. Nevertheless, these methods require pixel-level registered LR and HR images, which makes accurate image registration a challenge. To address these issues, we propose a novel registration-free image SR method that conducts SR training and prediction directly on unregistered LR and HR volumetric neuronal images. The network is built on the CycleGAN framework with an attention-based 3D U-Net. We evaluated our method on LR (5×/0.16-NA) and HR (20×/1.0-NA) fluorescence volumetric neuronal images collected by light-sheet microscopy. Compared to other super-resolution methods, our approach achieved the best reconstruction results. Our method shows promise for wide application in neuronal image super-resolution.
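The reason training can proceed on unregistered LR/HR volumes is that the CycleGAN framework replaces a pixel-wise supervised loss with adversarial losses plus a cycle-consistency term. A minimal NumPy sketch of that term, with invented toy generators standing in for the paper's attention 3D U-Nets:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1: mapping LR -> HR -> LR must return
    to the starting volume, so no registered ground truth is needed."""
    return np.mean(np.abs(F(G(x)) - x))

# Invented toy generators standing in for the learned networks.
G = lambda lr: lr * 2.0   # hypothetical LR -> HR mapping
F = lambda hr: hr / 2.0   # hypothetical HR -> LR mapping

lr_patch = np.random.rand(8, 8, 8)   # an unregistered LR volume patch
loss = cycle_consistency_loss(lr_patch, G, F)
assert loss == 0.0  # these toy mappings are exact inverses
```

In the real method the full objective also includes the two adversarial terms that push G's outputs toward the unpaired HR distribution; the cycle term alone is what removes the need for registration.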
41
Xu X, Qiu K, Tian Z, Aryal C, Rowan F, Chen R, Sun Y, Diao J. Probing the dynamic crosstalk of lysosomes and mitochondria with structured illumination microscopy. Trends Analyt Chem 2023; 169:117370. [PMID: 37928815] [PMCID: PMC10621629] [DOI: 10.1016/j.trac.2023.117370] [Indexed: 11/07/2023]
Abstract
Structured illumination microscopy (SIM) is a super-resolution technology for imaging living cells and has been used for studying the dynamics of lysosomes and mitochondria. Recently, new probes and analyzing methods have been developed for SIM imaging, enabling the quantitative analysis of these subcellular structures and their interactions. This review provides an overview of the working principle and advances of SIM, as well as the organelle-targeting principles and types of fluorescence probes, including small molecules, metal complexes, nanoparticles, and fluorescent proteins. Additionally, quantitative methods based on organelle morphology and distribution are outlined. Finally, the review provides an outlook on the current challenges and future directions for improving the combination of SIM imaging and image analysis to further advance the study of organelles. We hope that this review will be useful for researchers working in the field of organelle research and help to facilitate the development of SIM imaging and analysis techniques.
Affiliation(s)
- Xiuqiong Xu
  - Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
- Kangqiang Qiu
  - Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
- Zhiqi Tian
  - Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
- Chinta Aryal
  - Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
- Fiona Rowan
  - Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
- Rui Chen
  - Department of Chemistry, University of Cincinnati, Cincinnati, OH 45221, USA
- Yujie Sun
  - Department of Chemistry, University of Cincinnati, Cincinnati, OH 45221, USA
- Jiajie Diao
  - Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
42
Hou H, Mitbander R, Tang Y, Azimuddin A, Carns J, Schwarz RA, Richards-Kortum RR. Optical imaging technologies for in vivo cancer detection in low-resource settings. Curr Opin Biomed Eng 2023; 28:100495. [PMID: 38406798] [PMCID: PMC10883072] [DOI: 10.1016/j.cobme.2023.100495] [Indexed: 02/27/2024]
Abstract
Cancer continues to affect underserved populations disproportionately. Novel optical imaging technologies, which can provide rapid, non-invasive, and accurate cancer detection at the point of care, have great potential to improve global cancer care. This article reviews recent technical innovations and the clinical translation of low-cost optical imaging technologies, highlighting advances in both hardware and software, especially the integration of artificial intelligence, that improve in vivo cancer detection in low-resource settings. Additionally, it provides an overview of existing challenges and future perspectives on adopting optical imaging technologies in clinical practice, which can contribute to novel insights and programs that effectively improve cancer detection in low-resource settings.
Affiliation(s)
- Huayu Hou
  - Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Ruchika Mitbander
  - Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Yubo Tang
  - Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Ahad Azimuddin
  - School of Medicine, Texas A&M University, Houston, TX 77030, USA
- Jennifer Carns
  - Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Richard A Schwarz
  - Department of Bioengineering, Rice University, Houston, TX 77005, USA
43
Xu L, Kan S, Yu X, Liu Y, Fu Y, Peng Y, Liang Y, Cen Y, Zhu C, Jiang W. Deep learning enables stochastic optical reconstruction microscopy-like superresolution image reconstruction from conventional microscopy. iScience 2023; 26:108145. [PMID: 37867953] [PMCID: PMC10587619] [DOI: 10.1016/j.isci.2023.108145] [Received: 04/24/2023] [Revised: 08/05/2023] [Accepted: 10/02/2023] [Indexed: 10/24/2023]
Abstract
Despite its remarkable potential for transforming low-resolution images, deep learning faces significant challenges in achieving high-quality superresolution microscopy imaging from wide-field (conventional) microscopy. Here, we present X-Microscopy, a computational tool comprising two deep learning subnets, UR-Net-8 and X-Net, which enables STORM-like superresolution microscopy image reconstruction from wide-field images with input-size flexibility. X-Microscopy was trained using samples of various subcellular structures, including cytoskeletal filaments, dot-like, beehive-like, and nanocluster-like structures, to generate prediction models capable of producing images of comparable quality to STORM-like images. In addition to enabling multicolour superresolution image reconstructions, X-Microscopy also facilitates superresolution image reconstruction from different conventional microscopic systems. The capabilities of X-Microscopy offer promising prospects for making superresolution microscopy accessible to a broader range of users, going beyond the confines of well-equipped laboratories.
Affiliation(s)
- Lei Xu
  - Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
  - Key Laboratory of Molecular and Cellular Systems Biology, College of Life Sciences, Tianjin Normal University, Tianjin 300387, China
- Shichao Kan
  - School of Computer Science and Engineering, Central South University, Changsha, Hunan 410083, China
- Xiying Yu
  - Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ye Liu
  - HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
- Yuxia Fu
  - Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yiqiang Peng
  - HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
- Yanhui Liang
  - HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
- Yigang Cen
  - Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
- Changjun Zhu
  - Key Laboratory of Molecular and Cellular Systems Biology, College of Life Sciences, Tianjin Normal University, Tianjin 300387, China
- Wei Jiang
  - Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
44
Ge X, Yang P, Wu Z, Luo C, Jin P, Wang Z, Wang S, Huang Y, Niu T. Virtual differential phase-contrast and dark-field imaging of x-ray absorption images via deep learning. Bioeng Transl Med 2023; 8:e10494. [PMID: 38023711] [PMCID: PMC10658538] [DOI: 10.1002/btm2.10494] [Received: 02/28/2022] [Revised: 12/22/2022] [Accepted: 01/04/2023] [Indexed: 01/21/2023]
Abstract
Weak absorption contrast in biological tissues has hindered x-ray computed tomography from accessing biological structures. Recently, grating-based imaging has emerged as a promising solution to biological low-contrast imaging, providing complementary and previously unavailable structural information of the specimen. Although it has been successfully applied to work with conventional x-ray sources, grating-based imaging is time-consuming and requires a sophisticated experimental setup. In this work, we demonstrate that a deep convolutional neural network trained with a generative adversarial network can directly convert x-ray absorption images into differential phase-contrast and dark-field images that are comparable to those obtained at both a synchrotron beamline and a laboratory facility. By smearing back all of the virtual projections, high-quality tomographic images of biological test specimens deliver the differential phase-contrast- and dark-field-like contrast and quantitative information, broadening the horizon of x-ray image contrast generation.
Affiliation(s)
- Xin Ge
  - School of Science, Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, Guangdong, China
- Pengfei Yang
  - College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Zhao Wu
  - National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, Anhui, China
- Chen Luo
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, Guangdong, China
- Peng Jin
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, Guangdong, China
- Zhili Wang
  - Department of Optical Engineering, School of Physics, Hefei University of Technology, Hefei, Anhui, China
- Shengxiang Wang
  - Spallation Neutron Source Science Center, Dongguan, Guangdong, China
  - Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- Yongsheng Huang
  - School of Science, Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong, China
- Tianye Niu
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, Guangdong, China
  - Peking University Aerospace School of Clinical Medicine, Aerospace Center Hospital, Beijing, China
45
Guo M, Wu Y, Su Y, Qian S, Krueger E, Christensen R, Kroeschell G, Bui J, Chaw M, Zhang L, Liu J, Hou X, Han X, Ma X, Zhovmer A, Combs C, Moyle M, Yemini E, Liu H, Liu Z, La Riviere P, Colón-Ramos D, Shroff H. Deep learning-based aberration compensation improves contrast and resolution in fluorescence microscopy. bioRxiv 2023:2023.10.15.562439. [PMID: 37986950] [PMCID: PMC10659418] [DOI: 10.1101/2023.10.15.562439] [Indexed: 11/22/2023]
Abstract
Optical aberrations hinder fluorescence microscopy of thick samples, reducing image signal, contrast, and resolution. Here we introduce a deep learning-based strategy for aberration compensation, improving image quality without slowing image acquisition, applying additional dose, or introducing more optics into the imaging path. Our method (i) introduces synthetic aberrations to images acquired on the shallow side of image stacks, making them resemble those acquired deeper into the volume and (ii) trains neural networks to reverse the effect of these aberrations. We use simulations to show that applying the trained 'de-aberration' networks outperforms alternative methods, and subsequently apply the networks to diverse datasets captured with confocal, light-sheet, multi-photon, and super-resolution microscopy. In all cases, the improved quality of the restored data facilitates qualitative image inspection and improves downstream image quantitation, including orientational analysis of blood vessels in mouse tissue and improved membrane and nuclear segmentation in C. elegans embryos.
Affiliation(s)
- Min Guo
  - Current address: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Yicong Wu
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
  - Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, Maryland, USA
- Yijun Su
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
  - Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, Maryland, USA
- Shuhao Qian
  - Current address: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Eric Krueger
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Ryan Christensen
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Grant Kroeschell
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Johnny Bui
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Matthew Chaw
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Lixia Zhang
  - Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, Maryland, USA
- Jiamin Liu
  - Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, Maryland, USA
- Xuekai Hou
  - Current address: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Xiaofei Han
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
- Xuefei Ma
  - Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, MD, USA
- Alexander Zhovmer
  - Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, MD, USA
- Christian Combs
  - NHLBI Light Microscopy Facility, National Institutes of Health, Bethesda, MD, USA
- Mark Moyle
  - Department of Biology, Brigham Young University-Idaho, Rexburg, ID, USA
- Eviatar Yemini
  - Department of Neurobiology, UMass Chan Medical School, Worcester, MA
- Huafeng Liu
  - Current address: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Zhiyi Liu
  - Current address: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Patrick La Riviere
  - Department of Radiology, University of Chicago, Chicago, IL, USA
  - MBL Fellows Program, Marine Biological Laboratory, Woods Hole, MA, USA
- Daniel Colón-Ramos
  - MBL Fellows Program, Marine Biological Laboratory, Woods Hole, MA, USA
  - Wu Tsai Institute, Department of Neuroscience and Department of Cell Biology, Yale University School of Medicine, New Haven, CT, USA
- Hari Shroff
  - Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland, USA
  - Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, Maryland, USA
  - MBL Fellows Program, Marine Biological Laboratory, Woods Hole, MA, USA
46
Wang Y, Shah N, Soliman A, Guo D, Rajasekaran S. KE: A Knowledge Enhancing Framework for Machine Learning Models. J Phys Chem A 2023; 127:8437-8446. [PMID: 37773038] [DOI: 10.1021/acs.jpca.3c04076] [Indexed: 09/30/2023]
Abstract
Machine learning models are widely used in science and engineering to predict the properties of materials and solve complex problems. However, training large models can take days, and tuning hyperparameters can take months, making it challenging to achieve optimal performance. To address this issue, we propose a Knowledge Enhancing (KE) algorithm that transfers knowledge gained by a lower-capacity model to a higher-capacity model, improving training efficiency and performance. We focus on the problem of predicting the bandgap of an unknown material and present a theoretical analysis and experimental verification of our algorithm. Our experiments show that the performance of our knowledge-enhancement model improves on current methods by at least 10.21% on OMDB datasets. We believe that our generic idea of knowledge enhancement will be useful for solving other problems and provides a promising direction for future research.
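The capacity-transfer idea in this abstract can be pictured with a simple weight-embedding scheme, in which a trained small layer seeds the corresponding block of a larger layer so the bigger model does not start from scratch. This is an invented, generic illustration of growing model capacity in that spirit, not the KE algorithm itself:

```python
import numpy as np

def grow_layer(small_w, big_shape, rng):
    """Embed a trained low-capacity weight matrix into the top-left block of
    a larger matrix; the new units start near zero. Invented scheme, shown
    only to illustrate transferring knowledge upward in capacity."""
    big_w = rng.standard_normal(big_shape) * 1e-3   # near-zero init for new capacity
    r, c = small_w.shape
    big_w[:r, :c] = small_w                         # inherit the learned weights
    return big_w

rng = np.random.default_rng(0)
small = np.arange(6.0).reshape(2, 3)     # stands in for trained low-capacity weights
big = grow_layer(small, (4, 5), rng)
assert big.shape == (4, 5)
assert np.array_equal(big[:2, :3], small)   # learned block preserved exactly
```

A higher-capacity model initialized this way starts from the small model's solution rather than from random weights, which is the efficiency argument the abstract makes.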
Affiliation(s)
- Yijue Wang
  - Department of Computer Science, University of Connecticut, Storrs, Connecticut 06269, United States
- Nidhibahen Shah
  - Department of Computer Science, University of Connecticut, Storrs, Connecticut 06269, United States
- Ahmed Soliman
  - Department of Computer Science, University of Connecticut, Storrs, Connecticut 06269, United States
- Dan Guo
  - Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts 02115, United States
- Sanguthevar Rajasekaran
  - Department of Computer Science, University of Connecticut, Storrs, Connecticut 06269, United States
47
Zhao J, Zhang H, Chong MZ, Zhang YY, Zhang ZW, Zhang ZK, Du CH, Liu PK. Deep-Learning-Assisted Simultaneous Target Sensing and Super-Resolution Imaging. ACS Appl Mater Interfaces 2023; 15:47669-47681. [PMID: 37755336] [DOI: 10.1021/acsami.3c07812] [Indexed: 09/28/2023]
Abstract
Metasurfaces have recently seen revolutionary progress in sensing and super-resolution imaging, mainly due to their ability to manipulate electromagnetic waves on subwavelength scales. However, on the one hand, the addition of metasurfaces can multiply the complexity of retrieving target information from the detected electromagnetic fields. On the other hand, many existing studies utilize deep learning methods to provide compelling tools for electromagnetic problems but mainly concentrate on a single function, limiting their versatility. In this work, a multifunctional deep learning network is demonstrated to reconstruct diverse target information in a metasurface-target interactive system. First, a preliminary experiment verifies that the metasurface-involved scenario can tolerate system noise. Then, the captured electric field distributions are fed into the multifunctional network, which can not only accurately sense the quantity and relative permittivity of targets but also generate super-resolution images precisely. The deep learning network thus paves an alternative way to recover target information in metasurface-target interactive systems, accelerating progress in target sensing and super-resolution imaging. In addition, another network that allows forward electromagnetic prediction is also proposed and demonstrated. In summary, this deep learning methodology holds promise for inverse reconstruction and forward prediction in many electromagnetic scenarios.
Affiliation(s)
- Jin Zhao
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Huangzhao Zhang
  - School of Computer Science, Peking University, Beijing 100871, China
- Ming-Zhe Chong
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Yue-Yi Zhang
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Zi-Wen Zhang
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Zong-Kun Zhang
  - Laboratory of Electromagnetic and Microwave Technology, School of Electronics, Peking University, Beijing 100871, China
- Chao-Hai Du
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Pu-Kun Liu
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
48
Eddy CZ, Naylor A, Cunningham CT, Sun B. Facilitating cell segmentation with the projection-enhancement network. Phys Biol 2023; 20:10.1088/1478-3975/acfe53. [PMID: 37769666] [PMCID: PMC10586931] [DOI: 10.1088/1478-3975/acfe53] [Received: 05/19/2023] [Accepted: 09/28/2023] [Indexed: 10/03/2023]
Abstract
Contemporary approaches to instance segmentation in cell science use 2D or 3D convolutional networks depending on the experiment and data structures. However, limitations of microscopy systems, or efforts to prevent phototoxicity, commonly require recording sub-optimally sampled data, which greatly reduces the utility of such 3D data, especially in crowded samples with significant axial overlap between objects. In such regimes, 2D segmentations are both more reliable for cell morphology and easier to annotate. In this work, we propose the projection enhancement network (PEN), a novel convolutional module that processes the sub-sampled 3D data into a 2D RGB semantic compression and is trained in conjunction with an instance segmentation network of choice to produce 2D segmentations. To train PEN, we augment a low-density cell image dataset to increase cell density; curated datasets are used for evaluation. We show that with PEN, the learned semantic representation in CellPose encodes depth and greatly improves segmentation performance compared with maximum intensity projection images as input, but does not similarly aid segmentation in region-based networks such as Mask R-CNN. Finally, we dissect the segmentation strength of PEN with CellPose as a function of cell density using cells disseminated from side-by-side spheroids. We present PEN as a data-driven solution that forms compressed representations of 3D data and improves 2D segmentations from instance segmentation networks.
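PEN's central move, collapsing a sub-sampled 3D stack into a 2D three-channel image that a 2D instance segmenter can consume, can be caricatured with a fixed linear projection; the real module is convolutional and trained end to end with the downstream network, and the weights below are invented:

```python
import numpy as np

def project_to_rgb(volume, weights):
    """Collapse a (Z, H, W) stack into a (3, H, W) pseudo-RGB image: each
    output channel is a weighted sum over depth. A crude, fixed stand-in
    for the learned convolutional projection in PEN."""
    return np.einsum('cz,zhw->chw', weights, volume)

rng = np.random.default_rng(1)
volume = rng.random((5, 16, 16))    # sub-sampled 3D data (5 axial slices)
weights = rng.random((3, 5))        # invented per-slice mixing weights
rgb = project_to_rgb(volume, weights)
mip = volume.max(axis=0)            # the maximum-intensity-projection baseline
assert rgb.shape == (3, 16, 16) and mip.shape == (16, 16)
```

Unlike the single-channel MIP baseline, the three output channels can mix depth ranges differently, which is the kind of depth encoding the paper credits for CellPose's improved performance.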
Affiliation(s)
- Austin Naylor
  - Oregon State University, Department of Physics, Corvallis, 97331, USA
- Bo Sun
  - Oregon State University, Department of Physics, Corvallis, 97331, USA
49
Stossi F, Singh PK, Safari K, Marini M, Labate D, Mancini MA. High throughput microscopy and single cell phenotypic image-based analysis in toxicology and drug discovery. Biochem Pharmacol 2023; 216:115770. [PMID: 37660829] [DOI: 10.1016/j.bcp.2023.115770] [Received: 06/01/2023] [Revised: 08/23/2023] [Accepted: 08/25/2023] [Indexed: 09/05/2023]
Abstract
Measuring single cell responses to the universe of chemicals (drugs, natural products, environmental toxicants, etc.) is of paramount importance to human health, as phenotypic variability in response to stimuli is a hallmark of biology that must be accounted for during high throughput screening. One way to approach this problem is via high throughput, microscopy-based assays coupled with multi-dimensional single cell analysis methods. Here, we summarize some of the efforts in this vast and growing field, focusing on phenotypic screens (e.g., Cell Painting), single cell analytics and quality control, with particular attention to environmental toxicology and drug screening. We discuss the advantages and limitations of high throughput assays with various endpoints and levels of complexity.
Collapse
Affiliation(s)
- Fabio Stossi
- Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX, USA; GCC Center for Advanced Microscopy and Image Informatics, Houston, TX, USA.
| | - Pankaj K Singh
- GCC Center for Advanced Microscopy and Image Informatics, Houston, TX, USA; Center for Translational Cancer Research, Institute of Biosciences and Technology, Texas A&M University, Houston, TX, USA
| | - Kazem Safari
- GCC Center for Advanced Microscopy and Image Informatics, Houston, TX, USA; Center for Translational Cancer Research, Institute of Biosciences and Technology, Texas A&M University, Houston, TX, USA
| | - Michela Marini
- GCC Center for Advanced Microscopy and Image Informatics, Houston, TX, USA; Department of Mathematics, University of Houston, Houston, TX, USA
| | - Demetrio Labate
- GCC Center for Advanced Microscopy and Image Informatics, Houston, TX, USA; Department of Mathematics, University of Houston, Houston, TX, USA
| | - Michael A Mancini
- Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX, USA; GCC Center for Advanced Microscopy and Image Informatics, Houston, TX, USA; Center for Translational Cancer Research, Institute of Biosciences and Technology, Texas A&M University, Houston, TX, USA
| |
Collapse
|
50
|
Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:102911. [PMID: 37867633 PMCID: PMC10587695 DOI: 10.1117/1.jbo.28.10.102911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 08/25/2023] [Accepted: 09/25/2023] [Indexed: 10/24/2023]
Abstract
Significance: Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to the subwavelength scale. However, in clinical practice, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make an accurate diagnosis. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties across modalities with greater efficiency and stability.
Aim: In this work, we propose a computational image translation technique based on deep learning to enable bright-field microscopy contrast using snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images allows the translated bright-field images to be unaffected by variations in the light source and samples.
Approach: We adopted CycleGAN as the translation model to avoid the requirement for co-registered image pairs during training. This method can generate images equivalent to bright-field images with different staining styles of the same region.
Results: Pathological slices of liver and breast tissues with hematoxylin and eosin staining, and lung tissues with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The output results were evaluated by four image quality assessment methods.
Conclusions: By comparing the cross-modality translation performance with that of MM images, we found that Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be well translated to bright-field images.
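The abstract's key design choice is CycleGAN, which removes the need for co-registered Stokes/bright-field image pairs by enforcing cycle consistency: translating an image to the other modality and back should recover the original. A minimal NumPy sketch of that loss is below; the linear maps `G` and `F` are hypothetical stand-ins for the paper's trained CNN generators, chosen only so the cycle closes exactly.

```python
import numpy as np

# Toy invertible "generators" standing in for the Stokes -> bright-field
# mapping G and its reverse F (illustrative only; the paper trains CNNs).
def G(x): return 2.0 * x + 1.0       # Stokes -> bright-field
def F(y): return 0.5 * (y - 1.0)     # bright-field -> Stokes

def cycle_consistency_loss(x, y):
    """L1 cycle loss that lets CycleGAN train on *unpaired* images:
    a round trip through both generators should reproduce the input."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = np.random.rand(4, 4)   # unpaired Stokes patch
y = np.random.rand(4, 4)   # unpaired bright-field patch
loss = cycle_consistency_loss(x, y)
```

Because `F` inverts `G` here, the loss is (numerically) zero; during real training it is minimized jointly with adversarial losses so the generators learn mutually consistent mappings without any paired, registered ground truth.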
Collapse
Affiliation(s)
- Shilong Wei
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Lu Si
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Tongyu Huang
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
| | - Shan Du
- University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
| | - Yue Yao
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Yang Dong
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Hui Ma
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Tsinghua University, Department of Physics, Beijing, China
| |
Collapse
|