1
Hasani H, Sun J, Zhu SI, Rong Q, Willomitzer F, Amor R, McConnell G, Cossairt O, Goodhill GJ. Whole-brain imaging of freely-moving zebrafish. Front Neurosci 2023;17:1127574. PMID: 37139528; PMCID: PMC10150962; DOI: 10.3389/fnins.2023.1127574.
Abstract
One of the holy grails of neuroscience is to record the activity of every neuron in the brain while an animal moves freely and performs complex behavioral tasks. While important steps forward have been taken recently in large-scale neural recording in rodent models, single-neuron resolution across the entire mammalian brain remains elusive. In contrast, the larval zebrafish offers great promise in this regard. Zebrafish are a vertebrate model with substantial homology to the mammalian brain, but their transparency allows whole-brain recordings of genetically-encoded fluorescent indicators at single-neuron resolution using optical microscopy techniques. Furthermore, zebrafish begin to show a complex repertoire of natural behavior from an early age, including hunting small, fast-moving prey using visual cues. Until recently, work to address the neural bases of these behaviors mostly relied on assays where the fish was immobilized under the microscope objective, and stimuli such as prey were presented virtually. However, significant progress has recently been made in developing brain imaging techniques for zebrafish which are not immobilized. Here we discuss recent advances, focusing particularly on techniques based on light-field microscopy. We also draw attention to several important outstanding issues which remain to be addressed to increase the ecological validity of the results obtained.
Affiliation(s)
- Hamid Hasani: Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States
- Jipeng Sun: Department of Computer Science, Northwestern University, Evanston, IL, United States
- Shuyu I. Zhu: Departments of Developmental Biology and Neuroscience, Washington University in St. Louis, St. Louis, MO, United States
- Qiangzhou Rong: Departments of Developmental Biology and Neuroscience, Washington University in St. Louis, St. Louis, MO, United States
- Florian Willomitzer: Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, United States
- Rumelo Amor: Queensland Brain Institute, The University of Queensland, Brisbane, QLD, Australia
- Gail McConnell: Centre for Biophotonics, Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, United Kingdom
- Oliver Cossairt: Department of Computer Science, Northwestern University, Evanston, IL, United States
- Geoffrey J. Goodhill: Departments of Developmental Biology and Neuroscience, Washington University in St. Louis, St. Louis, MO, United States
2
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. J Biomed Opt 2022;27:106004. PMID: 36209354; PMCID: PMC9547608; DOI: 10.1117/1.jbo.27.10.106004.
Abstract
SIGNIFICANCE Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover the optical absorption maps using a deep learning (DL) approach that corrects for the fluence effect. AIM Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium. APPROACH Data-driven models were trained with two-dimensional (2D) blood-vessel and three-dimensional (3D) numerical breast phantoms with highly heterogeneous, realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (or fluence compensation) with in-silico and in-vivo datasets. RESULTS The results indicated that FD U-Net-based deconvolution improves peak signal-to-noise ratio by about 10% over the reconstructed optoacoustic images. Further, it was observed that the DL models can indeed highlight deep-seated structures with higher contrast thanks to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction. CONCLUSIONS The DL methods were able to compensate for the nonlinear optical fluence distribution more effectively and improve optoacoustic image quality.
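The correction this abstract targets can be illustrated with a deliberately simple one-dimensional toy model (all values, and the exponential fluence model itself, are assumptions for illustration, not taken from the paper): the measured initial pressure is the product of the absorption map and the depth-dependent light fluence, so deep absorbers appear dim until the fluence is divided out.

```python
import numpy as np

# Illustrative 1-D depth axis (mm) and a synthetic absorption map
depth = np.linspace(0, 20, 200)
mu_a = np.zeros_like(depth)
mu_a[50:60] = 1.0    # shallow absorber
mu_a[150:160] = 1.0  # deep-seated absorber

# Simple exponential fluence decay (Beer-Lambert-like assumption)
mu_eff = 0.2  # assumed effective attenuation, 1/mm
fluence = np.exp(-mu_eff * depth)

# Measured initial pressure ~ absorption x fluence: deep structures look dim
p0 = mu_a * fluence

# Fluence compensation: divide out the (here, known) fluence model
mu_a_rec = p0 / fluence
```

In practice the fluence is unknown and nonlinear, which is exactly why the learned models above are needed; this sketch only shows why uncompensated images under-represent deep-seated structures.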
Affiliation(s)
- Arumugaraj Madasamy: Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati: Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany; Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos: Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany; Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany; Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash: Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
3
Zeng D, Zeng C, Zeng Z, Li S, Deng Z, Chen S, Bian Z, Ma J. Basis and current state of computed tomography perfusion imaging: a review. Phys Med Biol 2022;67. PMID: 35926503; DOI: 10.1088/1361-6560/ac8717.
Abstract
Computed tomography perfusion (CTP) is a functional imaging technique that provides capillary-level hemodynamic information about the tissue of interest in the clinic. In this paper, we aim to offer insight into CTP imaging, covering its basics and current state, and then summarize its technical applications as well as its future technological potential. We first focus on the fundamentals of CTP imaging, systematically summarizing CTP image acquisition and hemodynamic parameter map estimation techniques. A short assessment outlines the clinical applications of CTP imaging, followed by a review of the radiation dose effects of CTP imaging across these applications. We then present a categorized methodological review of known and potentially solvable challenges in radiation dose reduction for CTP imaging. To evaluate the quality of CTP images, we list various standardized performance metrics. Moreover, we review the determination of the infarct and penumbra. Finally, we discuss the popularity and future trends of CTP imaging.
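As a concrete instance of the hemodynamic parameter estimation this review surveys, the sketch below implements a common textbook approach: SVD-based deconvolution of the tissue time-attenuation curve with the arterial input function (AIF). The exponential AIF, the flow and transit-time values, and the truncation threshold are illustrative assumptions, not values from the review (real AIFs are usually gamma-variate).

```python
import numpy as np

dt = 1.0                              # sampling interval in seconds (assumed)
t = np.arange(60.0) * dt
n = t.size

# Simplified exponential bolus standing in for the arterial input function;
# this keeps the convolution matrix well conditioned for a noise-free demo
aif = np.exp(-t / 3.0)
aif /= aif.sum() * dt                 # normalize to unit area

# Ground truth: flow-scaled residue function with exponential washout
cbf_true, mtt = 0.6, 4.0              # illustrative CBF (a.u.) and MTT (s)
k_true = cbf_true * np.exp(-t / mtt)

# Discrete convolution model: c_tissue = A @ k, A lower-triangular Toeplitz
i, j = np.indices((n, n))
A = np.where(i >= j, aif[np.abs(i - j)], 0.0) * dt
c_tissue = A @ k_true

# SVD-based deconvolution; small singular values are zeroed for stability.
# Noise-free here, so a tiny threshold suffices; with measured data a larger
# truncation level (e.g. 10-20% of s.max()) acts as the regularizer.
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 1e-8 * s.max(), 1.0 / s, 0.0)
k_rec = Vt.T @ (s_inv * (U.T @ c_tissue))

cbf_rec = k_rec.max()                 # CBF estimate = peak of residue function
```

Other parameter maps follow from the same recovered residue function, e.g. CBV as its time integral and MTT as CBV/CBF.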
Affiliation(s)
- Dong Zeng: School of Biomedical Engineering, Southern Medical University, Guangdong 510515, China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China
- Cuidie Zeng: School of Biomedical Engineering, Southern Medical University, Guangdong 510515, China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China
- Zhixiong Zeng: School of Biomedical Engineering, Southern Medical University, Guangdong 510515, China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China
- Sui Li: School of Biomedical Engineering, Southern Medical University, Guangdong 510515, China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China
- Zhen Deng: Department of Neurology, Nanfang Hospital, Southern Medical University, Guangdong 510515, People's Republic of China
- Sijin Chen: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangdong 510515, People's Republic of China
- Zhaoying Bian: School of Biomedical Engineering, Southern Medical University, Guangdong 510515, China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China
- Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangdong 510515, China; Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangdong 510515, People's Republic of China
4
Bazzi F, Mescam M, Diab A, Falou O, Amoud H, Basarab A, Kouamé D. Marmoset brain segmentation from deconvolved magnetic resonance images and estimated label maps. Magn Reson Med 2021;86:2766-2779. PMID: 34170032; DOI: 10.1002/mrm.28881.
Abstract
PURPOSE The proposed method aims to create label maps that can be used for the segmentation of animal brain MR images without the need for a brain template. This is achieved by performing a joint deconvolution and segmentation of the brain MR images. METHODS The method models the image statistics locally using a generalized Gaussian distribution (GGD) and couples the deconvolved image and its corresponding label map using the GGD-Potts model. Because of the complexity of the resulting Bayesian estimators of the unknown model parameters, a Gibbs sampler is used to generate samples following the desired posterior probability. RESULTS The performance of the proposed algorithm is assessed on simulated and real MR images by segmenting enhanced marmoset brain images into their main compartments using the corresponding label maps created. Quantitative assessment showed that this method gives results comparable to those obtained with the classical method of registering the volumes to a brain template. CONCLUSION The proposed method of using labels as prior information for brain segmentation provides a similar or slightly better performance compared with the classical reference method based on a dedicated template.
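A minimal sketch of the kind of machinery this abstract describes: a Gibbs sampler over a Potts-regularized label field. It is heavily simplified relative to the paper (Gaussian rather than generalized Gaussian likelihood, no joint deconvolution, a two-label synthetic image, checkerboard update order), and every numeric setting is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-compartment "brain" image: two regions with different means
true_labels = np.zeros((32, 32), dtype=int)
true_labels[:, 16:] = 1
means, sigma = np.array([0.0, 1.0]), 0.3      # class means / noise (assumed)
image = means[true_labels] + sigma * rng.standard_normal(true_labels.shape)

beta = 1.0                                    # Potts interaction strength (assumed)
labels = rng.integers(0, 2, size=image.shape)
parity = np.indices(image.shape).sum(axis=0) % 2

def neighbor_counts(lab, k):
    # number of 4-neighbors of each pixel currently carrying label k
    same = np.pad((lab == k).astype(float), 1)
    return same[:-2, 1:-1] + same[2:, 1:-1] + same[1:-1, :-2] + same[1:-1, 2:]

# Checkerboard (red-black) Gibbs sweeps: each half-grid update samples every
# site from its exact full conditional given the fixed opposite half
for sweep in range(30):
    for par in (0, 1):
        logp = [-0.5 * ((image - means[k]) / sigma) ** 2
                + beta * neighbor_counts(labels, k) for k in range(2)]
        p1 = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))  # P(label = 1 | rest)
        draw = (rng.random(image.shape) < p1).astype(int)
        labels = np.where(parity == par, draw, labels)

accuracy = (labels == true_labels).mean()
```

The paper's full model additionally samples the deconvolved image and the GGD shape parameters within the same Gibbs scheme; this sketch isolates only the label-sampling step.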
Affiliation(s)
- Farah Bazzi: Computer Science Research Institute of Toulouse (IRIT), Toulouse University UPS, CNRS, UMR, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), Université de Toulouse UPS, CNRS, UMR, Toulouse, France; Doctoral School of Sciences and Technology, AZM Center for Research in Biotechnology and Its Applications, Lebanese University, Beirut, Lebanon
- Muriel Mescam: Centre de Recherche Cerveau et Cognition (CerCo), Université de Toulouse UPS, CNRS, UMR, Toulouse, France
- Ahmad Diab: Doctoral School of Sciences and Technology, AZM Center for Research in Biotechnology and Its Applications, Lebanese University, Beirut, Lebanon
- Omar Falou: Doctoral School of Sciences and Technology, AZM Center for Research in Biotechnology and Its Applications, Lebanese University, Beirut, Lebanon
- Hassan Amoud: Doctoral School of Sciences and Technology, AZM Center for Research in Biotechnology and Its Applications, Lebanese University, Beirut, Lebanon
- Adrian Basarab: Computer Science Research Institute of Toulouse (IRIT), Toulouse University UPS, CNRS, UMR, Toulouse, France
- Denis Kouamé: Computer Science Research Institute of Toulouse (IRIT), Toulouse University UPS, CNRS, UMR, Toulouse, France
5
Geng Q, Fu Z, Chen SC. High-resolution 3D light-field imaging. J Biomed Opt 2020;25:106502. PMID: 33047519; PMCID: PMC7548856; DOI: 10.1117/1.jbo.25.10.106502.
Abstract
SIGNIFICANCE High-speed 3D imaging methods have been playing crucial roles in many biological discoveries. AIM We present a hybrid light-field imaging system and image processing algorithm that can visualize high-speed biological events. APPROACH The hybrid light-field imaging system uses selective plane illumination and simultaneously records a high-resolution 2D image and a low-resolution 4D light-field image. The high-resolution 4D light-field image is obtained by applying a hybrid algorithm derived from deconvolution and phase retrieval methods. RESULTS High-resolution 3D imaging at speeds on the order of 100 volumes per second over an imaging field of 250 × 250 × 80 μm³ along the x, y, and z axes is achieved, with a 2.5-fold enhancement in lateral resolution over the entire imaging field compared with standard light-field systems. In comparison to the deconvolution algorithm, the hybrid algorithm addresses the artifact issue at the focal plane and reduces the computation time by a factor of 4. CONCLUSIONS The new hybrid light-field imaging method realizes high-resolution and ultrafast 3D imaging with a compact setup and a simple algorithm, which may enable important biophotonics applications that visualize high-speed biological events.
Affiliation(s)
- Qiang Geng: The Chinese University of Hong Kong, Department of Mechanical and Automation Engineering, Shatin, Hong Kong, China
- Zhiqiang Fu: The Chinese University of Hong Kong, Department of Mechanical and Automation Engineering, Shatin, Hong Kong, China
- Shih-Chi Chen: The Chinese University of Hong Kong, Department of Mechanical and Automation Engineering, Shatin, Hong Kong, China
6
Perri S, Sestito C, Spagnolo F, Corsonello P. Efficient Deconvolution Architecture for Heterogeneous Systems-on-Chip. J Imaging 2020;6:85. PMID: 34460742; DOI: 10.3390/jimaging6090085.
Abstract
Today, convolutional and deconvolutional neural network models are exceptionally popular thanks to the impressive accuracies they have achieved in several computer-vision applications. To speed up the overall tasks of these neural networks, purpose-designed accelerators are highly desirable. Unfortunately, the high computational complexity and the huge memory demand make the design of efficient hardware architectures, as well as their deployment in resource- and power-constrained embedded systems, still quite challenging. This paper presents a novel purpose-designed hardware accelerator to perform 2D deconvolutions. The proposed structure applies a hardware-oriented computational approach that overcomes the issues of traditional deconvolution methods, and it is suitable for implementation within virtually any system-on-chip based on field-programmable gate array devices. In fact, the novel accelerator scales readily to the resources available within both high- and low-end devices by adequately tuning the adopted parallelism. As an example, when exploited to accelerate the Deep Convolutional Generative Adversarial Network model, the novel accelerator, running as a standalone unit implemented within the Xilinx Zynq XC7Z020 System-on-Chip (SoC) device, performs up to 72 GOPs. Moreover, it dissipates less than 500 mW at 200 MHz and occupies 5.6%, 4.1%, 17%, and 96%, respectively, of the look-up tables, flip-flops, random access memory, and digital signal processors available on-chip. When accommodated within the same device, the whole embedded system equipped with the novel accelerator performs up to 54 GOPs and dissipates less than 1.8 W at 150 MHz. Thanks to the increased exploitable parallelism, more than 900 GOPs can be executed when the high-end Virtex-7 XC7VX690T device is used as the implementation platform.
Moreover, in comparison with state-of-the-art competitors implemented within the Zynq XC7Z045 device, the system proposed here reaches a computational capability up to 20% higher, and saves more than 60% and 80% of the power consumption and logic resource requirements, respectively, while using 5.7× fewer on-chip memory resources.
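The "traditional deconvolution method" such accelerators improve on is usually the zero-insertion formulation of the transposed convolution, which spends most of its multiply-accumulates on inserted zeros. The sketch below (an illustration of the operation itself, not of the paper's architecture) shows that the zero-insertion form and the direct scatter form compute the same result for stride 2.

```python
import numpy as np

def transposed_conv_scatter(x, k, s):
    # Direct "scatter" form: each input pixel stamps a kernel-sized patch
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros(((h - 1) * s + kh, (w - 1) * s + kw))
    for i in range(h):
        for j in range(w):
            out[i * s:i * s + kh, j * s:j * s + kw] += x[i, j] * k
    return out

def transposed_conv_zero_insert(x, k, s):
    # Traditional form: insert s-1 zeros between pixels, then run a full
    # convolution; most multiplications hit inserted zeros, which is the
    # inefficiency hardware-oriented schemes are designed to avoid
    h, w = x.shape
    kh, kw = k.shape
    u = np.zeros(((h - 1) * s + 1, (w - 1) * s + 1))
    u[::s, ::s] = x
    pad = np.pad(u, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    kf = k[::-1, ::-1]  # flip the kernel: correlation becomes convolution
    out = np.zeros(((h - 1) * s + kh, (w - 1) * s + kw))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = (pad[m:m + kh, n:n + kw] * kf).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
k = rng.standard_normal((3, 3))
y1 = transposed_conv_scatter(x, k, 2)
y2 = transposed_conv_zero_insert(x, k, 2)
```

For a 4×4 input at stride 2, the zero-inserted 7×7 grid holds only 16 nonzero values, so roughly two thirds of the naive method's multiplications are wasted on zeros.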
7
Yu JY, Narumanchi V, Chen S, Xing J, Becker SR, Cogswell CJ. Analyzing the super-resolution characteristics of focused-spot illumination approaches. J Biomed Opt 2020;25:1-13. PMID: 32441065; PMCID: PMC7240318; DOI: 10.1117/1.jbo.25.5.056501.
Abstract
SIGNIFICANCE It is commonly assumed that using the objective lens to create a tightly focused light spot for illumination provides a twofold resolution improvement over the Rayleigh resolution limit, and that this resolution improvement is independent of object properties. Nevertheless, this assumption has not been carefully examined. We examine it by analyzing the performance of two super-resolution methods, known as image scanning microscopy (ISM) and illumination-enhanced sparsity (IES). AIM We aim to identify the fundamental differences between the two methods and to provide examples that help researchers determine which method to utilize under different imaging conditions. APPROACH We input the same image datasets into the two methods and analyze their restorations. In numerical simulations, we design objects of distinct brightness and sparsity levels for imaging. We use biological imaging experiments to verify the simulation results. RESULTS The resolution of IES often exceeds twice the Rayleigh resolution limit when imaging sparse objects. A decrease in object sparsity negatively affects the resolution improvement of both methods. CONCLUSIONS The IES method is superior for imaging sparse objects whose main features are bright and small against a dark, large background. For objects that are largely bright with small dark features, the ISM method is favorable.
Affiliation(s)
- Jiun-Yann Yu: University of Colorado Boulder, Department of Electrical, Computer and Energy Engineering, Boulder, Colorado, United States
- Venkatalakshmi Narumanchi: University of Colorado Boulder, Department of Electrical, Computer and Energy Engineering, Boulder, Colorado, United States
- Simeng Chen: University of Colorado Boulder, Department of Electrical, Computer and Energy Engineering, Boulder, Colorado, United States
- Jian Xing: University of Colorado Boulder, Department of Electrical, Computer and Energy Engineering, Boulder, Colorado, United States
- Stephen R. Becker: University of Colorado Boulder, Department of Applied Mathematics, Boulder, Colorado, United States
- Carol J. Cogswell: University of Colorado Boulder, Department of Electrical, Computer and Energy Engineering, Boulder, Colorado, United States
8
Yu S, Joshi P, Park YJ, Yu KN, Lee MY. Deconvolution of images from 3D printed cells in layers on a chip. Biotechnol Prog 2017;34:445-454. PMID: 29240313; DOI: 10.1002/btpr.2591.
Abstract
Layer-by-layer cell printing is useful in mimicking layered tissue structures inside the human body and has great potential as a tool in the fields of tissue engineering, regenerative medicine, and drug discovery. However, imaging human cells cultured in multiple hydrogel layers in 3D-printed tissue constructs is challenging, as the cells are not in a single focal plane. Although confocal microscopy could be a potential solution, it compromises throughput, which is a key factor in rapidly screening drug efficacy and toxicity in the pharmaceutical industry. With epifluorescence microscopy, throughput can be maintained at the cost of blurred cell images from printed tissue constructs. To rapidly acquire in-focus cell images from bioprinted tissues using an epifluorescence microscope, we created two layers of Hep3B human hepatoma cells by printing green and red fluorescently labeled Hep3B cells encapsulated in two alginate layers in a microwell chip. In-focus fluorescent cell images were obtained in high throughput using automated epifluorescence microscopy coupled with image analysis algorithms, including three deconvolution methods in combination with three kernel estimation methods, generating a total of nine deconvolution paths. As a result, a combination of the Inter-Level Intra-Level Deconvolution (ILILD) algorithm and Richardson-Lucy (RL) kernel estimation proved highly useful in bringing out-of-focus cell images into focus, thus rapidly yielding more sensitive and accurate fluorescence readings from the cells in different layers. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:445-454, 2018.
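The Richardson-Lucy building block named above can be sketched in a few lines. This is the standard non-blind form shown for simplicity (the paper uses RL within a kernel-estimation pipeline), and the Gaussian PSF, image sizes, and iteration count are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def conv_same(img, k):
    # 'same'-size 2-D correlation with an odd-sized kernel; for the
    # symmetric PSF used below, correlation and convolution coincide
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for a in range(kh):
        for b in range(kw):
            out += k[a, b] * pad[a:a + img.shape[0], b:b + img.shape[1]]
    return out

def richardson_lucy(blurred, psf, n_iter=50):
    # Multiplicative EM updates toward the Poisson maximum-likelihood image
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv_same(est, psf), 1e-12)
        est = est * conv_same(ratio, psf[::-1, ::-1])
    return est

# Synthetic stand-in for two fluorescent cells on different layers,
# blurred by a Gaussian PSF (a crude model of out-of-focus light)
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

truth = np.zeros((64, 64))
truth[16, 16] = truth[40, 48] = 1.0
blurred = conv_same(truth, psf)

restored = richardson_lucy(blurred, psf)
```

The multiplicative form keeps the estimate non-negative and preserves total flux, which is why RL is a common default for fluorescence deconvolution.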
Affiliation(s)
- Sean Yu: Dept. of Chemical and Biomedical Engineering, Cleveland State University, 455 Fenn Hall, 1960 East 24th Street, Cleveland, OH, 44115
- Pranav Joshi: Dept. of Chemical and Biomedical Engineering, Cleveland State University, 455 Fenn Hall, 1960 East 24th Street, Cleveland, OH, 44115
- Yi Ju Park: Advanced Technology Inc. (ATI), 112 Gaetbeol-ro, Yeonsu-gu, Incheon, Republic of Korea
- Kyeong-Nam Yu: Dept. of Chemical and Biomedical Engineering, Cleveland State University, 455 Fenn Hall, 1960 East 24th Street, Cleveland, OH, 44115
- Moo-Yeal Lee: Dept. of Chemical and Biomedical Engineering, Cleveland State University, 455 Fenn Hall, 1960 East 24th Street, Cleveland, OH, 44115
9
Fors O, Núñez J, Otazu X, Prades A, Cardinal RD. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques. Sensors (Basel) 2010;10:1743-52. PMID: 22294896; DOI: 10.3390/s100301743.
Abstract
In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
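The kind of gain described can be reproduced qualitatively with a classical Wiener deconvolution on a synthetic one-dimensional star field. This is not the authors' method (they use their own Bayesian deconvolution); the Gaussian PSF, noise level, and noise-to-signal constant below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 256
truth = np.zeros(n)
truth[[60, 130, 200]] = [5.0, 1.0, 3.0]     # "stars"; the middle one is faint

# Gaussian blur standing in for the optical/seeing PSF, centered at index 0
# so that FFT-based convolution introduces no shift (circular distance)
d = np.minimum(np.arange(n), n - np.arange(n)).astype(float)
psf = np.exp(-d ** 2 / (2 * 2.0 ** 2))
psf /= psf.sum()

noise_sigma = 0.01
blurred = np.fft.irfft(np.fft.rfft(truth) * np.fft.rfft(psf), n)
observed = blurred + noise_sigma * rng.standard_normal(n)

# Wiener deconvolution: regularized inverse filter in the Fourier domain
H = np.fft.rfft(psf)
nsr = 1e-3                                   # noise-to-signal ratio (assumed)
G = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.fft.irfft(np.fft.rfft(observed) * G, n)
```

Concentrating each star's flux back into a narrower peak is what raises its peak above the detection threshold, the effect the abstract quantifies as an equivalent gain in quantum efficiency or aperture.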