1
Wu J, Wang H, Gao W, Wei R, Zhang J. SomaSeg: a robust neuron identification framework for two-photon imaging video. J Neural Eng 2024;21:046045. PMID: 39029491. DOI: 10.1088/1741-2552/ad6591.
Abstract
Objective. Accurate neuron identification is fundamental to the analysis of neuronal population dynamics and signal extraction in fluorescence videos. However, several factors, such as severe imaging noise, out-of-focus neuropil contamination, and adjacent neuron overlap, can impair the performance of neuron identification algorithms, leading to errors in neuron shape and calcium activity extraction and ultimately compromising the reliability of analysis conclusions. Approach. To address these challenges, we developed a novel cascade framework named SomaSeg. This framework integrates Duffing denoising and neuropil contamination defogging for video enhancement, and an overlapping instance segmentation network for differentiating stacked neurons. Main results. Compared with state-of-the-art neuron identification methods, both simulation and actual experimental results demonstrate that the SomaSeg framework is robust to noise, insensitive to out-of-focus contamination, and effective in dealing with overlapping neurons in actual complex imaging scenarios. Significance. The SomaSeg framework provides a widely applicable solution for two-photon video processing, which enhances the reliability of neuron identification and exhibits value in distinguishing visually confusing neurons.
Affiliation(s)
- Junjie Wu
- College of Engineering, Peking University, Beijing, People's Republic of China
- Hanbin Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, People's Republic of China
- Weizheng Gao
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, People's Republic of China
- Rong Wei
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, People's Republic of China
- Jue Zhang
- College of Engineering, Peking University, Beijing, People's Republic of China
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, People's Republic of China
2
Bao Y, Gong Y. Accurate neuron segmentation method for one-photon calcium imaging videos combining convolutional neural networks and clustering. Commun Biol 2024;7:970. PMID: 39122882. PMCID: PMC11316101. DOI: 10.1038/s42003-024-06668-7.
Abstract
One-photon fluorescent calcium imaging helps understand brain functions by recording large-scale neural activities in freely moving animals. Automatic, fast, and accurate active neuron segmentation algorithms are essential to extract and interpret information from these videos. The low resolution, high noise, and high background fluctuation of one-photon imaging videos pose significant challenges. Here, we develop a software pipeline to address the challenges of processing one-photon calcium imaging videos. We extend our previous two-photon active neuron segmentation algorithm, Shallow U-Net Neuron Segmentation (SUNS), to better suppress background fluctuations in one-photon videos. We also develop additional neuron extraction (ANE) to locate small or dim neurons missed by SUNS. To train our segmentation method, we create ground truth neurons by developing a manual labeling pipeline assisted with semi-automatic refinement. Our method is more accurate and faster than state-of-the-art techniques when processing simulated videos and multiple experimental datasets acquired over various brain regions with different imaging conditions.
Affiliation(s)
- Yijun Bao
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA.
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 311215, China.
- Yiyang Gong
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA.
- Department of Neurobiology, Duke University, Durham, NC, 27708, USA.
- Department of Cell Biology, University of Oklahoma Health Science Center, Oklahoma City, OK, 73104, USA.
3
Hira R. Closed-loop experiments and brain machine interfaces with multiphoton microscopy. Neurophotonics 2024;11:033405. PMID: 38375331. PMCID: PMC10876015. DOI: 10.1117/1.nph.11.3.033405.
Abstract
In the field of neuroscience, the importance of constructing closed-loop experimental systems has increased in conjunction with technological advances in measuring and controlling neural activity in live animals. We provide an overview of recent technological advances in the field, focusing on closed-loop experimental systems where multiphoton microscopy-the only method capable of recording and controlling targeted population activity of neurons at a single-cell resolution in vivo-works through real-time feedback. Specifically, we present some examples of brain machine interfaces (BMIs) using in vivo two-photon calcium imaging and discuss applications of two-photon optogenetic stimulation and adaptive optics to real-time BMIs. We also consider conditions for realizing future optical BMIs at the synaptic level, and their possible roles in understanding the computational principles of the brain.
Affiliation(s)
- Riichiro Hira
- Tokyo Medical and Dental University, Graduate School of Medical and Dental Sciences, Department of Physiology and Cell Biology, Tokyo, Japan
4
Barros BJ, Cunha JPS. Neurophotonics: a comprehensive review, current challenges and future trends. Front Neurosci 2024;18:1382341. PMID: 38765670. PMCID: PMC11102054. DOI: 10.3389/fnins.2024.1382341.
Abstract
The human brain, with its vast network of billions of neurons and trillions of synapses (connections) between diverse cell types, remains one of the greatest mysteries in science and medicine. Despite extensive research, an understanding of the underlying mechanisms that drive normal behaviors and responses to disease states is still limited. Advancement in the neuroscience field and development of therapeutics for related pathologies require innovative technologies that can provide a dynamic and systematic understanding of the interactions between neurons and neural circuits. In this work, we provide an up-to-date overview of the evolution of neurophotonic approaches in the last 10 years through a multi-source literature analysis. From an initial corpus of 243 papers retrieved from Scopus, PubMed and WoS databases, we have followed the PRISMA approach to select 56 papers in the area. Following a full-text evaluation of these 56 scientific articles, six main areas of applied research were identified and discussed: (1) Advanced optogenetics, (2) Multimodal neural interfaces, (3) Innovative therapeutics, (4) Imaging devices and probes, (5) Remote operations, and (6) Microfluidic platforms. For each area, the main technologies selected are discussed according to the photonic principles applied, the neuroscience application evaluated and the more indicative results of efficiency and scientific potential. This detailed analysis is followed by an outlook of the main challenges tackled over the last 10 years in the Neurophotonics field, as well as the main technological advances regarding specificity, light delivery, multimodality, imaging, materials and system designs. We conclude with a discussion of considerable challenges for future innovation and translation in Neurophotonics, from light delivery within the brain to physical constraints and data management strategies.
Affiliation(s)
- Beatriz Jacinto Barros
- INESC TEC – Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- João P. S. Cunha
- INESC TEC – Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Faculty of Engineering, University of Porto, Porto, Portugal
5
Wu Y, Xu Z, Liang S, Wang L, Wang M, Jia H, Chen X, Zhao Z, Liao X. NeuroSeg-III: efficient neuron segmentation in two-photon Ca2+ imaging data using self-supervised learning. Biomed Opt Express 2024;15:2910-2925. PMID: 38855703. PMCID: PMC11161377. DOI: 10.1364/boe.521478.
Abstract
Two-photon Ca2+ imaging technology increasingly plays an essential role in neuroscience research. However, the requirement for extensive professional annotation poses a significant challenge to improving the performance of neuron segmentation models. Here, we present NeuroSeg-III, an innovative self-supervised learning approach specifically designed to achieve fast and precise segmentation of neurons in imaging data. This approach consists of two modules: a self-supervised pre-training network and a segmentation network. After pre-training the encoder of the segmentation network via a self-supervised learning method without any annotated data, we only need to fine-tune the segmentation network with a small amount of annotated data. The segmentation network is designed with YOLOv8s, FasterNet, an efficient multi-scale attention mechanism (EMA), and a bi-directional feature pyramid network (BiFPN), which enhances the model's segmentation accuracy while reducing computational cost and parameter count. The generalization of our approach was validated across different Ca2+ indicators and scales of imaging data. Significantly, the proposed neuron segmentation approach exhibits exceptional speed and accuracy, surpassing the current state-of-the-art benchmarks when evaluated using a publicly available dataset. The results underscore the effectiveness of NeuroSeg-III, which employs an efficient training strategy tailored to two-photon Ca2+ imaging data and delivers remarkable precision in neuron segmentation.
Affiliation(s)
- Yukun Wu
- Guangxi Key Laboratory of Special Biomedicine and Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China
- Zhehao Xu
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400030, China
- Shanshan Liang
- Brain Research Center, State Key Laboratory of Trauma and Chemical Poisoning, Third Military Medical University, Chongqing 400038, China
- Lukang Wang
- Brain Research Center, State Key Laboratory of Trauma and Chemical Poisoning, Third Military Medical University, Chongqing 400038, China
- Meng Wang
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400030, China
- Hongbo Jia
- Guangxi Key Laboratory of Special Biomedicine and Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China
- Brain Research Instrument Innovation Center, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, Jiangsu, China
- Xiaowei Chen
- Guangxi Key Laboratory of Special Biomedicine and Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China
- Chongqing Institute for Brain and Intelligence, Guangyang Bay Laboratory, Chongqing 400064, China
- Zhikai Zhao
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400030, China
- Xiang Liao
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400030, China
6
Baker CM, Gong Y. A Semi-supervised Pipeline for Accurate Neuron Segmentation with Fewer Ground Truth Labels. eNeuro 2024;11:ENEURO.0352-23.2024. PMID: 38242690. PMCID: PMC10880440. DOI: 10.1523/eneuro.0352-23.2024.
Abstract
Recent advancements in two-photon calcium imaging have enabled scientists to record the activity of thousands of neurons with cellular resolution. This scope of data collection is crucial to understanding the next generation of neuroscience questions, but analyzing these large recordings requires automated methods for neuron segmentation. Supervised methods for neuron segmentation achieve state-of-the-art accuracy and speed but currently require large amounts of manually generated ground truth training labels. We reduced the required number of training labels by designing a semi-supervised pipeline. Our pipeline used neural network ensembling to generate pseudolabels to train a single shallow U-Net. We tested our method on three publicly available datasets and compared our performance to three widely used segmentation methods. Our method outperformed other methods when trained on a small number of ground truth labels and could achieve state-of-the-art accuracy after training on approximately a quarter of the number of ground truth labels as supervised methods. When trained on many ground truth labels, our pipeline attained higher accuracy than that of state-of-the-art methods. Overall, our work will help researchers accurately process large neural recordings while minimizing the time and effort needed to generate manual labels.
Affiliation(s)
- Casey M Baker
- Department of Biomedical Engineering, Duke University, Durham, North Carolina 27701
- Yiyang Gong
- Department of Biomedical Engineering, Duke University, Durham, North Carolina 27701
- Department of Neurobiology, Duke University, Durham, North Carolina 27701
7
Greene J, Xue Y, Alido J, Matlock A, Hu G, Kiliç K, Davison I, Tian L. Pupil engineering for extended depth-of-field imaging in a fluorescence miniscope. Neurophotonics 2023;10:044302. PMID: 37215637. PMCID: PMC10197144. DOI: 10.1117/1.nph.10.4.044302.
Abstract
Significance. Fluorescence head-mounted microscopes, i.e., miniscopes, have emerged as powerful tools to analyze in-vivo neural populations but exhibit a limited depth-of-field (DoF) due to the use of high numerical aperture (NA) gradient refractive index (GRIN) objective lenses. Aim. We present the extended depth-of-field miniscope (EDoF-Miniscope), which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the GRIN lens of a miniscope to extend the DoF by 2.8× between twin foci in fixed scattering samples. Approach. We use a genetic algorithm that considers the GRIN lens' aberration and intensity loss from scattering in a Fourier-optics forward model to optimize a DOE, and manufacture the DOE through single-step photolithography. We integrate the DOE into EDoF-Miniscope with a lateral accuracy of 70 μm to produce high-contrast signals without compromising the speed, spatial resolution, size, or weight. Results. We characterize the performance of EDoF-Miniscope across 5- and 10-μm fluorescent beads embedded in scattering phantoms and demonstrate that EDoF-Miniscope facilitates deeper interrogations of neuronal populations in a 100-μm-thick mouse brain sample and vessels in a whole mouse brain sample. Conclusions. Built from off-the-shelf components and augmented by a customizable DOE, we expect that this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.
Affiliation(s)
- Joseph Greene
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Yujia Xue
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Jeffrey Alido
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Alex Matlock
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Guorong Hu
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Kivilcim Kiliç
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Boston University, Neurophotonics Center, Boston, Massachusetts, United States
- Ian Davison
- Boston University, Neurophotonics Center, Boston, Massachusetts, United States
- Boston University, Department of Biology, Boston, Massachusetts, United States
- Lei Tian
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Boston University, Neurophotonics Center, Boston, Massachusetts, United States
8
Pasarkar A, Kinsella I, Zhou P, Wu M, Pan D, Fan JL, Wang Z, Abdeladim L, Peterka DS, Adesnik H, Ji N, Paninski L. maskNMF: a denoise-sparsen-detect approach for extracting neural signals from dense imaging data. bioRxiv [Preprint] 2023:2023.09.14.557777. PMID: 37745388. PMCID: PMC10515957. DOI: 10.1101/2023.09.14.557777.
Abstract
A number of calcium imaging methods have been developed to monitor the activity of large populations of neurons. One particularly promising approach, Bessel imaging, captures neural activity from a volume by projecting the imaged volume onto a single imaging plane, therefore effectively mixing signals and increasing the number of neurons imaged per pixel. These signals must then be computationally demixed to recover the desired neural activity. Unfortunately, currently available demixing methods can perform poorly in the regime of high imaging density (i.e., many neurons per pixel). In this work we introduce a new pipeline (maskNMF) for demixing dense calcium imaging data. The main idea is to first denoise and temporally sparsen the observed video; this enhances signal strength and reduces spatial overlap significantly. Next we detect neurons in the sparsened video using a neural network trained on a library of neural shapes. These shapes are derived from segmented electron microscopy images input into a Bessel imaging model; therefore no manual selection of "good" neural shapes from the functional data is required here. After cells are detected, we use a constrained non-negative matrix factorization approach to demix the activity, using the detected cells' shapes to initialize the factorization. We test the resulting pipeline on both simulated and real datasets and find that it is able to achieve accurate demixing on denser data than was previously feasible, therefore enabling faithful imaging of larger neural populations. The method also provides good results on more "standard" two-photon imaging data. Finally, because much of the pipeline operates on a significantly compressed version of the raw data and is highly parallelizable, the algorithm is fast, processing large datasets faster than real time.
Affiliation(s)
- Amol Pasarkar
- Center for Theoretical Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Department of Computer Science, Columbia University, New York, NY, 10027, USA
- Ian Kinsella
- Center for Theoretical Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Department of Statistics, Columbia University, New York, NY, 10027, USA
- Pengcheng Zhou
- Shenzhen Institute of Advanced Technology, Shenzhen, 518055, China
- Melissa Wu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708
- Daisong Pan
- Department of Physics, University of California, Berkeley, California 94720, USA
- Jiang Lan Fan
- Joint Bioengineering Graduate Program, University of California, Berkeley, CA 94720
- Zhen Wang
- Department of Electrical and Computer Engineering, UCLA, Los Angeles, CA, 90095, USA
- Lamiae Abdeladim
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- Darcy S Peterka
- Center for Theoretical Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Hillel Adesnik
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Na Ji
- Department of Physics, University of California, Berkeley, California 94720, USA
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Liam Paninski
- Center for Theoretical Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Department of Statistics, Columbia University, New York, NY, 10027, USA
9
Cai C, Dong C, Friedrich J, Rozsa M, Pnevmatikakis EA, Giovannucci A. FIOLA: an accelerated pipeline for fluorescence imaging online analysis. Nat Methods 2023;20:1417-1425. PMID: 37679524. DOI: 10.1038/s41592-023-01964-2.
Abstract
Optical microscopy methods such as calcium and voltage imaging enable fast activity readout of large neuronal populations using light. However, the lack of corresponding advances in online algorithms has slowed progress in retrieving information about neural activity during or shortly after an experiment. This gap not only prevents the execution of real-time closed-loop experiments, but also hampers fast experiment-analysis-theory turnover for high-throughput imaging modalities. Reliable extraction of neural activity from fluorescence imaging frames at speeds compatible with indicator dynamics and imaging modalities poses a challenge. We therefore developed FIOLA, a framework for fluorescence imaging online analysis that extracts neuronal activity from calcium and voltage imaging movies at speeds one order of magnitude faster than state-of-the-art methods. FIOLA exploits algorithms optimized for parallel processing on GPUs and CPUs. We demonstrate reliable and scalable performance of FIOLA on both simulated and real calcium and voltage imaging datasets. Finally, we present an online experimental scenario to provide guidance in setting FIOLA parameters and to highlight the trade-offs of our approach.
Affiliation(s)
- Changjia Cai
- Joint Department of Biomedical Engineering UNC/NCSU, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Cynthia Dong
- Joint Department of Biomedical Engineering UNC/NCSU, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Marton Rozsa
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Andrea Giovannucci
- Joint Department of Biomedical Engineering UNC/NCSU, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Closed-Loop Engineering for Advanced Rehabilitation (CLEAR), North Carolina State University, Raleigh, NC, USA
10
de Vries SEJ, Siegle JH, Koch C. Sharing neurophysiology data from the Allen Brain Observatory. eLife 2023;12:e85550. PMID: 37432073. PMCID: PMC10335829. DOI: 10.7554/elife.85550.
Abstract
Nullius in verba ('trust no one'), chosen as the motto of the Royal Society in 1660, implies that independently verifiable observations-rather than authoritative claims-are a defining feature of empirical science. As the complexity of modern scientific instrumentation has made exact replications prohibitive, sharing data is now essential for ensuring the trustworthiness of one's findings. While embraced in spirit by many, in practice open data sharing remains the exception in contemporary systems neuroscience. Here, we take stock of the Allen Brain Observatory, an effort to share data and metadata associated with surveys of neuronal activity in the visual system of laboratory mice. Data from these surveys have been used to produce new discoveries, to validate computational algorithms, and as a benchmark for comparison with other data, resulting in over 100 publications and preprints to date. We distill some of the lessons learned about open surveys and data reuse, including remaining barriers to data sharing and what might be done to address these.
11
Zhang Y, Zhang G, Han X, Wu J, Li Z, Li X, Xiao G, Xie H, Fang L, Dai Q. Rapid detection of neurons in widefield calcium imaging datasets after training with synthetic data. Nat Methods 2023;20:747-754. PMID: 37002377. PMCID: PMC10172132. DOI: 10.1038/s41592-023-01838-7.
Abstract
Widefield microscopy can provide optical access to multi-millimeter fields of view and thousands of neurons in mammalian brains at video rate. However, tissue scattering and background contamination result in signal deterioration, making the extraction of neuronal activity challenging, laborious, and time-consuming. Here we present our deep-learning-based widefield neuron finder (DeepWonder), which is trained by simulated functional recordings and effectively works on experimental data to achieve high-fidelity neuronal extraction. Equipped with systematic background contribution priors, DeepWonder conducts neuronal inference with an order-of-magnitude-faster speed and improved accuracy compared with alternative approaches. DeepWonder removes background contaminations and is computationally efficient. Specifically, DeepWonder accomplishes 50-fold signal-to-background ratio enhancement when processing terabyte-scale cortex-wide functional recordings, with over 14,000 neurons extracted in 17 h.
Affiliation(s)
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Guoxun Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Xiaofei Han
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Ziwei Li
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- School of Information Science and Technology, Fudan University, Shanghai, China
- Xinyang Li
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Guihua Xiao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Lu Fang
- Department of Electronic Engineering, Tsinghua University, Beijing, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
12
Xu Z, Wu Y, Guan J, Liang S, Pan J, Wang M, Hu Q, Jia H, Chen X, Liao X. NeuroSeg-II: A deep learning approach for generalized neuron segmentation in two-photon Ca2+ imaging. Front Cell Neurosci 2023;17:1127847. PMID: 37091918. PMCID: PMC10117760. DOI: 10.3389/fncel.2023.1127847.
Abstract
The development of two-photon microscopy and Ca2+ indicators has enabled the recording of multiscale neuronal activities in vivo and thus advanced the understanding of brain functions. However, it is challenging to perform automatic, accurate, and generalized neuron segmentation when processing a large amount of imaging data. Here, we propose a novel deep-learning-based neural network, termed NeuroSeg-II, to conduct automatic neuron segmentation for in vivo two-photon Ca2+ imaging data. This network architecture is based on the Mask region-based convolutional neural network (R-CNN) but is augmented with an attention mechanism and modified feature hierarchy modules. We added an attention mechanism module to focus the computation on neuron regions in imaging data. We also enhanced the feature hierarchy to extract feature information at diverse levels. To incorporate both spatial and temporal information in our data processing, we fused the average-projection image with a correlation map that extracts the temporal information of active neurons, expressing the integrated information as two-dimensional (2D) images. To achieve generalized neuron segmentation, we conducted a hybrid learning strategy by training our model with imaging data from different labs, including multiscale data with different Ca2+ indicators. The results showed that our approach achieved promising segmentation performance across different imaging scales and Ca2+ indicators, even including the challenging data of large field-of-view mesoscopic images. By comparing state-of-the-art neuron segmentation methods for two-photon Ca2+ imaging data, we showed that our approach achieved the highest accuracy with a publicly available dataset. Thus, NeuroSeg-II enables good segmentation accuracy and a convenient training and testing process.
Affiliation(s)
- Zhehao Xu: Advanced Institute for Brain and Intelligence, Medical College, Guangxi University, Nanning, China
- Yukun Wu: Advanced Institute for Brain and Intelligence, Medical College, Guangxi University, Nanning, China
- Jiangheng Guan: Department of Neurosurgery, The General Hospital of Chinese PLA Central Theater Command, Wuhan, China
- Shanshan Liang: Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Junxia Pan: Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Meng Wang: Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing, China
- Qianshuo Hu: School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Hongbo Jia: Advanced Institute for Brain and Intelligence, Medical College, Guangxi University, Nanning, China; Brain Research Instrument Innovation Center, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
- Xiaowei Chen (correspondence): Advanced Institute for Brain and Intelligence, Medical College, Guangxi University, Nanning, China; Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China; Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Xiang Liao (correspondence): Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing, China
13
Joint model- and immunohistochemistry-driven few-shot learning scheme for breast cancer segmentation on 4D DCE-MRI. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04272-y]
14
Cai Y, Wu J, Dai Q. Review on data analysis methods for mesoscale neural imaging in vivo. NEUROPHOTONICS 2022; 9:041407. [PMID: 35450225] [PMCID: PMC9010663] [DOI: 10.1117/1.nph.9.4.041407]
Abstract
Significance: Mesoscale neural imaging in vivo has become extremely popular in neuroscience for its capacity to record large-scale neurons in action. Optical imaging with single-cell resolution and millimeter-level field of view in vivo has been providing an accumulating database of neuron-behavior correspondence. Meanwhile, optical detection of neuron signals is easily contaminated by noise, background, crosstalk, and motion artifacts, while neural-level signal processing and network-level coordination are extremely complicated, leading to laborious and challenging signal-processing demands. The existing data analysis procedure remains unstandardized, which can be daunting to neophytes or to neuroscientists without a computational background. Aim: We aim to provide a general data analysis pipeline for mesoscale neural imaging, shared between imaging modalities and systems. Approach: We divide the pipeline into two main stages. The first stage focuses on extracting high-fidelity neural responses at the single-cell level from raw images, including motion registration, image denoising, neuron segmentation, and signal extraction. The second stage focuses on data mining, including neural functional mapping, clustering, and brain-wide network deduction. Results: We introduce the general pipeline for processing mesoscale neural images, explain the principles of these procedures, and compare different approaches and their application scopes, with detailed discussion of the shortcomings and remaining challenges. Conclusions: Large-scale mesoscale data bring great challenges and opportunities, such as the balance between fidelity and efficiency, the increasing computational load, and neural network interpretability. We believe that global circuits at the single-neuron level will be more extensively explored in the future.
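The first stage of the pipeline above begins with motion registration. As a concrete illustration of that single step (a generic sketch, not code from the review), a rigid translation between a frame and a template can be estimated by phase cross-correlation:

```python
import numpy as np

def register_rigid(frame, template):
    """Estimate the rigid (dy, dx) shift aligning `frame` to `template`
    via the peak of their phase cross-correlation. A generic sketch of
    the motion-registration stage, not code from the review.
    Applying np.roll(frame, (dy, dx), axis=(0, 1)) aligns the frame."""
    F = np.fft.fft2(frame)
    G = np.fft.fft2(template)
    R = G * np.conj(F)
    R /= np.abs(R) + 1e-12              # whiten: keep phase, drop amplitude
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    H, W = frame.shape
    if dy > H // 2:                     # map wrap-around peaks to negative shifts
        dy -= H
    if dx > W // 2:
        dx -= W
    return int(dy), int(dx)
```

Running this against a running-average template frame by frame gives the per-frame shifts that downstream denoising and segmentation stages assume have already been removed.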
Affiliation(s)
- Yeyi Cai: Tsinghua University, Department of Automation, Beijing, China
- Jiamin Wu: Tsinghua University, Department of Automation, Beijing, China
- Qionghai Dai: Tsinghua University, Department of Automation, Beijing, China
15
Benisty H, Song A, Mishne G, Charles AS. Review of data processing of functional optical microscopy for neuroscience. NEUROPHOTONICS 2022; 9:041402. [PMID: 35937186] [PMCID: PMC9351186] [DOI: 10.1117/1.nph.9.4.041402]
Abstract
Functional optical imaging in neuroscience is rapidly growing with the development of optical systems and fluorescence indicators. To realize the potential of these massive spatiotemporal datasets for relating neuronal activity to behavior and stimuli and uncovering local circuits in the brain, accurate automated processing is increasingly essential. We cover recent computational developments in the full data processing pipeline of functional optical microscopy for neuroscience data and discuss ongoing and emerging challenges.
Affiliation(s)
- Hadas Benisty: Yale Neuroscience, New Haven, Connecticut, United States
- Alexander Song: Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Gal Mishne: UC San Diego, Halıcıoğlu Data Science Institute, Department of Electrical and Computer Engineering and the Neurosciences Graduate Program, La Jolla, California, United States
- Adam S. Charles: Johns Hopkins University, Kavli Neuroscience Discovery Institute, Center for Imaging Science, Department of Biomedical Engineering, Department of Neuroscience, and Mathematical Institute for Data Science, Baltimore, Maryland, United States
16
Xue Y, Yang Q, Hu G, Guo K, Tian L. Deep-learning-augmented computational miniature mesoscope. OPTICA 2022; 9:1009-1021. [PMID: 36506462] [PMCID: PMC9731182] [DOI: 10.1364/optica.464700]
Abstract
Fluorescence microscopy is essential to study biological structures and dynamics. However, existing systems suffer from a trade-off between field of view (FOV), resolution, and system complexity, and thus cannot fulfill the emerging need for miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs. To overcome this challenge, we developed a computational miniature mesoscope (CM2) that exploits a computational imaging strategy to enable single-shot, 3D high-resolution imaging across a wide FOV in a miniaturized platform. Here, we present CM2 V2, which significantly advances both the hardware and computation. We complement the 3 × 3 microlens array with a hybrid emission filter that improves the imaging contrast by 5×, and design a 3D-printed free-form collimator for the LED illuminator that improves the excitation efficiency by 3×. To enable high-resolution reconstruction across a large volume, we develop an accurate and efficient 3D linear shift-variant (LSV) model to characterize spatially varying aberrations. We then train a multimodule deep learning model called CM2Net, using only the 3D-LSV simulator. We quantify the detection performance and localization accuracy of CM2Net to reconstruct fluorescent emitters under different conditions in simulation. We then show that CM2Net generalizes well to experiments and achieves accurate 3D reconstruction across a ~7-mm FOV and 800-μm depth, and provides ~6-μm lateral and ~25-μm axial resolution. This provides an ~8× better axial resolution and ~1400× faster speed compared to the previous model-based algorithm. We anticipate this simple, low-cost computational miniature imaging system will be useful for many large-scale 3D fluorescence imaging applications.
Affiliation(s)
- Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Qianwan Yang: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Kehan Guo: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Lei Tian: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA; Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA; Neurophotonics Center, Boston University, Boston, Massachusetts 02215, USA
17
Moroni M, Brondi M, Fellin T, Panzeri S. SmaRT2P: a software for generating and processing smart line recording trajectories for population two-photon calcium imaging. Brain Inform 2022; 9:18. [PMID: 35927517] [PMCID: PMC9352634] [DOI: 10.1186/s40708-022-00166-4]
Abstract
Two-photon fluorescence calcium imaging allows recording the activity of large neural populations with subcellular spatial resolution, but it is typically characterized by low signal-to-noise ratio (SNR) and poor accuracy in detecting single or few action potentials when large numbers of neurons are imaged. We recently showed that implementing a smart line scanning approach, using trajectories that optimally sample the regions of interest, increases both the SNR of fluorescence signals and the accuracy of single-spike detection in population imaging in vivo. However, smart line scanning requires highly specialised software to design recording trajectories, interface with acquisition hardware, and efficiently process acquired data, as well as optimized strategies to cope with movement artefacts and neuropil contamination. Here, we develop and validate SmaRT2P, an open-source, user-friendly and easy-to-interface Matlab-based software environment to perform optimized smart line scanning in two-photon calcium imaging experiments. SmaRT2P is designed to interface with popular acquisition software (e.g., ScanImage) and implements novel strategies to detect motion artefacts, estimate neuropil contamination, and minimize their impact on functional signals extracted from neuronal population imaging. SmaRT2P is structured in a modular way to allow flexibility in the processing pipeline, requiring minimal user intervention in parameter setting. The use of SmaRT2P for smart line scanning has the potential to facilitate the functional investigation of large neuronal populations with increased SNR and accuracy in detecting the discharge of single and few action potentials.
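As a toy illustration of what a smart line-scan trajectory optimizes, the sketch below orders ROI centroids into a short visiting path with a nearest-neighbor heuristic. This is a hypothetical stand-in for intuition only, not SmaRT2P's actual trajectory-generation algorithm:

```python
import numpy as np

def greedy_scan_path(centroids, start=0):
    """Order ROI centroids into a short scanning trajectory using a
    nearest-neighbor heuristic (a toy stand-in, not SmaRT2P's method).

    centroids : (N, 2) array of ROI positions in the field of view.
    Returns the visiting order (list of indices) and total path length.
    """
    pts = np.asarray(centroids, float)
    unvisited = set(range(len(pts))) - {start}
    order, length = [start], 0.0
    while unvisited:
        cur = pts[order[-1]]
        # pick the closest not-yet-visited ROI
        nxt = min(unvisited, key=lambda i: np.hypot(*(pts[i] - cur)))
        length += float(np.hypot(*(pts[nxt] - cur)))
        order.append(nxt)
        unvisited.remove(nxt)
    return order, length
```

A shorter path means less time spent scanning between ROIs and more dwell time on the neurons themselves, which is the source of the SNR gain the abstract describes.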
Affiliation(s)
- Monica Moroni: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems, UniTn, Istituto Italiano di Tecnologia, 38068, Rovereto, Italy
- Marco Brondi: Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, 16163, Genoa, Italy; Department of Biomedical Sciences-UNIPD, Università Degli Studi Di Padova, 35121, Padua, Italy; Padova Neuroscience Center (PNC), Università Degli Studi Di Padova, 35129, Padua, Italy
- Tommaso Fellin: Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, 16163, Genoa, Italy
- Stefano Panzeri: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems, UniTn, Istituto Italiano di Tecnologia, 38068, Rovereto, Italy; Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251, Hamburg, Germany
18
Computational Methods for Neuron Segmentation in Two-Photon Calcium Imaging Data: A Survey. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12146876]
Abstract
Calcium imaging has rapidly become a methodology of choice for real-time in vivo neuron analysis. Its application to large sets of data requires automated tools to annotate and segment cells, allowing scalable image segmentation under reproducible criteria. In this paper, we review and summarize the most recent methods for computational segmentation of calcium imaging. The contributions of the paper are three-fold: we provide an overview of the main algorithms, organized into three categories (signal processing, matrix factorization and machine learning-based approaches); we highlight the main advantages and disadvantages of each category; and we summarize the performance of the methods that have been tested on public benchmarks (with links to the public code when available).
19
Gong Y, Tian Y, Baker C. A fully water coupled oblique light-sheet microscope. Sci Rep 2022; 12:5940. [PMID: 35396532] [PMCID: PMC8993908] [DOI: 10.1038/s41598-022-09975-3]
Abstract
Recently developed descanned versions of the oblique light-sheet microscope promise to enable high frame-rate volumetric imaging in a variety of convenient preparations. The efficiency of these microscopes depends on the implementation of the objective coupling that turns the intermediate imaging plane. In this work, we developed a fully immersed coupling strategy between the middle and end objectives of the oblique light-sheet microscope to enable facile alignment and high-efficiency coupling. Our design outperformed conventional designs that used only air objectives in resolution and light-collection power. We further demonstrated our design's ability to capture large fields of view when paired with a camera with built-in electronic binning. We simultaneously imaged the forebrain and hindbrain of larval zebrafish and found clusters of activity localized to each region of the brain.
Affiliation(s)
- Yiyang Gong: Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
- Yuqi Tian: Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
- Casey Baker: Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
Collapse
|
20
|
Bao Y, Redington E, Agarwal A, Gong Y. Decontaminate Traces From Fluorescence Calcium Imaging Videos Using Targeted Non-negative Matrix Factorization. Front Neurosci 2022; 15:797421. [PMID: 35126042] [PMCID: PMC8815790] [DOI: 10.3389/fnins.2021.797421]
Abstract
Fluorescence microscopy and genetically encoded calcium indicators help researchers understand brain function by recording large-scale in vivo videos in assorted animal models. Extracting the fluorescent transients that represent active periods of individual neurons is a key step when analyzing imaging videos. Non-specific calcium sources and background adjacent to segmented neurons contaminate the neurons' temporal traces with false transients. We developed and characterized a novel method, temporal unmixing of calcium traces (TUnCaT), to quickly and accurately unmix the calcium signals of neighboring neurons and background. Our algorithm uses background subtraction to remove the false transients caused by background fluctuations, and then applies targeted non-negative matrix factorization to remove the false transients caused by neighboring calcium sources. TUnCaT was more accurate than existing algorithms when processing multiple experimental and simulated datasets, and was faster than or comparable to them in speed.
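The two-step idea in the abstract, background subtraction followed by targeted non-negative matrix factorization, can be sketched in numpy. This is a schematic reconstruction under simplifying assumptions (fixed spatial footprints, plain multiplicative updates), not the published TUnCaT implementation:

```python
import numpy as np

def unmix_traces(Y, A, n_iter=2000):
    """Unmix overlapping calcium traces with targeted non-negative matrix
    factorization (a numpy sketch in the spirit of TUnCaT, not the
    published implementation).

    Y : (P, T) pixel-by-time fluorescence of a patch around one neuron,
        already background-subtracted.
    A : (P, K) fixed non-negative spatial footprints: the target neuron,
        its neighboring calcium sources, and a residual component.
    Holding A fixed, solves Y ~= A @ X for X >= 0 with the standard
    multiplicative update and returns the (K, T) unmixed traces.
    """
    Y = np.maximum(Y, 0)                  # updates assume non-negative data
    K, T = A.shape[1], Y.shape[1]
    X = np.full((K, T), Y.mean() / K + 1e-6)   # positive initialization
    AtY, AtA = A.T @ Y, A.T @ A
    for _ in range(n_iter):
        X *= AtY / (AtA @ X + 1e-12)      # multiplicative update keeps X >= 0
    return X
```

The "targeted" aspect is that the footprints in `A` are fixed to the already-segmented masks around one neuron, so the factorization only has to resolve the temporal mixing, which is what makes this family of methods fast.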
Affiliation(s)
- Yijun Bao: Department of Biomedical Engineering, Duke University, Durham, NC, United States
- Emily Redington: Department of Biomedical Engineering, Duke University, Durham, NC, United States
- Agnim Agarwal: North Carolina School of Science and Mathematics, Durham, NC, United States
- Yiyang Gong: Department of Biomedical Engineering, Duke University, Durham, NC, United States; Department of Neurobiology, Duke University, Durham, NC, United States
21
Abdelfattah AS, Ahuja S, Akkin T, Allu SR, Brake J, Boas DA, Buckley EM, Campbell RE, Chen AI, Cheng X, Čižmár T, Costantini I, De Vittorio M, Devor A, Doran PR, El Khatib M, Emiliani V, Fomin-Thunemann N, Fainman Y, Fernandez-Alfonso T, Ferri CGL, Gilad A, Han X, Harris A, Hillman EMC, Hochgeschwender U, Holt MG, Ji N, Kılıç K, Lake EMR, Li L, Li T, Mächler P, Miller EW, Mesquita RC, Nadella KMNS, Nägerl UV, Nasu Y, Nimmerjahn A, Ondráčková P, Pavone FS, Perez Campos C, Peterka DS, Pisano F, Pisanello F, Puppo F, Sabatini BL, Sadegh S, Sakadzic S, Shoham S, Shroff SN, Silver RA, Sims RR, Smith SL, Srinivasan VJ, Thunemann M, Tian L, Tian L, Troxler T, Valera A, Vaziri A, Vinogradov SA, Vitale F, Wang LV, Uhlířová H, Xu C, Yang C, Yang MH, Yellen G, Yizhar O, Zhao Y. Neurophotonic tools for microscopic measurements and manipulation: status report. NEUROPHOTONICS 2022; 9:013001. [PMID: 35493335] [PMCID: PMC9047450] [DOI: 10.1117/1.nph.9.s1.013001]
Abstract
Neurophotonics was launched in 2014, coinciding with the launch of the BRAIN Initiative and its focus on the development of technologies for the advancement of neuroscience. For the last seven years, Neurophotonics' agenda has been well aligned with this focus on neurotechnologies, featuring new optical methods and tools applicable to brain studies. While the BRAIN Initiative 2.0 is pivoting towards applications of these novel tools in the quest to understand the brain, this status report reviews the extensive and diverse toolkit of novel methods for the measurement and manipulation of brain structure and function that has emerged from the BRAIN Initiative and related large-scale efforts. Here, we focus on neurophotonic tools mostly applicable to animal studies. A companion report, scheduled to appear later this year, will cover diffuse optical imaging methods applicable to noninvasive human studies. For each domain, we outline the current state of the art of the respective technologies, identify the areas where innovation is needed, and provide an outlook for future directions.
Affiliation(s)
- Ahmed S. Abdelfattah: Brown University, Department of Neuroscience, Providence, Rhode Island, United States
- Sapna Ahuja: University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Taner Akkin: University of Minnesota, Department of Biomedical Engineering, Minneapolis, Minnesota, United States
- Srinivasa Rao Allu: University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Joshua Brake: Harvey Mudd College, Department of Engineering, Claremont, California, United States
- David A. Boas: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Erin M. Buckley: Georgia Institute of Technology and Emory University, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States; Emory University, Department of Pediatrics, Atlanta, Georgia, United States
- Robert E. Campbell: University of Tokyo, Department of Chemistry, Tokyo, Japan; University of Alberta, Department of Chemistry, Edmonton, Alberta, Canada
- Anderson I. Chen: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Xiaojun Cheng: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Tomáš Čižmár: Institute of Scientific Instruments of the Czech Academy of Sciences, Brno, Czech Republic
- Irene Costantini: University of Florence, European Laboratory for Non-Linear Spectroscopy, Department of Biology, Florence, Italy; National Institute of Optics, National Research Council, Rome, Italy
- Massimo De Vittorio: Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, Italy
- Anna Devor: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States; Massachusetts General Hospital, Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, United States
- Patrick R. Doran: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Mirna El Khatib: University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Natalie Fomin-Thunemann: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Yeshaiahu Fainman: University of California San Diego, Department of Electrical and Computer Engineering, La Jolla, California, United States
- Tomas Fernandez-Alfonso: University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom
- Christopher G. L. Ferri: University of California San Diego, Departments of Neurosciences, La Jolla, California, United States
- Ariel Gilad: The Hebrew University of Jerusalem, Institute for Medical Research Israel–Canada, Department of Medical Neurobiology, Faculty of Medicine, Jerusalem, Israel
- Xue Han: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Andrew Harris: Weizmann Institute of Science, Department of Brain Sciences, Rehovot, Israel
- Ute Hochgeschwender: Central Michigan University, Department of Neuroscience, Mount Pleasant, Michigan, United States
- Matthew G. Holt: University of Porto, Instituto de Investigação e Inovação em Saúde (i3S), Porto, Portugal
- Na Ji: University of California Berkeley, Department of Physics, Berkeley, California, United States
- Kıvılcım Kılıç: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Evelyn M. R. Lake: Yale School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, Connecticut, United States
- Lei Li: California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
- Tianqi Li: University of Minnesota, Department of Biomedical Engineering, Minneapolis, Minnesota, United States
- Philipp Mächler: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Evan W. Miller: University of California Berkeley, Departments of Chemistry and Molecular & Cell Biology and Helen Wills Neuroscience Institute, Berkeley, California, United States
- U. Valentin Nägerl: Interdisciplinary Institute for Neuroscience, University of Bordeaux & CNRS, Bordeaux, France
- Yusuke Nasu: University of Tokyo, Department of Chemistry, Tokyo, Japan
- Axel Nimmerjahn: Salk Institute for Biological Studies, Waitt Advanced Biophotonics Center, La Jolla, California, United States
- Petra Ondráčková: Institute of Scientific Instruments of the Czech Academy of Sciences, Brno, Czech Republic
- Francesco S. Pavone: National Institute of Optics, National Research Council, Rome, Italy; University of Florence, European Laboratory for Non-Linear Spectroscopy, Department of Physics, Florence, Italy
- Citlali Perez Campos: Columbia University, Zuckerman Mind Brain Behavior Institute, New York, United States
- Darcy S. Peterka: Columbia University, Zuckerman Mind Brain Behavior Institute, New York, United States
- Filippo Pisano: Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, Italy
- Ferruccio Pisanello: Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, Italy
- Francesca Puppo: University of California San Diego, Departments of Neurosciences, La Jolla, California, United States
- Bernardo L. Sabatini: Harvard Medical School, Howard Hughes Medical Institute, Department of Neurobiology, Boston, Massachusetts, United States
- Sanaz Sadegh: University of California San Diego, Departments of Neurosciences, La Jolla, California, United States
- Sava Sakadzic: Massachusetts General Hospital, Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, United States
- Shy Shoham: New York University Grossman School of Medicine, Tech4Health and Neuroscience Institutes, New York, New York, United States
- Sanaya N. Shroff: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- R. Angus Silver: University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom
- Ruth R. Sims: Sorbonne University, INSERM, CNRS, Institut de la Vision, Paris, France
- Spencer L. Smith: University of California Santa Barbara, Department of Electrical and Computer Engineering, Santa Barbara, California, United States
- Vivek J. Srinivasan: New York University Langone Health, Departments of Ophthalmology and Radiology, New York, New York, United States
- Martin Thunemann: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Lei Tian: Boston University, Departments of Electrical Engineering and Biomedical Engineering, Boston, Massachusetts, United States
- Lin Tian: University of California Davis, Department of Biochemistry and Molecular Medicine, Davis, California, United States
- Thomas Troxler: University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Antoine Valera: University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom
- Alipasha Vaziri: Rockefeller University, Laboratory of Neurotechnology and Biophysics, New York, New York, United States; The Rockefeller University, The Kavli Neural Systems Institute, New York, New York, United States
- Sergei A. Vinogradov: University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Flavia Vitale: Center for Neuroengineering and Therapeutics, Departments of Neurology, Bioengineering, Physical Medicine and Rehabilitation, Philadelphia, Pennsylvania, United States
- Lihong V. Wang: California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
- Hana Uhlířová: Institute of Scientific Instruments of the Czech Academy of Sciences, Brno, Czech Republic
- Chris Xu: Cornell University, School of Applied and Engineering Physics, Ithaca, New York, United States
- Changhuei Yang: California Institute of Technology, Departments of Electrical Engineering, Bioengineering and Medical Engineering, Pasadena, California, United States
- Mu-Han Yang: University of California San Diego, Department of Electrical and Computer Engineering, La Jolla, California, United States
- Gary Yellen: Harvard Medical School, Department of Neurobiology, Boston, Massachusetts, United States
- Ofer Yizhar: Weizmann Institute of Science, Department of Brain Sciences, Rehovot, Israel
- Yongxin Zhao: Carnegie Mellon University, Department of Biological Sciences, Pittsburgh, Pennsylvania, United States