1
Li W, Chen W, Dai Z, Chai X, An S, Guan Z, Zhou W, Chen J, Gong H, Luo Q, Feng Z, Li A. Graph-based cell pattern recognition for merging the multi-modal optical microscopic image of neurons. Comput Methods Programs Biomed 2024; 256:108392. PMID: 39226842; DOI: 10.1016/j.cmpb.2024.108392.
Abstract
A deep understanding of neuron structure and function is crucial for elucidating brain mechanisms, diagnosing and treating diseases. Optical microscopy, pivotal in neuroscience, illuminates neuronal shapes, projections, and electrical activities. To explore the projection of specific functional neurons, scientists have been developing optical-based multimodal imaging strategies to simultaneously capture dynamic in vivo signals and static ex vivo structures from the same neuron. However, the original position of neurons is highly susceptible to displacement during ex vivo imaging, presenting a significant challenge for integrating multimodal information at the single-neuron level. This study introduces a graph-model-based approach for cell image matching, facilitating precise and automated pairing of sparsely labeled neurons across different optical microscopic images. It has been shown that utilizing neuron distribution as a matching feature can mitigate modal differences, the high-order graph model can address scale inconsistency, and the nonlinear iteration can resolve discrepancies in neuron density. This strategy was applied to the connectivity study of the mouse visual cortex, performing cell matching between the two-photon calcium image and the HD-fMOST brain-wide anatomical image sets. Experimental results demonstrate 96.67% precision, 85.29% recall rate, and 90.63% F1 Score, comparable to expert technicians. This study builds a bridge between functional and structural imaging, offering crucial technical support for neuron classification and circuitry analysis.
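The three metrics reported above are mutually consistent: the F1 score is the harmonic mean of precision and recall. A minimal sanity-check sketch (the function name and code are ours, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract (already rounded to two decimals)
f1 = f1_score(0.9667, 0.8529)
print(round(f1 * 100, 2))  # ~90.62, consistent with the reported 90.63% given input rounding
```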
Affiliation(s)
- Wenwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Wu Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Zimin Dai
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Xiaokang Chai
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Sile An
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Zhuang Guan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Wei Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Jianwei Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, PR China
- Qingming Luo
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, PR China
- Zhao Feng
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, PR China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, PR China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, PR China; Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, PR China.
2
Zhu J, Liu X, Liu Z, Deng Y, Xu J, Liu K, Zhang R, Meng X, Fei P, Yu T, Zhu D. SOLID: minimizing tissue distortion for brain-wide profiling of diverse architectures. Nat Commun 2024; 15:8303. PMID: 39333107; PMCID: PMC11436996; DOI: 10.1038/s41467-024-52560-7.
Abstract
Brain-wide profiling of diverse biological components is fundamental for understanding complex brain pathology. Despite advances in whole-brain imaging, it remains challenging to conduct multiplexed, brain-wide analysis with current tissue clearing techniques. Here, we propose SOLID, a hydrophobic tissue clearing method that minimizes tissue distortion while offering impressive clearing performance. SOLID achieves high-quality imaging of multi-color-labeled mouse brains, and the acquired datasets can be effectively registered to the Allen Brain Atlas via commonly used algorithms. SOLID enables generation of neural and vascular maps within one mouse brain, as well as tracing of specific neural projections labeled with viruses. SOLID also allows cross-channel investigation of β-amyloid plaques and neurovascular lesions in the reconstructed all-in-one panorama, providing quantitative insights into structural interactions at different stages of Alzheimer's disease. Altogether, SOLID provides a robust pipeline for whole-brain mapping, which may widen the utility of tissue clearing techniques in diverse neuroscience research.
Affiliation(s)
- Jingtan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Xiaomei Liu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Zhang Liu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Yating Deng
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Jianyi Xu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Kunxing Liu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Ruiying Zhang
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Xizhi Meng
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Peng Fei
- School of Optical Electronic Information, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tingting Yu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.
- Dan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.
3
Li C, Li Y, Zhao H, Ding L. Enhancing brain image quality with 3D U-net for stripe removal in light sheet fluorescence microscopy. Brain Inform 2024; 11:24. PMID: 39325110; PMCID: PMC11427638; DOI: 10.1186/s40708-024-00236-9.
Abstract
Light Sheet Fluorescence Microscopy (LSFM) is increasingly popular in neuroimaging for its ability to capture high-resolution 3D neural data. However, the presence of stripe noise significantly degrades image quality, particularly in complex 3D stripes with varying widths and brightness, posing challenges in neuroscience research. Existing stripe removal algorithms excel in suppressing noise and preserving details in 2D images with simple stripes but struggle with the complexity of 3D stripes. To address this, we propose a novel 3D U-net model for Stripe Removal in Light sheet fluorescence microscopy (USRL). This approach directly learns and removes stripes in 3D space across different scales, employing a dual-resolution strategy to effectively handle stripes of varying complexities. Additionally, we integrate a nonlinear mapping technique to normalize high dynamic range and unevenly distributed data before applying the stripe removal algorithm. We validate our method on diverse datasets, demonstrating substantial improvements in peak signal-to-noise ratio (PSNR) compared to existing algorithms. Moreover, our algorithm exhibits robust performance when applied to real LSFM data. Through extensive validation experiments, both on test sets and real-world data, our approach outperforms traditional methods, affirming its effectiveness in enhancing image quality. Furthermore, the adaptability of our algorithm extends beyond LSFM applications to encompass other imaging modalities. This versatility underscores its potential to enhance image usability across various research disciplines.
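The PSNR improvements reported above use the standard definition, 10·log10(MAX²/MSE). A minimal self-contained sketch (function name and toy pixel values are ours, purely illustrative, not from the paper):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

clean = [100, 120, 130, 140]
noisy = [101, 119, 131, 139]   # off by one everywhere, so MSE = 1
print(psnr(clean, noisy))      # 10*log10(255**2), about 48.13 dB
```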
Affiliation(s)
- Changshan Li
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Youqi Li
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Hu Zhao
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Liya Ding
- Institute for Brain and Intelligence, Southeast University, Nanjing, China.
4
Zhang L, Huang L, Yuan Z, Hang Y, Zeng Y, Li K, Wang L, Zeng H, Chen X, Zhang H, Xi J, Chen D, Gao Z, Le L, Chen J, Ye W, Liu L, Wang Y, Peng H. Collaborative augmented reconstruction of 3D neuron morphology in mouse and human brains. Nat Methods 2024. PMID: 39232199; DOI: 10.1038/s41592-024-02401-8.
Abstract
Digital reconstruction of the intricate 3D morphology of individual neurons from microscopic images is a crucial challenge in both individual laboratories and large-scale projects focusing on cell types and brain anatomy. This task often fails in both conventional manual reconstruction and state-of-the-art artificial intelligence (AI)-based automatic reconstruction algorithms. It is also challenging to organize multiple neuroanatomists to generate and cross-validate biologically relevant and mutually agreed upon reconstructions in large-scale data production. Based on collaborative group intelligence augmented by AI, we developed a collaborative augmented reconstruction (CAR) platform for neuron reconstruction at scale. This platform allows for immersive interaction and efficient collaborative editing of neuron anatomy using a variety of devices, such as desktop workstations, virtual reality headsets and mobile phones, enabling users to contribute anytime and anywhere and to take advantage of several AI-based automation tools. We tested CAR's applicability for challenging mouse and human neurons toward scaled and faithful data production.
Affiliation(s)
- Lingli Zhang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lei Huang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zexin Yuan
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- School of Future Technology, Shanghai University, Shanghai, China
- Yuning Hang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Ying Zeng
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Kaixiang Li
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijun Wang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Haoyu Zeng
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xin Chen
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Hairuo Zhang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jiaqi Xi
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Danni Chen
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Ziqin Gao
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Longxin Le
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- School of Future Technology, Shanghai University, Shanghai, China
- Jie Chen
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Wen Ye
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China.
- Guangdong Institute of Intelligence Science and Technology, Hengqin, China.
- Hanchuan Peng
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China.
5
Kaltenecker D, Al-Maskari R, Negwer M, Hoeher L, Kofler F, Zhao S, Todorov M, Rong Z, Paetzold JC, Wiestler B, Piraud M, Rueckert D, Geppert J, Morigny P, Rohm M, Menze BH, Herzig S, Berriel Diaz M, Ertürk A. Virtual reality-empowered deep-learning analysis of brain cells. Nat Methods 2024; 21:1306-1315. PMID: 38649742; PMCID: PMC11239522; DOI: 10.1038/s41592-024-02245-2.
Abstract
Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.
Affiliation(s)
- Doris Kaltenecker
- Institute for Diabetes and Cancer (IDC), Helmholtz Munich, Neuherberg, Germany
- Joint Heidelberg-IDC Translational Diabetes Program, Heidelberg University Hospital, Heidelberg, Germany
- German Center for Diabetes Research (DZD), Neuherberg, Germany
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
- Rami Al-Maskari
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Department of Computer Science, TUM Computation, Information and Technology, Technical University of Munich (TUM), Munich, Germany
- Center for Translational Cancer Research of the TUM (TranslaTUM), Munich, Germany
- Moritz Negwer
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Luciano Hoeher
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Florian Kofler
- Department of Computer Science, TUM Computation, Information and Technology, Technical University of Munich (TUM), Munich, Germany
- Center for Translational Cancer Research of the TUM (TranslaTUM), Munich, Germany
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Helmholtz AI, Helmholtz Munich, Neuherberg, Germany
- Shan Zhao
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Mihail Todorov
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Zhouyi Rong
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Johannes Christian Paetzold
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany
- Center for Translational Cancer Research of the TUM (TranslaTUM), Munich, Germany
- Department of Computing, Imperial College London, London, United Kingdom
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Marie Piraud
- Helmholtz AI, Helmholtz Munich, Neuherberg, Germany
- Daniel Rueckert
- Department of Computing, Imperial College London, London, United Kingdom
- Julia Geppert
- Institute for Diabetes and Cancer (IDC), Helmholtz Munich, Neuherberg, Germany
- Joint Heidelberg-IDC Translational Diabetes Program, Heidelberg University Hospital, Heidelberg, Germany
- German Center for Diabetes Research (DZD), Neuherberg, Germany
- Pauline Morigny
- Institute for Diabetes and Cancer (IDC), Helmholtz Munich, Neuherberg, Germany
- Joint Heidelberg-IDC Translational Diabetes Program, Heidelberg University Hospital, Heidelberg, Germany
- German Center for Diabetes Research (DZD), Neuherberg, Germany
- Maria Rohm
- Institute for Diabetes and Cancer (IDC), Helmholtz Munich, Neuherberg, Germany
- Joint Heidelberg-IDC Translational Diabetes Program, Heidelberg University Hospital, Heidelberg, Germany
- German Center for Diabetes Research (DZD), Neuherberg, Germany
- Bjoern H Menze
- Department of Computer Science, TUM Computation, Information and Technology, Technical University of Munich (TUM), Munich, Germany
- Department for Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Stephan Herzig
- Institute for Diabetes and Cancer (IDC), Helmholtz Munich, Neuherberg, Germany
- Joint Heidelberg-IDC Translational Diabetes Program, Heidelberg University Hospital, Heidelberg, Germany
- German Center for Diabetes Research (DZD), Neuherberg, Germany
- Chair Molecular Metabolic Control, TU Munich, Munich, Germany
- Mauricio Berriel Diaz
- Institute for Diabetes and Cancer (IDC), Helmholtz Munich, Neuherberg, Germany.
- Joint Heidelberg-IDC Translational Diabetes Program, Heidelberg University Hospital, Heidelberg, Germany.
- German Center for Diabetes Research (DZD), Neuherberg, Germany.
- Ali Ertürk
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany.
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Munich, Neuherberg, Germany.
- School of Medicine, Koç University, İstanbul, Turkey.
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany.
- Deep Piction, Munich, Germany.
6
Zhou H, Yang W, Sun L, Huang L, Li S, Luo X, Jin Y, Sun W, Yan W, Li J, Ding X, He Y, Xie Z. RDLR: A Robust Deep Learning-Based Image Registration Method for Pediatric Retinal Images. J Imaging Inform Med 2024. PMID: 38874699; DOI: 10.1007/s10278-024-01154-2.
Abstract
Retinal diseases stand as a primary cause of childhood blindness. Analyzing the progression of these diseases requires close attention to lesion morphology and spatial information. Standard image registration methods fail to accurately reconstruct pediatric fundus images containing significant distortion and blurring. To address this challenge, we proposed a robust deep learning-based image registration method (RDLR). The method consisted of two modules: a registration module (RM) and a panoramic view module (PVM). RM effectively integrated global and local feature information and learned prior information related to the orientation of images. PVM was capable of reconstructing spatial information in panoramic images. Furthermore, as the registration model was trained on over 280,000 pediatric fundus images, we introduced an automatic registration annotation generation process coupled with a quality control module to ensure the reliability of training data. We compared the performance of RDLR to other methods, including a conventional registration pipeline (CRP), VoxelMorph (VM), a generalizable image matcher (GIM), and self-supervised techniques (SS). RDLR achieved significantly higher registration accuracy (average Dice score of 0.948) than the other methods (ranging from 0.491 to 0.802). The resulting panoramic retinal maps reconstructed by RDLR also demonstrated substantially higher fidelity (average Dice score of 0.960) compared to the other methods (ranging from 0.720 to 0.783). Overall, the proposed method addresses key challenges in pediatric retinal imaging, providing an effective solution to enhance disease diagnosis. Our source code is available at https://github.com/wuwusky/RobustDeepLeraningRegistration.
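The Dice scores reported above measure overlap between registered and reference masks: twice the intersection divided by the sum of the two sizes. A minimal sketch with hypothetical toy masks (names and values are ours, not from the paper):

```python
def dice(a: set, b: set) -> float:
    """Dice coefficient between two sets of (row, col) foreground pixels."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

mask_a = {(0, 0), (0, 1), (1, 1)}
mask_b = {(0, 1), (1, 1), (1, 0)}
print(dice(mask_a, mask_b))  # 2*2/(3+3) = 0.666...
```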
Affiliation(s)
- Hao Zhou
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Wenhan Yang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Limei Sun
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Li Huang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Songshan Li
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaoling Luo
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yili Jin
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Wei Sun
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Wenjia Yan
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jing Li
- Department of Ophthalmology, Guangdong Women and Children Hospital, Guangzhou, China
- Xiaoyan Ding
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
- Yao He
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
- Zhi Xie
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
7
Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024; 11:15. PMID: 38833195; DOI: 10.1186/s40708-024-00228-9.
Abstract
Mapping neural connections within the brain has been a fundamental goal in neuroscience to better understand its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging. With this, image processing and analysis have become more crucial. However, despite the wealth of neural images generated, access to an integrated image processing and analysis pipeline remains challenging because information on available tools and methods is scattered. Mapping neural connections requires registration to atlases and feature extraction through segmentation and signal detection. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea.
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea.
- KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea.
8
Liu Z, Li A, Gong H, Yang X, Luo Q, Feng Z, Li X. The cytoarchitectonic landscape revealed by deep learning method facilitated precise positioning in mouse neocortex. Cereb Cortex 2024; 34:bhae229. PMID: 38836835; DOI: 10.1093/cercor/bhae229.
Abstract
The neocortex is a complex structure with distinct cortical sublayers and regions. However, precise positioning of cortical regions can be challenging due to the absence of distinct landmarks without special preparation. To address this challenge, we developed a cytoarchitectonic landmark identification pipeline. Fluorescence micro-optical sectioning tomography was employed to image whole mouse brains stained with a general fluorescent nucleotide dye. A fast 3D convolutional network was subsequently used to segment neuronal somas across the entire neocortex. With this approach, the cortical cytoarchitectonic profile and neuronal morphology were analyzed in 3D, eliminating the influence of sectioning angle. Distribution maps were generated that visualize the number of neurons across diverse morphological types, revealing a cytoarchitectonic landscape that characterizes the landmarks of cortical regions, especially the typical signal pattern of the barrel cortex. Furthermore, cortical regions at various ages were aligned using the generated cytoarchitectonic landmarks, suggesting structural changes in the barrel cortex during aging. Moreover, we observed a spatiotemporal gradient in the distribution of spindly neurons, concentrated in the deep layer of the primary visual area, whose proportion decreased over time. These findings improve the structural understanding of the neocortex, paving the way for further exploration with this method.
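The cortical cytoarchitectonic profile mentioned above can be illustrated with a minimal sketch: once somas are segmented, a 1D profile is just a count of somas per cortical-depth bin. The soma depths, bin width, and cortical thickness below are invented for illustration and are not from the paper:

```python
# Minimal sketch (hypothetical data): summarize segmented soma positions
# as a depth-binned cytoarchitectonic profile.

def depth_profile(soma_depths_um, bin_um=100.0, max_depth_um=1000.0):
    """Count somas per depth bin, yielding a 1D cytoarchitectonic profile."""
    n_bins = int(max_depth_um / bin_um)
    counts = [0] * n_bins
    for d in soma_depths_um:
        i = min(int(d / bin_um), n_bins - 1)  # clamp deepest somas to last bin
        counts[i] += 1
    return counts

# Invented soma depths (um) for a single cortical column
somas = [50, 120, 130, 340, 360, 380, 620, 640, 660, 680, 950]
profile = depth_profile(somas)
print(profile)  # -> [1, 2, 0, 3, 0, 0, 4, 0, 0, 1]
```

Peaks in such a profile (here in the fourth and seventh bins) are the kind of layer-specific density signature the pipeline uses as a landmark.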
Affiliation(s)
- Zhixiang Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, No. 1037 Luoyu Road, Wuhan 430070, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, No. 1037 Luoyu Road, Wuhan 430070, China
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, No. 58 Renmin Road, Haikou 570228, China
- HUST-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, No. 388 Ruoshui Road, Suzhou 215000, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, No. 1037 Luoyu Road, Wuhan 430070, China
- HUST-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, No. 388 Ruoshui Road, Suzhou 215000, China
- Xiaoquan Yang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, No. 58 Renmin Road, Haikou 570228, China
- HUST-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, No. 388 Ruoshui Road, Suzhou 215000, China
- Qingming Luo
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, No. 58 Renmin Road, Haikou 570228, China
- Zhao Feng
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, No. 58 Renmin Road, Haikou 570228, China
- HUST-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, No. 388 Ruoshui Road, Suzhou 215000, China
- Xiangning Li
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, No. 58 Renmin Road, Haikou 570228, China
- HUST-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, No. 388 Ruoshui Road, Suzhou 215000, China

9
Chen Y, Yang H, Luo Y, Niu Y, Yu M, Deng S, Wang X, Deng H, Chen H, Gao L, Li X, Xu P, Xue F, Miao J, Shi SH, Zhong Y, Ma C, Lei B. Photoacoustic Tomography with Temporal Encoding Reconstruction (PATTERN) for cross-modal individual analysis of the whole brain. Nat Commun 2024; 15:4228. [PMID: 38762498 PMCID: PMC11102525 DOI: 10.1038/s41467-024-48393-z]
Abstract
Cross-modal analysis of the same whole brain is an ideal strategy for uncovering brain function and dysfunction. However, it remains challenging due to the slow speed and destructiveness of traditional whole-brain optical imaging techniques. Here we develop a new platform, termed Photoacoustic Tomography with Temporal Encoding Reconstruction (PATTERN), for non-destructive, high-speed, 3D imaging of ex vivo rodent, ferret, and non-human primate brains. Using an optimally designed image acquisition scheme and an accompanying machine-learning algorithm, PATTERN extracts signals of genetically encoded probes from photobleaching-based temporal modulation and enables reliable visualization of neural projections across the whole central nervous system with 3D isotropic resolution. Without structural or biological perturbation of the sample, PATTERN can be combined with other whole-brain imaging modalities to acquire whole-brain images with both high resolution and morphological fidelity. Furthermore, cross-modal transcriptome analysis of an individual brain is achieved by PATTERN imaging. Together, PATTERN provides a compatible and versatile strategy for brain-wide cross-modal analysis at the individual level.
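The core idea of photobleaching-based temporal modulation can be sketched with a toy two-component model: across repeated acquisitions, a bleachable genetically encoded probe decays while background signal stays constant, so the two can be separated per pixel. This is only an illustrative least-squares sketch under an assumed known bleaching ratio; the paper's actual reconstruction uses a learned algorithm:

```python
# Toy per-pixel unmixing: observed frame k follows y_k = P * r**k + B,
# where P is the bleachable probe amplitude, B the static background,
# and r the (assumed known) per-frame bleaching ratio.

def unmix(frames, r):
    """Closed-form least-squares fit of y_k = P*r**k + B."""
    n = len(frames)
    xs = [r ** k for k in range(n)]          # decay regressor
    sx = sum(xs); sxx = sum(x * x for x in xs)
    sy = sum(frames); sxy = sum(x * y for x, y in zip(xs, frames))
    det = n * sxx - sx * sx
    P = (n * sxy - sx * sy) / det            # slope = probe amplitude
    B = (sy - P * sx) / n                    # intercept = background
    return P, B

# Synthetic pixel: probe = 10, background = 3, bleaching ratio 0.8
frames = [10 * 0.8 ** k + 3 for k in range(6)]
P, B = unmix(frames, 0.8)
print(round(P, 6), round(B, 6))  # -> 10.0 3.0
```

On noise-free synthetic data the fit recovers the two components exactly; real data would add noise and require estimating r as well.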
Affiliation(s)
- Yuwen Chen
- Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, PR China
- Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, PR China
- Haoyu Yang
- School of Life Sciences, Tsinghua University, Beijing, 100084, PR China
- IDG/McGovern Institute of Brain Research, Beijing, 100084, PR China
- Tsinghua-Peking Center for Life Sciences, Tsinghua University, Beijing, 100084, PR China
- Yan Luo
- Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, PR China
- Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, PR China
- Yijun Niu
- School of Life Sciences, Tsinghua University, Beijing, 100084, PR China
- IDG/McGovern Institute of Brain Research, Beijing, 100084, PR China
- Muzhou Yu
- School of Computer Science, Xi'an Jiaotong University, Xi'an, 713599, PR China
- Shanjun Deng
- School of Life Sciences, Sun Yat-sen University, Guangzhou, 510275, PR China
- Xuanhao Wang
- Research Center for Humanoid Sensing, Zhejiang Laboratory, Hangzhou, 311100, PR China
- Handi Deng
- Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, PR China
- Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, PR China
- Haichao Chen
- School of Medicine, Tsinghua University, Beijing, 100084, PR China
- Lixia Gao
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou, 310029, PR China
- Xinjian Li
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou, 310029, PR China
- Pingyong Xu
- Key Laboratory of Biomacromolecules (CAS), CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, 100101, PR China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, 100101, PR China
- Fudong Xue
- Key Laboratory of Biomacromolecules (CAS), CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, 100101, PR China
- Jing Miao
- Canterbury School, New Milford, CT, 06776, USA
- Song-Hai Shi
- School of Life Sciences, Tsinghua University, Beijing, 100084, PR China
- IDG/McGovern Institute of Brain Research, Beijing, 100084, PR China
- Tsinghua-Peking Center for Life Sciences, Tsinghua University, Beijing, 100084, PR China
- Yi Zhong
- School of Life Sciences, Tsinghua University, Beijing, 100084, PR China
- IDG/McGovern Institute of Brain Research, Beijing, 100084, PR China
- Tsinghua-Peking Center for Life Sciences, Tsinghua University, Beijing, 100084, PR China
- Cheng Ma
- Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, PR China
- Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, PR China
- Bo Lei
- School of Life Sciences, Tsinghua University, Beijing, 100084, PR China
- IDG/McGovern Institute of Brain Research, Beijing, 100084, PR China
- Beijing Academy of Artificial Intelligence, Beijing, 100084, PR China

10
Mansour H, Azrak R, Cook JJ, Hornburg KJ, Qi Y, Tian Y, Williams RW, Yeh FC, White LE, Johnson GA. An Open Resource: MR and light sheet microscopy stereotaxic atlas of the mouse brain. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.03.28.587246. [PMID: 38586051 PMCID: PMC10996689 DOI: 10.1101/2024.03.28.587246]
Abstract
We have combined MR histology and light sheet microscopy (LSM) of five postmortem C57BL/6J mouse brains in a stereotaxic space based on micro-CT, yielding a multimodal 3D atlas with the highest spatial and contrast resolution yet reported. Brains were imaged in situ with multi-gradient-echo (mGRE) and diffusion tensor imaging (DTI) at 15 μm resolution (∼2.4 million times that of clinical MRI). Scalar images derived from the average DTI and mGRE provide unprecedented contrast in 14 complementary 3D volumes, each highlighting distinct histologic features. The same tissues, scanned with LSM and registered into the stereotaxic space, provide 17 different molecular cell-type stains. The common coordinate framework labels (CCFv3) complete the multimodal atlas. The atlas has been used to correct distortions in the Allen Brain Atlas and harmonize it with the Franklin-Paxinos atlas. It provides a unique resource for stereotaxic labeling of mouse brain images from many sources.
11
Willekens SMA, Morini F, Mediavilla T, Nilsson E, Orädd G, Hahn M, Chotiwan N, Visa M, Berggren PO, Ilegems E, Överby AK, Ahlgren U, Marcellino D. An MR-based brain template and atlas for optical projection tomography and light sheet fluorescence microscopy in neuroscience. Front Neurosci 2024; 18:1328815. [PMID: 38601090 PMCID: PMC11004350 DOI: 10.3389/fnins.2024.1328815]
Abstract
Introduction: Optical projection tomography (OPT) and light sheet fluorescence microscopy (LSFM) are high-resolution optical imaging techniques ideally suited for ex vivo 3D whole mouse brain imaging. Although they exhibit high specificity for their targets, the anatomical detail provided by tissue autofluorescence remains limited.
Methods: T1-weighted images were acquired from 19 BABB- or DBE-cleared brains to create an MR template using serial longitudinal registration. Fluorescent OPT and LSFM images were then coregistered/normalized to the MR template to create fusion images.
Results: Volumetric calculations revealed a significant difference between BABB- and DBE-cleared brains, leading us to develop two optimized templates, with associated tissue priors and brain atlases, for BABB (OCUM) and DBE (iOCUM). Using fusion images, we identified virus-infected brain regions, mapped dopamine transporter and translocator protein expression, and traced innervation from the eye along the optic tract to the thalamus and superior colliculus using cholera toxin B. Fusion images allowed precise anatomical identification of fluorescent signal within the detailed anatomical context provided by MR.
Discussion: The ability to anatomically map fluorescent signals onto magnetic resonance (MR) images, widely used in clinical and preclinical neuroscience, would greatly benefit optical imaging of the mouse brain. These MR templates for cleared brains enable a broad range of neuroscientific applications integrating 3D optical brain imaging.
Affiliation(s)
- Stefanie M. A. Willekens
- Department of Clinical Microbiology, Umeå University, Umeå, Sweden
- The Laboratory for Molecular Infection Medicine Sweden (MIMS), Umeå University, Umeå, Sweden
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden
- Federico Morini
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden
- Tomas Mediavilla
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden
- Emma Nilsson
- Department of Clinical Microbiology, Umeå University, Umeå, Sweden
- The Laboratory for Molecular Infection Medicine Sweden (MIMS), Umeå University, Umeå, Sweden
- Greger Orädd
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden
- Max Hahn
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden
- Nunya Chotiwan
- Department of Clinical Microbiology, Umeå University, Umeå, Sweden
- The Laboratory for Molecular Infection Medicine Sweden (MIMS), Umeå University, Umeå, Sweden
- Montse Visa
- The Rolf Luft Research Centre for Diabetes and Endocrinology, Karolinska Institutet, Stockholm, Sweden
- Per-Olof Berggren
- The Rolf Luft Research Centre for Diabetes and Endocrinology, Karolinska Institutet, Stockholm, Sweden
- Erwin Ilegems
- The Rolf Luft Research Centre for Diabetes and Endocrinology, Karolinska Institutet, Stockholm, Sweden
- Anna K. Överby
- Department of Clinical Microbiology, Umeå University, Umeå, Sweden
- The Laboratory for Molecular Infection Medicine Sweden (MIMS), Umeå University, Umeå, Sweden
- Ulf Ahlgren
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden
- Daniel Marcellino
- Department of Medical and Translational Biology, Umeå University, Umeå, Sweden

12
Qian P, Manubens-Gil L, Jiang S, Peng H. Non-homogenous axonal bouton distribution in whole-brain single-cell neuronal networks. Cell Rep 2024; 43:113871. [PMID: 38451816 DOI: 10.1016/j.celrep.2024.113871]
Abstract
We examined the distribution of pre-synaptic contacts in axons of mouse neurons and constructed whole-brain single-cell neuronal networks using an extensive dataset of 1,891 fully reconstructed neurons. We found that bouton locations were not homogeneous throughout the axon and among brain regions. As our algorithm was able to generate whole-brain single-cell connectivity matrices from full morphology reconstruction datasets, we further found that non-homogeneous bouton locations have a significant impact on network wiring, including degree distribution, triad census, and community structure. By perturbing neuronal morphology, we further explored the link between anatomical details and network topology. In our in silico exploration, we found that dendritic and axonal tree span would have the greatest impact on network wiring, followed by synaptic contact deletion. Our results suggest that neuroanatomical details must be carefully addressed in studies of whole-brain networks at the single-cell level.
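Two of the network measures named above, degree distribution and the feed-forward entry of a triad census, can be computed directly from a single-cell connectivity matrix. The adjacency matrix below is a hypothetical toy example, not data from the paper:

```python
# Toy directed single-cell connectivity matrix: adj[i][j] = 1 means
# neuron i makes a (bouton-mediated) connection onto neuron j.

def out_degrees(adj):
    """Out-degree of each neuron (row sums of the adjacency matrix)."""
    return [sum(row) for row in adj]

def transitive_triads(adj):
    """Count ordered triples (a, b, c) forming the feed-forward motif
    a->b, b->c, a->c; one entry of a full directed triad census."""
    n = len(adj)
    count = 0
    for a in range(n):
        for b in range(n):
            for c in range(n):
                if len({a, b, c}) == 3 and adj[a][b] and adj[b][c] and adj[a][c]:
                    count += 1
    return count

adj = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(out_degrees(adj), transitive_triads(adj))  # -> [2, 1, 1, 0] 1
```

Perturbing bouton placement changes the entries of `adj`, and these summary statistics shift accordingly, which is the kind of sensitivity the in silico exploration examines.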
Affiliation(s)
- Penghao Qian
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China; School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Linus Manubens-Gil
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Shengdian Jiang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China; School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Hanchuan Peng
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China

13
Lambert T, Brunner C, Kil D, Wuyts R, D'Hondt E, Montaldo G, Urban A. A deep learning classification task for brain navigation in rodents using micro-Doppler ultrasound imaging. Heliyon 2024; 10:e27432. [PMID: 38495198 PMCID: PMC10943389 DOI: 10.1016/j.heliyon.2024.e27432]
Abstract
Positioning and navigation are essential components of neuroimaging, as they improve the quality and reliability of data acquisition, leading to advances in diagnosis, treatment outcomes, and fundamental understanding of the brain. Functional ultrasound imaging is an emerging technology providing high-resolution images of the brain vasculature, allowing brain activity to be monitored. However, as the technology is relatively new, there is no standardized tool for inferring the position in the brain from the vascular images. In this study, we present a deep learning-based framework designed to address this challenge. Our approach uses an image classification task coupled with regression on the resulting probabilities to determine the position of a single image. To evaluate its performance, we conducted experiments using a dataset of 51 rat brain scans. The training positions were extracted at intervals of 375 μm, resulting in a positioning error of 176 μm. Further GradCAM analysis revealed that the predictions were primarily driven by subcortical vascular structures. Finally, we assessed the robustness of our method in a cortical stroke model, in which the brain vasculature is severely impaired. Remarkably, no specific increase in the number of misclassifications was observed, confirming the method's reliability in challenging conditions. Overall, our framework provides accurate and flexible positioning, relying not on a pre-registered reference but on conserved vascular patterns.
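The classification-plus-regression idea described above can be sketched in a few lines: a classifier outputs one probability per discrete training position (spaced 375 μm apart), and the continuous position estimate is the probability-weighted average, which can land between training positions. The 375 μm spacing comes from the abstract; the probabilities and the exact regression form below are illustrative assumptions, not the paper's implementation:

```python
# Sketch: turn per-position class probabilities into a continuous
# position estimate via a probability-weighted average (expected value).

def regress_position(probs, spacing_um=375.0):
    """Expected position from class probabilities over evenly spaced slices."""
    total = sum(probs)
    return sum(i * spacing_um * p for i, p in enumerate(probs)) / total

# Invented classifier output for a slice lying between training
# positions 1 (375 um) and 2 (750 um):
probs = [0.05, 0.55, 0.35, 0.05]
print(round(regress_position(probs), 1))  # -> 525.0
```

Because the estimate interpolates between classes, the achievable error (176 μm in the paper) can be smaller than the 375 μm class spacing.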
Affiliation(s)
- Théo Lambert
- Neuro-Electronics Research Flanders, Leuven, Belgium
- VIB, Leuven, Belgium
- Imec, Leuven, Belgium
- Department of Neuroscience, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Clément Brunner
- Neuro-Electronics Research Flanders, Leuven, Belgium
- VIB, Leuven, Belgium
- Imec, Leuven, Belgium
- Department of Neuroscience, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Dries Kil
- Neuro-Electronics Research Flanders, Leuven, Belgium
- VIB, Leuven, Belgium
- Imec, Leuven, Belgium
- Department of Neuroscience, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Gabriel Montaldo
- Neuro-Electronics Research Flanders, Leuven, Belgium
- VIB, Leuven, Belgium
- Imec, Leuven, Belgium
- Department of Neuroscience, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Alan Urban
- Neuro-Electronics Research Flanders, Leuven, Belgium
- VIB, Leuven, Belgium
- Imec, Leuven, Belgium
- Department of Neuroscience, Faculty of Medicine, KU Leuven, Leuven, Belgium

14
Peng H, Xie P, Xiong F. Meet the authors: Hanchuan Peng, Peng Xie, and Feng Xiong. PATTERNS (NEW YORK, N.Y.) 2024; 5:100912. [PMID: 38264723 PMCID: PMC10801219 DOI: 10.1016/j.patter.2023.100912]
Abstract
In a recent paper in Patterns, Hanchuan Peng, Peng Xie, and Feng Xiong from Southeast University describe a deep learning method to characterize complete single-neuron morphologies, which can discover the projection patterns of diverse cells and learn neuronal morphology representations. In this interview, the authors share the story behind the paper and their research experience. This interview is a companion to the authors' recent paper, "DSM: Deep sequential model for complete neuronal morphology representation and feature extraction."
Affiliation(s)
- Hanchuan Peng
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Peng Xie
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Feng Xiong
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China

15
Xiong F, Xie P, Zhao Z, Li Y, Zhao S, Manubens-Gil L, Liu L, Peng H. DSM: Deep sequential model for complete neuronal morphology representation and feature extraction. PATTERNS (NEW YORK, N.Y.) 2024; 5:100896. [PMID: 38264721 PMCID: PMC10801254 DOI: 10.1016/j.patter.2023.100896]
Abstract
The full morphology of single neurons is indispensable for understanding cell types, the basic building blocks in brains. Projecting trajectories are critical to extracting biologically relevant information from neuron morphologies, as they provide valuable information for both connectivity and cell identity. We developed an artificial intelligence method, deep sequential model (DSM), to extract concise, cell-type-defining features from projections across brain regions. DSM achieves more than 90% accuracy in classifying 12 major neuron projection types without compromising performance when spatial noise is present. Such remarkable robustness enabled us to efficiently manage and analyze several major full-morphology data sources, showcasing how characteristic long projections can define cell identities. We also succeeded in applying our model to both discovering previously unknown neuron subtypes and analyzing exceptional co-expressed genes involved in neuron projection circuits.
Affiliation(s)
- Feng Xiong
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Peng Xie
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Zuohan Zhao
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Yiwei Li
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Sujun Zhao
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Linus Manubens-Gil
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Hanchuan Peng
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China

16
Athey TL, Tward DJ, Mueller U, Younes L, Vogelstein JT, Miller MI. Preserving Derivative Information while Transforming Neuronal Curves. Neuroinformatics 2024; 22:63-74. [PMID: 38036915 PMCID: PMC10917852 DOI: 10.1007/s12021-023-09648-0]
Abstract
The international neuroscience community is building the first comprehensive atlases of brain cell types to understand how the brain functions from a higher-resolution and more integrated perspective than ever before. To build these atlases, subsets of neurons (e.g., serotonergic neurons, prefrontal cortical neurons) are traced in individual brain samples by placing points along dendrites and axons. The traces are then mapped to common coordinate systems by transforming the positions of their points, which neglects how the transformation bends the line segments in between. In this work, we apply the theory of jets to describe how to preserve derivatives of neuron traces up to any order. We provide a framework to compute the error introduced by standard mapping methods, which involves the Jacobian of the mapping transformation. We show how our first-order method improves mapping accuracy in both simulated and real neuron traces under random diffeomorphisms. Our method is freely available in our open-source Python package, brainlit.
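The first-order idea described above can be sketched concretely: mapping a curve point transforms its position directly, while its tangent vector (the first derivative along the trace) transforms by the Jacobian of the mapping. The 2D toy mapping and the finite-difference Jacobian below are illustrative assumptions, not code from the brainlit package:

```python
# Sketch: transform a (point, tangent) pair, i.e. a first-order jet,
# under a smooth mapping phi: R^2 -> R^2.

def jacobian(phi, p, h=1e-6):
    """Forward-difference numerical Jacobian of phi at point p."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    fp = phi(p)
    for j in range(2):
        q = list(p); q[j] += h
        fq = phi(q)
        for i in range(2):
            J[i][j] = (fq[i] - fp[i]) / h
    return J

def map_jet(phi, point, tangent):
    """Map the point with phi; map the tangent with phi's Jacobian."""
    J = jacobian(phi, point)
    new_tan = [sum(J[i][j] * tangent[j] for j in range(2)) for i in range(2)]
    return phi(point), new_tan

phi = lambda p: [2.0 * p[0], p[0] + 3.0 * p[1]]  # a linear toy mapping
pos, tan = map_jet(phi, [1.0, 1.0], [1.0, 0.0])
print(pos, [round(t, 4) for t in tan])
```

Transforming points alone would discard the tangent entirely; carrying the Jacobian-mapped tangent is what preserves the curve's derivative information through the coordinate change.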
Affiliation(s)
- Thomas L Athey
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA
- Daniel J Tward
- Department of Computational Medicine, University of California at Los Angeles, Los Angeles, CA, USA
- Department of Neurology, University of California at Los Angeles, Los Angeles, CA, USA
- Ulrich Mueller
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Laurent Younes
- Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA
- Department of Applied Mathematics & Statistics, Johns Hopkins University, Baltimore, MD, USA
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Joshua T Vogelstein
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Michael I Miller
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA

17
Liu X, Li A, Luo Y, Bao S, Jiang T, Li X, Yuan J, Feng Z. An interactive image segmentation method for the anatomical structures of the main olfactory bulb with micro-level resolution. Front Neuroinform 2023; 17:1276891. [PMID: 38187824 PMCID: PMC10766684 DOI: 10.3389/fninf.2023.1276891]
Abstract
The main olfactory bulb (MOB) is the key element of the olfactory pathway of rodents. To precisely dissect the neural pathways in the MOB, it is necessary to construct the three-dimensional morphologies of its anatomical structures at micro-level resolution. However, this construction remains challenging due to the complicated shapes of the anatomical structures in the MOB and the high resolution of micro-optical images. To address these issues, we propose an interactive volume image segmentation method with micro-level resolution in both the horizontal and axial directions. First, we obtain the initial locations of the anatomical structures by manual annotation and design a patch-based neural network to learn their complex texture features. We then randomly sample patches for prediction by the trained network and perform an annotation reconstruction based on intensity calculation to obtain the final locations of the anatomical structures. Our experiments used Nissl-stained brain images acquired by the micro-optical sectioning tomography (MOST) system. Our method achieved a mean Dice similarity coefficient (DSC) of 81.8%, the best segmentation performance among the compared methods. The reconstructed three-dimensional morphologies of the anatomical structures in the MOB are smooth and consistent with their natural shapes, demonstrating the feasibility of constructing three-dimensional morphologies of anatomical structures across the whole brain.
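The Dice similarity coefficient used above to score the segmentation is defined as DSC = 2|A∩B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal sketch on toy flattened binary masks (the masks are invented for illustration):

```python
# Dice similarity coefficient between two binary masks, given here as
# flat lists of 0/1 voxel labels.

def dice(mask_a, mask_b):
    """DSC = 2 * |intersection| / (|A| + |B|); 1.0 for two empty masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

a = [1, 1, 1, 0, 0, 1]  # predicted segmentation
b = [1, 1, 0, 0, 1, 1]  # ground-truth annotation
print(round(dice(a, b), 3))  # -> 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the reported 81.8% indicates substantial agreement with the manual annotations.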
Collapse
Affiliation(s)
- Xin Liu
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Anan Li
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Yue Luo
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Shengda Bao
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Tao Jiang
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Xiangning Li
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Jing Yuan
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Zhao Feng
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
18
Jiang T, Gong H, Yuan J. Whole-brain Optical Imaging: A Powerful Tool for Precise Brain Mapping at the Mesoscopic Level. Neurosci Bull 2023; 39:1840-1858. [PMID: 37715920 PMCID: PMC10661546 DOI: 10.1007/s12264-023-01112-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 05/08/2023] [Indexed: 09/18/2023] Open
Abstract
The mammalian brain is a highly complex network consisting of millions to billions of densely interconnected neurons. Precise dissection of neural circuits at the mesoscopic level can provide important structural information for understanding the brain. Optical approaches achieve submicron lateral resolution and provide "optical sectioning" by a variety of means, giving them a natural advantage for observing neural circuits at the mesoscopic level. Automated whole-brain optical imaging methods based on tissue clearing or histological sectioning overcome the limited optical imaging depth in biological tissues and can provide fine structural information across large tissue volumes. Combined with various fluorescent labeling techniques, whole-brain optical imaging methods have shown great potential in brain-wide quantitative profiling of cells, circuits, and blood vessels. In this review, we summarize the principles and implementations of various whole-brain optical imaging methods and provide some concepts regarding their future development.
Affiliation(s)
- Tao Jiang
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Hui Gong
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jing Yuan
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
19
Kleven H, Bjerke IE, Clascá F, Groenewegen HJ, Bjaalie JG, Leergaard TB. Waxholm Space atlas of the rat brain: a 3D atlas supporting data analysis and integration. Nat Methods 2023; 20:1822-1829. [PMID: 37783883 PMCID: PMC10630136 DOI: 10.1038/s41592-023-02034-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 09/01/2023] [Indexed: 10/04/2023]
Abstract
Volumetric brain atlases are increasingly used to integrate and analyze diverse experimental neuroscience data acquired from animal models, but until recently a publicly available digital atlas with complete coverage of the rat brain has been missing. Here we present an update of the Waxholm Space rat brain atlas, a comprehensive open-access volumetric atlas resource. This brain atlas features annotations of 222 structures, of which 112 are new and 57 revised compared to previous versions. It provides a detailed map of the cerebral cortex, hippocampal region, striatopallidal areas, midbrain dopaminergic system, thalamic cell groups, the auditory system and main fiber tracts. We document the criteria underlying the annotations and demonstrate how the atlas with related tools and workflows can be used to support interpretation, integration, analysis and dissemination of experimental rat brain data.
Affiliation(s)
- Heidi Kleven
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Ingvild E Bjerke
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Francisco Clascá
- Department of Anatomy and Neuroscience, Autónoma de Madrid University, Madrid, Spain
- Henk J Groenewegen
- Department of Anatomy and Neurosciences, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Jan G Bjaalie
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Trygve B Leergaard
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
20
Li Z, Shang Z, Liu J, Zhen H, Zhu E, Zhong S, Sturgess RN, Zhou Y, Hu X, Zhao X, Wu Y, Li P, Lin R, Ren J. D-LMBmap: a fully automated deep-learning pipeline for whole-brain profiling of neural circuitry. Nat Methods 2023; 20:1593-1604. [PMID: 37770711 PMCID: PMC10555838 DOI: 10.1038/s41592-023-01998-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 08/02/2023] [Indexed: 09/30/2023]
Abstract
Recent proliferation and integration of tissue-clearing methods and light-sheet fluorescence microscopy have created new opportunities for mesoscale three-dimensional whole-brain connectivity mapping with exceptionally high throughput. With the rapid generation of large, high-quality imaging datasets, downstream analysis is becoming the major technical bottleneck for mesoscale connectomics. Current computational solutions are labor-intensive and of limited applicability because they require exhaustive manual annotation and heavily customized training; moreover, whole-brain data analysis typically requires combining multiple packages and secondary development by users. To address these challenges, we developed D-LMBmap, an end-to-end package providing an integrated workflow with three deep-learning-based modules for whole-brain connectivity mapping: axon segmentation, brain region segmentation, and whole-brain registration. D-LMBmap does not require manual annotation for axon segmentation and achieves quantitative analysis of the whole-brain projectome in a single workflow, with superior accuracy for multiple cell types in all of the modalities tested.
Affiliation(s)
- Zhongyu Li
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Zengyi Shang
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Jingyi Liu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Haotian Zhen
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Entao Zhu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Shilin Zhong
- National Institute of Biological Sciences (NIBS), Beijing, China
- Robyn N Sturgess
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Yitian Zhou
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xuemeng Hu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xingyue Zhao
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Yi Wu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Peiqi Li
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Rui Lin
- National Institute of Biological Sciences (NIBS), Beijing, China
- Jing Ren
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
21
Kronman FA, Liwang JK, Betty R, Vanselow DJ, Wu YT, Tustison NJ, Bhandiwad A, Manjila SB, Minteer JA, Shin D, Lee CH, Patil R, Duda JT, Puelles L, Gee JC, Zhang J, Ng L, Kim Y. Developmental Mouse Brain Common Coordinate Framework. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.14.557789. [PMID: 37745386 PMCID: PMC10515964 DOI: 10.1101/2023.09.14.557789] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
3D standard reference brains serve as key resources to understand the spatial organization of the brain and promote interoperability across different studies. However, unlike the adult mouse brain, the lack of standard 3D reference atlases for developing mouse brains has hindered advancement of our understanding of brain development. Here, we present a multimodal 3D developmental common coordinate framework (DevCCF) spanning mouse embryonic day (E) 11.5, E13.5, E15.5, E18.5, and postnatal day (P) 4, P14, and P56 with anatomical segmentations defined by a developmental ontology. At each age, the DevCCF features undistorted morphologically averaged atlas templates created from Magnetic Resonance Imaging and co-registered high-resolution templates from light sheet fluorescence microscopy. Expert-curated 3D anatomical segmentations at each age adhere to an updated prosomeric model and can be explored via an interactive 3D web-visualizer. As a use case, we employed the DevCCF to unveil the emergence of GABAergic neurons in embryonic brains. Moreover, we integrated the Allen CCFv3 into the P56 template with stereotaxic coordinates and mapped spatial transcriptome cell-type data with the developmental ontology. In summary, the DevCCF is an openly accessible resource that can be used for large-scale data integration to gain a comprehensive understanding of brain development.
Affiliation(s)
- Fae A Kronman
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Josephine K Liwang
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Rebecca Betty
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Daniel J Vanselow
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Yuan-Ting Wu
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Nicholas J Tustison
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA
- Steffy B Manjila
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Jennifer A Minteer
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Donghui Shin
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Choong Heon Lee
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, NY, USA
- Rohan Patil
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
- Jeffrey T Duda
- Department of Radiology, Penn Image Computing and Science Lab, University of Pennsylvania, Philadelphia, Pennsylvania
- Luis Puelles
- Department of Human Anatomy and Psychobiology, Faculty of Medicine, Universidad de Murcia, and Murcia Arrixaca Institute for Biomedical Research (IMIB) Murcia, Spain
- James C Gee
- Department of Radiology, Penn Image Computing and Science Lab, University of Pennsylvania, Philadelphia, Pennsylvania
- Jiangyang Zhang
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, NY, USA
- Lydia Ng
- Allen Institute for Brain Science, Seattle, WA, USA
- Yongsoo Kim
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA
22
Sadeghi M, Ramos-Prats A, Neto P, Castaldi F, Crowley D, Matulewicz P, Paradiso E, Freysinger W, Ferraguti F, Goebel G. Localization and Registration of 2D Histological Mouse Brain Images in 3D Atlas Space. Neuroinformatics 2023; 21:615-630. [PMID: 37357231 PMCID: PMC10406728 DOI: 10.1007/s12021-023-09632-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/12/2023] [Indexed: 06/27/2023]
Abstract
To accurately explore the anatomical organization of neural circuits in the brain, it is crucial to map experimental brain data onto a standardized coordinate system. Studying 2D histological mouse brain slices remains the standard procedure in many laboratories. Mapping these 2D slices is challenging due to the deformations, artifacts, and tilted cutting angles introduced during standard preparation and slicing, and the analysis of experimental slices can depend heavily on the expertise of the human operator. Here we propose a computational tool for Accurate Mouse Brain Image Analysis (AMBIA) that maps 2D mouse brain slices onto a 3D brain model with minimal human intervention. AMBIA has a modular design comprising a localization module and a registration module. The localization module is a deep-learning-based pipeline that localizes a single 2D slice in the 3D Allen Brain Atlas and generates the corresponding atlas plane. The registration module is built on the Ardent Python package and performs deformable 2D registration between the brain slice and its corresponding atlas plane. By comparing AMBIA's localization and registration performance to human ratings, we demonstrate that it performs at the level of a human expert. AMBIA provides an intuitive and highly efficient way to register experimental 2D mouse brain images to a 3D digital mouse brain atlas; it offers a graphical user interface and is designed for researchers with minimal programming knowledge.
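The localization task described above — finding which atlas plane a 2D slice corresponds to — can be illustrated with a naive exhaustive search over candidate planes scored by normalized cross-correlation. This is a toy sketch on synthetic data, not AMBIA's actual pipeline (which uses a deep-learning localizer and Ardent-based deformable registration):

```python
import numpy as np

def best_matching_plane(stack: np.ndarray, slice2d: np.ndarray) -> int:
    """Return the index of the atlas plane most similar to the slice,
    scored by normalized cross-correlation (NCC)."""
    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        a = (a - a.mean()) / (a.std() + 1e-8)  # zero-mean, unit-variance
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())
    scores = [ncc(plane, slice2d) for plane in stack]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
stack = rng.normal(size=(5, 32, 32))                 # toy "atlas" of 5 planes
query = stack[3] + 0.1 * rng.normal(size=(32, 32))   # noisy copy of plane 3
print(best_matching_plane(stack, query))  # 3
```

In practice a histological slice is rarely parallel to any single atlas plane, which is why tools like AMBIA must also estimate tilt and then apply deformable registration rather than rely on a plane-by-plane similarity search.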
Affiliation(s)
- Maryam Sadeghi
- Department of Medical Statistics and Informatics, Medical University of Innsbruck, Innsbruck, Austria
- Arnau Ramos-Prats
- Department of Pharmacology, Medical University of Innsbruck, Innsbruck, Austria
- Pedro Neto
- Faculty of Engineering, University of Porto, Porto, Portugal
- Federico Castaldi
- Department of Pharmacology, Medical University of Innsbruck, Innsbruck, Austria
- Devin Crowley
- Biomedical Engineering, Johns Hopkins University, Baltimore, United States
- Pawel Matulewicz
- Department of Pharmacology, Medical University of Innsbruck, Innsbruck, Austria
- Enrica Paradiso
- KNAW, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Francesco Ferraguti
- Department of Pharmacology, Medical University of Innsbruck, Innsbruck, Austria
- Georg Goebel
- Department of Medical Statistics and Informatics, Medical University of Innsbruck, Innsbruck, Austria
23
Skovbjerg G, Roostalu U, Salinas CG, Skytte JL, Perens J, Clemmensen C, Elster L, Frich CK, Hansen HH, Hecksher-Sørensen J. Uncovering CNS access of lipidated exendin-4 analogues by quantitative whole-brain 3D light sheet imaging. Neuropharmacology 2023:109637. [PMID: 37391028 DOI: 10.1016/j.neuropharm.2023.109637] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 06/07/2023] [Accepted: 06/09/2023] [Indexed: 07/02/2023]
Abstract
Peptide-based drug development for CNS disorders is challenged by the poor blood-brain barrier (BBB) penetrability of peptides. While acylation protractions (lipidation) have been successfully applied to increase the circulating half-life of therapeutic peptides, little is known about the CNS accessibility of lipidated peptide drugs. Light-sheet fluorescence microscopy (LSFM) has emerged as a powerful method to visualize the whole-brain 3D distribution of fluorescently labelled therapeutic peptides at single-cell resolution. Here, we applied LSFM to map the CNS distribution of the clinically relevant GLP-1 receptor agonist (GLP-1RA) exendin-4 (Ex4) and lipidated analogues following peripheral administration. Mice received an intravenous dose (100 nmol/kg) of IR800 fluorophore-labelled Ex4, Ex4 acylated with a C16-monoacid (Ex4_C16MA), or Ex4 acylated with a C18-diacid (Ex4_C18DA). Other mice were administered C16MA-acylated exendin 9-39 (Ex9-39_C16MA), a selective GLP-1R antagonist, serving as negative control for GLP-1R-mediated agonist internalization. Two hours post-dosing, brain distribution of Ex4 and analogues was predominantly restricted to the circumventricular organs, notably the area postrema and the nucleus of the solitary tract. Ex4_C16MA and Ex9-39_C16MA also distributed to the paraventricular hypothalamic nucleus and medial habenula. Notably, Ex4_C18DA was detected in deeper-lying brain structures such as the dorsomedial/ventromedial hypothalamic nuclei and the dentate gyrus. The similar CNS distribution maps of Ex4_C16MA and Ex9-39_C16MA suggest that brain access of lipidated Ex4 analogues is independent of GLP-1 receptor internalization. The cerebrovasculature was devoid of specific labelling, hence not supporting a direct role of GLP-1RAs in BBB function. In conclusion, peptide lipidation increases the CNS accessibility of Ex4. Our fully automated LSFM pipeline is suitable for mapping the whole-brain distribution of fluorescently labelled drugs.
Affiliation(s)
- Grethe Skovbjerg
- Gubra ApS, Hørsholm Kongevej 11B, 2970, Hørsholm, Denmark; Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, Denmark
- Urmas Roostalu
- Gubra ApS, Hørsholm Kongevej 11B, 2970, Hørsholm, Denmark
- Jacob L Skytte
- Gubra ApS, Hørsholm Kongevej 11B, 2970, Hørsholm, Denmark
- Johanna Perens
- Gubra ApS, Hørsholm Kongevej 11B, 2970, Hørsholm, Denmark
- Christoffer Clemmensen
- Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, Denmark
- Lisbeth Elster
- Gubra ApS, Hørsholm Kongevej 11B, 2970, Hørsholm, Denmark
24
Hawrylycz M, Martone ME, Ascoli GA, Bjaalie JG, Dong HW, Ghosh SS, Gillis J, Hertzano R, Haynor DR, Hof PR, Kim Y, Lein E, Liu Y, Miller JA, Mitra PP, Mukamel E, Ng L, Osumi-Sutherland D, Peng H, Ray PL, Sanchez R, Regev A, Ropelewski A, Scheuermann RH, Tan SZK, Thompson CL, Tickle T, Tilgner H, Varghese M, Wester B, White O, Zeng H, Aevermann B, Allemang D, Ament S, Athey TL, Baker C, Baker KS, Baker PM, Bandrowski A, Banerjee S, Bishwakarma P, Carr A, Chen M, Choudhury R, Cool J, Creasy H, D’Orazi F, Degatano K, Dichter B, Ding SL, Dolbeare T, Ecker JR, Fang R, Fillion-Robin JC, Fliss TP, Gee J, Gillespie T, Gouwens N, Zhang GQ, Halchenko YO, Harris NL, Herb BR, Hintiryan H, Hood G, Horvath S, Huo B, Jarecka D, Jiang S, Khajouei F, Kiernan EA, Kir H, Kruse L, Lee C, Lelieveldt B, Li Y, Liu H, Liu L, Markuhar A, Mathews J, Mathews KL, Mezias C, Miller MI, Mollenkopf T, Mufti S, Mungall CJ, Orvis J, Puchades MA, Qu L, Receveur JP, Ren B, Sjoquist N, Staats B, Tward D, van Velthoven CTJ, Wang Q, Xie F, Xu H, Yao Z, Yun Z, Zhang YR, Zheng WJ, Zingg B. A guide to the BRAIN Initiative Cell Census Network data ecosystem. PLoS Biol 2023; 21:e3002133. [PMID: 37390046 PMCID: PMC10313015 DOI: 10.1371/journal.pbio.3002133] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/02/2023] Open
Abstract
Characterizing cellular diversity at different levels of biological organization and across data modalities is a prerequisite to understanding the function of cell types in the brain. Classification of neurons is also essential for manipulating cell types in controlled ways and for understanding their variation and vulnerability in brain disorders. The BRAIN Initiative Cell Census Network (BICCN) is an integrated network of data-generating centers, data archives, and data standards developers, with the goal of systematic multimodal brain cell type profiling and characterization. The BICCN's emphasis is on the whole mouse brain, with demonstrations of prototype feasibility for human and nonhuman primate (NHP) brains. Here, we provide a guide to the cellular and spatial approaches employed by the BICCN and to accessing and using its data and extensive resources, including the BRAIN Cell Data Center (BCDC), which serves to manage and integrate data across the ecosystem. We illustrate the power of the BICCN data ecosystem through vignettes highlighting several BICCN analysis and visualization tools. Finally, we present emerging standards that have been developed or adopted toward Findable, Accessible, Interoperable, and Reusable (FAIR) neuroscience. The combined BICCN ecosystem provides a comprehensive resource for the exploration and analysis of cell types in the brain.
Affiliation(s)
- Michael Hawrylycz
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Maryann E. Martone
- Department of Neuroscience, University of California San Diego, San Diego, California, United States of America
- San Francisco Veterans Affairs Medical Center, San Francisco, California, United States of America
- Giorgio A. Ascoli
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, Volgenau School of Engineering, George Mason University, Fairfax, Virginia, United States of America
- Jan G. Bjaalie
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Hong-Wei Dong
- UCLA Brain Research & Artificial Intelligence Nexus, Department of Neurobiology, David Geffen School of Medicine at University of California, Los Angeles, California, United States of America
- Satrajit S. Ghosh
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Jesse Gillis
- Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Ronna Hertzano
- Department of Otorhinolaryngology Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- David R. Haynor
- Department of Radiology, University of Washington, Seattle, Washington, United States of America
- Patrick R. Hof
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Yongsoo Kim
- Department of Neural and Behavioral Sciences, College of Medicine, The Pennsylvania State University, Hershey, Pennsylvania, United States of America
- Ed Lein
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Yufeng Liu
- SEU-Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu Province, China
- Jeremy A. Miller
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Partha P. Mitra
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
- Eran Mukamel
- Department of Cognitive Science, University of California, San Diego, La Jolla, California, United States of America
- Lydia Ng
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- David Osumi-Sutherland
- European Bioinformatics Institute (EMBL-EBI), Wellcome Trust Genome Campus, Hinxton, Cambridge, United Kingdom
- Hanchuan Peng
- SEU-Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu Province, China
- Patrick L. Ray
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Raymond Sanchez
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Aviv Regev
- Genentech, South San Francisco, California, United States of America
- Alex Ropelewski
- Pittsburgh Supercomputing Center, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Shawn Zheng Kai Tan
- European Bioinformatics Institute (EMBL-EBI), Wellcome Trust Genome Campus, Hinxton, Cambridge, United Kingdom
- Carol L. Thompson
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Timothy Tickle
- Data Sciences Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, United States of America
- Hagen Tilgner
- Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, New York, United States of America
- Merina Varghese
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Brock Wester
- Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, United States of America
- Owen White
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Brian Aevermann
- Chan Zuckerberg Initiative, Redwood City, California, United States of America
- David Allemang
- Kitware Inc., Albany, New York, United States of America
- Seth Ament
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Thomas L. Athey
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Cody Baker
- CatalystNeuro, Benicia, California, United States of America
- Katherine S. Baker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Pamela M. Baker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Anita Bandrowski
- Department of Neuroscience, University of California San Diego, San Diego, California, United States of America
- Samik Banerjee
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
- Prajal Bishwakarma
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Ambrose Carr
- Chan Zuckerberg Initiative, Redwood City, California, United States of America
- Min Chen
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Roni Choudhury
- Kitware Inc., Albany, New York, United States of America
- Jonah Cool
- Chan Zuckerberg Initiative, Redwood City, California, United States of America
- Heather Creasy
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Florence D’Orazi
- Chan Zuckerberg Initiative, Redwood City, California, United States of America
- Kylee Degatano
- Data Sciences Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, United States of America
- Song-Lin Ding
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Tim Dolbeare
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Joseph R. Ecker
- Genomic Analysis Laboratory, Howard Hughes Medical Institute, The Salk Institute for Biological Studies, La Jolla, California, United States of America
- Rongxin Fang
- Bioinformatics and Systems Biology Graduate Program, University of California San Diego, La Jolla, California, United States of America
- Timothy P. Fliss
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- James Gee
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Tom Gillespie
- Department of Neuroscience, University of California San Diego, San Diego, California, United States of America
- Nathan Gouwens
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Guo-Qiang Zhang
- Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
- Yaroslav O. Halchenko
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, United States of America
- Nomi L. Harris
- Environmental Genomics and Systems Biology Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Brian R. Herb
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Houri Hintiryan
- UCLA Brain Research & Artificial Intelligence Nexus, Department of Neurobiology, David Geffen School of Medicine at University of California, Los Angeles, California, United States of America
- Gregory Hood
- Pittsburgh Supercomputing Center, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Sam Horvath
- Kitware Inc., Albany, New York, United States of America
- Bingxing Huo
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
- Dorota Jarecka
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Shengdian Jiang
- SEU-Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu Province, China
- Farzaneh Khajouei
- Data Sciences Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, United States of America
- Elizabeth A. Kiernan
- Data Sciences Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, United States of America
- Huseyin Kir
- European Bioinformatics Institute (EMBL-EBI), Wellcome Trust Genome Campus, Hinxton, Cambridge, United Kingdom
- Lauren Kruse
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Changkyu Lee
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Boudewijn Lelieveldt
- Department of Intelligent Systems, Delft University of Technology, Delft, the Netherlands
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Yang Li
- Center for Epigenomics, Department of Cellular and Molecular Medicine, UC San Diego School of Medicine, La Jolla, California, United States of America
- Hanqing Liu
- Genomic Analysis Laboratory, Howard Hughes Medical Institute, The Salk Institute for Biological Studies, La Jolla, California, United States of America
- Lijuan Liu
- SEU-Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu Province, China
- Anup Markuhar
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
| | - James Mathews
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Kaylee L. Mathews
- Data Sciences Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, United States of America
| | - Chris Mezias
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
| | - Michael I. Miller
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Tyler Mollenkopf
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Shoaib Mufti
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Christopher J. Mungall
- Environmental Genomics and Systems Biology Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
| | - Joshua Orvis
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
| | - Maja A. Puchades
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
| | - Lei Qu
- SEU-Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu Province, China
| | - Joseph P. Receveur
- Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
| | - Bing Ren
- Center for Epigenomics, Department of Cellular and Molecular Medicine, UC San Diego School of Medicine, La Jolla, California, United States of America
- Ludwig Institute for Cancer Research, La Jolla, California, United States of America
| | - Nathan Sjoquist
- Microsoft Corporation, Seattle, Washington, United States of America
| | - Brian Staats
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Daniel Tward
- UCLA Brain Mapping Center, University of California, Los Angeles, California, United States of America
| | | | - Quanxin Wang
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Fangming Xie
- Department of Chemistry and Biochemistry, University of California Los Angeles, California, United States of America
| | - Hua Xu
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
| | - Zizhen Yao
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Zhixi Yun
- SEU-Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu Province, China
| | - Yun Renee Zhang
- J. Craig Venter Institute, La Jolla, California, United States of America
| | - W. Jim Zheng
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
| | - Brian Zingg
- UCLA Brain Research & Artificial Intelligence Nexus, Department of Neurobiology, David Geffen School of Medicine at University of California, Los Angeles, California, United States of America
| |
25
Zhu X, Yan H, Zhan Y, Feng F, Wei C, Yao YG, Liu C. An anatomical and connectivity atlas of the marmoset cerebellum. Cell Rep 2023; 42:112480. [PMID: 37163375 DOI: 10.1016/j.celrep.2023.112480] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 02/01/2023] [Accepted: 04/20/2023] [Indexed: 05/12/2023] Open
Abstract
The cerebellum is essential for motor control and cognitive functioning, engaging in bidirectional communication with the cerebral cortex. The common marmoset, a small non-human primate, offers unique advantages for studying cerebello-cerebral circuits. However, the marmoset cerebellum is not well described in published resources. In this study, we present a comprehensive atlas of the marmoset cerebellum comprising (1) fine-detailed anatomical atlases and surface-analysis tools of the cerebellar cortex based on ultra-high-resolution ex vivo MRI, (2) functional connectivity and gradient patterns of the cerebellar cortex revealed by awake resting-state fMRI, and (3) structural-connectivity mapping of cerebellar nuclei using high-resolution diffusion MRI tractography. The atlas elucidates the anatomical details of the marmoset cerebellum, reveals distinct gradient patterns of intra-cerebellar and cerebello-cerebral functional connectivity, and maps the topological relationship of cerebellar nuclei in cerebello-cerebral circuits. As version 5 of the Marmoset Brain Mapping project, this atlas is publicly available at https://marmosetbrainmapping.org/MBMv5.html.
Affiliation(s)
- Xiaojia Zhu
- Key Laboratory of Animal Models and Human Disease Mechanisms of the Chinese Academy of Sciences and Yunnan Province, and KIZ-CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, National Research Facility for Phenotypic & Genetic Analysis of Model Animals (Primate Facility), National Resource Center for Non-Human Primates, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming 650201, China; Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, CAS Key Laboratory of Primate Neurobiology, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Haotian Yan
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, CAS Key Laboratory of Primate Neurobiology, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Yafeng Zhan
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, CAS Key Laboratory of Primate Neurobiology, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Furui Feng
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, CAS Key Laboratory of Primate Neurobiology, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Chuanyao Wei
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, CAS Key Laboratory of Primate Neurobiology, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yong-Gang Yao
- Key Laboratory of Animal Models and Human Disease Mechanisms of the Chinese Academy of Sciences and Yunnan Province, and KIZ-CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, National Research Facility for Phenotypic & Genetic Analysis of Model Animals (Primate Facility), National Resource Center for Non-Human Primates, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming 650201, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Cirong Liu
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, CAS Key Laboratory of Primate Neurobiology, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, China
26
Johnson GA, Tian Y, Ashbrook DG, Cofer GP, Cook JJ, Gee JC, Hall A, Hornburg K, Qi Y, Yeh FC, Wang N, White LE, Williams RW. Merged magnetic resonance and light sheet microscopy of the whole mouse brain. Proc Natl Acad Sci U S A 2023; 120:e2218617120. [PMID: 37068254 PMCID: PMC10151475 DOI: 10.1073/pnas.2218617120] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 03/10/2023] [Indexed: 04/19/2023] Open
Abstract
We have developed workflows to align 3D magnetic resonance histology (MRH) of the mouse brain with light sheet microscopy (LSM) and 3D delineations of the same specimen. We start with MRH of the brain in the skull with gradient echo and diffusion tensor imaging (DTI) at 15 μm isotropic resolution which is ~ 1,000 times higher than that of most preclinical MRI. Connectomes are generated with superresolution tract density images of ~5 μm. Brains are cleared, stained for selected proteins, and imaged by LSM at 1.8 μm/pixel. LSM data are registered into the reference MRH space with labels derived from the ABA common coordinate framework. The result is a high-dimensional integrated volume with registration (HiDiver) with alignment precision better than 50 µm. Throughput is sufficiently high that HiDiver is being used in quantitative studies of the impact of gene variants and aging on mouse brain cytoarchitecture and connectomics.
Affiliation(s)
- Yuqi Tian
- Center for In Vivo Microscopy, Duke University, Durham, NC 27710
- David G. Ashbrook
- Department of Genetics, Genomics and Informatics, University of Tennessee Health Science Center, Memphis, TN 38162
- Gary P. Cofer
- Center for In Vivo Microscopy, Duke University, Durham, NC 27710
- James J. Cook
- Center for In Vivo Microscopy, Duke University, Durham, NC 27710
- James C. Gee
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104
- Adam Hall
- LifeCanvas Technology, Cambridge, MA 02141
- Yi Qi
- Center for In Vivo Microscopy, Duke University, Durham, NC 27710
- Fang-Cheng Yeh
- Department of Neurologic Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Nian Wang
- Department of Radiology, Indiana University, Bloomington, IN 47401
- Robert W. Williams
- Department of Genetics, Genomics and Informatics, University of Tennessee Health Science Center, Memphis, TN 38162
27
Zhang S, Liu C, Wang Q, Zhou H, Wu H, Zhuang J, Cao Y, Shi H, Zhang J, Wang J. CRYAA and GJA8 promote visual development after whisker tactile deprivation. Heliyon 2023; 9:e13897. [PMID: 36915480 PMCID: PMC10006481 DOI: 10.1016/j.heliyon.2023.e13897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 02/15/2023] [Accepted: 02/15/2023] [Indexed: 02/24/2023] Open
Abstract
Deprivation of one sense can be followed by enhanced development of other senses via cross-modal plasticity mechanisms. To study the effect of whisker tactile deprivation on vision during the early stages of development, we clipped the bilateral whiskers of young mice and found that their vision was impaired but later recovered to normal levels. Our results demonstrate that inhibition of the PI3K/AKT/ERK signaling pathway caused short-term visual impairment during early development, while high expression levels of Crystallin Alpha A (CRYAA) and Gap Junction Protein Alpha 8 (GJA8) in the retina led to the recovery of developmental visual acuity. Interestingly, analysis of single-cell sequencing results from human embryonic retinas at 9-19 gestational weeks (GW) revealed that CRYAA and GJA8 display stage-specific peak expression during human embryonic retinal development, suggesting potential functions in visual development. Our data show that high expression levels of CRYAA and GJA8 in the retina after whisker deprivation rescue impaired visual development, which may provide a foundation for further research on the mechanisms of cross-modal plasticity and in particular, offer new insights into the mechanisms underlying tactile-visual cross-modal development.
Affiliation(s)
- Shibo Zhang
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Cuiping Liu
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Qian Wang
- Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Haicong Zhou
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Hao Wu
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Junyi Zhuang
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Yiyang Cao
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Hongwei Shi
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Jingfa Zhang
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Corresponding author.
- Jiao Wang
- Laboratory of Molecular Neural Biology, School of Life Sciences, Shanghai University, 99 Shang Da Road, Shanghai, China
- Corresponding author.
28
Perens J, Salinas CG, Roostalu U, Skytte JL, Gundlach C, Hecksher-Sørensen J, Dahl AB, Dyrby TB. Multimodal 3D Mouse Brain Atlas Framework with the Skull-Derived Coordinate System. Neuroinformatics 2023; 21:269-286. [PMID: 36809643 DOI: 10.1007/s12021-023-09623-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/01/2023] [Indexed: 02/23/2023]
Abstract
Magnetic resonance imaging (MRI) and light-sheet fluorescence microscopy (LSFM) are technologies that enable non-disruptive 3-dimensional imaging of whole mouse brains. A combination of complementary information from both modalities is desirable for studying neuroscience in general, disease progression and drug efficacy. Although both technologies rely on atlas mapping for quantitative analyses, the translation of LSFM recorded data to MRI templates has been complicated by the morphological changes inflicted by tissue clearing and the enormous size of the raw data sets. Consequently, there is an unmet need for tools that will facilitate fast and accurate translation of LSFM recorded brains to in vivo, non-distorted templates. In this study, we have developed a bidirectional multimodal atlas framework that includes brain templates based on both imaging modalities, region delineations from the Allen's Common Coordinate Framework, and a skull-derived stereotaxic coordinate system. The framework also provides algorithms for bidirectional transformation of results obtained using either MR or LSFM (iDISCO cleared) mouse brain imaging while the coordinate system enables users to easily assign in vivo coordinates across the different brain templates.
Affiliation(s)
- Johanna Perens
- Gubra ApS, Hørsholm, Denmark; Section for Visual Computing, Department of Applied Mathematics and Computer Science, Technical University Denmark, Kongens Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark
- Carsten Gundlach
- Neutrons and X-rays for Materials Physics, Department of Physics, Technical University Denmark, Kongens Lyngby, Denmark
- Anders Bjorholm Dahl
- Section for Visual Computing, Department of Applied Mathematics and Computer Science, Technical University Denmark, Kongens Lyngby, Denmark
- Tim B Dyrby
- Section for Visual Computing, Department of Applied Mathematics and Computer Science, Technical University Denmark, Kongens Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark
29
Han T, Wu J, Luo W, Wang H, Jin Z, Qu L. Review of Generative Adversarial Networks in mono- and cross-modal biomedical image registration. Front Neuroinform 2022; 16:933230. [PMID: 36483313 PMCID: PMC9724825 DOI: 10.3389/fninf.2022.933230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Accepted: 10/13/2022] [Indexed: 09/19/2023] Open
Abstract
Biomedical image registration refers to aligning corresponding anatomical structures among different images, which is critical to many tasks, such as brain atlas building, tumor growth monitoring, and image fusion-based medical diagnosis. However, high-throughput biomedical image registration remains challenging due to inherent variations in the intensity, texture, and anatomy resulting from different imaging modalities, different sample preparation methods, or different developmental stages of the imaged subject. Recently, Generative Adversarial Networks (GAN) have attracted increasing interest in both mono- and cross-modal biomedical image registrations due to their special ability to eliminate the modal variance and their adversarial training strategy. This paper provides a comprehensive survey of the GAN-based mono- and cross-modal biomedical image registration methods. According to the different implementation strategies, we organize the GAN-based mono- and cross-modal biomedical image registration methods into four categories: modality translation, symmetric learning, adversarial strategies, and joint training. The key concepts, the main contributions, and the advantages and disadvantages of the different strategies are summarized and discussed. Finally, we analyze the statistics of all the cited works from different points of view and reveal future trends for GAN-based biomedical image registration studies.
Affiliation(s)
- Tingting Han
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Wenting Luo
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Huiming Wang
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Zhe Jin
- School of Artificial Intelligence, Anhui University, Hefei, China
- Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
30
Li Y, Wu J, Lu D, Xu C, Zheng Y, Peng H, Qu L. mBrainAligner-Web: a web server for cross-modal coherent registration of whole mouse brains. Bioinformatics 2022; 38:4654-4655. [PMID: 35951750 DOI: 10.1093/bioinformatics/btac549] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 07/16/2022] [Accepted: 08/07/2022] [Indexed: 12/24/2022] Open
Abstract
SUMMARY Recent whole-brain mapping projects are collecting increasingly larger sets of high-resolution brain images using a variety of imaging, labeling and sample preparation techniques. Both mining and analysis of these data require reliable and robust cross-modal registration tools. We recently developed the mBrainAligner, a pipeline for performing cross-modal registration of the whole mouse brain. However, using this tool requires scripting or command-line skills to assemble and configure the different modules of mBrainAligner for accommodating different registration requirements and platform settings. In this application note, we present mBrainAligner-Web, a web server with a user-friendly interface that allows to configure and run mBrainAligner locally or remotely across platforms. AVAILABILITY AND IMPLEMENTATION mBrainAligner-Web is available at http://mbrainaligner.ahu.edu.cn/ with source code at https://github.com/reaneyli/mBrainAligner-web. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Yuanyuan Li
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230000, China
- Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230000, China
- Donghuan Lu
- Tencent Jarvis Lab, Shenzhen, Guangdong 518020, China
- Chao Xu
- Tencent Jarvis Lab, Shenzhen, Guangdong 518020, China
- Yefeng Zheng
- Tencent Jarvis Lab, Shenzhen, Guangdong 518020, China; SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 211170, China
- Hanchuan Peng
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 211170, China
- Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230000, China; SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 211170, China
31
Arias A, Manubens-Gil L, Dierssen M. Fluorescent transgenic mouse models for whole-brain imaging in health and disease. Front Mol Neurosci 2022; 15:958222. [PMID: 36211979 PMCID: PMC9538927 DOI: 10.3389/fnmol.2022.958222] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/08/2022] [Indexed: 11/25/2022] Open
Abstract
A paradigm shift is occurring in neuroscience and in general in life sciences converting biomedical research from a descriptive discipline into a quantitative, predictive, actionable science. Living systems are becoming amenable to quantitative description, with profound consequences for our ability to predict biological phenomena. New experimental tools such as tissue clearing, whole-brain imaging, and genetic engineering technologies have opened the opportunity to embrace this new paradigm, allowing to extract anatomical features such as cell number, their full morphology, and even their structural connectivity. These tools will also allow the exploration of new features such as their geometrical arrangement, within and across brain regions. This would be especially important to better characterize brain function and pathological alterations in neurological, neurodevelopmental, and neurodegenerative disorders. New animal models for mapping fluorescent protein-expressing neurons and axon pathways in adult mice are key to this aim. As a result of both developments, relevant cell populations with endogenous fluorescence signals can be comprehensively and quantitatively mapped to whole-brain images acquired at submicron resolution. However, they present intrinsic limitations: weak fluorescent signals, unequal signal strength across the same cell type, lack of specificity of fluorescent labels, overlapping signals in cell types with dense labeling, or undetectable signal at distal parts of the neurons, among others. In this review, we discuss the recent advances in the development of fluorescent transgenic mouse models that overcome to some extent the technical and conceptual limitations and tradeoffs between different strategies. We also discuss the potential use of these strains for understanding disease.
Affiliation(s)
- Adrian Arias
- Department of System Biology, Centre for Genomic Regulation, The Barcelona Institute of Science and Technology, Barcelona, Spain
- Linus Manubens-Gil
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Mara Dierssen
- Department of System Biology, Centre for Genomic Regulation, The Barcelona Institute of Science and Technology, Barcelona, Spain
- Department of Experimental and Health Sciences, University Pompeu Fabra, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Raras (CIBERER), Barcelona, Spain
32
Zhang Y, Wu P, Chen S, Gong H, Yang X. FCE-Net: a fast image contrast enhancement method based on deep learning for biomedical optical images. BIOMEDICAL OPTICS EXPRESS 2022; 13:3521-3534. [PMID: 35781947 PMCID: PMC9208612 DOI: 10.1364/boe.459347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 04/30/2022] [Accepted: 05/03/2022] [Indexed: 06/15/2023]
Abstract
Optical imaging is an important tool for exploring and understanding structures of biological tissues. However, due to the heterogeneity of biological tissues, the intensity distribution of the signal is not uniform and contrast is normally degraded in the raw image. It is difficult to be used for subsequent image analysis and information extraction directly. Here, we propose a fast image contrast enhancement method based on deep learning called Fast Contrast Enhancement Network (FCE-Net). We divided network into dual-path to simultaneously obtain spatial information and large receptive field. And we introduced the spatial attention mechanism to enhance the inter-spatial relationship. We showed that the cell counting task of mouse brain images processed by FCE-Net was with average precision rate of 97.6% ± 1.6%, and average recall rate of 98.4% ± 1.4%. After processing with FCE-Net, the images from vascular extraction (DRIVE) dataset could be segmented with spatial attention U-Net (SA-UNet) to achieve state-of-the-art performance. By comparing FCE-Net with previous methods, we demonstrated that FCE-Net could obtain higher accuracy while maintaining the processing speed. The images with size of 1024 × 1024 pixels could be processed by FCE-Net with 37fps based on our workstation. Our method has great potential for further image analysis and information extraction from large-scale or dynamic biomedical optical images.
Affiliation(s)
- Yunfei Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
- Peng Wu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
- Siqi Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
- Xiaoquan Yang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
33
Guo S, Xue J, Liu J, Ye X, Guo Y, Liu D, Zhao X, Xiong F, Han X, Peng H. Smart imaging to empower brain-wide neuroscience at single-cell levels. Brain Inform 2022; 9:10. [PMID: 35543774 PMCID: PMC9095808 DOI: 10.1186/s40708-022-00158-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Accepted: 04/12/2022] [Indexed: 11/10/2022] Open
Abstract
A deep understanding of the neuronal connectivity and networks with detailed cell typing across brain regions is necessary to unravel the mechanisms behind the emotional and memorial functions as well as to find the treatment of brain impairment. Brain-wide imaging with single-cell resolution provides unique advantages to access morphological features of a neuron and to investigate the connectivity of neuron networks, which has led to exciting discoveries over the past years based on animal models, such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphologies at larger scale and more detailed level, as well as to enable research on non-human primates (NHP) and human brains. The advances in artificial intelligence (AI) and computational resources bring great opportunity to 'smart' imaging systems, i.e., to automate, speed up, optimize and upgrade the imaging systems with AI and computational strategies. In this light, we review the important computational techniques that can support smart systems in brain-wide imaging at single-cell resolution.
Affiliation(s)
- Shuxia Guo
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Jie Xue
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Jian Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Xiangqiao Ye
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Yichen Guo
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Di Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Xuan Zhao
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Feng Xiong
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Xiaofeng Han
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Hanchuan Peng
- Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China