1. Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7299-7309. PMID: 39255163. DOI: 10.1109/tvcg.2024.3456197.
Abstract
Neuron tracing, alternately referred to as neuron reconstruction, is the procedure for extracting the digital representation of the three-dimensional neuronal morphology from stacks of microscopic images. Achieving accurate neuron tracing is critical for profiling the neuroanatomical structure at the single-cell level and analyzing neuronal circuits and projections at whole-brain scale. However, the process often demands substantial human involvement and represents a nontrivial task. Conventional solutions for neuron tracing often contend with challenges such as non-intuitive user interactions, suboptimal data generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages the power of extended reality (XR) for intuitive and progressive semi-automatic neuron tracing in real time. In our method, we have defined a set of interactors for controllable and efficient interactions for neuron tracing in an immersive environment. We have also developed a GPU-accelerated automatic tracing algorithm that can generate updated neuron reconstructions in real time. In addition, we have built a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented with one virtual reality (VR) headset and one augmented reality (AR) headset, achieving satisfying results. We also conducted two user studies that demonstrated the effectiveness of the interactors and the efficiency of our method in comparison with other approaches for neuron tracing.
2. Roudot P, Legant WR, Zou Q, Dean KM, Isogai T, Welf ES, David AF, Gerlich DW, Fiolka R, Betzig E, Danuser G. u-track3D: Measuring, navigating, and validating dense particle trajectories in three dimensions. Cell Reports Methods 2023; 3:100655. PMID: 38042149. PMCID: PMC10783629. DOI: 10.1016/j.crmeth.2023.100655.
Abstract
We describe u-track3D, a software package that extends the versatile u-track framework established in 2D to address the specific challenges of 3D particle tracking. First, we present the performance of the new package in quantifying a variety of intracellular dynamics imaged by multiple 3D microscopy platforms and on the standard 3D test dataset of the particle tracking challenge. These analyses indicate that u-track3D presents a tracking solution that is competitive with both conventional and deep-learning-based approaches. We then present the concept of dynamic region of interest (dynROI), which allows an experimenter to interact with dynamic 3D processes in 2D views amenable to visual inspection. Third, we present an estimator of trackability that automatically defines a score for every trajectory, thereby overcoming the challenges of trajectory validation by visual inspection. With these combined strategies, u-track3D provides a complete framework for unbiased studies of molecular processes in complex volumetric sequences.
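The dynROI concept described above amounts to fitting a local, moving frame around a set of 3D trajectories so that a dynamic process can be inspected in ordinary 2D views. A minimal Python sketch of that idea (not the u-track3D implementation; the PCA-based frame and fixed time window are assumptions) is:

```python
import numpy as np

def dynamic_roi_projection(points_by_frame, window=5):
    """For each time window, fit a local frame to the pooled 3D points via PCA
    and project them onto the two dominant axes for 2D inspection.
    points_by_frame: list of (N_t, 3) arrays of detected particle positions."""
    views = []
    for t in range(len(points_by_frame) - window + 1):
        pts = np.vstack(points_by_frame[t:t + window])     # pool the window
        center = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
        plane = vt[:2]                                     # two dominant directions
        views.append((pts - center) @ plane.T)             # 2D coordinates per window
    return views
```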
Affiliation(s)
- Philippe Roudot: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA; Aix Marseille University, CNRS, Centrale Marseille, I2M, Turing Centre for Living Systems, Marseille, France.
- Wesley R Legant: Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, North Carolina State University, Chapel Hill, NC, USA; Department of Pharmacology, University of North Carolina, Chapel Hill, NC, USA.
- Qiongjing Zou: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA.
- Kevin M Dean: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA.
- Tadamoto Isogai: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA.
- Erik S Welf: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA.
- Ana F David: Institute of Molecular Biotechnology of the Austrian Academy of Sciences, Vienna BioCenter, Vienna, Austria.
- Daniel W Gerlich: Institute of Molecular Biotechnology of the Austrian Academy of Sciences, Vienna BioCenter, Vienna, Austria.
- Reto Fiolka: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA.
- Eric Betzig: Department of Molecular & Cell Biology, University of California, Berkeley, Berkeley, CA, USA.
- Gaudenz Danuser: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA.
3. Boorboor S, Mathew S, Ananth M, Talmage D, Role LW, Kaufman AE. NeuRegenerate: A Framework for Visualizing Neurodegeneration. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1625-1637. PMID: 34757909. PMCID: PMC10070008. DOI: 10.1109/tvcg.2021.3127132.
Abstract
Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, because biological specimens can only be imaged at a single timepoint, studying changes to neural projections over time is limited to observations gathered using population analysis. In this article, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject across specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on a cycle-consistent generative adversarial network (GAN) that translates features of neuronal structures across age-timepoints for large brain microscopy volumes. We improve the reconstruction quality of the predicted neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. Finally, to visualize the change in projections predicted using neuReGANerator, NeuRegenerate offers two modes: (i) neuroCompare, to simultaneously visualize the difference in the structures of the neuronal projections from two age domains (using structural and bounded views), and (ii) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes within the cholinergic system of the mouse brain between a young and an old specimen.
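neuReGANerator builds on the cycle-consistent GAN formulation; the hallucination loss and density multiplier are specific to the paper and not detailed in the abstract. For orientation, the generic cycle-consistency term alone looks like the following hedged PyTorch sketch (G_young2old and G_old2young are hypothetical generator names, not the authors' code):

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_young2old, G_old2young, young_vol, old_vol, lam=10.0):
    """Standard CycleGAN cycle term: translating a volume to the other age
    domain and back should reproduce the input (L1 reconstruction)."""
    rec_young = G_old2young(G_young2old(young_vol))
    rec_old = G_young2old(G_old2young(old_vol))
    return lam * (F.l1_loss(rec_young, young_vol) + F.l1_loss(rec_old, old_vol))
```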
4. Guérinot C, Marcon V, Godard C, Blanc T, Verdier H, Planchon G, Raimondi F, Boddaert N, Alonso M, Sailor K, Lledo PM, Hajj B, El Beheiry M, Masson JB. New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. Frontiers in Bioinformatics 2022; 1:777101. PMID: 36303792. PMCID: PMC9580868. DOI: 10.3389/fbinf.2021.777101.
Abstract
Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets provide a solution to this critical visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing the visualization and interaction process in VR remains an active topic, one of the most pressing issues is how to utilize VR for the annotation and analysis of data. Annotating data is often a required step for training machine learning algorithms, and the ability to annotate complex three-dimensional data is especially valuable in biological research, where newly acquired data may come in limited quantities. Similarly, medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact, annotate and analyze data by combining VR with cloud computing. VR is leveraged to provide natural interactions with volumetric representations of experimental imaging data. In parallel, cloud computing performs costly computations to accelerate the data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescent microscopy images of mouse neurons and tumor or organ annotations in medical images.
Affiliation(s)
- Corentin Guérinot: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France; Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France; Sorbonne Université, Collège Doctoral, Paris, France.
- Valentin Marcon: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France.
- Charlotte Godard: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France; École Doctorale Physique en Île-de-France, PSL University, Paris, France.
- Thomas Blanc: Sorbonne Université, Collège Doctoral, Paris, France; Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France.
- Hippolyte Verdier: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France; Histopathology and Bio-Imaging Group, Sanofi R&D, Vitry-Sur-Seine, France; Université de Paris, UFR de Physique, Paris, France.
- Guillaume Planchon: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France.
- Francesca Raimondi: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France; Unité Médicochirurgicale de Cardiologie Congénitale et Pédiatrique, Centre de Référence des Malformations Cardiaques Congénitales Complexes M3C, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France; Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France; UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France.
- Nathalie Boddaert: Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France; UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France.
- Mariana Alonso: Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France.
- Kurt Sailor: Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France.
- Pierre-Marie Lledo: Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France.
- Bassam Hajj: Sorbonne Université, Collège Doctoral, Paris, France; École Doctorale Physique en Île-de-France, PSL University, Paris, France.
- Mohamed El Beheiry: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France.
- Jean-Baptiste Masson: Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France.
5. Sullivan AE, Tappan SJ, Angstman PJ, Rodriguez A, Thomas GC, Hoppes DM, Abdul-Karim MA, Heal ML, Glaser JR. A Comprehensive, FAIR File Format for Neuroanatomical Structure Modeling. Neuroinformatics 2022; 20:221-240. PMID: 34601704. PMCID: PMC8975944. DOI: 10.1007/s12021-021-09530-x.
Abstract
With advances in microscopy and computer science, the technique of digitally reconstructing, modeling, and quantifying microscopic anatomies has become central to many fields of biological research. MBF Bioscience has chosen to openly document their digital reconstruction file format, the Neuromorphological File Specification, available at www.mbfbioscience.com/filespecification (Angstman et al., 2020). The format, created and maintained by MBF Bioscience, is broadly utilized by the neuroscience community. The data format's structure and capabilities have evolved since its inception, with modifications made to keep pace with advancements in microscopy and the scientific questions raised by worldwide experts in the field. More recent modifications to the neuromorphological file format ensure it abides by the Findable, Accessible, Interoperable, and Reusable (FAIR) data principles promoted by the International Neuroinformatics Coordinating Facility (INCF; Wilkinson et al., Scientific Data, 3, 160018, 2016). The incorporated metadata make it easy to identify and repurpose these data types for downstream applications and investigation. This publication describes key elements of the file format and details their relevant structural advantages in an effort to encourage the reuse of these rich data files for alternative analysis or reproduction of derived conclusions.
6. A Novel Gesture-Based Control System for Fluorescence Volumetric Data in Virtual Reality. Sensors 2021; 21:8329. PMID: 34960422. PMCID: PMC8703643. DOI: 10.3390/s21248329.
Abstract
With the development of light microscopy, it is becoming increasingly easy to obtain detailed multicolor fluorescence volumetric data, and their appropriate visualization has become an integral part of fluorescence imaging. Virtual reality (VR) technology provides a new way of visualizing multidimensional image data or models so that the entire 3D structure can be intuitively observed, together with different object features or details on or within the object. As the volumetric data being imaged become more advanced, demands for control of virtual object properties increase, especially for multicolor objects obtained by fluorescence microscopy. Existing solutions, whether universal VR controllers or software-based controllers that require sufficient free space for the user to manipulate data in VR, are not usable in many practical applications. Therefore, we developed a custom gesture-based VR control system, built around a multitouch sensor disk, that connects a custom controller to the FluoRender visualization environment. Our control system may be a good choice for easier and more comfortable manipulation of virtual objects and their properties, especially with confocal microscopy, which is so far the most widely used technique for acquiring volumetric fluorescence data.
7. Liimatainen K, Latonen L, Valkonen M, Kartasalo K, Ruusuvuori P. Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration. BMC Cancer 2021; 21:1133. PMID: 34686173. PMCID: PMC8539837. DOI: 10.1186/s12885-021-08542-9.
Abstract
Background: Virtual reality (VR) enables data visualization in an immersive and engaging manner, and it can be used for creating ways to explore scientific data. Here, we use VR for visualization of 3D histology data, creating a novel interface for digital pathology to aid cancer research.
Methods: Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features and full resolution serial section patches, and implementing the virtual reality application. Our VR application is multi-scale in nature, covering two object levels representing different ranges of detail, namely organ level and sub-organ level. In addition, the application includes several data layers, including the measured histology image layer and multiple representations of quantitative features computed from the histology.
Results: In our interactive VR application, the user can set visualization properties, select different samples and features, and interact with various objects, which is not possible in the traditional 2D-image view used in digital pathology. In this work, we used whole mouse prostates (organ level) with prostate cancer tumors (sub-organ objects of interest) as example cases, and included quantitative histological features relevant for tumor biology in the VR model.
Conclusions: Our application enables a novel way for exploration of high-resolution, multidimensional data for biomedical research purposes, and can also be used in teaching and researcher training. Due to automated processing of the histology data, our application can be easily adopted to visualize other organs and pathologies from various origins.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12885-021-08542-9.
Affiliation(s)
- Kaisa Liimatainen: Faculty of Medicine and Health Technology, Tampere University, FI-33014, Tampere, Finland.
- Leena Latonen: Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland.
- Masi Valkonen: Faculty of Medicine and Health Technology, Tampere University, FI-33014, Tampere, Finland.
- Kimmo Kartasalo: Faculty of Medicine and Health Technology, Tampere University, FI-33014, Tampere, Finland.
- Pekka Ruusuvuori: Faculty of Medicine and Health Technology, Tampere University, FI-33014, Tampere, Finland; Cancer Research Unit and FICAN West Cancer Centre, Institute of Biomedicine, University of Turku and Turku University Hospital, FI-20014, Turku, Finland.
8. Venkatesan M, Mohan H, Ryan JR, Schürch CM, Nolan GP, Frakes DH, Coskun AF. Virtual and augmented reality for biomedical applications. Cell Reports Medicine 2021; 2:100348. PMID: 34337564. PMCID: PMC8324499. DOI: 10.1016/j.xcrm.2021.100348.
Abstract
3D visualization technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) have gained popularity in the recent decade. Digital extended reality (XR) technologies have been adopted in various domains ranging from entertainment to education because of their accessibility and affordability. XR modalities create an immersive experience, enabling 3D visualization of the content without a conventional 2D display constraint. Here, we provide a perspective on XR in current biomedical applications and demonstrate case studies using cell biology concepts, multiplexed proteomics images, surgical data for heart operations, and cardiac 3D models. Emerging challenges associated with XR technologies in the context of adverse health effects and a cost comparison of distinct platforms are discussed. The presented XR platforms will be useful for biomedical education, medical training, surgical guidance, and molecular data visualization to enhance trainees' and students' learning, medical operation accuracy, and the comprehensibility of complex biological systems.
Affiliation(s)
- Mythreye Venkatesan: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Interdisciplinary Bioengineering Graduate Program, Georgia Institute of Technology, Atlanta, GA, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
- Harini Mohan: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA.
- Justin R Ryan: 3D Innovations Lab, Rady Children's Hospital-San Diego, San Diego, CA, USA.
- Christian M Schürch: Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305, USA.
- Garry P Nolan: Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305, USA.
- David H Frakes: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
- Ahmet F Coskun: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Interdisciplinary Bioengineering Graduate Program, Georgia Institute of Technology, Atlanta, GA, USA.
9. Galati A, Schoppa R, Lu A. Exploring the SenseMaking Process through Interactions and fNIRS in Immersive Visualization. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2714-2724. PMID: 33750695. DOI: 10.1109/tvcg.2021.3067693.
Abstract
Theories of cognition inform our decisions when designing human-computer interfaces, and immersive systems enable us to examine these theories. This work explores the sensemaking process in an immersive environment through studying both internal and external user behaviors with a classical visualization problem: a visual comparison and clustering task. We developed an immersive system to perform a user study, collecting user behavior data from different channels: AR HMD for capturing external user interactions, functional near-infrared spectroscopy (fNIRS) for capturing internal neural sequences, and video for references. To examine sensemaking, we assessed how the layout of the interface (planar 2D vs. cylindrical 3D layout) and the challenge level of the task (low vs. high cognitive load) influenced the users' interactions, how these interactions changed over time, and how they influenced task performance. We also developed a visualization system to explore joint patterns among all the data channels. We found that increased interactions and cerebral hemodynamic responses were associated with more accurate performance, especially on cognitively demanding trials. The layout types did not reliably influence interactions or task performance. We discuss how these findings inform the design and evaluation of immersive systems, predict user performance and interaction, and offer theoretical insights about sensemaking from the perspective of embodied and distributed cognition.
10. McDonald T, Usher W, Morrical N, Gyulassy A, Petruzza S, Federer F, Angelucci A, Pascucci V. Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements. IEEE Transactions on Visualization and Computer Graphics 2021; 27:744-754. PMID: 33055032. PMCID: PMC7891492. DOI: 10.1109/tvcg.2020.3030363.
Abstract
Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.
11. El Beheiry M, Godard C, Caporal C, Marcon V, Ostertag C, Sliti O, Doutreligne S, Fournier S, Hajj B, Dahan M, Masson JB. DIVA: Natural Navigation Inside 3D Images Using Virtual Reality. Journal of Molecular Biology 2020; 432:4745-4749. DOI: 10.1016/j.jmb.2020.05.026.
12. BigTop: a three-dimensional virtual reality tool for GWAS visualization. BMC Bioinformatics 2020; 21:39. PMID: 32005132. PMCID: PMC6995189. DOI: 10.1186/s12859-020-3373-5.
Abstract
Background: Genome-wide association studies (GWAS) are typically visualized using a two-dimensional Manhattan plot, displaying chromosomal location of SNPs along the x-axis and the negative log-10 of their p-value on the y-axis. This traditional plot provides a broad overview of the results, but offers little opportunity for interaction or expansion of specific regions, and is unable to show additional dimensions of the dataset.
Results: We created BigTop, a visualization framework in virtual reality (VR), designed to render a Manhattan plot in three dimensions, wrapping the graph around the user in a simulated cylindrical room. BigTop uses the z-axis to display minor allele frequency of each SNP, allowing for the identification of allelic variants of genes. BigTop also offers additional interactivity, allowing users to select any individual SNP and receive expanded information, including SNP name, exact values, and gene location, if applicable. BigTop is built in JavaScript using the React and A-Frame frameworks, and can be rendered using commercially available VR headsets or in a two-dimensional web browser such as Google Chrome. Data is read into BigTop in JSON format, and can be provided as either JSON or a tab-separated text file.
Conclusions: Using additional dimensions and interactivity options offered through VR, we provide a new, interactive, three-dimensional representation of the traditional Manhattan plot for displaying and exploring GWAS data.
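The mapping BigTop describes (chromosomal position wrapped around the user, -log10 p-value as height, minor allele frequency on the third axis) is straightforward to reproduce from a standard GWAS table. A minimal sketch under those assumptions (the field names are illustrative, not BigTop's JSON schema):

```python
import math

def snp_to_cylinder(snp, genome_length, base_radius=5.0, maf_scale=1.0):
    """Map one SNP record to 3D coordinates in a simulated cylindrical room:
    angle <- cumulative genomic position, height <- -log10(p),
    radial offset <- minor allele frequency."""
    theta = 2.0 * math.pi * snp["cum_position"] / genome_length
    r = base_radius + maf_scale * snp["maf"]
    return {
        "x": r * math.cos(theta),
        "y": -math.log10(snp["p_value"]),   # Manhattan-plot height
        "z": r * math.sin(theta),
        "label": snp["name"],
    }

# Example: a genome-wide significant SNP roughly 1.2 Gb into the genome
point = snp_to_cylinder(
    {"name": "rs12345", "cum_position": 1.2e9, "p_value": 3e-8, "maf": 0.12},
    genome_length=3.1e9,
)
```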
13. Wang Y, Li Q, Liu L, Zhou Z, Ruan Z, Kong L, Li Y, Wang Y, Zhong N, Chai R, Luo X, Guo Y, Hawrylycz M, Luo Q, Gu Z, Xie W, Zeng H, Peng H. TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nature Communications 2019; 10:3474. PMID: 31375678. PMCID: PMC6677772. DOI: 10.1038/s41467-019-11443-y.
Abstract
Neuron morphology is recognized as a key determinant of cell type, yet the quantitative profiling of a mammalian neuron's complete three-dimensional (3-D) morphology remains arduous when the neuron has complex arborization and long projection. Whole-brain reconstruction of neuron morphology is even more challenging as it involves processing tens of teravoxels of imaging data. Validating such reconstructions is extremely laborious. We develop TeraVR, an open-source virtual reality annotation system, to address these challenges. TeraVR integrates immersive and collaborative 3-D visualization, interaction, and hierarchical streaming of teravoxel-scale images. Using TeraVR, we have produced precise 3-D full morphology of long-projecting neurons in whole mouse brains and developed a collaborative workflow for highly accurate neuronal reconstruction.
Affiliation(s)
- Yimin Wang: Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, 200444, China.
- Qi Li: School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China.
- Lijuan Liu: Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China.
- Zhi Zhou: Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China; Allen Institute for Brain Science, Seattle, 98109, USA.
- Zongcai Ruan: Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China.
- Lingsheng Kong: School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China.
- Yaoyao Li: School of Optometry and Ophthalmology, Wenzhou Medical University, Wenzhou, 325027, China.
- Yun Wang: Allen Institute for Brain Science, Seattle, 98109, USA.
- Ning Zhong: Beijing University of Technology, 100124, Beijing, China; Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, 371-0816, Japan.
- Renjie Chai: Institute of Life Sciences, Southeast University, Nanjing, 210096, China; Key Laboratory for Developmental Genes and Human Disease, Ministry of Education, Institute of Life Sciences, Jiangsu Province High-Tech Key Laboratory for Bio-Medical Research, Southeast University, Nanjing, 210096, China; Co-Innovation Center of Neuroregeneration, Nantong University, Nantong, 226019, China.
- Xiangfeng Luo: School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China.
- Yike Guo: Data Science Institute, Imperial College London, London, SW7 2AZ, UK.
- Qingming Luo: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China.
- Zhongze Gu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096, China.
- Wei Xie: Institute of Life Sciences, Southeast University, Nanjing, 210096, China.
- Hongkui Zeng: Allen Institute for Brain Science, Seattle, 98109, USA.
- Hanchuan Peng: Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China; Allen Institute for Brain Science, Seattle, 98109, USA.
14. Li A, Guan Y, Gong H, Luo Q. Challenges of Processing and Analyzing Big Data in Mesoscopic Whole-brain Imaging. Genomics, Proteomics & Bioinformatics 2019; 17:337-343. PMID: 31805368. PMCID: PMC6943785. DOI: 10.1016/j.gpb.2019.10.001.
Affiliation(s)
- Anan Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China; MOE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China; HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215125, China.
- Yue Guan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China; MOE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China.
- Hui Gong: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China; MOE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China; HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215125, China.
- Qingming Luo: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China; MOE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China.
15. Poli D, Magliaro C, Ahluwalia A. Experimental and Computational Methods for the Study of Cerebral Organoids: A Review. Frontiers in Neuroscience 2019; 13:162. PMID: 30890910. PMCID: PMC6411764. DOI: 10.3389/fnins.2019.00162.
Abstract
Cerebral (or brain) organoids derived from human cells have enormous potential as physiologically relevant downscaled in vitro models of the human brain. In fact, these stem cell-derived neural aggregates resemble the three-dimensional (3D) cytoarchitectural arrangement of the brain overcoming not only the unrealistic somatic flatness but also the planar neuritic outgrowth of the two-dimensional (2D) in vitro cultures. Despite the growing use of cerebral organoids in scientific research, a more critical evaluation of their reliability and reproducibility in terms of cellular diversity, mature traits, and neuronal dynamics is still required. Specifically, a quantitative framework for generating and investigating these in vitro models of the human brain is lacking. To this end, the aim of this review is to inspire new computational and technology driven ideas for methodological improvements and novel applications of brain organoids. After an overview of the organoid generation protocols described in the literature, we review the computational models employed to assess their formation, organization and resource uptake. The experimental approaches currently provided to structurally and functionally characterize brain organoid networks for studying single neuron morphology and their connections at cellular and sub-cellular resolution are also discussed. Well-established techniques based on current/voltage clamp, optogenetics, calcium imaging, and Micro-Electrode Arrays (MEAs) are proposed for monitoring intra- and extra-cellular responses underlying neuronal dynamics and functional connections. Finally, we consider critical aspects of the established procedures and the physiological limitations of these models, suggesting how a complement of engineering tools could improve the current approaches and their applications.
Affiliation(s)
- Daniele Poli: Research Center E. Piaggio, University of Pisa, Pisa, Italy.
- Arti Ahluwalia: Research Center E. Piaggio, University of Pisa, Pisa, Italy; Department of Information Engineering, University of Pisa, Pisa, Italy.
16. El Beheiry M, Doutreligne S, Caporal C, Ostertag C, Dahan M, Masson JB. Virtual Reality: Beyond Visualization. Journal of Molecular Biology 2019; 431:1315-1321. DOI: 10.1016/j.jmb.2019.01.033.
17. Sicat R, Li J, Choi J, Cordeil M, Jeong WK, Bach B, Pfister H. DXR: A Toolkit for Building Immersive Data Visualizations. IEEE Transactions on Visualization and Computer Graphics 2019; 25:715-725. PMID: 30136991. DOI: 10.1109/tvcg.2018.2865152.
Abstract
This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging, and often requires complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.
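DXR specifications are concise JSON documents in a Vega-Lite-inspired grammar. The keys below are illustrative assumptions rather than the toolkit's verified schema, but they convey how a declarative mapping of data fields to mark channels replaces low-level Unity scripting (shown here as a Python dict mirroring such a JSON spec):

```python
# Hypothetical DXR-style declarative specification (illustrative keys only).
scatterplot_spec = {
    "data": {"url": "cells.csv"},
    "mark": "sphere",
    "encoding": {
        "x": {"field": "umap_1", "type": "quantitative"},
        "y": {"field": "umap_2", "type": "quantitative"},
        "z": {"field": "umap_3", "type": "quantitative"},
        "color": {"field": "cluster", "type": "nominal"},
        "size": {"field": "marker_intensity", "type": "quantitative"},
    },
}
```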
18. Hurter C, Riche NH, Drucker SM, Cordeil M, Alligier R, Vuillemot R. FiberClay: Sculpting Three Dimensional Trajectories to Reveal Structural Insights. IEEE Transactions on Visualization and Computer Graphics 2018; 25:704-714. PMID: 30136994. DOI: 10.1109/tvcg.2018.2865191.
Abstract
Visualizing 3D trajectories to extract insights about their similarities and spatial configuration is a critical task in several domains. Air traffic controllers, for example, deal with large quantities of aircraft routes to optimize safety in airspace, and neuroscientists attempt to understand neuronal pathways in the human brain by visualizing bundles of fibers from DTI images. Extracting insights from masses of 3D trajectories is challenging as the multiple three dimensional lines have complex geometries, may overlap, cross or even merge with each other, making it impossible to follow individual ones in dense areas. As trajectories are inherently spatial and three dimensional, we propose FiberClay: a system to display and interact with 3D trajectories in immersive environments. FiberClay renders a large quantity of trajectories in real time using GP-GPU techniques. FiberClay also introduces a new set of interactive techniques for composing complex queries in 3D space, leveraging immersive environment controllers and user position. These techniques enable an analyst to select and compare sets of trajectories with specific geometries and data properties. We conclude by discussing insights found using FiberClay with domain experts in air traffic control and neurology.
19. Boorboor S, Jadhav S, Ananth M, Talmage D, Role LW, Kaufman A. Visualization of Neuronal Structures in Wide-Field Microscopy Brain Images. IEEE Transactions on Visualization and Computer Graphics 2018; 25. PMID: 30136950. PMCID: PMC6382602. DOI: 10.1109/tvcg.2018.2864852.
Abstract
Wide-field microscopes are commonly used in neurobiology for experimental studies of brain samples. Available visualization tools are limited to electron, two-photon, and confocal microscopy datasets, and current volume rendering techniques do not yield effective results when used with wide-field data. We present a workflow for the visualization of neuronal structures in wide-field microscopy images of brain samples. We introduce a novel gradient-based distance transform that overcomes the out-of-focus blur caused by the inherent design of wide-field microscopes. This is followed by the extraction of the 3D structure of neurites using a multi-scale curvilinear filter and cell-bodies using a Hessian-based enhancement filter. The response from these filters is then applied as an opacity map to the raw data. Based on the visualization challenges faced by domain experts, our workflow provides multiple rendering modes to enable qualitative analysis of neuronal structures, which includes separation of cell-bodies from neurites and an intensity-based classification of the structures. Additionally, we evaluate our visualization results against both a standard image processing deconvolution technique and a confocal microscopy image of the same specimen. We show that our method is significantly faster and requires less computational resources, while producing high quality visualizations. We deploy our workflow in an immersive gigapixel facility as a paradigm for the processing and visualization of large, high-resolution, wide-field microscopy brain datasets.
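The neurite-enhancement step described here (a multi-scale curvilinear filter whose response drives the opacity map) belongs to the same family as the widely used Hessian-based vesselness filters. A hedged scikit-image sketch of that general idea, not the paper's exact filters or parameters:

```python
import numpy as np
from skimage import io
from skimage.filters import frangi

# Load a wide-field stack (z, y, x) and enhance curvilinear, neurite-like
# structures at several scales with a Hessian-based vesselness filter.
volume = io.imread("widefield_stack.tif").astype(np.float32)  # hypothetical file
vesselness = frangi(volume, sigmas=(1, 2, 4), black_ridges=False)

# Normalize the response to [0, 1]; it can then serve as the opacity map
# applied to the raw intensities during volume rendering.
opacity = (vesselness - vesselness.min()) / (np.ptp(vesselness) + 1e-8)
```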
20. Ding Y, Abiri A, Abiri P, Li S, Chang CC, Baek KI, Hsu JJ, Sideris E, Li Y, Lee J, Segura T, Nguyen TP, Bui A, Sevag Packard RR, Fei P, Hsiai TK. Integrating light-sheet imaging with virtual reality to recapitulate developmental cardiac mechanics. JCI Insight 2017; 2:97180. PMID: 29202458. PMCID: PMC5752380. DOI: 10.1172/jci.insight.97180.
Abstract
Currently, there is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3D architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3D and 4D (3D spatial + 1D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods, such as routine optical microscopes. We hereby demonstrate multiscale applicability of VR-LSFM to (a) interrogate skin fibroblasts interacting with a hyaluronic acid-based hydrogel, (b) navigate through the endocardial trabecular network during zebrafish development, and (c) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation algorithm with deformable image registration to interface a VR environment with imaging computation for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution.
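The batch intensity normalized segmentation algorithm itself is not spelled out in the abstract. As a generic illustration of the underlying idea of normalizing intensities jointly across a batch of light-sheet frames before thresholding, here is a hedged sketch (per-batch z-scoring plus Otsu thresholding are assumptions, not the authors' method):

```python
import numpy as np
from skimage.filters import threshold_otsu

def normalize_and_segment(frames):
    """frames: (T, Z, Y, X) light-sheet time series. Normalize intensities
    jointly across the batch, then threshold the normalized volumes."""
    stack = np.asarray(frames, dtype=np.float32)
    stack = (stack - stack.mean()) / (stack.std() + 1e-8)   # batch normalization
    thr = threshold_otsu(stack)                             # one global threshold
    return stack > thr                                      # binary masks per frame
```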
Affiliation(s)
- Yichen Ding: Department of Medicine; Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Arash Abiri: Department of Medicine; Department of Biomedical Engineering, University of California, Irvine, Irvine, California, USA.
- Parinaz Abiri: Department of Medicine; Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Shuoran Li: Chemical and Biomolecular Engineering Department.
- Chih-Chiang Chang: Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Kyung In Baek: Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Yilei Li: Electrical Engineering Department.
- Juhyun Lee: Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Tatiana Segura: Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA; Chemical and Biomolecular Engineering Department.
- Alexander Bui: Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA; Medical Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Peng Fei: School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, China.
- Tzung K. Hsiai: Department of Medicine; Department of Bioengineering, David Geffen School of Medicine, UCLA, Los Angeles, California, USA; Medical Engineering, California Institute of Technology, Pasadena, California, USA.