1. Zhang Y, He G, Ma L, Liu X, Hjorth JJJ, Kozlov A, He Y, Zhang S, Kotaleski JH, Tian Y, Grillner S, Du K, Huang T. A GPU-based computational framework that bridges neuron simulation and artificial intelligence. Nat Commun 2023; 14:5798. PMID: 37723170; PMCID: PMC10507119; DOI: 10.1038/s41467-023-41553-7.
Abstract
Biophysically detailed multi-compartment models are powerful tools for exploring the computational principles of the brain, and they also serve as a theoretical framework for generating algorithms for artificial intelligence (AI) systems. However, their high computational cost severely limits applications in both neuroscience and AI. The major bottleneck in simulating detailed compartmental models is solving the large systems of linear equations that arise at each time step. Here, we present a novel Dendritic Hierarchical Scheduling (DHS) method that markedly accelerates this process. We prove theoretically that the DHS implementation is computationally optimal and accurate. This GPU-based method runs 2-3 orders of magnitude faster than the classic serial Hines method on a conventional CPU platform. We build the DeepDendrite framework, which integrates the DHS method with the GPU computing engine of the NEURON simulator, and demonstrate applications of DeepDendrite in neuroscience tasks. We investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. Furthermore, we briefly discuss the potential of DeepDendrite for AI, specifically its ability to enable efficient training of biophysically detailed models on typical image classification tasks.
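For context on the bottleneck this abstract describes: implicit discretization of the cable equation on a branched morphology yields a quasi-tridiagonal system that the serial Hines method solves exactly in O(n) with one upward and one downward tree sweep; DHS parallelizes precisely this step. A minimal sketch of the serial baseline (function name, coefficients, and the toy tree are illustrative, not taken from the paper):

```python
import numpy as np

def hines_solve(parent, a, b, d, rhs):
    """Solve the quasi-tridiagonal system of a compartmental tree.

    Compartments are numbered so that parent[i] < i (root is 0).
    Row i couples to its parent with coefficient a[i]; the parent's row
    couples back to i with coefficient b[i]; d holds the diagonal.
    """
    d, rhs = d.copy(), rhs.copy()
    n = len(d)
    # Upward sweep: eliminate each compartment into its parent's row.
    for i in range(n - 1, 0, -1):
        p = parent[i]
        f = b[i] / d[i]
        d[p] -= f * a[i]
        rhs[p] -= f * rhs[i]
    # Downward sweep: back-substitute from the root to the leaves.
    v = np.empty(n)
    v[0] = rhs[0] / d[0]
    for i in range(1, n):
        v[i] = (rhs[i] - a[i] * v[parent[i]]) / d[i]
    return v

# Toy 5-compartment tree: 0 -> {1, 2}, 1 -> {3, 4}.
parent = np.array([-1, 0, 0, 1, 1])
a = np.array([0.0, -1.0, -0.5, -1.0, -0.2])   # child-to-parent coupling
b = np.array([0.0, -1.0, -0.5, -1.0, -0.2])   # parent-to-child coupling
d = np.array([4.0, 4.0, 3.0, 2.0, 2.0])       # diagonally dominant diagonal
rhs = np.array([1.0, 0.0, 2.0, 1.0, 0.5])

v = hines_solve(parent, a, b, d, rhs)

# Cross-check against a dense solver.
M = np.diag(d)
for i in range(1, 5):
    M[i, parent[i]] = a[i]
    M[parent[i], i] = b[i]
print(np.allclose(v, np.linalg.solve(M, rhs)))  # True
```

The upward sweep processes children before parents, which is what makes the method serial; DHS-style approaches gain parallelism by scheduling independent branches concurrently.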
Affiliation(s)
- Yichen Zhang: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China
- Gan He: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China
- Lei Ma: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China; Beijing Academy of Artificial Intelligence (BAAI), Beijing 100084, China
- Xiaofei Liu: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China; School of Information Science and Engineering, Yunnan University, Kunming 650500, China
- J J Johannes Hjorth: Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm SE-10044, Sweden
- Alexander Kozlov: Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm SE-10044, Sweden; Department of Neuroscience, Karolinska Institute, Stockholm SE-17165, Sweden
- Yutao He: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China
- Shenjian Zhang: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China
- Jeanette Hellgren Kotaleski: Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm SE-10044, Sweden; Department of Neuroscience, Karolinska Institute, Stockholm SE-17165, Sweden
- Yonghong Tian: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China; School of Electrical and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen 518055, China
- Sten Grillner: Department of Neuroscience, Karolinska Institute, Stockholm SE-17165, Sweden
- Kai Du: Institute for Artificial Intelligence, Peking University, Beijing 100871, China
- Tiejun Huang: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing 100871, China; Beijing Academy of Artificial Intelligence (BAAI), Beijing 100084, China; Institute for Artificial Intelligence, Peking University, Beijing 100871, China
2. Zhang Y, Du K, Huang T. Heuristic Tree-Partition-Based Parallel Method for Biophysically Detailed Neuron Simulation. Neural Comput 2023; 35:627-644. PMID: 36746142; DOI: 10.1162/neco_a_01565.
Abstract
Biophysically detailed neuron simulation is a powerful tool for exploring the mechanisms behind biological experiments and for bridging the gap between scales in neuroscience research. However, the extremely high computational complexity of detailed simulation restricts the modeling and exploration of detailed network models, and the bottleneck is solving the underlying system of linear equations. To accelerate detailed simulation, we propose a heuristic tree-partition-based parallel method (HTP) that parallelizes the computation of the Hines algorithm, the kernel for solving these linear equations, and leverages the strong parallel capability of the graphics processing unit (GPU) for further speedup. We formulate the search for a fine-grained parallel schedule as a tree-partition problem and present a heuristic partition algorithm that yields an effective partition for parallelizing the equation-solving step. With further optimization on GPU, our HTP method achieves a 2.2- to 8.5-fold speedup over the state-of-the-art GPU method and a 36- to 660-fold speedup over the serial Hines algorithm.
Affiliation(s)
- Yichen Zhang: School of Computer Science, Peking University, Beijing 100871, China
- Kai Du: School of Computer Science and Institute for Artificial Intelligence, Peking University, Beijing 100871, China
- Tiejun Huang: School of Computer Science and Institute for Artificial Intelligence, Peking University, Beijing 100871, China
3. Ladd A, Kim KG, Balewski J, Bouchard K, Ben-Shalom R. Scaling and Benchmarking an Evolutionary Algorithm for Constructing Biophysical Neuronal Models. Front Neuroinform 2022; 16:882552. PMID: 35784184; PMCID: PMC9248031; DOI: 10.3389/fninf.2022.882552.
Abstract
Single-neuron models are fundamental for computational modeling of the brain's neuronal networks and for understanding how ion channel dynamics mediate neural function. A challenge in defining such models is determining biophysically realistic channel distributions. Here, we present an efficient, highly parallel evolutionary algorithm for developing such models, named NeuroGPU-EA. NeuroGPU-EA uses CPUs and GPUs concurrently to simulate and evaluate neuron membrane potentials with respect to multiple stimuli. We demonstrate a logarithmic cost for scaling the number of stimuli used in the fitting procedure. NeuroGPU-EA outperforms the typically used CPU-based evolutionary algorithm by a factor of 10 on a series of scaling benchmarks. We report observed performance bottlenecks and propose mitigation strategies. Finally, we discuss the potential of this method for efficient simulation and evaluation of electrophysiological waveforms.
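The core loop of such an evolutionary fit can be sketched with a cheap stand-in model in place of a real neuron simulation. Everything below (the damped-oscillation "model", population size, mutation scale) is illustrative rather than NeuroGPU-EA's actual configuration; the point is the batched fitness evaluation, which is the step a GPU accelerates:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, t):
    # Stand-in for a neuron simulation: a damped oscillation whose
    # amplitude and decay rate play the role of channel parameters.
    amp, decay = params
    return amp * np.exp(-decay * t) * np.sin(t)

t = np.linspace(0.0, 10.0, 200)
target = model(np.array([2.0, 0.3]), t)       # the "experimental" trace

def fitness(pop):
    # Score every candidate at once; this batched evaluation is what
    # gets offloaded to the GPU in practice.
    traces = np.stack([model(p, t) for p in pop])
    return ((traces - target) ** 2).sum(axis=1)

pop = rng.uniform([0.1, 0.01], [5.0, 1.0], size=(64, 2))
for _ in range(100):
    scores = fitness(pop)
    elite = pop[np.argsort(scores)[:16]]       # keep the best quarter
    # Resample survivors with Gaussian mutation around each parent.
    pop = elite[rng.integers(0, 16, 64)] + rng.normal(0.0, 0.05, (64, 2))

best = pop[np.argmin(fitness(pop))]
print(best)   # should land near the true parameters [2.0, 0.3]
```

Because every candidate is evaluated against the same stimuli, adding stimuli widens the batch rather than lengthening the loop, which is consistent with the favorable stimulus-scaling the abstract reports.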
Affiliation(s)
- Alexander Ladd: Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
- Kyung Geun Kim: Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
- Jan Balewski: NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
- Kristofer Bouchard: Helen Wills Neuroscience Institute & Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States; Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
- Roy Ben-Shalom: Neurology Department, MIND Institute, University of California, Davis, Sacramento, CA, United States
4. Ben-Shalom R, Ladd A, Artherya NS, Cross C, Kim KG, Sanghevi H, Korngreen A, Bouchard KE, Bender KJ. NeuroGPU: Accelerating multi-compartment, biophysically detailed neuron simulations on GPUs. J Neurosci Methods 2022; 366:109400. PMID: 34728257; PMCID: PMC9887806; DOI: 10.1016/j.jneumeth.2021.109400.
Abstract
BACKGROUND: The membrane potential of individual neurons depends on a large number of interacting biophysical processes operating on spatial and temporal scales spanning several orders of magnitude. The multi-scale nature of these processes dictates that accurate prediction of membrane potentials in specific neurons requires detailed simulations. Unfortunately, constraining parameters within biologically detailed neuron models can be difficult, leading to poor model fits. This obstacle can be partially overcome by numerical optimization or detailed exploration of parameter space. However, these processes, which currently rely on central processing unit (CPU) computation, often incur orders-of-magnitude increases in computing time for marginal improvements in model behavior. As a result, model quality is often compromised to accommodate compute resources. NEW METHOD: Here, we present a simulation environment, NeuroGPU, that takes advantage of the inherently parallel structure of the graphics processing unit (GPU) to accelerate neuronal simulation. RESULTS & COMPARISON WITH EXISTING METHODS: NeuroGPU can simulate most biologically detailed models 10-200 times faster than NEURON running on a single core and 5 times faster than existing GPU simulators (CoreNEURON). NeuroGPU is designed for model parameter tuning and performs best when the GPU is fully utilized by running multiple (>100) instances of the same model with different parameters. When using multiple GPUs, NeuroGPU can reach an 800-fold speedup over single-core simulations, especially when simulating the same model morphology with different parameters. We demonstrate the power of NeuroGPU through large-scale parameter exploration to reveal the response landscape of a neuron. Finally, we accelerate numerical optimization of biophysically detailed neuron models to achieve highly accurate fitting of models to simulated and experimental data.
CONCLUSIONS: Thus, NeuroGPU is the fastest available platform for rapid simulation of multi-compartment, biophysically detailed neuron models on computing systems commonly accessible to scientists.
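Why running many instances of the same model helps can be seen even without a GPU: if each parameter set is a column of an array, every integration step becomes one lockstep vector update. A toy NumPy sketch with a single passive compartment (parameter values are illustrative):

```python
import numpy as np

# Batch-simulate one passive compartment under many parameter sets:
#   dV/dt = (-(V - E_L) * g_L + I_inj) / C
# Each entry of the state vector is one model instance, so every
# update applies to all instances at once (the GPU-friendly layout).
g_L = np.linspace(0.05, 0.5, 200)       # 200 leak conductances (mS/cm^2)
C, E_L, I_inj = 1.0, -70.0, 1.0         # shared capacitance, reversal, drive
dt, steps = 0.1, 2000                   # 200 ms of simulated time

V = np.full_like(g_L, E_L)
for _ in range(steps):
    V += dt * (-(V - E_L) * g_L + I_inj) / C

# Analytic steady state is E_L + I_inj / g_L for each instance.
print(np.allclose(V, E_L + I_inj / g_L, atol=1e-2))  # True
```

With 200 instances the per-step work is one fused vector operation instead of 200 scalar loops; on a GPU the same idea keeps thousands of threads busy, which is why utilization (and speedup) grows with the number of concurrent model instances.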
Affiliation(s)
- Roy Ben-Shalom: Weill Institute for Neurosciences, Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Department of Neurology, University of California, San Francisco, San Francisco, CA, United States; MIND Institute, University of California, Davis, CA, United States; Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA, United States
- Alexander Ladd: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, United States
- Nikhil S. Artherya: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, United States
- Christopher Cross: Weill Institute for Neurosciences, Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA, United States
- Kyung Geun Kim: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, United States
- Hersh Sanghevi: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, United States
- Alon Korngreen: The Leslie and Susan Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel; The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan, Israel
- Kristofer E. Bouchard: Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA, United States; Helen Wills Neuroscience Institute & Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States; Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, United States
- Kevin J. Bender: Weill Institute for Neurosciences, Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Department of Neurology, University of California, San Francisco, San Francisco, CA, United States
5. Iyengar RS, Pithapuram MV, Singh AK, Raghavan M. Curated Model Development Using NEUROiD: A Web-Based NEUROmotor Integration and Design Platform. Front Neuroinform 2019; 13:56. PMID: 31440153; PMCID: PMC6693358; DOI: 10.3389/fninf.2019.00056.
Abstract
Decades of research on neuromotor circuits and systems have provided valuable information on the neuronal control of movement. Computational models of several elements of the neuromotor system have been developed at various scales, from sub-cellular to system. While many small models abound, their structured integration is the key to building larger, more biologically realistic models that can predict the behavior of the system in different scenarios. This effort calls for integrating elements across neuroscience and musculoskeletal biomechanics, and for developing methods and tools for structured integration that yield larger in silico models exhibiting a desired set of system responses. We take a small step in this direction with the NEUROmotor Integration and Design (NEUROiD) platform. NEUROiD helps integrate results from motor systems anatomy, physiology, and biomechanics into an integrated neuromotor system model. Simulation and visualization of the model across multiple scales is supported. Standard electrophysiological operations such as slicing, current injection, and recording of membrane potential and local field potential are part of NEUROiD. The platform allows traceability of model parameters to the primary literature. We illustrate the power and utility of NEUROiD by building a simple ankle model and its controlling neural circuitry from a curated set of published components. NEUROiD allows researchers to run simulations on remote high-performance computers while controlling the model through a web browser.
Affiliation(s)
- Raghu Sesha Iyengar: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
- Madhav Vinodh Pithapuram: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
- Avinash Kumar Singh: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
- Mohan Raghavan: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
6. Toward Whole-Body Connectomics. J Neurosci 2017; 36:11375-11383. PMID: 27911739; DOI: 10.1523/jneurosci.2930-16.2016.
Abstract
Recent advances in neurotechnologies have revolutionized our knowledge of brain structure and function. Governments and private organizations worldwide have initiated several large-scale brain connectome projects to further understand how the brain works at the systems level. Most recent projects focus only on brain neurons, with the exception of an early effort to reconstruct the 302 neurons that comprise the whole body of the small worm Caenorhabditis elegans. However, to fully elucidate the neural circuitry of complex behavior, it is crucial to understand how the brain interacts with the whole body, which can be achieved only by mapping the whole-body connectome. In this article, we discuss the current state of connectomics research, focusing on novel optical approaches and related imaging technologies. We also discuss the challenges encountered by scientists who endeavor to map the whole-body connectomes of large animals.
7. McDougal RA, Bulanova AS, Lytton WW. Reproducibility in Computational Neuroscience Models and Simulations. IEEE Trans Biomed Eng 2016; 63:2021-2035. PMID: 27046845; PMCID: PMC5016202; DOI: 10.1109/tbme.2016.2539602.
Abstract
OBJECTIVE: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles to provide the sharing and transparency that reproducibility requires. METHODS: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, thorough commenting and documentation, and code modularity. RESULTS: Building on these standard practices, model-sharing sites and tools have been developed that fit into several categories: 1) standardized neural simulators; 2) shared computational resources; 3) declarative model descriptors, ontologies, and standardized annotations; and 4) model-sharing repositories and sharing standards. CONCLUSION: A number of complementary innovations have been proposed to enhance sharing, transparency, and reproducibility. Individual users can be encouraged to use version control, commenting, documentation, and modularity when developing models. The community can help by requiring model sharing as a condition of publication and funding. SIGNIFICANCE: Model management will become increasingly important as multiscale models become larger, more detailed, and correspondingly more difficult for any single investigator or laboratory to manage. Additional big-data management complexity will come as models become more useful in interpreting experiments, increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiments.
8. Almog M, Korngreen A. Is realistic neuronal modeling realistic? J Neurophysiol 2016; 116:2180-2209. PMID: 27535372; DOI: 10.1152/jn.00360.2016.
Abstract
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, the 21st century has brought a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Researchers have therefore asked what the computational abilities of single neurons are and have attempted to answer this question with realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models that function poorly once stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths toward generating realistic single-neuron models.
Affiliation(s)
- Mara Almog: The Leslie and Susan Gonda Interdisciplinary Brain Research Centre, Bar-Ilan University, Ramat Gan, Israel; The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat Gan, Israel
- Alon Korngreen: The Leslie and Susan Gonda Interdisciplinary Brain Research Centre, Bar-Ilan University, Ramat Gan, Israel; The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat Gan, Israel
9. Neurite, a finite difference large scale parallel program for the simulation of electrical signal propagation in neurites under mechanical loading. PLoS One 2015; 10:e0116532. PMID: 25680098; PMCID: PMC4334526; DOI: 10.1371/journal.pone.0116532.
Abstract
With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts cell damage is characterized only by purely mechanistic criteria: functions of quantities such as stress, strain, or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative, mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite simulates the resulting electrical signal propagation, and thus the corresponding functional deficits. Simulating the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that grow in complexity as the network of simulated cells grows. The solvers implemented in Neurite, both explicit and implicit, were therefore parallelized using graphics processing units to reduce the simulation costs of large-scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the passive and active electrophysiological regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite's mechanical behavior within its surrounding medium links electrophysiology and mechanics.
This paper provides the details of the parallel implementation of Neurite, along with three application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large-scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted.
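The active-region dynamics mentioned above are the classic Hodgkin-Huxley equations. A minimal single-compartment forward-Euler sketch (standard squid-axon parameters, without Neurite's mechanical coupling) shows the kind of per-segment computation such a solver repeats at every time step:

```python
import numpy as np

# Classic Hodgkin-Huxley point model (squid-axon parameters): the
# "active region" dynamics that programs like Neurite couple to
# passive cable theory.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4            # mV

def rates(V):
    # Voltage-dependent opening/closing rates for gates m, h, n.
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, T = 0.01, 50.0                              # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32             # resting initial state
V_trace = []
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0    # current step at t = 5 ms
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C
    V_trace.append(V)

print(max(V_trace) > 0)  # True: the suprathreshold step elicits spikes
```

In a full neurite model this update runs in every compartment, with an additional axial-coupling term from cable theory linking neighboring compartments; that coupling term is what the implicit solver's linear system encodes.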
10. Eklund A, Dufort P, Villani M, Laconte S. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs. Front Neuroinform 2014; 8:24. PMID: 24672471; PMCID: PMC3953750; DOI: 10.3389/fninf.2014.00024.
Abstract
Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests, and Bayesian Markov chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, no existing fMRI software package leverages inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, and further dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm³ brain template in 4-6 s and run a second-level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The software is freely available under GNU GPL3 and can be downloaded from GitHub (https://github.com/wanderine/BROCCOLI/).
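A second-level permutation test of the kind mentioned above is embarrassingly parallel, which is why it maps well to GPUs. A toy NumPy sketch of a one-sample group test by sign-flipping with a max-statistic null (subject counts, voxel counts, and effect sizes are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy group-level data: 20 subjects x 1000 voxels, with a true
# activation added to the first 50 voxels.
n_sub, n_vox = 20, 1000
data = rng.normal(0.0, 1.0, size=(n_sub, n_vox))
data[:, :50] += 1.5

def t_stat(x):
    # One-sample t statistic per voxel.
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(len(x)))

observed = t_stat(data)

# Null distribution of the maximum t over voxels, built by randomly
# flipping each subject's sign; the max statistic controls the
# family-wise error rate.  Each permutation is independent work.
n_perm = 2000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
    max_null[i] = t_stat(signs * data).max()

threshold = np.quantile(max_null, 0.95)   # FWE-corrected at 0.05
detected = observed > threshold
print(detected[:50].sum(), detected[50:].sum())
```

With this toy effect size, most of the 50 true-effect voxels exceed the corrected threshold while essentially none of the null voxels do. Since the permutations share read-only data and never interact, a GPU can run thousands of them concurrently, which is the source of BROCCOLI's minute-scale runtime for 10,000 permutations.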
Affiliation(s)
- Anders Eklund: Virginia Tech Carilion Research Institute, Virginia Tech, Roanoke, VA, USA
- Paul Dufort: Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Mattias Villani: Division of Statistics, Department of Computer and Information Science, Linköping University, Linköping, Sweden
- Stephen Laconte: Virginia Tech Carilion Research Institute, Virginia Tech, Roanoke, VA, USA; School of Biomedical Engineering and Sciences, Virginia Tech-Wake Forest University, Blacksburg, VA, USA
11. Lampert A, Korngreen A. Markov Modeling of Ion Channels. Prog Mol Biol Transl Sci 2014; 123:1-21. DOI: 10.1016/b978-0-12-397897-4.00009-7.
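As context for this chapter's topic: a Markov channel model assigns discrete conformational states with voltage- or ligand-dependent transition rates, and the simplest case is a two-state closed/open channel. A minimal stochastic sketch (all rates are illustrative, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state (closed <-> open) Markov ion channel, simulated with a
# fixed time step small enough that rate * dt << 1.
k_open, k_close = 2.0, 6.0       # transition rates, per ms (illustrative)
dt, steps = 0.01, 200_000        # 2 seconds of simulated gating

state = 0                        # 0 = closed, 1 = open
open_steps = 0
for _ in range(steps):
    if state == 0 and rng.random() < k_open * dt:
        state = 1
    elif state == 1 and rng.random() < k_close * dt:
        state = 0
    open_steps += state

# For a two-state chain the steady-state open probability is
# k_open / (k_open + k_close) = 0.25 here.
p_open = open_steps / steps
print(p_open)
```

The simulated occupancy converges to the analytic steady-state open probability; with more states the same per-step logic generalizes to sampling from a transition-probability matrix, which is how full Markov gating schemes are simulated.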