1. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024. PMID: 38684893; DOI: 10.1038/s41593-024-01630-6.
Abstract
Reliable execution of precise behaviors requires that brain circuits be resilient to variations in neuronal dynamics. In adult songbirds with highly stereotyped songs, genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, triggered severe degradation of the song. The song fully recovered within two weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by these behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support functional restoration after large-scale disruption of networks that implement sequential dynamics. Together, these observations point to cellular and systems-level restorative mechanisms that ensure behavioral resilience.
2. Emergence of co-tuning in inhibitory neurons as a network phenomenon mediated by randomness, correlations, and homeostatic plasticity. Sci Adv 2024; 10:eadi4350. PMID: 38507489; DOI: 10.1126/sciadv.adi4350.
Abstract
Cortical excitatory neurons show clear tuning to stimulus features, but the tuning properties of inhibitory interneurons are less clear. Although inhibitory neurons have often been considered largely untuned, some studies show that parvalbumin-expressing (PV) interneurons do exhibit feature selectivity and participate in co-tuned subnetworks with pyramidal neurons. In this study, we first use mean-field theory to demonstrate that the combination of homeostatic plasticity governing the dynamics of PV-to-excitatory synapses, heterogeneity in the excitatory postsynaptic potentials that impinge on PV neurons, and shared correlated input from layer 4 produces functional and structural self-organization of PV subnetworks. Second, we show that structural and functional feature tuning of PV neurons emerges more clearly at the network level: population-level measures identify functional and structural co-tuning of PV neurons that is not evident in pairwise, individual-level measures. Finally, we show that such co-tuning can enhance network stability at the cost of reduced feature selectivity.
3. A complete biomechanical model of Hydra contractile behaviors, from neural drive to muscle to movement. Proc Natl Acad Sci U S A 2023; 120:e2210439120. PMID: 36897982; PMCID: PMC10089167; DOI: 10.1073/pnas.2210439120.
Abstract
How does neural activity drive muscles to produce behavior? The recent development of genetic lines in Hydra that allow complete calcium imaging of both neuronal and muscle activity, together with systematic machine learning quantification of behaviors, makes this small cnidarian an ideal model system for understanding and modeling the complete transformation from neural firing to body movement. To this end, we built a neuromechanical model of Hydra's fluid-filled hydrostatic skeleton that shows how neuronal drive activates distinct patterns of muscle activity and body-column biomechanics. The model is based on experimental measurements of neuronal and muscle activity and assumes gap-junctional coupling among muscle cells and calcium-dependent force generation by muscles. With these assumptions, it robustly reproduces a basic set of Hydra's behaviors and further explains puzzling experimental observations, including the dual-timescale kinetics of muscle activation and the engagement of ectodermal and endodermal muscles in different behaviors. This work delineates the spatiotemporal control space of Hydra movement and can serve as a template for future efforts to systematically decipher the neural basis of behavior.
4. Comparing rapid rule-learning strategies in humans and monkeys. bioRxiv 2023:2023.01.10.523416. PMID: 36711889; PMCID: PMC9882042; DOI: 10.1101/2023.01.10.523416.
Abstract
Inter-species comparisons are key to deriving an understanding of the behavioral and neural correlates of human cognition from animal models. We performed a detailed comparison of macaque monkey and human strategies on an analogue of the Wisconsin Card Sorting Test, a widely studied and applied multi-attribute measure of cognitive function in which performance requires inferring a changing rule from ambiguous feedback. We found that well-trained monkeys rapidly infer rules but are three times slower than humans. Model fits to their choices revealed hidden states akin to feature-based attention in both species, and decision processes resembling a win-stay, lose-shift strategy, with key differences. Monkeys and humans test multiple rule hypotheses over a series of rule-search trials and perform inference-like computations to exclude candidates. A categorization of learning stages based on attention sets revealed that perseveration, random exploration, and poor sensitivity to negative feedback explain the underperformance of monkeys.
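The win-stay, lose-shift strategy referenced above can be sketched as a minimal policy. This is a generic textbook version for illustration, not the fitted model from this study:

```python
import random

def wsls_choice(prev_choice, prev_reward, options, rng):
    """Win-stay, lose-shift: repeat a rewarded choice; after no reward,
    switch to a different option chosen at random."""
    if prev_choice is None:              # first trial: no history, explore
        return rng.choice(options)
    if prev_reward:                      # win -> stay
        return prev_choice
    alternatives = [o for o in options if o != prev_choice]
    return rng.choice(alternatives)      # lose -> shift
```

In the paper's task the interesting deviations are precisely where behavior departs from this pure policy (perseveration, random exploration), so the sketch serves as the baseline against which those effects are measured.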
5. Next-generation brain observatories. Neuron 2022; 110:3661-3666. PMID: 36240770; DOI: 10.1016/j.neuron.2022.09.033.
Abstract
We propose centralized brain observatories for large-scale recordings of neural activity in mice and non-human primates coupled with cloud-based data analysis and sharing. Such observatories will advance reproducible systems neuroscience and democratize access to the most advanced tools and data.
6. Dopamine neurons evaluate natural fluctuations in performance quality. Cell Rep 2022; 38:110574. PMID: 35354031; PMCID: PMC9013488; DOI: 10.1016/j.celrep.2022.110574.
Abstract
Many motor skills are learned by comparing ongoing behavior to internal performance benchmarks. Dopamine neurons encode performance error in behavioral paradigms where error is externally induced, but it remains unknown whether dopamine also signals the quality of natural performance fluctuations. Here, we record dopamine neurons in singing birds and examine how spontaneous dopamine spiking activity correlates with natural fluctuations in ongoing song. Antidromically identified basal-ganglia-projecting dopamine neurons correlate with recent, but not future, song variations, consistent with a role in evaluation rather than production. Furthermore, maximal dopamine spiking occurs at a single vocal target, consistent with either actively maintaining the existing song or shifting the song to a nearby form. These data show that spontaneous dopamine spiking can evaluate natural behavioral fluctuations unperturbed by experimental events such as cues or rewards.
In brief: Learning and producing skilled behavior requires an internal measure of performance. Duffy et al. examine the relationship of dopamine neurons to natural song in singing birds. Spontaneous dopamine activity correlates with song fluctuations in a manner consistent with evaluation of natural behavioral variations, independent of external perturbations, cues, or rewards.
7. Context-dependent representations of movement in Drosophila dopaminergic reinforcement pathways. Nat Neurosci 2021; 24:1555-1566. PMID: 34697455; PMCID: PMC8556349; DOI: 10.1038/s41593-021-00929-y.
Abstract
Dopamine plays a central role in motivating and modifying behavior, serving to invigorate current behavioral performance and guide future actions through learning. Here we examine how this single neuromodulator can contribute to such diverse forms of behavioral modulation. By recording from the dopaminergic reinforcement pathways of the Drosophila mushroom body during active odor navigation, we reveal how their ongoing motor-associated activity relates to goal-directed behavior. We found that dopaminergic neurons correlate with different behavioral variables depending on the specific navigational strategy of an animal, such that the activity of these neurons preferentially reflects the actions most relevant to odor pursuit. Furthermore, we show that these motor correlates are translated to ongoing dopamine release, and acutely perturbing dopaminergic signaling alters the strength of odor tracking. Context-dependent representations of movement and reinforcement cues are thus multiplexed within the mushroom body dopaminergic pathways, enabling them to coordinately influence both ongoing and future behavior.
8. Gender bias in academia: A lifetime problem that needs solutions. Neuron 2021; 109:2047-2074. PMID: 34237278; PMCID: PMC8553227; DOI: 10.1016/j.neuron.2021.06.002.
Abstract
Despite increased awareness of the lack of gender equity in academia and a growing number of initiatives to address issues of diversity, change is slow, and inequalities remain. A major source of inequity is gender bias, which has a substantial negative impact on the careers, work-life balance, and mental health of underrepresented groups in science. Here, we argue that gender bias is not a single problem but manifests as a collection of distinct issues that impact researchers' lives. We disentangle these facets and propose concrete solutions that can be adopted by individuals, academic institutions, and society.
9. Teaching Computation in Neuroscience: Notes on the 2019 Society for Neuroscience Professional Development Workshop on Teaching. J Undergrad Neurosci Educ 2021; 19:A185-A191. PMID: 34552436; PMCID: PMC8437361.
Abstract
The 2019 Society for Neuroscience Professional Development Workshop on Teaching reviewed current tools, approaches, and examples for teaching computation in neuroscience. Robert Kass described the statistical foundations that students need to properly analyze data. Pascal Wallisch compared MATLAB and Python as programming languages for teaching students. Adrienne Fairhall discussed computational methods, training opportunities, and curricular considerations. Walt Babiec provided a view from the trenches on practical aspects of teaching computational neuroscience. Mathew Abrams concluded the session with an overview of resources for teaching and learning computational modeling in neuroscience.
10.
Abstract
Large scientific projects in genomics and astronomy are influential not because they answer any single question but because they enable investigation of continuously arising new questions from the same data-rich sources. Advances in automated mapping of the brain's synaptic connections (connectomics) suggest that the complicated circuits underlying brain function are ripe for analysis. We discuss benefits of mapping a mouse brain at the level of synapses.
12. Capturing Multiple Timescales of Adaptation to Second-Order Statistics With Generalized Linear Models: Gain Scaling and Fractional Differentiation. Front Syst Neurosci 2020; 14:60. PMID: 33013331; PMCID: PMC7509073; DOI: 10.3389/fnsys.2020.00060.
Abstract
Single neurons can dynamically change the gain of their spiking responses to take into account shifts in stimulus variance. Moreover, gain adaptation can occur across multiple timescales. Here, we examine the ability of a simple statistical model of spike trains, the generalized linear model (GLM), to account for these adaptive effects. The GLM describes spiking as a Poisson process whose rate depends on a linear combination of the stimulus and recent spike history. The GLM successfully replicates gain scaling observed in Hodgkin-Huxley simulations of cortical neurons, which occurs when the ratio of spike-generating potassium and sodium conductances approaches one. Gain scaling in the GLM depends on the length and shape of the spike-history filter. Additionally, the GLM captures adaptation that occurs over multiple timescales as a fractional derivative of the stimulus envelope, as has been observed in neurons with long-timescale afterhyperpolarization conductances. Fractional differentiation in GLMs requires spike-history filters that span several seconds. Together, these results demonstrate that the GLM provides a tractable statistical approach for examining single-neuron adaptive computations in response to changes in stimulus variance.
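The model structure described here, a Poisson process whose rate is an exponentiated linear combination of stimulus and recent spike history, can be sketched in a few lines. This is a generic illustration with made-up filter values, not the paper's fitted model:

```python
import math
import random

def glm_spike_train(stimulus, k, h, dt=0.001, seed=0):
    """Simulate a GLM neuron in discrete time bins of width dt (seconds):
    the conditional intensity is exp(stimulus filter + spike-history filter)."""
    rng = random.Random(seed)
    spikes = []
    for t in range(len(stimulus)):
        # linear stimulus drive (k[0] weights the current sample)
        drive = sum(k[i] * stimulus[t - i] for i in range(len(k)) if t - i >= 0)
        # spike-history feedback; negative weights give refractoriness/adaptation
        drive += sum(h[j] * spikes[t - 1 - j] for j in range(len(h)) if t - 1 - j >= 0)
        rate = math.exp(drive)                # conditional intensity (spikes/s)
        p_spike = 1.0 - math.exp(-rate * dt)  # P(at least one spike in this bin)
        spikes.append(1 if rng.random() < p_spike else 0)
    return spikes
```

In this sketch the gain-scaling and fractional-differentiation effects the paper studies would come from the shape and duration of the hypothetical history filter `h`; the paper's point is that capturing multi-second adaptation requires `h` to extend over several seconds.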
13.
Abstract
Designing brain-computer interfaces (BCIs) that can be used in conjunction with ongoing motor behavior requires an understanding of how neural activity co-opted for brain control interacts with existing neural circuits. For example, BCIs may be used to regain lost motor function after stroke, which requires that neural activity controlling unaffected limbs be dissociated from activity controlling the BCI. In this study, we investigated how primary motor cortex accomplishes simultaneous BCI control and motor control in a task that explicitly required both activities to be driven from the same brain region (i.e., a dual-control task). Single-unit activity was recorded from intracortical, multi-electrode arrays while a non-human primate performed this dual-control task. Compared to activity observed during naturalistic motor control, we found that both units used to drive the BCI directly (control units) and units that did not directly control the BCI (non-control units) significantly changed their tuning to wrist torque. Using a measure of effective connectivity, we observed that control units decrease their connectivity. Through an analysis of variance, we found that the intrinsic variability of the control units has a significant effect on task proficiency. When this variance is accounted for, motor cortical activity is flexible enough to perform novel BCI tasks that require active decoupling of natural associations to wrist motion. This study provides insight into the neural activity that enables a dual-control brain-computer interface.
14. The role of adaptation in neural coding. Curr Opin Neurobiol 2019; 58:135-140. DOI: 10.1016/j.conb.2019.09.013.
15. Visual-Olfactory Integration in the Human Disease Vector Mosquito Aedes aegypti. Curr Biol 2019; 29:2509-2516.e5. PMID: 31327719; DOI: 10.1016/j.cub.2019.06.043.
Abstract
Mosquitoes rely on the integration of multiple sensory cues, including olfactory, visual, and thermal stimuli, to detect, identify, and locate their hosts [1-4]. Although we increasingly know more about the role of chemosensory behaviors in mediating mosquito-host interactions [1], the role of visual cues is comparatively less studied [3], and how the combination of olfactory and visual information is integrated in the mosquito brain remains unknown. In the present study, we used a tethered-flight light-emitting diode (LED) arena, which allowed for quantitative control over the stimuli, and a control theoretic model to show that CO2 modulates mosquito steering responses toward vertical bars. To gain insight into the neural basis of this olfactory and visual coupling, we conducted two-photon microscopy experiments in a new GCaMP6s-expressing mosquito line. Imaging revealed that neuropil regions within the lobula exhibited strong responses to objects, such as a bar, but showed little response to large-field motion. Approximately 20% of the lobula neuropil we imaged was modulated when CO2 preceded the presentation of a moving bar. By contrast, responses in the antennal (olfactory) lobe were not modulated by visual stimuli presented before or after an olfactory stimulus. Together, our results suggest that asymmetric coupling between these sensory systems provides enhanced steering responses to discrete objects.
16.
Abstract
Adaptation is a common principle that recurs throughout the nervous system at all stages of processing. This principle manifests in a variety of phenomena, from spike frequency adaptation, to apparent changes in receptive fields with changes in stimulus statistics, to enhanced responses to unexpected stimuli. The ubiquity of adaptation leads naturally to the question: What purpose do these different types of adaptation serve? A diverse set of theories, often highly overlapping, has been proposed to explain the functional role of adaptive phenomena. In this review, we discuss several of these theoretical frameworks, highlighting relationships among them and clarifying distinctions. We summarize observations of the varied manifestations of adaptation, particularly as they relate to these theoretical frameworks, focusing throughout on the visual system and making connections to other sensory systems.
18. Fast and flexible sequence induction in spiking neural networks via rapid excitability changes. eLife 2019; 8:e44324. PMID: 31081753; PMCID: PMC6538377; DOI: 10.7554/eLife.44324.
Abstract
Cognitive flexibility likely depends on modulation of the dynamics underlying how biological neural networks process information. While dynamics can be reshaped by gradually modifying connectivity, less is known about mechanisms operating on faster timescales. A compelling entry point to this problem is the observation that exploratory behaviors can rapidly cause selective hippocampal sequences to 'replay' during rest. Using a spiking network model, we asked whether simplified replay could arise from three biological components: fixed recurrent connectivity; stochastic 'gating' inputs; and rapid scaling of gating inputs via long-term potentiation of intrinsic excitability (LTP-IE). Indeed, these components enabled both forward and reverse replay of recent sensorimotor-evoked sequences, despite unchanged recurrent weights. LTP-IE 'tags' specific neurons with increased spiking probability under gating input, and ordering is reconstructed from the recurrent connectivity. We further show how LTP-IE can implement temporary stimulus-response mappings. This elucidates a novel combination of mechanisms that may support rapid cognitive flexibility.
19. Computational Neuroscience: Mathematical and Statistical Perspectives. Annu Rev Stat Appl 2018; 5:183-214. PMID: 30976604; PMCID: PMC6454918; DOI: 10.1146/annurev-statistics-041715-033733.
Abstract
Mathematical and statistical models have played important roles in neuroscience, especially by describing the electrical activity of neurons recorded individually, or collectively across large networks. As the field moves forward rapidly, new challenges are emerging. For maximal effectiveness, those working to advance computational neuroscience will need to appreciate and exploit the complementary strengths of mechanistic theory and the statistical paradigm.
20.
Abstract
The nervous system extracts information from its environment and distributes and processes that information to inform and drive behaviour. In this task, the nervous system faces a type of data analysis problem, for, while a visual scene may be overflowing with information, reaching for the television remote before us requires extraction of only a relatively small fraction of that information. We could care about an almost infinite number of visual stimulus patterns, but we don't: we distinguish two actors' faces with ease but two different images of television static with significant difficulty. Equally, we could respond with an almost infinite number of movements, but we don't: the motions executed to pick up the remote are highly stereotyped and related to many other grasping motions. If we were to look at what was going on inside the brain during this task, we would find populations of neurons whose electrical activity was highly structured and correlated with the images on the screen and the action of localizing and picking up the remote.
21.
Abstract
As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
22. Multiplexed Spike Coding and Adaptation in the Thalamus. Cell Rep 2017; 19:1130-1140. PMID: 28494863; PMCID: PMC5554799; DOI: 10.1016/j.celrep.2017.04.050.
Abstract
High-frequency "burst" clusters of spikes are a generic output pattern of many neurons. While bursting is a ubiquitous computational feature of different nervous systems across animal species, the encoding of synaptic inputs by bursts is not well understood. We find that bursting neurons in the rodent thalamus employ "multiplexing" to differentially encode low- and high-frequency stimulus features associated with either T-type calcium "low-threshold" or fast sodium spiking events, respectively, and these events adapt differently. Thus, thalamic bursts encode disparate information in three channels: (1) burst size, (2) burst onset time, and (3) precise spike timing within bursts. Strikingly, this latter "intraburst" encoding channel shows millisecond-level feature selectivity and adapts across statistical contexts to maintain stable information encoded per spike. Consequently, calcium events both encode low-frequency stimuli and, in parallel, gate a transient window for high-frequency, adaptive stimulus encoding by sodium spike timing, allowing bursts to efficiently convey fine-scale temporal information.
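The three burst channels described here (burst size, burst onset time, and spike timing within bursts) can all be recovered from a spike train with a simple inter-spike-interval criterion. A hypothetical parsing sketch, where the 4 ms threshold is illustrative rather than the study's actual criterion:

```python
def parse_bursts(spike_times_ms, max_isi_ms=4.0):
    """Group spikes into bursts: successive spikes separated by at most
    max_isi_ms join the current burst; larger gaps start a new one.
    Returns a list of (onset_time, spike_times) per burst, from which
    burst size, onset time, and intra-burst timing can each be read out."""
    if not spike_times_ms:
        return []
    bursts = [[spike_times_ms[0]]]
    for prev, cur in zip(spike_times_ms, spike_times_ms[1:]):
        if cur - prev <= max_isi_ms:
            bursts[-1].append(cur)   # continue the current burst
        else:
            bursts.append([cur])     # gap too long: new burst (or lone spike)
    return [(b[0], b) for b in bursts]
```

For example, `parse_bursts([0, 2, 3, 50, 51, 200])` yields three events with sizes 3, 2, and 1, onsets 0, 50, and 200 ms, and the intra-burst spike times preserved for fine-timescale analysis.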
23. Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface. PLoS Comput Biol 2017; 13:e1005343. PMID: 28151957; PMCID: PMC5313237; DOI: 10.1371/journal.pcbi.1005343.
Abstract
Experiments show that spike-triggered stimulation performed with bidirectional brain-computer interfaces (BBCIs) can artificially strengthen connections between separate neural sites in motor cortex (MC). When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between the sites eventually strengthen. Effective spike-stimulus delays are also consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules, suggesting that STDP is key in driving these changes. However, the impact of STDP at the level of circuits, and the mechanisms governing its modification with neural implants, remain poorly understood. The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both the neural and synaptic activity statistics relevant to BBCI conditioning protocols. Our model successfully reproduces key experimental results, both established and new, and offers mechanistic insights into spike-triggered conditioning. Using analytical calculations and numerical simulations, we derive optimal operational regimes for BBCIs and formulate predictions concerning the efficacy of spike-triggered conditioning in different regimes of cortical activity.
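The STDP rules referenced here are commonly modeled as exponentially decaying weight updates in the pre-post spike-time difference. A textbook sketch with illustrative parameters (not the values fitted in this paper):

```python
import math

def stdp_update(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP rule.
    dt_ms = t_post - t_pre: positive (pre before post) potentiates,
    negative (post before pre) depresses, with exponential decay in |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)    # potentiation
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)  # depression
    return 0.0
```

Under a rule of this shape, a fixed spike-stimulus delay in a BBCI conditioning protocol repeatedly samples one side of the curve, which is the intuition behind delay-dependent strengthening described in the abstract.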
24. Let Music Sound while She Doth Make Her Choice. Neuron 2015; 87:1126-1128. PMID: 26402597; DOI: 10.1016/j.neuron.2015.09.014.
Abstract
To attract females during courtship, Drosophila melanogaster males sing songs with motifs of varying temporal structure. In this issue of Neuron, Clemens et al. (2015) identify a song feature indicating male fitness and propose a neural mechanism for how it may be extracted from the auditory signal by female flies.
25. Dual dimensionality reduction reveals independent encoding of motor features in a muscle synergy for insect flight control. PLoS Comput Biol 2015; 11:e1004168. PMID: 25919482; PMCID: PMC4412410; DOI: 10.1371/journal.pcbi.1004168.
Abstract
What are the features of movement encoded by changing motor commands? Do motor commands encode movement independently, or can they be represented by a reduced set of signals (i.e., synergies)? Motor encoding poses a computational and practical challenge because many muscles typically drive movement, and simultaneous electrophysiology recordings of all motor commands are typically not available. Moreover, during a single locomotor period (a stride or wingstroke), the variation in movement may be high dimensional, even if only a few discrete signals activate the muscles. Here, we apply the method of partial least squares (PLS) to extract the encoded features of movement based on the cross-covariance of motor signals and movement. PLS simultaneously decomposes both datasets and identifies only the variation in movement that relates to the specific muscles of interest. We use this approach to explore how the main downstroke flight muscles of an insect, the hawkmoth Manduca sexta, encode torque during yaw turns. We simultaneously record muscle activity and turning torque in tethered flying moths experiencing wide-field visual stimuli. We ask whether this pair of muscles acts as a muscle synergy (a single linear combination of activity), consistent with their hypothesized function of producing a left-right power differential, or whether each muscle individually encodes variation in movement. We show that PLS feature analysis produces an efficient reduction of dimensionality in torque variation within a wingstroke. At first, the two muscles appear to behave as a synergy when we consider only their wingstroke-averaged torque. However, when we consider the PLS features, the muscles reveal independent encoding of torque. Using these features, we can predictably reconstruct the variation in torque corresponding to changes in muscle activation.
PLS-based feature analysis provides a general two-sided dimensionality reduction that reveals encoding in high-dimensional sensory or motor transformations. Understanding movement control is challenging because the brains of nearly all animals send motor command signals to many muscles, and these signals produce complex movements. In studying animal movement, one cannot always record all the motor commands an animal uses or know all the ways in which movement varies in response. A combined approach is necessary to find the relevant patterns: the changes in movement that correspond to changes in the recorded motor commands. Techniques exist to identify simple patterns in either the motor commands or the movements, but in this paper we develop an approach that identifies patterns in both simultaneously. We use this technique to understand how agile flying insects control aerial turns. The two main downstroke muscles of moths are thought to produce turns by creating a power difference between the left and right wings, so the moth's brain might only need to specify the difference in activation between the two muscles. We discover that the moth's brain actually has independent control over each muscle, and that this separate control increases the moth's ability to adjust turning within a single wingstroke. Our computational approach reveals sophisticated patterns of movement processing even in the small nervous systems of insects.
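The core PLS computation described in the abstract, decomposing the cross-covariance between motor signals and movement, amounts to extracting leading singular vectors of the cross-covariance matrix. A generic, minimal illustration on synthetic data (not the study's analysis pipeline, and with hypothetical signal names):

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def cross_covariance(X, Y):
    """C = Xc^T Yc with column means removed; C[i][j] couples motor
    signal i to movement feature j."""
    n = len(X)
    mx = [sum(c) / n for c in zip(*X)]
    my = [sum(c) / n for c in zip(*Y)]
    Xc = [[x - m for x, m in zip(row, mx)] for row in X]
    Yc = [[y - m for y, m in zip(row, my)] for row in Y]
    return [[sum(Xc[t][i] * Yc[t][j] for t in range(n))
             for j in range(len(my))] for i in range(len(mx))]

def top_singular_pair(C, iters=200):
    """Leading PLS pair: power iteration on C^T C yields the right
    singular vector v; the left vector is u = Cv / |Cv|."""
    CT = [list(r) for r in zip(*C)]
    v = [1.0] * len(C[0])
    for _ in range(iters):
        w = matvec(CT, matvec(C, v))
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    u = matvec(C, v)
    norm = math.sqrt(sum(x * x for x in u))
    return [x / norm for x in u], v

# synthetic example: two "muscles" and two "movement" channels sharing one latent drive
n = 100
s = [t - 49.5 for t in range(n)]            # shared latent signal
r = [(-1) ** t for t in range(n)]           # fast nuisance signal
X = [[s[t], r[t]] for t in range(n)]        # motor commands
Y = [[2.0 * s[t], r[t]] for t in range(n)]  # movement
u, v = top_singular_pair(cross_covariance(X, Y))
```

Because the shared latent drive dominates the cross-covariance, the leading pair `(u, v)` aligns with the first channel in both datasets; in the study's setting, such paired vectors are the "features" through which each muscle's independent contribution to torque is read out.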
Collapse
|
26
|
Intrinsic neuronal properties switch the mode of information transmission in networks. PLoS Comput Biol 2014; 10:e1003962. [PMID: 25474701 PMCID: PMC4256072 DOI: 10.1371/journal.pcbi.1003962] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2014] [Accepted: 10/02/2014] [Indexed: 12/03/2022] Open
Abstract
Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity.
This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. Differences in ion channel composition endow different neuronal types with distinct computational properties. Understanding how these biophysical differences affect network-level computation is an important frontier. We focus on a set of biophysical properties, experimentally observed in developing cortical neurons, that allow these neurons to efficiently encode their inputs despite time-varying changes in the statistical context. Large-scale propagating waves are autonomously generated by the developing brain even before the onset of sensory experience. Using multi-layered feedforward networks, we examine how changes in intrinsic properties can lead to changes in the network's ability to represent and transmit information on multiple timescales. We demonstrate that measured changes in the computational properties of immature single neurons enable the propagation of slow-varying wave-like inputs. In contrast, neurons with more mature properties are more sensitive to fast fluctuations, which modulate the slow-varying information. While slow events are transmitted with high fidelity in initial network layers, noise degrades transmission in downstream network layers. Our results show how short-term adaptation and modulation of the neurons' input-output firing curves by background synaptic noise determine the ability of neural networks to transmit information on multiple timescales.
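The layered-transmission setup can be illustrated with a toy rate model: a common slow input is passed through successive layers of noisy sigmoidal units, and fidelity is measured by correlating the final layer's population average with the input. This is a minimal sketch, not the conductance-based feedforward network studied in the paper; the gain, noise level, and layer count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(signal, n_layers, gain, noise_sd):
    """Pass a slow signal through layers of noisy sigmoidal rate units.

    Each layer's population-averaged output drives the next layer;
    `gain` stands in for the intrinsic excitability of the units.
    """
    x = signal
    for _ in range(n_layers):
        drive = x[None, :] + noise_sd * rng.standard_normal((100, x.size))
        rates = 1.0 / (1.0 + np.exp(-gain * drive))  # sigmoid f-I curve
        x = rates.mean(axis=0) - 0.5                 # centered population rate
    return x

t = np.linspace(0, 2 * np.pi, 200)
slow_wave = 0.5 * np.sin(t)
out = propagate(slow_wave, n_layers=5, gain=4.0, noise_sd=0.3)

# correlation with the input indicates how well the slow wave survives
fidelity = np.corrcoef(slow_wave, out)[0, 1]
```

Sweeping `gain` and `noise_sd` in a sketch like this reproduces the qualitative trade-off described above: higher gain sharpens responses to fast fluctuations while background noise erodes the slow, population-wide signal across layers.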
Collapse
|
27
|
Relationship between individual neuron and network spontaneous activity in developing mouse cortex. J Neurophysiol 2014; 112:3033-45. [PMID: 25185811 DOI: 10.1152/jn.00349.2014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Spontaneous synchronous activity (SSA) that propagates as electrical waves is found in numerous central nervous system structures and is critical for normal development, but the mechanisms of generation of such activity are not clear. In previous work, we showed that the ventrolateral piriform cortex is uniquely able to initiate SSA in contrast to the dorsal neocortex, which participates in, but does not initiate, SSA (Lischalk JW, Easton CR, Moody WJ. Dev Neurobiol 69: 407-414, 2009). In this study, we used Ca(2+) imaging of cultured embryonic day 18 to postnatal day 2 coronal slices (embryonic day 17 + 1-4 days in culture) of the mouse cortex to investigate the different activity patterns of individual neurons in these regions. In the piriform cortex where SSA is initiated, a higher proportion of neurons was active asynchronously between waves, and a larger number of groups of coactive cells was present compared with the dorsal cortex. When we applied GABA and glutamate synaptic antagonists, asynchronous activity and cellular clusters remained, while synchronous activity was eliminated, indicating that asynchronous activity is a result of cell-intrinsic properties that differ between these regions. To test the hypothesis that higher levels of cell-autonomous activity in the piriform cortex underlie its ability to initiate waves, we constructed a conductance-based network model in which three layers differed only in the proportion of neurons able to intrinsically generate bursting behavior. Simulations using this model demonstrated that a gradient of intrinsic excitability was sufficient to produce directionally propagating waves that replicated key experimental features, indicating that the higher level of cell-intrinsic activity in the piriform cortex may provide a substrate for SSA generation.
Collapse
|
28
|
Context-dependent coding in single neurons. J Comput Neurosci 2014; 37:459-80. [PMID: 24990803 DOI: 10.1007/s10827-014-0513-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2012] [Revised: 06/11/2014] [Accepted: 06/16/2014] [Indexed: 11/25/2022]
Abstract
The linear-nonlinear cascade model (LN model) has proven very useful in representing a neural system's encoding properties, but less successful in reproducing the firing patterns of individual neurons whose behavior is strongly dependent on prior firing history. While the cell's behavior can still usefully be considered as feature detection acting on a fluctuating input, some of the coding capacity of the cell is taken up by the increased firing rate due to a constant "driving" direct current (DC) stimulus. Furthermore, both the DC input and the post-spike refractory period generate regular firing, reducing the spike-timing entropy available for encoding time-varying fluctuations. In this paper, we address these issues, focusing on the example of motoneurons in which an afterhyperpolarization (AHP) current plays a dominant role regularizing firing behavior. We explore the accuracy and generalizability of several alternative models for single neurons under changes in DC and variance of the stimulus input. We use a motoneuron simulation to compare coding models in neurons with and without the AHP current. Finally, we quantify the tradeoff between instantaneously encoding information about fluctuations and about the DC.
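The LN cascade itself is compact: a linear filter applied to the stimulus, a static nonlinearity converting filter output to a firing rate, and Poisson spiking. The sketch below is a generic illustration of that cascade, not the motoneuron models compared in the paper; the filter shape, threshold, and gain are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# linear stage: a toy biphasic filter (illustrative shape only)
dt = 0.001                           # 1 ms time bins
t_filt = np.arange(0, 0.05, dt)
k = np.exp(-t_filt / 0.01) * np.sin(2 * np.pi * t_filt / 0.02)

stimulus = rng.standard_normal(5000)
# causal convolution: drive[t] depends on the recent stimulus history
drive = np.convolve(stimulus, k, mode="full")[: stimulus.size]

def nonlinearity(g, threshold=1.0, gain=50.0):
    """Static nonlinearity: softplus mapping filter output to rate (Hz)."""
    return gain * np.log1p(np.exp(g - threshold))

rate = nonlinearity(drive)
spikes = rng.poisson(rate * dt)      # Poisson spike counts per bin
```

History-dependent effects like the AHP current discussed above are exactly what this memoryless output stage fails to capture, which motivates the alternative models the paper evaluates.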
Collapse
|
29
|
Abstract
Nucleus laminaris (NL) neurons encode interaural time difference (ITD), the cue used to localize low-frequency sounds. A physiologically based model of NL input suggests that ITD information is contained in narrow frequency bands around harmonics of the sound frequency. This motivated a theory predicting that, for each tone frequency, there is an optimal time course for synaptic inputs to NL that will elicit the largest modulation of NL firing rate as a function of ITD. The theory also suggested that neurons in different tonotopic regions of NL require specialized tuning to take advantage of the input gradient. Tonotopic tuning in NL was investigated in brain slices by separating the nucleus into three regions based on its anatomical tonotopic map. Patch-clamp recordings in each region were used to measure both the synaptic and the intrinsic electrical properties. The data revealed a tonotopic gradient of synaptic time course that closely matched the theoretical predictions. We also found postsynaptic band-pass filtering. Analysis of the combined synaptic and postsynaptic filters revealed a frequency-dependent gradient of gain for the transformation of tone amplitude to NL firing rate modulation. Models constructed from the experimental data for each tonotopic region demonstrate that the tonotopic tuning measured in NL can improve ITD encoding across sound frequencies.
Collapse
|
30
|
Sensitivity of firing rate to input fluctuations depends on time scale separation between fast and slow variables in single neurons. J Comput Neurosci 2009; 27:277-90. [PMID: 19353260 DOI: 10.1007/s10827-009-0142-x] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2008] [Revised: 12/11/2008] [Accepted: 02/06/2009] [Indexed: 11/24/2022]
Abstract
Neuronal responses are often characterized by the firing rate as a function of the stimulus mean, or the f-I curve. We introduce a novel classification of neurons into Types A, B-, and B+ according to how f-I curves are modulated by input fluctuations. In Type A neurons, the f-I curves display little sensitivity to input fluctuations when the mean current is large. In contrast, Type B neurons display sensitivity to fluctuations throughout the entire range of input means. Type B- neurons do not fire repetitively for any constant input, whereas Type B+ neurons do. We show that Type B+ behavior results from a separation of time scales between a slow and fast variable. A voltage-dependent time constant for the recovery variable can facilitate sensitivity to input fluctuations. Type B+ firing rates can be approximated using a simple "energy barrier" model.
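The distinction between fluctuation-sensitive and fluctuation-insensitive f-I behavior can be demonstrated with a leaky integrate-and-fire simulation: below rheobase only input fluctuations can trigger spikes, while far above rheobase the rate is dominated by the mean. This is a generic LIF sketch with arbitrary parameters, not the classification analysis or the specific models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def lif_rate(mean, sigma, dt=1e-4, T=5.0, tau=0.02, v_th=1.0, v_reset=0.0):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by a
    mean current plus white noise (simple Euler integration)."""
    n = int(T / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    v, spikes = 0.0, 0
    for i in range(n):
        v += dt * (-v + mean) / tau + noise[i] / tau
        if v >= v_th:
            v = v_reset
            spikes += 1
    return spikes / T

# below rheobase (mean < v_th): spiking exists only with fluctuations
sub = (lif_rate(0.8, 0.0), lif_rate(0.8, 0.05))
# well above rheobase: the rate depends mainly on the mean current
supra = (lif_rate(2.0, 0.0), lif_rate(2.0, 0.05))
```

Comparing `sub` and `supra` shows the qualitative signature described above: fluctuation sensitivity in the subthreshold regime, near-insensitivity at large means.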
Collapse
|
31
|
Abstract
In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships relating the change of the gain with respect to both mean and variance with the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Many neurons are known to achieve a wide dynamic range by adaptively changing their computational input/output function according to the input statistics. These adaptive changes can be very rapid, and it has been suggested that a component of this adaptation could be purely input-driven: even a fixed neural system can show apparent adaptive behavior since inputs with different statistics interact with the nonlinearity of the system in different ways. 
In this paper, we show how a single neuron's intrinsic computational function can dictate such input-driven changes in its response to varying input statistics, establishing a relationship between two different characterizations of neural function: in terms of mean firing rate and in terms of generating precise spike timing. We then apply our results to two biophysically defined model neurons, which have significantly different response patterns to inputs with various statistics. Our model of intrinsic adaptation explains their behaviors well. Contrary to the picture that neurons carry out a stereotyped computation on their inputs, our results show that even in the simplest cases they have simple yet effective mechanisms by which they can adapt to their input. Adaptation to stimulus statistics, therefore, is built into the most basic single neuron computations.
Collapse
|
32
|
Two computational regimes of a single-compartment neuron separated by a planar boundary in conductance space. Neural Comput 2008; 20:1239-60. [PMID: 18194104 DOI: 10.1162/neco.2007.05-07-536] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent in vitro data show that neurons respond to input variance with varying sensitivities. Here we demonstrate that Hodgkin-Huxley (HH) neurons can operate in two computational regimes: one that is more sensitive to input variance (differentiating) and one that is less sensitive (integrating). A boundary plane in the 3D conductance space separates these two regimes. For a reduced HH model, this plane can be derived analytically from the V nullcline, thus suggesting a means of relating biophysical parameters to neural computation by analyzing the neuron's dynamical system.
Collapse
|
34
|
Abstract
White noise methods are a powerful tool for characterizing the computation performed by neural systems. These methods allow one to identify the feature or features that a neural system extracts from a complex input and to determine how these features are combined to drive the system's spiking response. These methods have also been applied to characterize the input-output relations of single neurons driven by synaptic inputs, simulated by direct current injection. To interpret the results of white noise analysis of single neurons, we would like to understand how the obtained feature space of a single neuron maps onto the biophysical properties of the membrane, in particular, the dynamics of ion channels. Here, through analysis of a simple dynamical model neuron, we draw explicit connections between the output of a white noise analysis and the underlying dynamical system. We find that under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system. Further, we show that under some conditions, the feature space is spanned by the spike-triggered average and its successive order time derivatives.
Collapse
|
35
|
Reinforcement learning with modulated spike timing dependent synaptic plasticity. J Neurophysiol 2007; 98:3648-65. [PMID: 17928565 DOI: 10.1152/jn.00364.2007] [Citation(s) in RCA: 100] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Spike timing-dependent synaptic plasticity (STDP) has emerged as the preferred framework linking patterns of pre- and postsynaptic activity to changes in synaptic strength. Although synaptic plasticity is widely believed to be a major component of learning, it is unclear how STDP itself could serve as a mechanism for general purpose learning. On the other hand, algorithms for reinforcement learning work on a wide variety of problems, but lack an experimentally established neural implementation. Here, we combine these paradigms in a novel model in which a modified version of STDP achieves reinforcement learning. We build this model in stages, identifying a minimal set of conditions needed to make it work. Using a performance-modulated modification of STDP in a two-layer feedforward network, we can train output neurons to generate arbitrarily selected spike trains or population responses. Furthermore, a given network can learn distinct responses to several different input patterns. We also describe in detail how this model might be implemented biologically. Thus our model offers a novel and biologically plausible implementation of reinforcement learning that is capable of training a neural population to produce a very wide range of possible mappings between synaptic input and spiking output.
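The central idea, gating an STDP-derived eligibility by a scalar performance signal, can be sketched compactly. The rule below is a generic reward-modulated STDP illustration, not the paper's exact model; the window amplitudes, time constant, and learning rate are hypothetical.

```python
import numpy as np

def stdp_window(dt_spike, a_plus=1.0, a_minus=1.0, tau=0.02):
    """Classic exponential STDP window: potentiation when pre leads post
    (dt_spike = t_post - t_pre > 0), depression otherwise."""
    if dt_spike > 0:
        return a_plus * np.exp(-dt_spike / tau)
    return -a_minus * np.exp(dt_spike / tau)

def reward_modulated_update(w, pre_times, post_times, reward, lr=0.01):
    """Accumulate pairwise STDP eligibility over all spike pairings,
    then gate the weight change by a scalar reward/performance signal."""
    elig = sum(stdp_window(post - pre)
               for pre in pre_times for post in post_times)
    return w + lr * reward * elig

w = 0.5
# pre spike 5 ms before post spike -> positive eligibility, so the sign of
# the weight change is set entirely by the reward signal
w_up = reward_modulated_update(w, [0.010], [0.015], reward=+1.0)
w_down = reward_modulated_update(w, [0.010], [0.015], reward=-1.0)
```

The key property the sketch shows is that identical pre/post timing produces potentiation or depression depending only on the performance signal, which is what lets timing-based plasticity implement reinforcement learning.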
Collapse
|
36
|
Shifts in coding properties and maintenance of information transmission during adaptation in barrel cortex. PLoS Biol 2007; 5:e19. [PMID: 17253902 PMCID: PMC1779810 DOI: 10.1371/journal.pbio.0050019] [Citation(s) in RCA: 199] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2006] [Accepted: 11/21/2006] [Indexed: 11/20/2022] Open
Abstract
Neuronal responses to ongoing stimulation in many systems change over time, or “adapt.” Despite the ubiquity of adaptation, its effects on the stimulus information carried by neurons are often unknown. Here we examine how adaptation affects sensory coding in barrel cortex. We used spike-triggered covariance analysis of single-neuron responses to continuous, rapidly varying vibrissa motion stimuli, recorded in anesthetized rats. Changes in stimulus statistics induced spike rate adaptation over hundreds of milliseconds. Vibrissa motion encoding changed with adaptation as follows. In every neuron that showed rate adaptation, the input–output tuning function scaled with the changes in stimulus distribution, allowing the neurons to maintain the quantity of information conveyed about stimulus features. A single neuron that did not show rate adaptation also lacked input–output rescaling and did not maintain information across changes in stimulus statistics. Therefore, in barrel cortex, rate adaptation occurs on a slow timescale relative to the features driving spikes and is associated with gain rescaling matched to the stimulus distribution. Our results suggest that adaptation enhances tactile representations in primary somatosensory cortex, where they could directly influence perceptual decisions. Neuronal responses to continued stimulation change over time, or “adapt.” Adaptation can be crucial to our brain's ability to successfully represent the environment: for example, when we move from a dim to a bright scene adaptation adjusts neurons' response to a given light intensity, enabling them to be maximally sensitive to the current range of stimulus variations. We analyzed how adaptation affects sensory coding in the somatosensory “barrel” cortex of the rat, which represents objects touched by the rat's whiskers, or vibrissae. 
Whiskers endow these nocturnal animals with impressive discrimination abilities: a rat can discern differences in texture as fine as we can distinguish using our fingertips. Neurons in the somatosensory cortex represent whisker vibrations by responding to “kinetic features,” particularly velocity fluctuations. We recorded responses of barrel cortex neurons to carefully controlled whisker motion and slowly varied the overall characteristics of the motion to provide a changing stimulus “context.” We found that stimulus–response relationships change in a particular way: the “tuning functions” that predict a neuron's response to fluctuations in whisker motion rescale according to the current stimulus distribution. The rescaling is just enough to maintain the information conveyed by the response about the stimulus. Cortical neurons adapt their responses to changes in the input statistics of a stimulus, suggesting adaptation enhances stimulus discrimination and perception.
Collapse
|
37
|
Factors affecting frequency discrimination of vibrotactile stimuli: implications for cortical encoding. PLoS One 2006; 1:e100. [PMID: 17183633 PMCID: PMC1762303 DOI: 10.1371/journal.pone.0000100] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2006] [Accepted: 11/16/2006] [Indexed: 11/19/2022] Open
Abstract
Background Measuring perceptual judgments about stimuli while manipulating their physical characteristics can uncover the neural algorithms underlying sensory processing. We carried out psychophysical experiments to examine how humans discriminate vibrotactile stimuli. Methodology/Principal Findings Subjects compared the frequencies of two sinusoidal vibrations applied sequentially to one fingertip. Performance was reduced when (1) the root mean square velocity (or energy) of the vibrations was equated by adjusting their amplitudes, and (2) the vibrations were noisy (their temporal structure was irregular). These effects were super-additive when subjects compared noisy vibrations that had equal velocity, indicating that frequency judgments became more dependent on the vibrations' temporal structure when differential information about velocity was eliminated. To investigate which areas of the somatosensory system use information about velocity and temporal structure, we required subjects to compare vibrations applied sequentially to opposite hands. This paradigm exploits the fact that tactile input to neurons at early levels (e.g., the primary somatosensory cortex, SI) is largely confined to the contralateral side of the body, so these neurons are less able to contribute to vibration comparisons between hands. The subjects' performance was still sensitive to differences in vibration velocity, but became less sensitive to noise. Conclusions/Significance We conclude that vibration frequency is represented in different ways by different mechanisms distributed across multiple cortical regions. Which mechanisms support the “readout” of frequency varies according to the information present in the vibration. Overall, the present findings are consistent with a model in which information about vibration velocity is coded in regions beyond SI. 
While adaptive processes within SI also contribute to the representation of frequency, this adaptation is influenced by the temporal regularity of the vibration.
Collapse
|
38
|
Abstract
Under normal viewing conditions, retinal ganglion cells transmit to the brain an encoded version of the visual world. The retina parcels the visual scene into an array of spatiotemporal features, and each ganglion cell conveys information about a small set of these features. We study the temporal features represented by salamander retinal ganglion cells by stimulating with dynamic spatially uniform flicker and recording responses using a multi-electrode array. While standard reverse correlation methods determine a single stimulus feature—the spike-triggered average—multiple features can be relevant to spike generation. We apply covariance analysis to determine the set of features to which each ganglion cell is sensitive. Using this approach, we found that salamander ganglion cells represent a rich vocabulary of different features of a temporally modulated visual stimulus. Individual ganglion cells were sensitive to at least two and sometimes as many as six features in the stimulus. While a fraction of the cells can be described by a filter-and-fire cascade model, many cells have feature selectivity that has not previously been reported. These reverse models were able to account for 80–100% of the information encoded by ganglion cells.
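Covariance analysis recovers multiple relevant features as the leading eigenvectors of the change in stimulus covariance induced by spiking. The sketch below is a synthetic demonstration with invented ground-truth features, not the salamander recordings or the authors' code; it uses an energy-style model so that the spike-triggered average alone is uninformative.

```python
import numpy as np

rng = np.random.default_rng(5)

D, N = 20, 100_000
stim = rng.standard_normal((N, D))      # white-noise stimulus history windows

# hypothetical ground truth: spiking driven by energy along two orthogonal
# directions, so the STA is ~0 and covariance analysis is required
f1 = np.zeros(D); f1[5] = 1.0
f2 = np.zeros(D); f2[10] = 1.0
drive = (stim @ f1) ** 2 + (stim @ f2) ** 2
spike = drive > np.quantile(drive, 0.95)  # top-5% drive elicits a spike

sta = stim[spike].mean(axis=0)
centered = stim[spike] - sta
# spike-triggered covariance relative to the (identity) prior covariance
stc = centered.T @ centered / spike.sum() - np.eye(D)
eigvals, eigvecs = np.linalg.eigh(stc)

# the largest-eigenvalue directions recover the two relevant features
recovered = eigvecs[:, -2:]
```

Directions with excess spike-triggered variance stand out as large eigenvalues; in real data the number of significant eigenvalues estimates how many features the cell is sensitive to.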
Collapse
|
39
|
Abstract
The spiking output of an individual neuron can represent information about the stimulus via mean rate, absolute spike time, and the time intervals between spikes. Here we discuss a distinct form of information representation, the local distribution of spike intervals, and show that the time-varying distribution of interspike intervals (ISIs) can represent parameters of the statistical context of stimuli. For many sensory neural systems the mapping between the stimulus input and spiking output is not fixed but, rather, depends on the statistical properties of the stimulus, potentially leading to ambiguity. We have shown previously that for the adaptive neural code of the fly H1, a motion-sensitive neuron in the fly visual system, information about the overall variance of the signal is obtainable from the ISI distribution. We now demonstrate the decoding of information about variance and show that a distributional code of ISIs can resolve ambiguities introduced by slow spike frequency adaptation. We examine the precision of this distributional code for the representation of stimulus variance in the H1 neuron as well as in the Hodgkin-Huxley model neuron. We find that the accuracy of the decoding depends on the shapes of the ISI distributions and the speed with which they adapt to new stimulus variances.
Collapse
|
40
|
Abstract
Avian nucleus magnocellularis (NM) spikes provide a temporal code representing sound arrival times to downstream neurons that compute sound source location. NM cells act as high-pass filters by responding only to discrete synaptic events while ignoring temporally summed EPSPs. This high degree of input selectivity ensures that each output spike from NM unambiguously represents inputs that contain precise temporal information. However, we lack a quantitative description of the computation performed by NM cells. A powerful model for predicting output firing rate given an arbitrary current input is given by a linear/nonlinear cascade: the stimulus is compared with a known relevant feature by linear filtering, and based on that comparison, a nonlinear function predicts the firing response. Spike-triggered covariance analysis allows us to determine a generalization of this model in which firing depends on more than one spike-triggering feature or stimulus dimension. We found two current features relevant for NM spike generation; the most important simply smooths the current on short time scales, whereas the second confers sensitivity to rapid changes. A model based on these two features captured more mutual information between current and spikes than a model based on a single feature. We used this analysis to characterize the changes in the computation brought about by pharmacological manipulation of the biophysical properties of the neurons. Blockage of low-threshold voltage-gated potassium channels selectively eliminated the requirement for the second stimulus feature, generalizing our understanding of input selectivity by NM cells. This study demonstrates the power of covariance analysis for investigating single neuron computation.
Collapse
|
41
|
Abstract
The computation performed by a neuron can be formulated as a combination of dimensional reduction in stimulus space and the nonlinearity inherent in a spiking output. White noise stimulus and reverse correlation (the spike-triggered average and spike-triggered covariance) are often used in experimental neuroscience to "ask" neurons which dimensions in stimulus space they are sensitive to and to characterize the nonlinearity of the response. In this article, we apply reverse correlation to the simplest model neuron with temporal dynamics, the leaky integrate-and-fire model, and find that for even this simple case, standard techniques do not recover the known neural computation. To overcome this, we develop novel reverse-correlation techniques by selectively analyzing only "isolated" spikes and taking explicit account of the extended silences that precede these isolated spikes. We discuss the implications of our methods for the characterization of neural adaptation. Although these methods are developed in the context of the leaky integrate-and-fire model, our findings are relevant for the analysis of spike trains from real neurons.
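The isolated-spike idea can be demonstrated on a simulated LIF neuron: collect the stimulus history before each spike, but keep only spikes preceded by a silence longer than the analysis window, so inter-spike interactions do not contaminate the estimate. This is a minimal sketch of that selection step with arbitrary parameters, not the full technique developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

dt, T, tau, v_th = 1e-3, 200.0, 0.02, 1.0
n = int(T / dt)
current = 0.5 + 2.0 * rng.standard_normal(n)   # noisy input current per bin

# simulate a leaky integrate-and-fire neuron and record spike times
v = 0.0
spike_idx = []
for i in range(n):
    v += dt * (-v + current[i]) / tau
    if v >= v_th:
        v = 0.0
        spike_idx.append(i)

# keep only "isolated" spikes: preceded by a silence longer than the
# analysis window, so the preceding history is not shaped by a reset
win = 50  # 50 ms of stimulus history
isolated = [i for j, i in enumerate(spike_idx)
            if i > win and (j == 0 or i - spike_idx[j - 1] > win)]

# spike-triggered average of the current over the isolated spikes
sta = np.mean([current[i - win:i] for i in isolated], axis=0)
```

In the resulting `sta`, the current just before a spike is elevated above baseline, recovering the triggering feature without the bias that post-spike resets introduce into an all-spikes average.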
Collapse
|
42
|
Abstract
A spiking neuron “computes” by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin-Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering “feature space” as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as “integrate and fire,” the HH model is neither an integrator nor well described by a single threshold.
Collapse
|
43
|
Abstract
We examine the dynamics of a neural code in the context of stimuli whose statistical properties are themselves evolving dynamically. Adaptation to these statistics occurs over a wide range of timescales, from tens of milliseconds to minutes. Rapid components of adaptation serve to optimize the information that action potentials carry about rapid stimulus variations within the local statistical ensemble, while changes in the rate and statistics of action-potential firing encode information about the ensemble itself, thus resolving potential ambiguities. The speed with which information is optimized and ambiguities are resolved approaches the physical limit imposed by statistical sampling and noise.
Collapse
|
44
|
Anomalous scaling in a model of passive scalar advection: Exact results. PHYSICAL REVIEW. E, STATISTICAL PHYSICS, PLASMAS, FLUIDS, AND RELATED INTERDISCIPLINARY TOPICS 1996; 53:3518-3535. [PMID: 9964661 DOI: 10.1103/physreve.53.3518] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
|
45
|
Anomalous scaling in fluid mechanics: The case of the passive scalar. PHYSICAL REVIEW. E, STATISTICAL PHYSICS, PLASMAS, FLUIDS, AND RELATED INTERDISCIPLINARY TOPICS 1994; 50:4684-4704. [PMID: 9962548 DOI: 10.1103/physreve.50.4684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
|