1
Kastner DB, Williams G, Holobetz C, Romano JP, Dayan P. The choice-wide behavioral association study: data-driven identification of interpretable behavioral components. bioRxiv 2024:2024.02.26.582115. PMID: 38464037; PMCID: PMC10925091; DOI: 10.1101/2024.02.26.582115.
Abstract
Behavior contains rich structure across many timescales, but there is a dearth of methods to identify relevant components, especially over the longer periods required for learning and decision-making. Inspired by the goals and techniques of genome-wide association studies, we present a data-driven method, the choice-wide behavioral association study (CBAS), that systematically identifies such behavioral features. CBAS uses a powerful, resampling-based method of multiple comparisons correction to identify sequences of actions or choices that either differ significantly between groups or significantly correlate with a covariate of interest. We apply CBAS to different tasks and species (flies, rats, and humans) and find, in all instances, that it provides interpretable information about each behavioral task.
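The abstract above centers on a resampling-based multiple comparisons correction applied across many candidate behavioral features. A minimal sketch of the general max-statistic permutation idea follows; it is an illustration of that family of methods, not the authors' implementation, and the function name, array shapes, and parameters are hypothetical:

```python
import numpy as np

def max_stat_permutation(group_a, group_b, n_perm=2000, alpha=0.05, seed=0):
    """Family-wise error control via a max-statistic permutation test.

    group_a, group_b: (subjects x features) arrays, e.g. per-subject counts
    of candidate action sequences. Returns a boolean mask of features whose
    between-group difference survives correction over all features.
    """
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    n_a = group_a.shape[0]
    observed = np.abs(group_a.mean(axis=0) - group_b.mean(axis=0))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(data.shape[0])        # shuffle group labels
        pa, pb = data[perm[:n_a]], data[perm[n_a:]]
        # the maximum over features of the permuted statistic builds the
        # null distribution that controls the family-wise error rate
        max_null[i] = np.abs(pa.mean(axis=0) - pb.mean(axis=0)).max()
    threshold = np.quantile(max_null, 1.0 - alpha)
    return observed > threshold
```

Any feature whose observed statistic exceeds the (1 - alpha) quantile of the permuted maxima is declared significant, which controls the chance of any false positive across the whole feature family.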
Affiliation(s)
- David B. Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Lead Contact
- Greer Williams
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Cristofer Holobetz
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Joseph P. Romano
- Department of Statistics, Stanford University, Stanford, CA 94305, USA
- Peter Dayan
- Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
2
Maheswaranathan N, McIntosh LT, Tanaka H, Grant S, Kastner DB, Melander JB, Nayebi A, Brezovec LE, Wang JH, Ganguli S, Baccus SA. Interpreting the retinal neural code for natural scenes: From computations to neurons. Neuron 2023; 111:2742-2755.e4. PMID: 37451264; PMCID: PMC10680974; DOI: 10.1016/j.neuron.2023.06.007.
Abstract
Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model's internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.
Affiliation(s)
- Lane T McIntosh
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Hidenori Tanaka
- Department of Applied Physics, Stanford University, Stanford, CA, USA; Physics & Informatics Laboratories, NTT Research, Inc., Sunnyvale, CA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
- Satchel Grant
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- David B Kastner
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Joshua B Melander
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Aran Nayebi
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Luke E Brezovec
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Stephen A Baccus
- Department of Neurobiology, Stanford University, Stanford, CA, USA.
3
Kastner DB, Miller EA, Yang Z, Roumis DK, Liu DF, Frank LM, Dayan P. Spatial preferences account for inter-animal variability during the continual learning of a dynamic cognitive task. Cell Rep 2022; 39:110708. PMID: 35443181; PMCID: PMC9096879; DOI: 10.1016/j.celrep.2022.110708.
Abstract
Understanding the complexities of behavior is necessary to interpret neurophysiological data and establish animal models of neuropsychiatric disease. This understanding requires knowledge of the underlying information-processing structure—something often hidden from direct observation. Commonly, one assumes that behavior is solely governed by the experimenter-controlled rules that determine tasks. For example, differences in tasks that require memory of past actions are often interpreted as exclusively resulting from differences in memory. However, such assumptions are seldom tested. Here, we provide a comprehensive examination of multiple processes that contribute to behavior in a prevalent experimental paradigm. Using a combination of behavioral automation, hypothesis-driven trial design, and reinforcement learning modeling, we show that rats learn a spatial alternation task consistent with their drawing upon spatial preferences in addition to memory. Our approach also distinguishes learning based on established preferences from generalization of task structure, providing further insights into learning dynamics.

Spatial alternation behaviors are commonly used to measure memory. Kastner et al. use experimental and computational approaches to show that rats learn spatial alternation in a manner consistent with their utilizing multiple computational features in addition to just memory and that variation in use of these features underlies inter-animal variability.
Affiliation(s)
- David B Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, San Francisco, CA 94143, USA; Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, San Francisco, CA 94158, USA.
- Eric A Miller
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, San Francisco, CA 94158, USA
- Zhuonan Yang
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, San Francisco, CA 94143, USA
- Demetris K Roumis
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, San Francisco, CA 94158, USA
- Daniel F Liu
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, San Francisco, CA 94158, USA
- Loren M Frank
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, San Francisco, CA 94143, USA; Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, San Francisco, CA 94158, USA; Howard Hughes Medical Institute, 4000 Jones Bridge Road, Chevy Chase, MD 20815, USA
- Peter Dayan
- Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany; University of Tübingen, 72074 Tübingen, Germany
4
Gillespie AK, Astudillo Maya DA, Denovellis EL, Liu DF, Kastner DB, Coulter ME, Roumis DK, Eden UT, Frank LM. Hippocampal replay reflects specific past experiences rather than a plan for subsequent choice. Neuron 2021; 109:3149-3163.e6. PMID: 34450026; DOI: 10.1016/j.neuron.2021.07.029.
Abstract
Executing memory-guided behavior requires storage of information about experience and later recall of that information to inform choices. Awake hippocampal replay, when hippocampal neural ensembles briefly reactivate a representation related to prior experience, has been proposed to critically contribute to these memory-related processes. However, it remains unclear whether awake replay contributes to memory function by promoting the storage of past experiences, facilitating planning based on evaluation of those experiences, or both. We designed a dynamic spatial task that promotes replay before a memory-based choice and assessed how the content of replay related to past and future behavior. We found that replay content was decoupled from subsequent choice and instead was enriched for representations of previously rewarded locations and places that had not been visited recently, indicating a role in memory storage rather than in directly guiding subsequent behavior.
Affiliation(s)
- Anna K Gillespie
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA.
- Daniela A Astudillo Maya
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- Eric L Denovellis
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA; Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, CA 94158, USA
- Daniel F Liu
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- David B Kastner
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- Michael E Coulter
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- Demetris K Roumis
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA; Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, CA 94158, USA
- Uri T Eden
- Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA
- Loren M Frank
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, CA 94158, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA; Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, CA 94158, USA.
5
Care RA, Anastassov IA, Kastner DB, Kuo YM, Della Santina L, Dunn FA. Mature Retina Compensates Functionally for Partial Loss of Rod Photoreceptors. Cell Rep 2020; 31:107730. PMID: 32521255; PMCID: PMC8049532; DOI: 10.1016/j.celrep.2020.107730.
Abstract
Loss of primary neuronal inputs inevitably strikes every neural circuit. The deafferented circuit could propagate, amplify, or mitigate input loss, thus affecting the circuit’s output. How the deafferented circuit contributes to the effect on the output is poorly understood because of a lack of control over the loss of, and access to, circuit elements. Here, we control the timing and degree of rod photoreceptor ablation in mature mouse retina and uncover compensation. Following loss of half of the rods, rod bipolar cells mitigate the loss by preserving voltage output. Such mitigation allows partial recovery of ganglion cell responses. We conclude that rod death is compensated for in the circuit because ganglion cell responses to stimulation of half of the rods in an unperturbed circuit are weaker than responses after death of half of the rods. The dominant mechanism of such compensation includes homeostatic regulation of inhibition to balance the loss of excitation.

Care et al. ablate half of the rods in mature mouse retina and find that primary neuron loss is functionally compensated for by balanced inhibition and excitation at the secondary neuron. Changes in cone-mediated, but not rod-mediated, output neuron spikes are recapitulated by half stimulation, demonstrating independent regulation of pathways.
Affiliation(s)
- Rachel A Care
- Graduate Program in Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- Ivan A Anastassov
- Department of Biology, San Francisco State University, San Francisco, CA 94132, USA
- David B Kastner
- Department of Psychiatry, University of California, San Francisco, San Francisco, CA 94143, USA
- Yien-Ming Kuo
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Luca Della Santina
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA.
- Felice A Dunn
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA.
6
Hsu WMM, Kastner DB, Baccus SA, Sharpee TO. How inhibitory neurons increase information transmission under threshold modulation. Cell Rep 2021; 35:109158. PMID: 34038717; PMCID: PMC8846953; DOI: 10.1016/j.celrep.2021.109158.
Abstract
Modulation of neuronal thresholds is ubiquitous in the brain. Phenomena such as figure-ground segmentation, motion detection, stimulus anticipation, and shifts in attention all involve changes in a neuron’s threshold based on signals from larger scales than its primary inputs. However, this modulation reduces the accuracy with which neurons can represent their primary inputs, creating a mystery as to why threshold modulation is so widespread in the brain. We find that modulation is less detrimental than other forms of neuronal variability and that its negative effects can be nearly completely eliminated if modulation is applied selectively to sparsely responding neurons in a circuit by inhibitory neurons. We verify these predictions in the retina where we find that inhibitory amacrine cells selectively deliver modulation signals to sparsely responding ganglion cell types. Our findings elucidate the central role that inhibitory neurons play in maximizing information transmission under modulation.

Modulation of neuronal thresholds is ubiquitous in the brain but reduces the accuracy of neural signaling. Hsu et al. show that the negative impact of threshold modulation can be almost completely eliminated when modulation is not delivered uniformly to all neurons but only to a subset and via inhibitory neurons.
Affiliation(s)
- Wei-Mien M Hsu
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Physics, University of California, San Diego, La Jolla, CA, USA
- David B Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, School of Medicine, San Francisco, CA, USA; Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Stephen A Baccus
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Physics, University of California, San Diego, La Jolla, CA, USA.
7
Kastner DB, Kharazia V, Nevers R, Smyth C, Astudillo-Maya DA, Williams GM, Yang Z, Holobetz CM, Santina LD, Parkinson DY, Frank LM. Scalable method for micro-CT analysis enables large scale quantitative characterization of brain lesions and implants. Sci Rep 2020; 10:20851. PMID: 33257721; PMCID: PMC7705725; DOI: 10.1038/s41598-020-77796-3.
Abstract
Anatomic evaluation is an important aspect of many studies in neuroscience; however, it often lacks information about the three-dimensional structure of the brain. Micro-CT imaging provides an excellent, nondestructive method for the evaluation of brain structure, but current applications to neurophysiological or lesion studies require removal of the skull as well as hazardous chemicals, dehydration, or embedding, limiting their scalability and utility. Here we present a protocol that uses eosin in combination with bone decalcification to enhance contrast in the tissue, and then employs monochromatic and propagation phase-contrast micro-CT imaging to enable the imaging of brain structure with preservation of the surrounding skull. Instead of relying on descriptive, time-consuming, or subjective methods, we develop simple quantitative analyses to map the locations of recording electrodes and to characterize the presence and extent of hippocampal brain lesions.
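The quantitative lesion characterization described above can be illustrated generically as counting low-intensity voxels inside an anatomical mask of a reconstructed volume. This is a hedged sketch under assumed conventions (the function name, threshold, and voxel size are hypothetical and not the paper's pipeline):

```python
import numpy as np

def lesion_volume(volume, mask, intensity_thresh, voxel_volume_um3):
    """Quantify a lesion as low-intensity voxels inside an anatomical mask.

    volume: 3D array of reconstructed micro-CT intensities.
    mask:   boolean 3D array selecting the anatomical region of interest.
    Returns (estimated lesion volume, lesioned fraction of the region).
    """
    region = volume[mask]
    lesioned = region < intensity_thresh   # assume lesions image as low intensity
    return lesioned.sum() * voxel_volume_um3, lesioned.mean()
```

The same thresholding idea extends to mapping electrode tracks, with the inequality flipped if implants image as high-intensity voxels.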
Affiliation(s)
- David B Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, 94143, USA; Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, CA, 94158, USA
- Viktor Kharazia
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, CA, 94158, USA
- Rhino Nevers
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, CA, 94158, USA
- Clay Smyth
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, CA, 94158, USA
- Daniela A Astudillo-Maya
- Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, CA, 94158, USA
- Greer M Williams
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, 94143, USA
- Zhuonan Yang
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, 94143, USA
- Cristofer M Holobetz
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, 94143, USA
- Luca Della Santina
- Department of Ophthalmology, University of California, San Francisco, CA, 94143, USA; Bakar Computational Health Science Unit, University of California, San Francisco, CA, 94158, USA
- Dilworth Y Parkinson
- Advanced Light Source, Lawrence Berkeley National Labs, Berkeley, CA, 94720, USA
- Loren M Frank
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, 94143, USA; Kavli Institute for Fundamental Neuroscience and Department of Physiology, University of California, San Francisco, CA, 94158, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA
8
Care RA, Kastner DB, De la Huerta I, Pan S, Khoche A, Della Santina L, Gamlin C, Santo Tomas C, Ngo J, Chen A, Kuo YM, Ou Y, Dunn FA. Partial Cone Loss Triggers Synapse-Specific Remodeling and Spatial Receptive Field Rearrangements in a Mature Retinal Circuit. Cell Rep 2019; 27:2171-2183.e5. PMID: 31091454; PMCID: PMC6624172; DOI: 10.1016/j.celrep.2019.04.065.
Abstract
Resilience of neural circuits has been observed in the persistence of function despite neuronal loss. In vision, acuity and sensitivity can be retained after 50% loss of cones. While neurons in the cortex can remodel after input loss, the contributions of cell-type-specific circuits to resilience are unknown. Here, we study the effects of partial cone loss in mature mouse retina where cell types and connections are known. At first-order synapses, bipolar cell dendrites remodel and synaptic proteins diminish at sites of input loss. Sites of remaining inputs preserve synaptic proteins. Second-order synapses between bipolar and ganglion cells remain stable. Functionally, ganglion cell spatio-temporal receptive fields retain center-surround structure following partial cone loss. We find evidence for slower temporal filters and expanded receptive field surrounds, derived mainly from inhibitory inputs. Surround expansion is absent in partially stimulated control retina. Results demonstrate functional resilience to input loss beyond pre-existing mechanisms in control retina.

Care et al. find that photoreceptor ablation causes structural rearrangement of bipolar cell input synapses while output synapses endure. Functionally, recipient ganglion cells show altered receptive field sizes, an effect not seen after partial stimulation of control retina, demonstrating de novo changes that occur in inhibitory circuitry after photoreceptor loss.
Affiliation(s)
- Rachel A Care
- Graduate Program in Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- David B Kastner
- Department of Psychiatry, University of California, San Francisco, San Francisco, CA 94143, USA
- Irina De la Huerta
- Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Simon Pan
- Graduate Program in Neuroscience, University of California, San Francisco, San Francisco, CA 94158, USA
- Atrey Khoche
- Department of Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
- Luca Della Santina
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Clare Gamlin
- Program in Neuroscience, Department of Biological Structure, University of Washington, Seattle, WA 98195, USA
- Chad Santo Tomas
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA 95064, USA
- Jenita Ngo
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Allen Chen
- Department of Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Yien-Ming Kuo
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Yvonne Ou
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Felice A Dunn
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA.
9
Ozuysal Y, Kastner DB, Baccus SA. Adaptive feature detection from differential processing in parallel retinal pathways. PLoS Comput Biol 2018; 14:e1006560. PMID: 30457994; PMCID: PMC6245510; DOI: 10.1371/journal.pcbi.1006560.
Abstract
To transmit information efficiently in a changing environment, the retina adapts to visual contrast by adjusting its gain, latency, and mean response. Additionally, the temporal frequency selectivity, or bandwidth, changes to encode the absolute intensity when the stimulus environment is noisy, and intensity differences when noise is low. We show that the On pathway of On-Off retinal amacrine and ganglion cells is required to change temporal bandwidth but not other adaptive properties. This remarkably specific adaptive mechanism arises from differential effects of contrast on the On and Off pathways. We analyzed a biophysical model fit only to a cell’s membrane potential, and verified pharmacologically that it accurately revealed the two pathways. We conclude that changes in bandwidth arise mostly from differences in synaptic threshold in the two pathways, rather than from the synaptic release dynamics previously proposed to underlie contrast adaptation.

Different efficient codes are selected by different thresholds in two independently adapting neural pathways.
Affiliation(s)
- Yusuf Ozuysal
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- David B. Kastner
- Neuroscience Program, Stanford University, Stanford, CA, United States of America
- Stephen A. Baccus
- Department of Neurobiology, Stanford University, Stanford, CA, United States of America
10
Maheswaranathan N, Kastner DB, Baccus SA, Ganguli S. Inferring hidden structure in multilayered neural circuits. PLoS Comput Biol 2018; 14:e1006291. PMID: 30138312; PMCID: PMC6124781; DOI: 10.1371/journal.pcbi.1006291.
Abstract
A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we attempt to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit, using cascaded linear-nonlinear (LN-LN) models. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. We apply this framework to retinal ganglion cell processing, learning LN-LN models of retinal circuitry consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits have consistently high thresholds, suppressing all but a small fraction of inputs, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model’s parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a boolean OR function applied to bipolar cell feature detectors. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical non-linear models as well as efficiently compute widely used descriptive statistics such as the spike triggered average (STA) and covariance (STC) for high dimensional stimuli. This general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data.
Computation in neural circuits arises from the cascaded processing of inputs through multiple cell layers. Each of these cell layers performs operations such as filtering and thresholding in order to shape a circuit’s output. It remains a challenge to describe both the computations and the mechanisms that mediate them given limited data recorded from a neural circuit. A standard approach to describing circuit computation involves building quantitative encoding models that predict the circuit response given its input, but these often fail to map in an interpretable way onto mechanisms within the circuit. In this work, we build two-layer linear-nonlinear cascade models (LN-LN) in order to describe how the retinal output is shaped by nonlinear mechanisms in the inner retina. We find that these LN-LN models, fit to ganglion cell recordings alone, identify filters and nonlinearities that are readily mapped onto individual circuit components inside the retina, namely bipolar cells and the bipolar-to-ganglion cell synaptic threshold. This work demonstrates how combining simple prior knowledge of circuit properties with partial experimental recordings of a neural circuit’s output can yield interpretable models of the entire circuit computation, including parts of the circuit that are hidden or not directly observed in neural recordings.
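The LN-LN cascade described above can be sketched as a forward model: a bank of linear subunit filters, each rectified at a threshold (the bipolar-to-ganglion synapse in the paper's interpretation), whose weighted sum passes through an output nonlinearity. A minimal numpy sketch with hypothetical names and parameters, not the fitted model:

```python
import numpy as np

def ln_ln_response(stimulus, subunit_filters, subunit_thresh, weights, out_gain):
    """Forward pass of a two-layer linear-nonlinear (LN-LN) cascade.

    stimulus: (time x pixels); subunit_filters: (n_subunits x pixels).
    Each subunit linearly filters the stimulus and is rectified at a
    threshold; the weighted sum of subunit outputs passes through a
    rectifying output nonlinearity to give a nonnegative firing rate.
    """
    drive = stimulus @ subunit_filters.T                 # (time x n_subunits)
    rectified = np.maximum(drive - subunit_thresh, 0.0)  # high thresholds -> sparse
    generator = rectified @ weights
    return out_gain * np.maximum(generator, 0.0)

def spike_triggered_average(stimulus, rate):
    """Rate-weighted average stimulus (the STA for a rate model)."""
    return (rate[:, None] * stimulus).sum(axis=0) / rate.sum()
```

With high subunit thresholds, most time points produce zero subunit output, reproducing the sparse activity regime the abstract describes.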
Affiliation(s)
- Niru Maheswaranathan
- Neurosciences Graduate Program, Stanford University, Stanford, California, United States of America
- David B. Kastner
- Neurosciences Graduate Program, Stanford University, Stanford, California, United States of America
- Stephen A. Baccus
- Department of Neurobiology, Stanford University, Stanford, California, United States of America
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
11
Zhang Y, Kastner DB, Baccus SA, Sharpee TO. Optimal Information Transmission by Overlapping Retinal Cell Mosaics. Proc Conf Inf Sci Syst 2018; 2018. PMID: 34746939; DOI: 10.1109/ciss.2018.8362310.
Abstract
The retina provides an excellent system for understanding the trade-offs that influence distributed information processing across multiple neuron types. We focus here on the problem faced by the visual system of allocating a limited number of neurons to encode different visual features at different spatial locations. The retina needs to solve three competing goals: 1) encode different visual features, 2) maximize spatial resolution for each feature, and 3) maximize the accuracy with which each feature is encoded at each location. There is currently no understanding of how these goals are optimized together. While information theory provides a platform for theoretically solving these problems, evaluating the information provided by the responses of large neuronal arrays is in general challenging. Here we present a solution to this problem in the case where multi-dimensional stimuli can be decomposed into approximately independent components that are subsequently coupled by neural responses. Using this approach we quantify information transmission by multiple overlapping retinal ganglion cell mosaics. In the retina, translation invariance of input signals makes it possible to use the Fourier basis as a set of independent components. The results reveal a transition where one high-density mosaic becomes less informative than two or more overlapping lower-density mosaics. The results explain differences in the fractions of multiple cell types, and predict the existence of new retinal ganglion cell subtypes, the relative distribution of neurons among cell types, and differences in their nonlinear and dynamical response properties.
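A toy version of the framework: once stimulus components are independent (here, Fourier amplitudes of a translation-invariant input), total information is a sum of per-component Gaussian channel capacities, I = ½ Σₖ log₂(1 + SNRₖ). The signal spectrum and the density-to-SNR mapping below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def gaussian_info(snr):
    """Capacity of an independent Gaussian channel, in bits."""
    return 0.5 * np.log2(1.0 + snr)

freqs = np.arange(1, 65)
signal_power = 1.0 / freqs**2       # assumed 1/f^2 natural-scene spectrum

def mosaic_info(density, noise=0.1):
    """Information carried by one mosaic of a given cell density.

    Assumption for illustration: a denser mosaic resolves higher spatial
    frequencies (up to a density-set cutoff) with SNR scaling with density.
    """
    resolved = freqs <= density
    snr = density * signal_power / noise
    return gaussian_info(snr[resolved]).sum()

budget = 64                               # fixed total number of neurons
one_mosaic = mosaic_info(budget)          # one dense mosaic
two_mosaics = 2 * mosaic_info(budget // 2)  # neuron budget split in two
```

Comparing `one_mosaic` and `two_mosaics` across budgets and noise levels is the kind of calculation in which the paper's transition from one dense mosaic to several sparser ones can appear.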
Collapse
Affiliation(s)
- Yilun Zhang
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, USA
- Department of Physics, University of California San Diego, La Jolla, California, USA
| | - David B Kastner
- Department of Psychiatry, University of California San Francisco, San Francisco, California, USA
| | - Stephen A Baccus
- Department of Neurobiology, Stanford University, Palo Alto, California, USA
| | - Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, USA
- Department of Physics, University of California San Diego, La Jolla, California, USA
| |
Collapse
|
12
|
Abstract
Reconsolidation of memories has mostly been studied at the behavioral and molecular level. Here, we put forward a simple extension of existing computational models of synaptic consolidation to capture hippocampal slice experiments that have been interpreted as reconsolidation at the synaptic level. The model implements reconsolidation through stabilization of consolidated synapses by stabilizing entities combined with an activity-dependent reservoir of stabilizing entities that are immune to protein synthesis inhibition (PSI). We derive a reduced version of our model to explore the conditions under which synaptic reconsolidation does or does not occur, often referred to as the boundary conditions of reconsolidation. We find that our computational model of synaptic reconsolidation displays complex boundary conditions. Our results suggest that a limited resource of hypothetical stabilizing molecules or complexes, which may be implemented by protein phosphorylation or different receptor subtypes, can underlie the phenomenon of synaptic reconsolidation.
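The role of the activity-dependent reservoir can be caricatured in a short simulation. This is a schematic sketch of the idea, not the paper's equations: all dynamics and parameter values below are illustrative assumptions.

```python
def simulate(reservoir, psi=False, steps=100, dt=0.1):
    """Track re-stabilization of a reactivated synapse.

    Reactivation strips bound stabilizing entities from the synapse; they are
    replenished either from a reservoir (immune to protein synthesis
    inhibition, PSI) or by new synthesis (blocked when psi=True).
    Returns the bound fraction; ~1.0 means the synapse reconsolidated.
    """
    bound = 0.0  # stabilizers bound at the synapse (destabilized at t=0)
    for _ in range(steps):
        synthesis = 0.0 if psi else 0.5          # PSI blocks new synthesis
        from_reservoir = min(reservoir, 0.5 * dt)  # limited resource
        reservoir -= from_reservoir
        bound = min(bound + dt * synthesis + from_reservoir, 1.0)
    return bound

# A large reservoir lets reconsolidation proceed despite PSI; a small one
# does not, which is one way boundary conditions can arise in such a model.
stable_under_psi = simulate(reservoir=2.0, psi=True)
unstable_under_psi = simulate(reservoir=0.1, psi=True)
```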
Collapse
Affiliation(s)
- David B Kastner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Tilo Schwalger
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Lorric Ziegler
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
13
|
Kastner DB, Baccus SA. Insights from the retina into the diverse and general computations of adaptation, detection, and prediction. Curr Opin Neurobiol 2014; 25:63-9. [DOI: 10.1016/j.conb.2013.11.012] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2013] [Revised: 11/24/2013] [Accepted: 11/28/2013] [Indexed: 01/26/2023]
|
14
|
Abstract
Sensory systems change their sensitivity based on recent stimuli to adjust their response range to the range of inputs and to predict future sensory input. Here, we report the presence of retinal ganglion cells that have antagonistic plasticity, showing central adaptation and peripheral sensitization. Ganglion cell responses were captured by a spatiotemporal model with independently adapting excitatory and inhibitory subunits, and sensitization required GABAergic inhibition. Using a simple theory of signal detection, we show that the sensitizing surround conforms to an optimal inference model that continually updates the prior signal probability. This indicates that small receptive field regions have dual functionality: they adapt to the local range of signals but sensitize based on the probability of the presence of that signal. Within this framework, we show that sensitization predicts the location of a nearby object, revealing prediction as a functional role for adapting inhibition in the nervous system.
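The inference step behind sensitization can be sketched as repeated Bayesian updating of the prior probability that a signal is present, with sensitivity raised when that probability is high. The Gaussian likelihoods, observation values, and gain rule below are illustrative assumptions, not the paper's fitted model.

```python
from math import exp, sqrt, pi

def gaussian(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def update_prior(prior, observation, signal_mu=1.0, noise_mu=0.0, sigma=1.0):
    """One Bayesian update of P(signal present) given a noisy observation."""
    p_obs_signal = gaussian(observation, signal_mu, sigma)
    p_obs_noise = gaussian(observation, noise_mu, sigma)
    evidence = prior * p_obs_signal + (1 - prior) * p_obs_noise
    return prior * p_obs_signal / evidence

# Strong recent stimuli push the prior up, so sensitivity stays elevated.
prior = 0.5
for obs in [1.2, 0.9, 1.4]:
    prior = update_prior(prior, obs)
gain = 1.0 + prior  # hypothetical gain rule: sensitivity tracks the prior

# Weak recent stimuli pull the prior (and hence the gain) back down.
low_prior = 0.5
for obs in [-0.3, 0.1, -0.2]:
    low_prior = update_prior(low_prior, obs)
```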
Collapse
Affiliation(s)
- David B. Kastner
- Neuroscience Program, Stanford University School of Medicine, 299 Campus Drive W., Stanford, CA, USA
| | - Stephen A. Baccus
- Department of Neurobiology, Stanford University School of Medicine, 299 Campus Drive W., Stanford, CA, USA
| |
Collapse
|
15
|
Abstract
The range of natural inputs encoded by a neuron often exceeds its dynamic range. To overcome this limitation, neural populations divide their inputs among different cell classes, as with rod and cone photoreceptors, and adapt by shifting their dynamic range. We report that the dynamic behavior of retinal ganglion cells in salamanders, mice, and rabbits is divided into two opposing forms of short-term plasticity in different cell classes. One population of cells exhibited sensitization: a persistently elevated sensitivity following a strong stimulus. This newly observed dynamic behavior compensates for the information loss caused by the known process of adaptation occurring in a separate cell population. The two populations divide the dynamic range of inputs, with sensitizing cells encoding weak signals and adapting cells encoding strong signals. In the two populations, the linear, threshold, and adaptive properties are linked to preserve responsiveness when stimulus statistics change, with one population maintaining the ability to respond when the other fails.
Collapse
Affiliation(s)
- David B Kastner
- Neuroscience Program, Stanford University School of Medicine, Stanford, California, USA
| | | |
Collapse
|
16
|
Sachdev P, Menon S, Kastner DB, Chuang JZ, Yeh TY, Conde C, Caceres A, Sung CH, Sakmar TP. G protein beta gamma subunit interaction with the dynein light-chain component Tctex-1 regulates neurite outgrowth. EMBO J 2007; 26:2621-32. [PMID: 17491591 PMCID: PMC1888676 DOI: 10.1038/sj.emboj.7601716] [Citation(s) in RCA: 60] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2006] [Accepted: 04/12/2007] [Indexed: 11/08/2022] Open
Abstract
Tctex-1, a light-chain component of the cytoplasmic dynein motor complex, can function independently of dynein to regulate multiple steps in neuronal development. However, how dynein-associated and dynein-free pools of Tctex-1 are maintained in the cell is not known. Tctex-1 was recently identified as a Gbetagamma-binding protein and shown to be identical to the receptor-independent activator of G protein signaling AGS2. We propose a novel role for the interaction of Gbetagamma with Tctex-1 in neurite outgrowth. Ectopic expression of either Tctex-1 or Gbetagamma promotes neurite outgrowth whereas interfering with their function inhibits neuritogenesis. Using embryonic mouse brain extracts, we demonstrate an endogenous Gbetagamma-Tctex-1 complex and show that Gbetagamma co-segregates with dynein-free fractions of Tctex-1. Furthermore, Gbeta competes with the dynein intermediate chain for binding to Tctex-1, regulating assembly of Tctex-1 into the dynein motor complex. We propose that Tctex-1 is a novel effector of Gbetagamma, and that Gbetagamma-Tctex-1 complex plays a key role in the dynein-independent function of Tctex-1 in regulating neurite outgrowth in primary hippocampal neurons, most likely by modulating actin and microtubule dynamics.
Collapse
Affiliation(s)
- Pallavi Sachdev
- Laboratory of Molecular Biology and Biochemistry, The Rockefeller University, New York, NY, USA
| | - Santosh Menon
- Laboratory of Molecular Biology and Biochemistry, The Rockefeller University, New York, NY, USA
| | - David B Kastner
- Laboratory of Molecular Biology and Biochemistry, The Rockefeller University, New York, NY, USA
| | - Jen-Zen Chuang
- Department of Ophthalmology, Weill Medical College of Cornell University, New York, NY, USA
| | - Ting-Yu Yeh
- Department of Ophthalmology, Weill Medical College of Cornell University, New York, NY, USA
| | | | | | - Ching-Hwa Sung
- Department of Ophthalmology, Weill Medical College of Cornell University, New York, NY, USA
- Department of Cell and Developmental Biology, Weill Medical College of Cornell University, New York, NY, USA
| | - Thomas P Sakmar
- Laboratory of Molecular Biology and Biochemistry, The Rockefeller University, New York, NY, USA
- Laboratory of Molecular Biology and Biochemistry, The Rockefeller University, 1230 York Avenue, Box 187, New York City, NY 10021, USA. Tel.: +1 212 327 8288; Fax: +1 212 327 7904; E-mail:
| |
Collapse
|
17
|
Hrnjez BJ, Sultan ST, Natanov GR, Kastner DB, Rosman MR. Prediction of Supercritical Ethane Bulk Solvent Densities for Pyrazine Solvation Shell Average Occupancy by 1, 2, 3, and 4 Ethanes: Combined Experimental and ab Initio Approach. J Phys Chem A 2005; 109:10222-31. [PMID: 16833315 DOI: 10.1021/jp054150d] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
We introduce a method that addresses the elusive local density at the solute in the highly compressible regime of a supercritical fluid. Experimentally, the red shift of the pyrazine n-π electronic transition was measured at infinite dilution in supercritical ethane as a function of pressure from 0 to about 3000 psia at two temperatures, one close (35.0 °C) to the critical temperature and the other remote (55.0 °C). Computationally, stationary points were located on the potential surfaces for pyrazine and one, two, three, and four ethanes at the MP2/6-311++G(d,p) level. The vertical n-π (¹B₃ᵤ) transition energies were computed for each of these geometries with a TDDFT/B3LYP/6-311++G(d,p) method. The combination of experiment and computation allows prediction of supercritical ethane bulk densities at which the pyrazine primary solvation shell contains an average of one, two, three, and four ethane molecules. These density predictions were achieved by graphical superposition of calculated shifts on the experimental shift versus density curves for 35.0 and 55.0 °C. Predicted densities are 0.0635, 0.0875, and 0.0915 g cm⁻³ for average pyrazine primary solvation shell occupancy by one, two, and three ethanes at both 35.0 and 55.0 °C. Predicted densities are 0.129 and 0.150 g cm⁻³ for occupancy by four ethanes at 35.0 and 55.0 °C, respectively. An alternative approach, designed to "average out" geometry-specific shifts, is based on the relationship Δν = −23.9n cm⁻¹, where n is the number of ethanes. Graphical treatment gives alternative predicted densities of 0.0490, 0.0844, and 0.120 g cm⁻³ for average pyrazine primary solvation shell occupancy by one, two, and three ethanes at both 35.0 and 55.0 °C, and densities of 0.148 and 0.174 g cm⁻³ for occupancy by four ethanes at 35.0 and 55.0 °C, respectively.
Collapse
Affiliation(s)
- Bruce J Hrnjez
- Department of Chemistry, Yeshiva University, New York, New York 10033, USA.
| | | | | | | | | |
Collapse
|
18
|
Gross E, Kastner DB, Kaiser CA, Fass D. Structure of Ero1p, source of disulfide bonds for oxidative protein folding in the cell. Cell 2004; 117:601-10. [PMID: 15163408 DOI: 10.1016/s0092-8674(04)00418-0] [Citation(s) in RCA: 194] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2003] [Revised: 03/29/2004] [Accepted: 04/01/2004] [Indexed: 11/24/2022]
Abstract
The flavoenzyme Ero1p produces disulfide bonds for oxidative protein folding in the endoplasmic reticulum. Disulfides generated de novo within Ero1p are transferred to protein disulfide isomerase and then to substrate proteins by dithiol-disulfide exchange reactions. Despite this key role of Ero1p, little is known about the mechanism by which this enzyme catalyzes thiol oxidation. Here, we present the X-ray crystallographic structure of Ero1p, which reveals the molecular details of the catalytic center, the role of a CXXCXXC motif, and the spatial relationship between functionally significant cysteines and the bound cofactor. Remarkably, the Ero1p active site closely resembles that of the versatile thiol oxidase module of Erv2p, a protein with no sequence homology to Ero1p. Furthermore, both Ero1p and Erv2p display essential dicysteine motifs on mobile polypeptide segments, suggesting that shuttling electrons to a rigid active site using a flexible strand is a fundamental feature of disulfide-generating flavoenzymes.
Collapse
Affiliation(s)
- Einav Gross
- Department of Structural Biology, Weizmann Institute of Science, Rehovot 76100, Israel
| | | | | | | |
Collapse
|