1. Gou T, Matulis CA, Clark DA. Adaptation to visual sparsity enhances responses to isolated stimuli. Curr Biol 2024; 34:5697-5713.e8. PMID: 39577424. DOI: 10.1016/j.cub.2024.10.053.
Abstract
Sensory systems adapt their response properties to the statistics of their inputs. For instance, visual systems adapt to low-order statistics like mean and variance to encode stimuli efficiently or to facilitate specific downstream computations. However, it remains unclear how other statistical features affect sensory adaptation. Here, we explore how Drosophila's visual motion circuits adapt to stimulus sparsity, a measure of the signal's intermittency not captured by low-order statistics alone. Early visual neurons in both ON and OFF pathways alter their responses dramatically with stimulus sparsity, responding positively to both light and dark sparse stimuli but linearly to dense stimuli. These changes extend to downstream ON and OFF direction-selective neurons, which are activated by sparse stimuli of both polarities but respond with opposite signs to light and dark regions of dense stimuli. Thus, sparse stimuli activate both ON and OFF pathways, recruiting a larger fraction of the circuit and potentially enhancing the salience of isolated stimuli. Overall, our results reveal visual response properties that increase the fraction of the circuit responding to sparse, isolated stimuli.
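For illustration (an editor's sketch, not a construction from the paper): sparsity is a higher-order property, so two stimuli can share mean and variance while differing sharply in intermittency. A minimal NumPy check, using kurtosis as one convenient higher-order summary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Dense stimulus: Gaussian contrast at every time point.
dense = rng.normal(0.0, 1.0, size=n)

# Sparse stimulus: mostly zero, with rare impulses of either polarity,
# scaled so that its mean and variance match the dense stimulus.
p = 0.01                                       # fraction of samples carrying an impulse
sparse = rng.choice([-1.0, 1.0], size=n) * (rng.random(n) < p) / np.sqrt(p)

for name, s in [("dense", dense), ("sparse", sparse)]:
    kurtosis = np.mean((s - s.mean()) ** 4) / s.var() ** 2
    print(f"{name}: mean={s.mean():+.3f}, var={s.var():.3f}, kurtosis={kurtosis:.1f}")
# Means and variances match, but kurtosis is ~3 for the dense stimulus and
# ~1/p = 100 for the sparse one, capturing its intermittency.
```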
Affiliation(s)
- Tong Gou
- Department of Electrical Engineering, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Department of Physics, Yale University, New Haven, CT 06511, USA; Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Quantitative Biology Institute, Yale University, New Haven, CT 06511, USA; Wu Tsai Institute, Yale University, New Haven, CT 06511, USA.

2. Wu N, Zhou B, Agrochao M, Clark DA. Broken time reversal symmetry in visual motion detection. bioRxiv 2024:2024.06.08.598068. PMID: 38915608. PMCID: PMC11195140. DOI: 10.1101/2024.06.08.598068.
Abstract
Our intuition suggests that when a movie is played in reverse, our perception of motion in the reversed movie will be perfectly inverted compared to the original. This intuition is also reflected in many classical theoretical and practical models of motion detection. However, here we demonstrate that this symmetry of motion perception upon time reversal is often broken in real visual systems. In this work, we designed a set of visual stimuli to investigate how stimulus symmetries affect time reversal symmetry breaking in the fruit fly Drosophila's well-studied optomotor rotation behavior. We discovered a suite of new stimuli with a wide variety of different properties that can lead to broken time reversal symmetries in fly behavioral responses. We then trained neural network models to predict the velocity of scenes with both natural and artificial contrast distributions. Training with naturalistic contrast distributions yielded models that break time reversal symmetry, even when the training data was time reversal symmetric. We show analytically and numerically that the breaking of time reversal symmetry in the model responses can arise from contrast asymmetry in the training data, but can also arise from other features of the contrast distribution. Furthermore, shallower neural network models can exhibit stronger symmetry breaking than deeper ones, suggesting that less flexible neural networks promote some forms of time reversal symmetry breaking. Overall, these results reveal a surprising feature of biological motion detectors and suggest that it could arise from constrained optimization in natural environments.
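To make the opening intuition concrete: for a classical pairwise correlator, the time-averaged response to a reversed movie is, up to edge effects, the negative of the response to the original. A minimal check with an assumed Hassenstein-Reichardt-style model and illustrative filter parameters (not the models or stimuli from this paper):

```python
import numpy as np

def hrc_mean_response(stim, tau=10.0):
    """Mean output of a simple Hassenstein-Reichardt correlator.
    stim: contrast movie of shape (time, space)."""
    t = np.arange(stim.shape[0])
    lp = np.exp(-t / tau)
    lp /= lp.sum()                                            # first-order low-pass (delay) filter
    delayed = np.apply_along_axis(lambda s: np.convolve(s, lp)[:len(s)], 0, stim)
    out = delayed[:, :-1] * stim[:, 1:] - stim[:, :-1] * delayed[:, 1:]
    return out.mean()

rng = np.random.default_rng(1)
pattern = rng.normal(size=200)
movie = np.stack([np.roll(pattern, t) for t in range(1000)])  # pattern drifting in one direction

print(hrc_mean_response(movie), hrc_mean_response(movie[::-1]))
# For this pairwise model the two values are approximately equal and opposite;
# the paper's point is that real fly responses often break this symmetry.
```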
Affiliation(s)
- Nathan Wu
- Yale College, New Haven, CT 06511, USA
- Baohua Zhou
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Margarida Agrochao
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Damon A. Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Department of Physics, Yale University, New Haven, CT 06511, USA
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Quantitative Biology Institute, Yale University, New Haven, CT 06511, USA
- Wu Tsai Institute, Yale University, New Haven, CT 06511, USA

3. Jing T, Shan Z, Dinh T, Biswas A, Jang S, Greenwood J, Li M, Zhang Z, Gray G, Shin HJ, Zhou B, Passos D, Aiyer S, Li Z, Craigie R, Engelman AN, Kvaratskhelia M, Lyumkis D. Oligomeric HIV-1 Integrase Structures Reveal Functional Plasticity for Intasome Assembly and RNA Binding. bioRxiv 2024:2024.01.26.577436. PMID: 38328132. PMCID: PMC10849644. DOI: 10.1101/2024.01.26.577436.
Abstract
Integrase (IN) performs dual essential roles during HIV-1 replication. During ingress, IN functions within an oligomeric "intasome" assembly to catalyze viral DNA integration into host chromatin. During late stages of infection, tetrameric IN binds viral RNA and orchestrates the condensation of ribonucleoprotein complexes into the capsid core. The molecular architectures of HIV-1 IN assemblies that mediate these distinct events remain unknown. Furthermore, the tetramer is an important antiviral target for allosteric IN inhibitors. Here, we determined cryo-EM structures of wildtype HIV-1 IN tetramers and intasome hexadecamers. Our structures unveil a remarkable plasticity that leverages IN C-terminal domains and abutting linkers to assemble functionally distinct oligomeric forms. Alteration of a newly recognized conserved interface revealed that both IN functions track with tetramerization in vitro and during HIV-1 infection. Collectively, our findings reveal how IN plasticity orchestrates its diverse molecular functions, suggest a working model for IN-viral RNA binding, and provide atomic blueprints for allosteric IN inhibitor development.
Affiliation(s)
- Tao Jing
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Zelin Shan
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Tung Dinh
- Division of Infectious Diseases, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Avik Biswas
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Sooin Jang
- Department of Cancer Immunology and Virology, Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Department of Medicine, Harvard Medical School, Boston, MA 02115, USA
- Juliet Greenwood
- Department of Cancer Immunology and Virology, Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Min Li
- National Institutes of Health, National Institute of Diabetes and Digestive Diseases, Bethesda, MD, 20892, USA
- Zeyuan Zhang
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Gennavieve Gray
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Hye Jeong Shin
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Bo Zhou
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Dario Passos
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Sriram Aiyer
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Zhen Li
- Department of Cancer Immunology and Virology, Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Robert Craigie
- National Institutes of Health, National Institute of Diabetes and Digestive Diseases, Bethesda, MD, 20892, USA
- Alan N. Engelman
- Department of Cancer Immunology and Virology, Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Department of Medicine, Harvard Medical School, Boston, MA 02115, USA
- Mamuka Kvaratskhelia
- Division of Infectious Diseases, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Dmitry Lyumkis
- The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
- Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, CA, 92037, USA
- Graduate School of Biological Sciences, Section of Molecular Biology, University of California San Diego, La Jolla, CA 92093, USA

4. Zhao J, Xie Q, Shuang F, Yue S. An Angular Acceleration Based Looming Detector for Moving UAVs. Biomimetics (Basel) 2024; 9:22. PMID: 38248596. PMCID: PMC11154257. DOI: 10.3390/biomimetics9010022.
Abstract
Visual perception equips unmanned aerial vehicles (UAVs) with increasingly comprehensive and instant environmental perception, rendering it a crucial technology in intelligent UAV obstacle avoidance. However, the rapid movements of UAVs cause significant changes in the field of view, affecting the algorithms' ability to extract the visual features of collisions accurately. As a result, algorithms suffer from a high rate of false alarms and a delay in warning time. During the study of visual field angle curves of different orders, it was found that the peak times of the curves of higher-order information on the angular size of looming objects are linearly related to the time to collision (TTC) and occur before collisions. This discovery implies that encoding higher-order information on the angular size could resolve the issue of response lag. Furthermore, the fact that the image of a looming object adjusts to meet several looming visual cues compared to the background interference implies that integrating various field-of-view characteristics will likely enhance the model's resistance to motion interference. Therefore, this paper presents a concise A-LGMD model for detecting looming objects. The model is based on image angular acceleration and addresses problems related to imprecise feature extraction and insufficient time series modeling to enhance the model's ability to rapidly and precisely detect looming objects during the rapid self-motion of UAVs. The model draws inspiration from the lobula giant movement detector (LGMD), which shows high sensitivity to acceleration information. In the proposed model, higher-order information on the angular size is abstracted by the network and fused with multiple visual field angle characteristics to promote the selective response to looming objects. Experiments carried out on synthetic and real-world datasets reveal that the model can efficiently detect the angular acceleration of an image, filter out insignificant background motion, and provide early warnings. These findings indicate that the model could have significant potential in embedded collision detection systems of micro or small UAVs.
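The geometric point about higher-order angular-size information can be checked directly. For an object of half-size l approaching at constant speed v, the subtended angle is theta(tau) = 2*arctan(l/(v*tau)) with tau the time to collision; the angular velocity keeps growing until impact, whereas the angular acceleration peaks earlier, at tau = l/(sqrt(3)*v). A numerical sketch with assumed values (this illustrates the idealized geometry only, not the A-LGMD network):

```python
import numpy as np

l, v = 0.5, 10.0                              # half-size (m) and approach speed (m/s), assumed values
tau = np.linspace(3.0, 1e-3, 200_000)         # time to collision, shrinking toward impact
t = tau[0] - tau                              # elapsed time

theta = 2 * np.arctan(l / (v * tau))          # angular size of the looming object
theta_dot = np.gradient(theta, t)             # angular velocity
theta_ddot = np.gradient(theta_dot, t)        # angular acceleration

print("angular velocity still rising at the last sample:", np.argmax(theta_dot) == len(t) - 1)
print(f"angular acceleration peaks at tau = {tau[np.argmax(theta_ddot)]:.4f} s "
      f"(theory: l/(sqrt(3)*v) = {l / (np.sqrt(3) * v):.4f} s)")
```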
Affiliation(s)
- Jiannan Zhao
- Guangxi Key Laboratory of Intelligent Control and Maintenance of Power Equipment, School of Electrical Engineering, Guangxi University, Nanning 530004, China; (J.Z.); (Q.X.)
- Quansheng Xie
- Guangxi Key Laboratory of Intelligent Control and Maintenance of Power Equipment, School of Electrical Engineering, Guangxi University, Nanning 530004, China; (J.Z.); (Q.X.)
- Feng Shuang
- Guangxi Key Laboratory of Intelligent Control and Maintenance of Power Equipment, School of Electrical Engineering, Guangxi University, Nanning 530004, China; (J.Z.); (Q.X.)
- Shigang Yue
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK;

5. Chen J, Gish CM, Fransen JW, Salazar-Gatzimas E, Clark DA, Borghuis BG. Direct comparison reveals algorithmic similarities in fly and mouse visual motion detection. iScience 2023; 26:107928. PMID: 37810236. PMCID: PMC10550730. DOI: 10.1016/j.isci.2023.107928.
Abstract
Evolution has equipped vertebrates and invertebrates with neural circuits that selectively encode visual motion. While similarities in the computations performed by these circuits in mouse and fruit fly have been noted, direct experimental comparisons have been lacking. Because molecular mechanisms and neuronal morphology in the two species are distinct, we directly compared motion encoding in these two species at the algorithmic level, using matched stimuli and focusing on a pair of analogous neurons, the mouse ON starburst amacrine cell (ON SAC) and Drosophila T4 neurons. We find that the cells share similar spatiotemporal receptive field structures, sensitivity to spatiotemporal correlations, and tuning to sinusoidal drifting gratings, but differ in their responses to apparent motion stimuli. Both neuron types showed a response to summed sinusoids that deviates from models for motion processing in these cells, underscoring the similarities in their processing and identifying response features that remain to be explained.
Affiliation(s)
- Juyue Chen
- Interdepartmental Neurosciences Program, Yale University, New Haven, CT 06511, USA
- Caitlin M Gish
- Department of Physics, Yale University, New Haven, CT 06511, USA
- James W Fransen
- Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, KY 40202, USA
- Damon A Clark
- Interdepartmental Neurosciences Program, Yale University, New Haven, CT 06511, USA
- Department of Physics, Yale University, New Haven, CT 06511, USA
- Department of Molecular, Cellular, Developmental Biology, Yale University, New Haven, CT 06511, USA
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Bart G Borghuis
- Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, KY 40202, USA

6. Cai LT, Krishna VS, Hladnik TC, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Arrenberg AB, Thiele TR, Cooper EA. Spatiotemporal visual statistics of aquatic environments in the natural habitats of zebrafish. Sci Rep 2023; 13:12028. PMID: 37491571. PMCID: PMC10368656. DOI: 10.1038/s41598-023-36099-z.
Abstract
Animal sensory systems are tightly adapted to the demands of their environment. In the visual domain, research has shown that many species have circuits and systems that exploit statistical regularities in natural visual signals. The zebrafish is a popular model animal in visual neuroscience, but relatively little quantitative data is available about the visual properties of the aquatic habitats where zebrafish reside, as compared to terrestrial environments. Improving our understanding of the visual demands of the aquatic habitats of zebrafish can enhance the insights about sensory neuroscience yielded by this model system. We analyzed a video dataset of zebrafish habitats captured by a stationary camera and compared this dataset to videos of terrestrial scenes in the same geographic area. Our analysis of the spatiotemporal structure in these videos suggests that zebrafish habitats are characterized by low visual contrast and strong motion when compared to terrestrial environments. Similar to terrestrial environments, zebrafish habitats tended to be dominated by dark contrasts, particularly in the lower visual field. We discuss how these properties of the visual environment can inform the study of zebrafish visual behavior and neural processing and, by extension, can inform our understanding of the vertebrate brain.
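As a minimal sketch of the kind of low-order statistics compared here (an illustrative computation, not the paper's pipeline), the function below estimates per-frame RMS contrast and contrast skewness, with negative skew indicating dominance of dark contrasts; the placeholder frames would be replaced by frames from the habitat and terrestrial videos:

```python
import numpy as np

def frame_stats(frame):
    """RMS contrast and skewness of contrast for a single luminance frame."""
    contrast = (frame - frame.mean()) / frame.mean()       # contrast relative to the frame mean
    rms = np.sqrt(np.mean(contrast ** 2))
    skew = np.mean(contrast ** 3) / np.mean(contrast ** 2) ** 1.5
    return rms, skew

# Placeholder frames (random); in practice, load frames from a video here.
rng = np.random.default_rng(2)
frames = rng.gamma(shape=2.0, scale=0.5, size=(10, 128, 128))

stats = np.array([frame_stats(f) for f in frames])
print("mean RMS contrast:      ", stats[:, 0].mean())
print("mean contrast skewness: ", stats[:, 1].mean())
```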
Affiliation(s)
- Lanya T Cai
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, CA, USA
- Venkatesh S Krishna
- Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada
- Tim C Hladnik
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, Tübingen, Germany
- Graduate Training Centre for Neuroscience, University of Tübingen, Tübingen, Germany
- Nicholas C Guilbeault
- Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Chinnian Vijayakumar
- Department of Zoology, St. Andrew's College, Gorakhpur, Uttar Pradesh, India
- Muthukumarasamy Arunachalam
- Department of Zoology, School of Biological Sciences, Central University of Kerala, Kasaragod, Kerala, India
- Centre for Inland Fishes and Conservation, St. Andrew's College, Gorakhpur, Uttar Pradesh, India
- Scott A Juntti
- Department of Biology, University of Maryland, College Park, MD, USA
- Aristides B Arrenberg
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, Tübingen, Germany
- Tod R Thiele
- Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada.
- Department of Cell and Systems Biology, University of Toronto, Toronto, Canada.
- Emily A Cooper
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, CA, USA.
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA.

7. Egelhaaf M. Optic flow based spatial vision in insects. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2023. PMID: 36609568. DOI: 10.1007/s00359-022-01610-w.
Abstract
The optic flow, i.e., the displacement of retinal images of objects in the environment induced by self-motion, is an important source of spatial information, especially for fast-flying insects. Spatial information over a wide range of distances, from the animal's immediate surroundings over several hundred metres to kilometres, is necessary for mediating behaviours, such as landing manoeuvres, collision avoidance in spatially complex environments, learning environmental object constellations and path integration in spatial navigation. To facilitate the processing of spatial information, the complexity of the optic flow is often reduced by active vision strategies. These result in translations and rotations being largely separated by a saccadic flight and gaze mode. Only the translational components of the optic flow contain spatial information. In the first step of optic flow processing, an array of local motion detectors provides a retinotopic spatial proximity map of the environment. This local motion information is then processed in parallel neural pathways in a task-specific manner and used to control the different components of spatial behaviour. A particular challenge here is that the distance information extracted from the optic flow does not represent the distances unambiguously, but these are scaled by the animal's speed of locomotion. Possible ways of coping with this ambiguity are discussed.
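The closing ambiguity can be stated compactly with the standard relation for translational flow (a textbook result, not specific to this review): an object at distance d and angle psi from the direction of travel generates angular velocity (v/d)*sin(psi), so doubling locomotion speed v doubles every local flow value, and flow alone specifies only distance scaled by speed. A small numerical illustration with arbitrary values:

```python
import numpy as np

d = np.array([0.5, 1.0, 2.0, 4.0])         # object distances (m), arbitrary
psi = np.deg2rad(90.0)                     # viewing direction perpendicular to travel
v_slow, v_fast = 0.5, 1.0                  # two locomotion speeds (m/s), arbitrary

flow_slow = v_slow / d * np.sin(psi)       # translational optic flow (rad/s)
flow_fast = v_fast / d * np.sin(psi)
print(flow_fast / flow_slow)               # same spatial pattern, uniformly scaled by v_fast/v_slow
```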
Affiliation(s)
- Martin Egelhaaf
- Neurobiology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Universitätsstraße 25, 33615, Bielefeld, Germany.

8. Dewell RB, Zhu Y, Eisenbrandt M, Morse R, Gabbiani F. Contrast polarity-specific mapping improves efficiency of neuronal computation for collision detection. eLife 2022; 11:e79772. PMID: 36314775. PMCID: PMC9674337. DOI: 10.7554/elife.79772.
Abstract
Neurons receive information through their synaptic inputs, but the functional significance of how those inputs are mapped on to a cell's dendrites remains unclear. We studied this question in a grasshopper visual neuron that tracks approaching objects and triggers escape behavior before an impending collision. In response to black approaching objects, the neuron receives OFF excitatory inputs that form a retinotopic map of the visual field onto compartmentalized, distal dendrites. Subsequent processing of these OFF inputs by active membrane conductances allows the neuron to discriminate the spatial coherence of such stimuli. In contrast, we show that ON excitatory synaptic inputs activated by white approaching objects map in a random manner onto a more proximal dendritic field of the same neuron. The lack of retinotopic synaptic arrangement results in the neuron's inability to discriminate the coherence of white approaching stimuli. Yet, the neuron retains the ability to discriminate stimulus coherence for checkered stimuli of mixed ON/OFF polarity. The coarser mapping and processing of ON stimuli thus has a minimal impact, while reducing the total energetic cost of the circuit. Further, we show that these differences in ON/OFF neuronal processing are behaviorally relevant, being tightly correlated with the animal's escape behavior to light and dark stimuli of variable coherence. Our results show that the synaptic mapping of excitatory inputs affects the fine stimulus discrimination ability of single neurons and document the resulting functional impact on behavior.
Affiliation(s)
- Ying Zhu
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Fabrizio Gabbiani
- Department of Neuroscience, Baylor College of Medicine, Houston, United States

9. Zhou B, Li Z, Kim S, Lafferty J, Clark DA. Shallow neural networks trained to detect collisions recover features of visual loom-selective neurons. eLife 2022; 11:e72067. PMID: 35023828. PMCID: PMC8849349. DOI: 10.7554/elife.72067.
Abstract
Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically-constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli, and reproduces canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically-constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
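A schematic of the kind of unit discussed here (an editor's sketch, not the trained models from the paper): a single loom-detector-like unit pools rectified outward (radial) motion with excitation and inward motion with inhibition over its visual field. With equal weights, an expanding flow field drives it strongly while pure translation roughly cancels:

```python
import numpy as np

def loom_unit(vx, vy, w_exc=1.0, w_inh=1.0):
    """Schematic loom detector: excited by outward radial motion, inhibited by inward motion."""
    n = vx.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.sqrt(x ** 2 + y ** 2) + 1e-9
    radial = (vx * x + vy * y) / r                          # outward component of the flow field
    drive = w_exc * np.maximum(radial, 0) - w_inh * np.maximum(-radial, 0)
    return max(drive.mean(), 0.0)                           # rectified, pooled response

n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
print("expanding flow:  ", loom_unit(vx=x, vy=y))                                # strong response
print("translating flow:", loom_unit(vx=np.ones((n, n)), vy=np.zeros((n, n))))   # near zero
```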
Affiliation(s)
- Baohua Zhou
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- Zifan Li
- Department of Statistics and Data Science, Yale University, New Haven, United States
- Sunnie Kim
- Department of Statistics and Data Science, Yale University, New Haven, United States
- John Lafferty
- Department of Statistics and Data Science, Yale University, New Haven, United States
- Damon A Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States

10. James JV, Cazzolato BS, Grainger S, Wiederman SD. Nonlinear, neuronal adaptation in insect vision models improves target discrimination within repetitively moving backgrounds. Bioinspir Biomim 2021; 16:066015. PMID: 34555824. DOI: 10.1088/1748-3190/ac2988.
Abstract
Neurons which respond selectively to small moving targets, even against a cluttered background, have been identified in several insect species. To investigate what underlies these robust and highly selective responses, researchers have probed the neuronal circuitry in target-detecting, visual pathways. Observations in flies reveal nonlinear adaptation over time, composed of a fast onset and gradual decay. This adaptive processing is seen in both of the independent, parallel pathways encoding either luminance increments (ON channel) or decrements (OFF channel). The functional significance of this adaptive phenomenon has not been determined from physiological studies, though the asymmetrical time course suggests a role in suppressing responses to repetitive stimuli. We tested this possibility by comparing an implementation of fast adaptation against alternatives, using a model of insect 'elementary small target motion detectors'. We conducted target-detecting simulations on various natural backgrounds, that were shifted via several movement profiles (and target velocities). Using performance metrics, we confirmed that the fast adaptation observed in neuronal systems enhances target detection against a repetitively moving background. Such background movement would be encountered via natural ego-motion as the insect travels through the world. These findings show that this form of nonlinear, fast-adaptation (suitably implementable via cellular biophysics) plays a role analogous to background subtraction techniques in conventional computer vision.
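A minimal sketch of the fast-onset, slow-decay adaptation described here, with illustrative time constants rather than the parameters of the published ESTMD models: the adaptation state tracks rises in the input quickly and decays slowly, so a repetitive background is progressively suppressed while a newly appearing target still produces a large transient:

```python
import numpy as np

def fast_adapt(signal, dt=1e-3, tau_on=5e-3, tau_off=200e-3):
    """Nonlinear adaptation with fast onset and slow decay; returns input minus adaptation state."""
    a = 0.0
    out = np.empty_like(signal)
    for i, s in enumerate(signal):
        tau = tau_on if s > a else tau_off          # asymmetric time constant
        a += (s - a) * dt / tau
        out[i] = s - a
    return out

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
background = 0.5 * (np.sin(2 * np.pi * 5 * t) > 0)          # repetitive 5 Hz background modulation
target = 1.0 * (np.abs(t - 1.5) < 0.01)                     # brief novel target at t = 1.5 s
resp = fast_adapt(background + target, dt)

print("peak response to repeated background:", resp[(t > 0.5) & (t < 1.4)].max())
print("peak response to novel target:       ", resp[np.abs(t - 1.5) < 0.05].max())
```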
Affiliation(s)
- John V James
- School of Mechanical Engineering, University of Adelaide, Adelaide SA, Australia
- Adelaide Medical School, University of Adelaide, Adelaide SA, Australia
- Benjamin S Cazzolato
- School of Mechanical Engineering, University of Adelaide, Adelaide SA, Australia
- Steven Grainger
- School of Mechanical Engineering, University of Adelaide, Adelaide SA, Australia

11. Mano O, Creamer MS, Badwan BA, Clark DA. Predicting individual neuron responses with anatomically constrained task optimization. Curr Biol 2021; 31:4062-4075.e4. PMID: 34324832. DOI: 10.1016/j.cub.2021.06.090.
Abstract
Artificial neural networks trained to solve sensory tasks can develop statistical representations that match those in biological circuits. However, it remains unclear whether they can reproduce properties of individual neurons. Here, we investigated how artificial networks predict individual neuron properties in the visual motion circuits of the fruit fly Drosophila. We trained anatomically constrained networks to predict movement in natural scenes, solving the same inference problem as fly motion detectors. Units in the artificial networks adopted many properties of analogous individual neurons, even though they were not explicitly trained to match these properties. Among these properties was the split into ON and OFF motion detectors, which is not predicted by classical motion detection models. The match between model and neurons was closest when models were trained to be robust to noise. These results demonstrate how anatomical, task, and noise constraints can explain properties of individual neurons in a small neural network.
Affiliation(s)
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.

12. Predictive encoding of motion begins in the primate retina. Nat Neurosci 2021; 24:1280-1291. PMID: 34341586. PMCID: PMC8728393. DOI: 10.1038/s41593-021-00899-1.
Abstract
Predictive motion encoding is an important aspect of visually guided behavior that allows animals to estimate the trajectory of moving objects. Motion prediction is understood primarily in the context of translational motion, but the environment contains other types of behaviorally salient motion correlation such as those produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these correlations remain unclear. We report here that four of the parallel output pathways in the primate retina encode predictive motion information, and this encoding occurs for several classes of spatiotemporal correlation that are found in natural vision. Such predictive coding can be explained by known nonlinear circuit mechanisms that produce a nearly optimal encoding, with transmitted information approaching the theoretical limit imposed by the stimulus itself. Thus, these neural circuit mechanisms efficiently separate predictive information from nonpredictive information during the encoding process.

13. Kaushik PK, Olsson SB. Using virtual worlds to understand insect navigation for bio-inspired systems. Curr Opin Insect Sci 2020; 42:97-104. PMID: 33010476. DOI: 10.1016/j.cois.2020.09.010.
Abstract
Insects perform a wide array of intricate behaviors over large spatial and temporal scales in complex natural environments. A mechanistic understanding of insect cognition has direct implications for how brains integrate multimodal information and can inspire bio-based solutions for autonomous robots. Virtual Reality (VR) offers an opportunity to assess insect neuroethology while presenting complex, yet controlled, stimuli. Here, we discuss the use of insects as inspiration for artificial systems, recent advances in different VR technologies, current knowledge gaps, and the potential for application of insect VR research to bio-inspired robots. Finally, we advocate the need to diversify our model organisms and behavioral paradigms and to embrace the complexity of the natural world. This will help us to uncover the proximate and ultimate basis of brain and behavior and extract general principles for common challenging problems.
Affiliation(s)
- Pavan Kumar Kaushik
- National Centre for Biological Sciences, Tata Institute of Fundamental Research, GKVK Campus, Bellary Road, Bengaluru, 560064, India.
- Shannon B Olsson
- National Centre for Biological Sciences, Tata Institute of Fundamental Research, GKVK Campus, Bellary Road, Bengaluru, 560064, India.

14. Agrochao M, Tanaka R, Salazar-Gatzimas E, Clark DA. Mechanism for analogous illusory motion perception in flies and humans. Proc Natl Acad Sci U S A 2020; 117:23044-23053. PMID: 32839324. PMCID: PMC7502748. DOI: 10.1073/pnas.2002937117.
Abstract
Visual motion detection is one of the most important computations performed by visual circuits. Yet, we perceive vivid illusory motion in stationary, periodic luminance gradients that contain no true motion. This illusion is shared by diverse vertebrate species, but theories proposed to explain this illusion have remained difficult to test. Here, we demonstrate that in the fruit fly Drosophila, the illusory motion percept is generated by unbalanced contributions of direction-selective neurons' responses to stationary edges. First, we found that flies, like humans, perceive sustained motion in the stationary gradients. The percept was abolished when the elementary motion detector neurons T4 and T5 were silenced. In vivo calcium imaging revealed that T4 and T5 neurons encode the location and polarity of stationary edges. Furthermore, our proposed mechanistic model allowed us to predictably manipulate both the magnitude and direction of the fly's illusory percept by selectively silencing either T4 or T5 neurons. Interestingly, human brains possess the same mechanistic ingredients that drive our model in flies. When we adapted human observers to moving light edges or dark edges, we could manipulate the magnitude and direction of their percepts as well, suggesting that mechanisms similar to the fly's may also underlie this illusion in humans. By taking a comparative approach that exploits Drosophila neurogenetics, our results provide a causal, mechanistic account for a long-known visual illusion. These results argue that this illusion arises from architectures for motion detection that are shared across phyla.
Affiliation(s)
- Margarida Agrochao
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Ryosuke Tanaka
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511
- Damon A Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511;
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511
- Department of Physics, Yale University, New Haven, CT 06511
- Department of Neuroscience, Yale University, New Haven, CT 06511

15. Zavatone-Veth JA, Badwan BA, Clark DA. A minimal synaptic model for direction selective neurons in Drosophila. J Vis 2020; 20:2. PMID: 32040161. PMCID: PMC7343402. DOI: 10.1167/jov.20.2.2.
Abstract
Visual motion estimation is a canonical neural computation. In Drosophila, recent advances have identified anatomic and functional circuitry underlying direction-selective computations. Models with varying levels of abstraction have been proposed to explain specific experimental results but have rarely been compared across experiments. Here we use the wealth of available anatomical and physiological data to construct a minimal, biophysically inspired synaptic model for Drosophila’s first-order direction-selective T4 cells. We show how this model relates mathematically to classical models of motion detection, including the Hassenstein-Reichardt correlator model. We used numerical simulation to test how well this synaptic model could reproduce measurements of T4 cells across many datasets and stimulus modalities. These comparisons include responses to sinusoid gratings, to apparent motion stimuli, to stochastic stimuli, and to natural scenes. Without fine-tuning this model, it sufficed to reproduce many, but not all, response properties of T4 cells. Since this model is flexible and based on straightforward biophysical properties, it provides an extensible framework for developing a mechanistic understanding of T4 neural response properties. Moreover, it can be used to assess the sufficiency of simple biophysical mechanisms to describe features of the direction-selective computation and identify where our understanding must be improved.
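A rough sketch of the ingredients such a synaptic model combines; the filters, weights, and reversal potentials below are illustrative assumptions rather than the fitted model from the paper. Three spatially offset inputs drive excitatory and inhibitory conductances that interact multiplicatively through the membrane equation, which is one way a biophysical mechanism can implement a correlation-like, direction-selective computation:

```python
import numpy as np

def lowpass(x, tau, dt=1e-2):
    """First-order low-pass filter along the time axis (axis 0)."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + (x[i - 1] - y[i - 1]) * dt / tau
    return y

def t4_like_response(stim, dt=1e-2):
    """Conductance-based sketch of a three-input direction-selective unit.
    stim: (time, space) contrast movie; returns the mean membrane potential."""
    rect = np.maximum(stim, 0)
    g_exc = lowpass(rect[:, 1:-1], tau=0.05, dt=dt)      # central, delayed excitation
    g_inh_lead = 1.0 * rect[:, 2:]                       # fast inhibition from one flank
    g_inh_trail = 0.5 * rect[:, :-2]                     # weaker inhibition from the other flank
    E_exc, E_inh, g_leak = 60.0, -30.0, 1.0
    v = (g_exc * E_exc + (g_inh_lead + g_inh_trail) * E_inh) / (
        g_leak + g_exc + g_inh_lead + g_inh_trail)
    return v.mean()

rng = np.random.default_rng(3)
bar = (rng.random(40) < 0.2).astype(float)
movie = np.stack([np.roll(bar, s) for s in range(400)])      # pattern drifting in one direction
print(t4_like_response(movie), t4_like_response(movie[:, ::-1]))
# The two opposite drift directions give different mean depolarizations: direction selectivity.
```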

16. Yildizoglu T, Riegler C, Fitzgerald JE, Portugues R. A Neural Representation of Naturalistic Motion-Guided Behavior in the Zebrafish Brain. Curr Biol 2020; 30:2321-2333.e6. PMID: 32386533. DOI: 10.1016/j.cub.2020.04.043.
Abstract
All animals must transform ambiguous sensory data into successful behavior. This requires sensory representations that accurately reflect the statistics of natural stimuli and behavior. Multiple studies show that visual motion processing is tuned for accuracy under naturalistic conditions, but the sensorimotor circuits extracting these cues and implementing motion-guided behavior remain unclear. Here we show that the larval zebrafish retina extracts a diversity of naturalistic motion cues, and the retinorecipient pretectum organizes these cues around the elements of behavior. We find that higher-order motion stimuli, gliders, induce optomotor behavior matching expectations from natural scene analyses. We then image activity of retinal ganglion cell terminals and pretectal neurons. The retina exhibits direction-selective responses across glider stimuli, and anatomically clustered pretectal neurons respond with magnitudes matching behavior. Peripheral computations thus reflect natural input statistics, whereas central brain activity precisely codes information needed for behavior. This general principle could organize sensorimotor transformations across animal species.
Affiliation(s)
- Tugce Yildizoglu
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany
- Clemens Riegler
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Neurobiology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
- Ruben Portugues
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany; Institute of Neuroscience, Technical University of Munich, Munich 80802, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich 80802, Germany.

17. Matulis CA, Chen J, Gonzalez-Suarez AD, Behnia R, Clark DA. Heterogeneous Temporal Contrast Adaptation in Drosophila Direction-Selective Circuits. Curr Biol 2020; 30:222-236.e6. PMID: 31928874. PMCID: PMC7003801. DOI: 10.1016/j.cub.2019.11.077.
Abstract
In visual systems, neurons adapt both to the mean light level and to the range of light levels, or the contrast. Contrast adaptation has been studied extensively, but it remains unclear how it is distributed among neurons in connected circuits, and how early adaptation affects subsequent computations. Here, we investigated temporal contrast adaptation in neurons across Drosophila's visual motion circuitry. Several ON-pathway neurons showed strong adaptation to changes in contrast over time. One of these neurons, Mi1, showed almost complete adaptation on fast timescales, and experiments ruled out several potential mechanisms for its adaptive properties. When contrast adaptation reduced the gain in ON-pathway cells, it was accompanied by decreased motion responses in downstream direction-selective cells. Simulations show that contrast adaptation can substantially improve motion estimates in natural scenes. The benefits are larger for ON-pathway adaptation, which helps explain the heterogeneous distribution of contrast adaptation in these circuits.
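A minimal sketch of divisive temporal contrast adaptation of the kind probed here; the running-variance estimator and time constant are illustrative assumptions, not the mechanism identified for Mi1. Gain is set by a running estimate of the stimulus standard deviation, so the output range changes far less than the input contrast when the contrast switches:

```python
import numpy as np

def contrast_adapted(stim, dt=1e-3, tau=0.2, sigma0=0.05):
    """Divisive gain control: divide the input by a running estimate of its own contrast."""
    var = sigma0 ** 2
    out = np.empty_like(stim)
    for i, s in enumerate(stim):
        var += (s ** 2 - var) * dt / tau        # running estimate of stimulus variance
        out[i] = s / np.sqrt(var + sigma0 ** 2)
    return out

rng = np.random.default_rng(4)
low = rng.normal(0.0, 0.1, 2000)               # 2 s of low-contrast stimulation
high = rng.normal(0.0, 0.5, 2000)              # followed by 2 s of high contrast
resp = contrast_adapted(np.concatenate([low, high]))

print("response s.d., low-contrast epoch: ", resp[:2000].std())
print("response s.d., high-contrast epoch:", resp[3000:].std())   # similar after adaptation
```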
Affiliation(s)
- Catherine A Matulis
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA
- Rudy Behnia
- Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Damon A Clark
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, 260 Whitney Avenue, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, 333 Cedar Street, New Haven, CT 06510, USA.

18. Chen J, Mandel HB, Fitzgerald JE, Clark DA. Asymmetric ON-OFF processing of visual motion cancels variability induced by the structure of natural scenes. eLife 2019; 8:e47579. PMID: 31613221. PMCID: PMC6884396. DOI: 10.7554/elife.47579.
Abstract
Animals detect motion using a variety of visual cues that reflect regularities in the natural world. Experiments in animals across phyla have shown that motion percepts incorporate both pairwise and triplet spatiotemporal correlations that could theoretically benefit motion computation. However, it remains unclear how visual systems assemble these cues to build accurate motion estimates. Here, we used systematic behavioral measurements of fruit fly motion perception to show how flies combine local pairwise and triplet correlations to reduce variability in motion estimates across natural scenes. By generating synthetic images with statistics controlled by maximum entropy distributions, we show that the triplet correlations are useful only when images have light-dark asymmetries that mimic natural ones. This suggests that asymmetric ON-OFF processing is tuned to the particular statistics of natural scenes. Since all animals encounter the world's light-dark asymmetries, many visual systems are likely to use asymmetric ON-OFF processing to improve motion estimation.
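For concreteness, the local pairwise and triplet correlations referred to here can be estimated from a contrast movie as averages of two- and three-point products across space-time offsets. The offset (one pixel, one frame) and the skewed synthetic scene below are illustrative choices, not the behavioral stimuli from the paper:

```python
import numpy as np

def motion_correlators(c):
    """Pairwise and 'diverging' triplet space-time correlators of a contrast movie
    c with shape (time, space), at an offset of one frame and one pixel."""
    pairwise = np.mean(c[:-1, :-1] * c[1:, 1:])            # two-point correlator
    triplet = np.mean(c[:-1, :-1] ** 2 * c[1:, 1:])        # three-point correlator
    return pairwise, triplet

rng = np.random.default_rng(5)
scene = rng.gamma(shape=2.0, scale=1.0, size=300) - 2.0    # skewed contrasts, mimicking light-dark asymmetry
movie = np.stack([np.roll(scene, t) for t in range(500)])  # rigid translation in one direction

print(motion_correlators(movie))              # both correlators far from zero for this direction
print(motion_correlators(movie[:, ::-1]))     # near zero at this offset for the opposite direction
```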
Affiliation(s)
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Holly B Mandel
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- Department of Physics, Yale University, New Haven, United States
- Department of Neuroscience, Yale University, New Haven, United States