1. Christenson MP, Díez ÁS, Heath SL, Saavedra-Weisenhaus M, Adachi A, Abbott LF, Behnia R. Hue selectivity from recurrent circuitry in Drosophila. bioRxiv 2023:2023.07.12.548573. doi: 10.1101/2023.07.12.548573; PMID: 37502934; PMCID: PMC10369983.
Abstract
A universal principle of sensory perception is the progressive transformation of sensory information from broad non-specific signals to stimulus-selective signals that form the basis of perception. To perceive color, our brains must transform the wavelengths of light reflected off objects into the derived quantities of brightness, saturation and hue. Neurons responding selectively to hue have been reported in primate cortex, but it is unknown how their narrow tuning in color space is produced by upstream circuit mechanisms. To enable circuit-level analysis of color perception, we here report the discovery of neurons in the Drosophila optic lobe with hue-selective properties. Using the connectivity graph of the fly brain, we construct a connectomics-constrained circuit model that accounts for this hue selectivity. Unexpectedly, our model predicts that recurrent connections in the circuit are critical for hue selectivity. Experiments using genetic manipulations to perturb recurrence in adult flies confirm this prediction. Our findings reveal the circuit basis for hue selectivity in color vision.
2.
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
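The relationship between tuning and Fisher information that the review describes can be made concrete with a short numerical sketch. The example below is purely illustrative (not code from the review): it tiles a one-dimensional stimulus axis with Gaussian tuning curves and computes the population Fisher information under independent Poisson noise, I(s) = sum_n f_n'(s)^2 / f_n(s), together with the Cramér-Rao bound on estimator precision that it implies.

```python
import numpy as np

def gaussian_tuning(s, centers, amp=10.0, width=15.0, baseline=0.5):
    """Mean firing rates of a population of Gaussian-tuned neurons."""
    return baseline + amp * np.exp(-0.5 * ((s - centers) / width) ** 2)

def fisher_information(s, centers, ds=1e-3, **kw):
    """Population Fisher information under independent Poisson noise:
    I(s) = sum_n f_n'(s)^2 / f_n(s), with f_n'(s) from central differences."""
    f = gaussian_tuning(s, centers, **kw)
    df = (gaussian_tuning(s + ds, centers, **kw)
          - gaussian_tuning(s - ds, centers, **kw)) / (2 * ds)
    return np.sum(df ** 2 / f)

# 20 neurons whose preferred stimuli tile 0..180 (e.g. orientation in degrees).
centers = np.linspace(0, 180, 20)
for s in (45.0, 90.0):
    I = fisher_information(s, centers)
    # Cramér-Rao: the best achievable unbiased estimator s.d. is 1/sqrt(I).
    print(f"s = {s:5.1f}  Fisher information = {I:6.2f}  "
          f"CR bound on sigma = {1 / np.sqrt(I):.2f} deg")
```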
3. Paiton DM, Frye CG, Lundquist SY, Bowen JD, Zarcone R, Olshausen BA. Selectivity and robustness of sparse coding networks. J Vis 2020;20:10. doi: 10.1167/jov.20.12.10; PMID: 33237290; PMCID: PMC7691792.
Abstract
We investigate how the population nonlinearities resulting from lateral inhibition and thresholding in sparse coding networks influence neural response selectivity and robustness. We show that when compared to pointwise nonlinear models, such population nonlinearities improve the selectivity to a preferred stimulus and protect against adversarial perturbations of the input. These findings are predicted from the geometry of the single-neuron iso-response surface, which provides new insight into the relationship between selectivity and adversarial robustness. Inhibitory lateral connections curve the iso-response surface outward in the direction of selectivity. Since adversarial perturbations are orthogonal to the iso-response surface, adversarial attacks tend to be aligned with directions of selectivity. Consequently, the network is less easily fooled by perceptually irrelevant perturbations to the input. Together, these findings point to benefits of integrating computational principles found in biological vision systems into artificial neural networks.
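The population nonlinearity studied in the paper arises from dynamics of the kind sketched below: a minimal locally competitive algorithm (LCA) for sparse coding, in which units inhibit one another in proportion to the overlap of their dictionary elements. This is an illustration under assumed toy parameters and a random dictionary, not the authors' network.

```python
import numpy as np

def lca_sparse_code(x, D, lam=0.1, step=0.1, n_steps=200):
    """Locally competitive algorithm: units interact through lateral inhibition
    (the -G @ a term), producing a population nonlinearity rather than a
    pointwise one.  D: (n_pixels, n_neurons) dictionary with unit-norm columns."""
    b = D.T @ x                        # feedforward drive
    G = D.T @ D - np.eye(D.shape[1])   # lateral inhibition between overlapping features
    u = np.zeros(D.shape[1])           # membrane potentials
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft-threshold activations
        u += step * (b - u - G @ a)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                     # unit-norm dictionary elements
x = D[:, 3] + 0.05 * rng.standard_normal(64)       # noisy version of one dictionary element
a = lca_sparse_code(x, D)
print("active coefficients:", np.flatnonzero(np.abs(a) > 1e-6))
```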
Affiliation(s)
- Dylan M Paiton
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA; Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA.
- Charles G Frye
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA; Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA.
- Sheng Y Lundquist
- Department of Computer Science, Portland State University, Portland, OR, USA.
- Joel D Bowen
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA.
- Ryan Zarcone
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA; Biophysics, University of California Berkeley, Berkeley, CA, USA.
- Bruno A Olshausen
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA; Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA; Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA.
4. Giraldo LGS, Schwartz O. Integrating flexible normalization into midlevel representations of deep convolutional neural networks. Neural Comput 2019;31:2138-2176. doi: 10.1162/neco_a_01226; PMID: 31525314.
Abstract
Deep convolutional neural networks (CNNs) are becoming increasingly popular models to predict neural responses in visual cortex. However, contextual effects, which are prevalent in neural processing and in perception, are not explicitly handled by current CNNs, including those used for neural prediction. In primary visual cortex, neural responses are modulated by stimuli spatially surrounding the classical receptive field in rich ways. These effects have been modeled with divisive normalization approaches, including flexible models, where spatial normalization is recruited only to the degree that responses from center and surround locations are deemed statistically dependent. We propose a flexible normalization model applied to midlevel representations of deep CNNs as a tractable way to study contextual normalization mechanisms in midlevel cortical areas. This approach captures nontrivial spatial dependencies among midlevel features in CNNs, such as those present in textures and other visual stimuli, that arise from tiling high-order features geometrically. We expect that the proposed approach can make predictions about when spatial normalization might be recruited in midlevel cortical areas. We also expect this approach to be useful as part of the CNN tool kit, therefore going beyond more restrictive fixed forms of normalization.
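For readers unfamiliar with the building block being extended, the sketch below shows a fixed-form spatial divisive normalization applied to a stack of feature maps: each response is divided by pooled squared activity from its spatial surround, summed across channels. The flexible model in the paper goes further by gating the surround contribution according to inferred statistical dependence between center and surround; the array shapes and parameters here are arbitrary stand-ins.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def divisive_normalization(fmaps, surround_size=5, sigma=1.0):
    """Fixed-form spatial divisive normalization of CNN-like feature maps.
    fmaps: (channels, height, width) array of rectified responses.
    Each response is divided by the pooled squared activity in a local
    spatial neighborhood, summed across channels."""
    energy = fmaps ** 2
    # local spatial pooling of squared responses, per channel
    pooled = np.stack([uniform_filter(e, size=surround_size) for e in energy])
    # pool across channels so features suppress one another
    norm_pool = pooled.sum(axis=0, keepdims=True)
    return fmaps / np.sqrt(sigma ** 2 + norm_pool)

rng = np.random.default_rng(1)
fmaps = np.maximum(rng.standard_normal((16, 32, 32)), 0.0)  # toy rectified feature maps
normalized = divisive_normalization(fmaps)
print(f"mean response before: {fmaps.mean():.3f}, after: {normalized.mean():.3f}")
```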
Affiliation(s)
- Odelia Schwartz
- Computer Science Department, University of Miami, Coral Gables, FL 33146, USA.
5. Hansen BC, Field DJ, Greene MR, Olson C, Miskovic V. Towards a state-space geometry of neural responses to natural scenes: a steady-state approach. Neuroimage 2019;201:116027. doi: 10.1016/j.neuroimage.2019.116027; PMID: 31325643.
Abstract
Our understanding of information processing by the mammalian visual system has come through a variety of techniques ranging from psychophysics and fMRI to single-unit recording and EEG. Each technique provides unique insights into the processing framework of the early visual system. Here, we focus on the nature of the information that is carried by steady-state visual evoked potentials (SSVEPs). To study the information provided by SSVEPs, we presented human participants with a population of natural scenes and measured the relative SSVEP response. Rather than focus on particular features of this signal, we considered the full state-space of possible responses and investigated how the evoked responses are mapped onto this space. Our results show that it is possible to map the relatively high-dimensional signal carried by SSVEPs onto a two-dimensional space with little loss. We also show that a simple biologically plausible model can account for a high proportion of the explainable variance (~73%) in that space. Finally, we describe a technique for measuring the mutual information that is available about images from SSVEPs. The techniques introduced here represent a new approach to understanding the nature of the information carried by SSVEPs. Crucially, this approach is general and can provide a means of comparing results across different neural recording methods. Altogether, our study sheds light on the encoding principles of early vision and provides a much-needed reference point for understanding subsequent transformations of the early visual response space to deeper knowledge structures that link different visual environments.
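The dimensionality-reduction step described in the abstract can be illustrated with ordinary principal component analysis: project trial-by-feature response patterns onto their top two components and ask how much variance survives. The data below are random stand-ins, and the paper's own mapping and the ~73% figure come from its analysis pipeline, not from this sketch.

```python
import numpy as np

def pca_embed(responses, n_components=2):
    """Project trial-by-feature response patterns onto their top principal
    components and report how much variance the projection retains."""
    centered = responses - responses.mean(axis=0)
    # SVD of the centered data matrix gives the principal axes in Vt
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ Vt[:n_components].T
    var_kept = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return scores, var_kept

rng = np.random.default_rng(2)
# toy data: 120 "scenes" x 40 response features, most variance in 2 latent directions
latent = rng.standard_normal((120, 2)) @ rng.standard_normal((2, 40))
responses = latent + 0.3 * rng.standard_normal((120, 40))
scores, var_kept = pca_embed(responses, n_components=2)
print(f"2-D embedding keeps {var_kept:.1%} of the variance")
```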
Affiliation(s)
- Bruce C Hansen
- Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program, Hamilton, NY, USA.
- David J Field
- Cornell University, Department of Psychology, Ithaca, NY, USA.
- Cassady Olson
- Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program, Hamilton, NY, USA; current address: University of Chicago, Committee on Computational Neuroscience, Chicago, IL, USA.
- Vladimir Miskovic
- State University of New York at Binghamton, Department of Psychology, Binghamton, NY, USA.
6. Sanchez-Giraldo LG, Laskar MNU, Schwartz O. Normalization and pooling in hierarchical models of natural images. Curr Opin Neurobiol 2019;55:65-72. doi: 10.1016/j.conb.2019.01.008; PMID: 30785005.
Abstract
Divisive normalization and subunit pooling are two canonical classes of computation that have become widely used in descriptive (what) models of visual cortical processing. Normative (why) models derived from natural image statistics can help constrain the form and parameters of such models. We focus on recent advances in two particular directions: deriving richer forms of divisive normalization, and learning pooling from image statistics. We discuss the incorporation of such components into hierarchical models. We consider both hierarchical unsupervised learning from image statistics and discriminative supervised learning in deep convolutional neural networks (CNNs). We further discuss studies on the utility and extensions of the convolutional architecture, which has also been adopted by recent descriptive models. We review the recent literature and discuss the current promises and gaps of using such approaches to gain a better understanding of how cortical neurons represent and process complex visual stimuli.
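Divisive normalization is sketched under entry 4 above; the snippet below illustrates the other canonical computation, subunit pooling, in its classic energy-model form: the squared outputs of a quadrature pair of oriented filters are summed, yielding a response selective for orientation and spatial frequency but invariant to spatial phase. Filters, stimulus, and parameters are toy choices, not taken from the paper.

```python
import numpy as np

def gabor(size, freq, phase, theta=0.0, sigma=4.0):
    """A 2-D Gabor filter: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def energy_response(image, freq=0.15, theta=0.0):
    """Subunit pooling (energy model): sum the squared outputs of a quadrature
    pair of linear subunits, giving phase-invariant selectivity."""
    even = np.sum(image * gabor(image.shape[0], freq, 0.0, theta))
    odd = np.sum(image * gabor(image.shape[0], freq, np.pi / 2, theta))
    return even ** 2 + odd ** 2

size, freq = 21, 0.15
y, x = np.mgrid[:size, :size]
for shift in (0.0, np.pi / 2, np.pi):        # same grating at different spatial phases
    grating = np.cos(2 * np.pi * freq * x + shift)
    print(f"phase {shift:.2f}: energy = {energy_response(grating, freq):.1f}")
```

Running this prints nearly identical energies for the three phases, which is the invariance that pooling over subunits is meant to buy.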
Affiliation(s)
- Luis G Sanchez-Giraldo
- Computational Neuroscience Lab, Department of Computer Science, University of Miami, FL 33146, USA.
- Md Nasir Uddin Laskar
- Computational Neuroscience Lab, Department of Computer Science, University of Miami, FL 33146, USA.
- Odelia Schwartz
- Computational Neuroscience Lab, Department of Computer Science, University of Miami, FL 33146, USA.
7. Turner MH, Sanchez Giraldo LG, Schwartz O, Rieke F. Stimulus- and goal-oriented frameworks for understanding natural vision. Nat Neurosci 2019;22:15-24. doi: 10.1038/s41593-018-0284-0; PMID: 30531846; PMCID: PMC8378293.
Abstract
Our knowledge of sensory processing has advanced dramatically in the last few decades, but this understanding remains far from complete, especially for stimuli with the large dynamic range and strong temporal and spatial correlations characteristic of natural visual inputs. Here we describe some of the issues that make understanding the encoding of natural images a challenge. We highlight two broad strategies for approaching this problem: a stimulus-oriented framework and a goal-oriented one. Different contexts can call for one framework or the other. Looking forward, recent advances, particularly those based on machine learning, show promise in borrowing key strengths of both frameworks and, by doing so, illuminating a path to a more comprehensive understanding of the encoding of natural stimuli.
Affiliation(s)
- Maxwell H Turner
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA; Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA.
- Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL, USA.
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA.
8. Measurements of neuronal color tuning: procedures, pitfalls, and alternatives. Vision Res 2017;151:53-60. doi: 10.1016/j.visres.2017.08.005; PMID: 29133032.
Abstract
Measuring the color tuning of visual neurons is important for understanding the neural basis of vision, but it is challenging because of the inherently three-dimensional nature of color. Color tuning cannot be represented by a one-dimensional curve, and measuring three-dimensional tuning curves is difficult. One approach to addressing this challenge is to analyze neuronal color tuning data through the lens of mathematical models that make assumptions about the shapes of tuning curves. In this paper, we discuss the linear-nonlinear cascade model as a platform for measuring neuronal color tuning. We compare fitting this model by three techniques: two using response-weighted averaging and one using numerical optimization of likelihood. We highlight the advantages and disadvantages of each technique and emphasize the effects of the stimulus distribution on color tuning measurements.
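Response-weighted averaging of the kind the paper compares amounts to a spike-triggered average of the stimuli: under a linear-nonlinear cascade driven by stimuli from a spherically symmetric distribution, the average stimulus preceding a spike points along the linear (cone-weight) stage. The simulation below is a toy illustration with made-up weights and a half-squaring nonlinearity, not an analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth linear stage: weights on L, M, S cone contrasts (arbitrary values).
true_weights = np.array([0.8, -0.55, 0.1])
true_weights /= np.linalg.norm(true_weights)

# Stimuli drawn from a spherically symmetric (Gaussian) distribution in cone-contrast space.
stimuli = rng.standard_normal((50_000, 3))

# Linear-nonlinear cascade: linear projection, rectifying nonlinearity, Poisson spikes.
drive = stimuli @ true_weights
rate = np.maximum(drive, 0.0) ** 2
spikes = rng.poisson(rate)

# Response-weighted average (spike-triggered average) of the stimuli.
sta = (spikes[:, None] * stimuli).sum(axis=0) / spikes.sum()
sta /= np.linalg.norm(sta)

print("true weights     :", np.round(true_weights, 3))
print("recovered by STA :", np.round(sta, 3))
```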
9. The equivalent internal orientation and position noise for contour integration. Sci Rep 2017;7:13048. doi: 10.1038/s41598-017-13244-z; PMID: 29026194; PMCID: PMC5638929.
Abstract
Contour integration is the joining-up of local responses to parts of a contour into a continuous percept. In typical studies, observers detect contours formed of discrete wavelets presented against a background of random wavelets. This measures performance for detecting contours in the limiting external noise that the background provides. Our novel task measures contour integration without requiring any background noise. This allowed us to perform noise-masking experiments using orientation and position noise, from which we measured the equivalent internal noise for contour integration. We found an orientation noise of 6° and a position noise of 3 arcmin. Orientation noise was 2.6x higher in contour integration than in an orientation discrimination control task, whereas position noise in contours was 2.4x lower than in a position discrimination task. This suggests that contour integration involves intermediate processing that enhances the quality of element position representation at the expense of element orientation. Efficiency relative to the ideal observer was lower for the contour tasks (36% in orientation noise, 21% in position noise) than for the controls (54% and 57%).
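The equivalent-noise logic behind these estimates can be written in a few lines: squared thresholds are modeled as growing with the sum of internal and external noise variances, T^2 = k (sigma_int^2 + sigma_ext^2), so a linear fit of squared threshold against external variance recovers the internal noise from the intercept. The data below are synthetic and generated to satisfy the model exactly; the 6° and 3 arcmin values quoted in the abstract come from the paper's experiments, not from this sketch.

```python
import numpy as np

def fit_equivalent_noise(sigma_ext, thresholds):
    """Fit T^2 = k * (sigma_int^2 + sigma_ext^2) by linear regression of squared
    thresholds on external noise variance; returns (sigma_int, k)."""
    x = np.asarray(sigma_ext, dtype=float) ** 2
    y = np.asarray(thresholds, dtype=float) ** 2
    k, intercept = np.polyfit(x, y, 1)      # slope k, intercept k * sigma_int^2
    sigma_int = np.sqrt(intercept / k)
    return sigma_int, k

# Synthetic orientation-noise masking data: external noise s.d. (deg) and thresholds (deg).
sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
true_sigma_int, true_k = 6.0, 0.5
thresholds = np.sqrt(true_k * (true_sigma_int ** 2 + sigma_ext ** 2))

sigma_int, k = fit_equivalent_noise(sigma_ext, thresholds)
print(f"recovered internal noise ≈ {sigma_int:.1f} deg (efficiency-related slope k = {k:.2f})")
```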
10. Elder JH, Victor J, Zucker SW. Understanding the statistics of the natural environment and their implications for vision. Vision Res 2016;120:1-4. doi: 10.1016/j.visres.2016.01.003; PMID: 26851343.
Affiliation(s)
- James H Elder
- Department of Electrical Engineering & Computer Science, Department of Psychology, Centre for Vision Research, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada.
- Jonathan Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, USA.
- Steven W Zucker
- Departments of Computer Science and Biomedical Engineering, Yale University, 51 Prospect St., New Haven, CT 06520-8285, USA.