1. Srinath R, Czarnik MM, Cohen MR. Coordinated Response Modulations Enable Flexible Use of Visual Information. bioRxiv 2024:2024.07.10.602774. PMID: 39071390; PMCID: PMC11275750; DOI: 10.1101/2024.07.10.602774
Abstract
We use sensory information in remarkably flexible ways. We can generalize by ignoring task-irrelevant features, report different features of a stimulus, and use different actions to report a perceptual judgment. These forms of flexible behavior are associated with small modulations of the responses of sensory neurons. While the existence of these response modulations is indisputable, efforts to understand their function have been largely relegated to theory, where they have been posited to change information coding or enable downstream neurons to read out different visual and cognitive information using flexible weights. Here, we tested these ideas using a rich, flexible behavioral paradigm and multi-neuron, multi-area recordings in primary visual cortex (V1) and mid-level visual area V4. We discovered that response modulations in V4 (but not V1) contain the ingredients necessary to enable flexible behavior, but not via the previously hypothesized mechanisms. Instead, we demonstrated that these response modulations are precisely coordinated across the population such that downstream neurons have ready access to the correct information to flexibly guide behavior without making changes to information coding or synapses. Our results suggest a novel computational role for task-dependent response modulations: they enable flexible behavior by changing the information that gets out of a sensory area, not by changing information coding within it.
Affiliation(s)
- Ramanujan Srinath
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Martyna M. Czarnik
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Current affiliation: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Marlene R. Cohen
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
2. Li J, Rentzeperis I, van Leeuwen C. Functional and spatial rewiring principles jointly regulate context-sensitive computation. PLoS Comput Biol 2023;19:e1011325. PMID: 37566628; PMCID: PMC10446201; DOI: 10.1371/journal.pcbi.1011325
Abstract
Adaptive rewiring provides a basic principle of self-organizing connectivity in evolving neural network topology. By selectively adding connections to regions with intense signal flow and deleting underutilized connections, adaptive rewiring generates optimized brain-like, i.e. modular, small-world, and rich club connectivity structures. Besides topology, neural self-organization also follows spatial optimization principles, such as minimizing the neural wiring distance and topographic alignment of neural pathways. We simulated the interplay of these spatial principles and adaptive rewiring in evolving neural networks with weighted and directed connections. The neural traffic flow within the network is represented by the equivalent of diffusion dynamics for directed edges: consensus and advection. We observe a constructive synergy between adaptive and spatial rewiring, which contributes to network connectedness. In particular, wiring distance minimization facilitates adaptive rewiring in creating convergent-divergent units. These units support the flow of neural information and enable context-sensitive information processing in the sensory cortex and elsewhere. Convergent-divergent units consist of convergent hub nodes, which collect inputs from pools of nodes and project these signals via a densely interconnected set of intermediate nodes onto divergent hub nodes, which broadcast their output back to the network. Convergent-divergent units vary in the degree to which their intermediate nodes are isolated from the rest of the network. This degree, and hence the context-sensitivity of the network's processing style, is parametrically determined in the evolving network model by the relative prominence of spatial versus adaptive rewiring.
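The adaptive-versus-spatial trade-off described above can be sketched as a single rewiring step on a small directed graph. Everything here is a simplified stand-in: the "traffic" term uses two-step path weights rather than the paper's consensus/advection diffusion dynamics, and the weights and probabilities are illustrative.

```python
import numpy as np

def rewiring_step(A, pos, p_spatial=0.3, rng=None):
    """One rewiring step on a directed weighted adjacency matrix A
    (A[i, j] = weight of edge j -> i) for nodes at coordinates pos.

    With probability p_spatial, the shortest absent connection in space
    is added (wiring-distance minimization); otherwise the absent
    connection carrying the most two-step 'traffic' is added (adaptive
    rewiring). In either case the weakest existing edge is deleted, so
    the total number of edges is conserved.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    A = A.copy()
    absent = (A == 0) & ~np.eye(n, dtype=bool)
    # traffic proxy: summed weight of two-step paths j -> k -> i
    traffic = A @ A
    # Euclidean wiring distance between every node pair
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    if rng.random() < p_spatial:
        score = np.where(absent, -dist, -np.inf)      # prefer short links
    else:
        score = np.where(absent, traffic, -np.inf)    # prefer busy links
    i, j = np.unravel_index(np.argmax(score), A.shape)
    # delete the weakest existing edge, then add the selected one
    k, l = np.unravel_index(np.argmin(np.where(A > 0, A, np.inf)), A.shape)
    A[k, l] = 0.0
    A[i, j] = 1.0
    return A
```

Iterating `rewiring_step` evolves the topology at a fixed edge count; raising `p_spatial` biases the network toward short local wiring, which in the paper's model parametrically tunes how isolated the convergent-divergent units become.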
Affiliation(s)
- Jia Li
- Brain and Cognition unit, Faculty of psychology and educational sciences, KU Leuven, Leuven, Belgium
- Ilias Rentzeperis
- Brain and Cognition unit, Faculty of psychology and educational sciences, KU Leuven, Leuven, Belgium
- Cees van Leeuwen
- Brain and Cognition unit, Faculty of psychology and educational sciences, KU Leuven, Leuven, Belgium
- Cognitive and developmental psychology unit, Faculty of social science, University of Kaiserslautern, Kaiserslautern, Germany
3. Sainsbury TTJ, Diana G, Meyer MP. Topographically Localized Modulation of Tectal Cell Spatial Tuning by Complex Natural Scenes. eNeuro 2023;10:ENEURO.0223-22.2022. PMID: 36543538; PMCID: PMC9833049; DOI: 10.1523/eneuro.0223-22.2022
Abstract
The tuning properties of neurons in the visual system can be contextually modulated by the statistics of the area surrounding their receptive field (RF), particularly when the surround contains natural features. However, stimuli presented in specific egocentric locations may have greater behavioral relevance, raising the possibility that the extent of contextual modulation may vary with position in visual space. To explore this possibility, we utilized the small size and optical transparency of the larval zebrafish to describe the form and spatial arrangement of contextually modulated cells throughout an entire tectal hemisphere. We found that the spatial tuning of tectal neurons to a prey-like stimulus sharpens when the stimulus is presented against a background with the statistics of complex natural scenes, relative to a featureless background. These neurons are confined to a spatially restricted region of the tectum and have receptive fields centered within a region of visual space in which the presence of prey preferentially triggers hunting behavior. Our results suggest that contextual modulation of tectal neurons by complex backgrounds may facilitate prey-localization in cluttered visual environments.
Affiliation(s)
- Thomas T J Sainsbury
- The Centre for Developmental Neurobiology and MRC Center for Neurodevelopmental Disorders, King's College London, London, United Kingdom, SE1 1UL
- Giovanni Diana
- The Centre for Developmental Neurobiology and MRC Center for Neurodevelopmental Disorders, King's College London, London, United Kingdom, SE1 1UL
- Institut Pasteur, University of Paris, Paris, France, 75015
- Sampled Analytics, Arcueil, France, 94110
- Martin P Meyer
- The Centre for Developmental Neurobiology and MRC Center for Neurodevelopmental Disorders, King's College London, London, United Kingdom, SE1 1UL
- Lundbeck Foundation, Copenhagen, Denmark, 2100
4. Canoluk MU, Moors P, Goffaux V. Contributions of low- and high-level contextual mechanisms to human face perception. PLoS One 2023;18:e0285255. PMID: 37130144; PMCID: PMC10153715; DOI: 10.1371/journal.pone.0285255
Abstract
Contextual modulations at primary stages of visual processing depend on the strength of local input. Contextual modulations at high-level stages of (face) processing show a similar dependence on local input strength. Namely, the discriminability of a facial feature determines the amount of influence of the face context on that feature. How high-level contextual modulations emerge from primary mechanisms is unclear due to the scarcity of empirical research systematically addressing the functional link between the two. We tested 62 young adults' ability to process local input independent of the context using contrast detection and (upright and inverted) morphed facial feature matching tasks. We first investigated contextual modulation magnitudes across tasks to address their shared variance. A second analysis focused on the profile of performance across contextual conditions. In upright eye matching and contrast detection tasks, contextual modulations only correlated at the level of their profile (averaged Fisher-Z transformed r = 1.18, BF10 > 100), but not their magnitude (r = .15, BF10 = .61), suggesting the functional independence but similar working principles of the mechanisms involved. Both the profile (averaged Fisher-Z transformed r = .32, BF10 = 9.7) and magnitude (r = .28, BF10 = 4.58) of the contextual modulations correlated between inverted eye matching and contrast detection tasks. Our results suggest that non-face-specialized high-level contextual mechanisms (inverted faces) work in connection to primary contextual mechanisms, but that the engagement of face-specialized mechanisms for upright faces obscures this connection. Such a combined study of low- and high-level contextual modulations sheds new light on the functional relationship between different levels of the visual processing hierarchy, and thus on its functional organization.
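The averaged correlations reported above are pooled through Fisher's z-transform, a standard step that can be sketched as a generic utility (not the authors' analysis code):

```python
import numpy as np

def average_correlation(rs):
    """Average correlation coefficients via Fisher's z-transform:
    z = arctanh(r) is approximately normally distributed, so the
    z-values are averaged and mapped back through tanh."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    return float(np.tanh(z.mean()))
```

This also explains why the abstract can report an "averaged Fisher-Z transformed r" of 1.18: the averaged quantity is the z-value, which, unlike r, is unbounded (tanh(1.18) back-transforms to r ≈ 0.83).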
Affiliation(s)
- Mehmet Umut Canoluk
- Research Institute for Psychological Science (IPSY), UCLouvain, Louvain-la-Neuve, Belgium
- Pieter Moors
- Department of Brain and Cognition, Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium
- Valerie Goffaux
- Research Institute for Psychological Science (IPSY), UCLouvain, Louvain-la-Neuve, Belgium
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Institute of Neuroscience (IoNS), UCLouvain, Louvain-la-Neuve, Belgium
5. Dumbalska T, Rudzka K, Smithson HE, Summerfield C. How do (perceptual) distracters distract? PLoS Comput Biol 2022;18:e1010609. PMID: 36228038; PMCID: PMC9595561; DOI: 10.1371/journal.pcbi.1010609
Abstract
When a target stimulus occurs in the presence of distracters, decisions are less accurate. But how exactly do distracters affect choices? Here, we explored this question using measurement of human behaviour, psychophysical reverse correlation and computational modelling. We contrasted two models: one in which targets and distracters had independent influence on choices (independent model) and one in which distracters modulated choices in a way that depended on their similarity to the target (interaction model). Across three experiments, participants were asked to make fine orientation judgments about the tilt of a target grating presented adjacent to an irrelevant distracter. We found strong evidence for the interaction model, in that decisions were more sensitive when target and distracter were consistent relative to when they were inconsistent. This consistency bias occurred in the frame of reference of the decision, that is, it operated on decision values rather than on sensory signals, and surprisingly, it was independent of spatial attention. A normalization framework, where target features are normalized by the expectation and variability of the local context, successfully captures the observed pattern of results. In the real world, visual scenes usually contain many objects. As a consequence, we often have to make perceptual judgments about a specific ‘target’ stimulus in the presence of irrelevant ‘distracter’ stimuli. For instance, when hanging a picture frame, we want to discern whether it is hanging straight, ignoring the surrounding, potentially tilted frames. Laboratory experiments have shown that the presence of distracter stimuli (i.e. other picture frames) makes this type of perceptual judgment less accurate. However, the specific effect distracters have on judgments is controversial. 
Here, we conducted a series of experiments to compare two alternative theories of distracter influence: one in which distracters compete with targets to determine choices (independent model) and one in which distracters wield a more indirect influence on choices (interaction model). We found evidence for the latter account. Our results suggest distracters affect perceptual decisions by adjusting how sensitive decisions are to the target stimulus.
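The normalization account can be sketched in a few lines, assuming a simple z-score-like form and illustrative tilt values (the authors' fitted model is richer):

```python
import numpy as np

def decision_value(target_tilt, context_tilts, eps=1e-9):
    """Normalize the target feature by the expectation (mean) and
    variability (standard deviation) of the local context."""
    ctx = np.asarray(context_tilts, dtype=float)
    return (target_tilt - ctx.mean()) / (ctx.std() + eps)

def sensitivity(tilt_pair, context_tilts):
    """Separation between the decision values of two targets: a proxy
    for how discriminable the pair is in a given context."""
    a, b = tilt_pair
    return abs(decision_value(a, context_tilts) - decision_value(b, context_tilts))
```

Under this sketch, distracters consistent with the targets keep context variability low and leave the target pair well separated, whereas inconsistent distracters inflate the normalizing denominator and compress decision values, mirroring the reported consistency bias.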
Affiliation(s)
- Tsvetomira Dumbalska
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Katarzyna Rudzka
- Division of Biosciences, University College London, London, United Kingdom
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Hannah E. Smithson
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
6. Noise Generation Methods Preserving Image Color Intensity Distributions. Cybernetics and Information Technologies 2022. DOI: 10.2478/cait-2022-0031
Abstract
In many visual perception studies, external visual noise is used as a methodology to broaden the understanding of information processing of visual stimuli. The underlying assumption is that two sources of noise limit sensory processing: the external noise inherent in the environmental signals and the internal noise or internal variability at different levels of the neural system. Usually, when external noise is added to an image, it is evenly distributed. However, the color intensity and image contrast are modified in this way, and it is unclear whether the visual system responds to their change or the noise presence. We aimed to develop several methods of noise generation with different distributions that keep the global image characteristics. These methods are appropriate in various applications for evaluating the internal noise in the visual system and its ability to filter the added noise. As these methods destroy the correlation in image intensity of neighboring pixels, they could be used to evaluate the role of local spatial structure in image processing.
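One simple way to add noise while exactly preserving the global intensity distribution, in the spirit described above, is to permute pixel positions. The sketch below is an illustrative example, not one of the paper's specific generators:

```python
import numpy as np

def permutation_noise(image, fraction=1.0, rng=None):
    """Scramble a given fraction of pixel positions among themselves.
    The multiset of intensity values (and hence the mean, the global
    contrast, and the full histogram) is exactly preserved; only the
    local spatial correlations between neighboring pixels are destroyed."""
    rng = np.random.default_rng(rng)
    out = image.copy()
    flat = out.ravel()
    n_swap = int(round(fraction * flat.size))
    idx = rng.choice(flat.size, size=n_swap, replace=False)
    flat[idx] = flat[rng.permutation(idx)]  # permute the chosen positions
    return out
```

The `fraction` argument parametrizes noise strength without ever altering the image's color intensity distribution, which is exactly the property the abstract argues is needed to separate responses to noise from responses to changed contrast.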
7. Qiu S, Caldwell C, You J, Mendola J. Binocular rivalry from luminance and contrast. Vision Res 2020;175:41-50. DOI: 10.1016/j.visres.2020.06.006
8. Gheorghiu E, Kingdom FAA. Luminance-contrast properties of texture-shape and texture-surround suppression of contour shape. J Vis 2019;19:4. PMID: 31613953; DOI: 10.1167/19.12.4
Abstract
Studies have revealed that textures suppress the processing of the shapes of contours they surround. One manifestation of texture-surround suppression is the reduction in the magnitude of adaptation-induced contour-shape aftereffects when the adaptor contour is surrounded by a texture. Here we utilize this phenomenon to investigate the nature of the first-order inputs to texture-surround suppression of contour shape by examining its selectivity to luminance polarity and the magnitude of luminance contrast. Stimuli were constructed from sinusoidal-shaped strings of either "bright" or "dark" elongated Gaussians. Observers adapted to pairs of contours, and the aftereffect was measured as the shift in the apparent shape frequency of subsequently presented test contours. We found that the suppression of the contour-shape aftereffect by a surround texture made of similar contours was maximal when the adaptor's center and surround contours were of the same polarity, revealing polarity specificity of the surround-suppression effect. We also measured the effect of varying the relative contrasts of the adaptor's center and surround and found that the reduction in the contour-shape aftereffect was determined by the surround-to-center contrast ratio. Finally, we measured the selectivity to luminance polarity of the texture-shape aftereffect itself and found that it was reduced when the adaptors and tests were of opposite luminance polarity. We conclude that texture-surround suppression of contour-shape as well as texture-shape processing itself depend on "on-off" luminance-polarity channel interactions. These selectivities may constitute an important neural substrate underlying efficient figure-ground segregation and image segmentation.
Affiliation(s)
- Elena Gheorghiu
- Department of Psychology, University of Stirling, Stirling, Scotland, United Kingdom
- Frederick A A Kingdom
- Department of Ophthalmology, McGill Vision Research, McGill University, Montreal, QC, Canada
9. Dematties D, Rizzi S, Thiruvathukal GK, Wainselboim A, Zanutto BS. Phonetic acquisition in cortical dynamics, a computational approach. PLoS One 2019;14:e0217966. PMID: 31173613; PMCID: PMC6555517; DOI: 10.1371/journal.pone.0217966
Abstract
Many computational theories have been developed to improve artificial phonetic classification performance from linguistic auditory streams. However, less attention has been given to psycholinguistic data and neurophysiological features recently found in cortical tissue. We focus on a context in which basic linguistic units, such as phonemes, are extracted and robustly classified by humans and other animals from complex acoustic streams in speech data. We are especially motivated by the fact that 8-month-old human infants can accomplish segmentation of words from fluent audio streams based exclusively on the statistical relationships between neighboring speech sounds without any kind of supervision. In this paper, we introduce a biologically inspired and fully unsupervised neurocomputational approach that incorporates key neurophysiological and anatomical cortical properties, including columnar organization, spontaneous micro-columnar formation, adaptation to contextual activations and Sparse Distributed Representations (SDRs) produced by means of partial N-Methyl-D-aspartic acid (NMDA) depolarization. Its feature abstraction capabilities show promising phonetic invariance and generalization attributes. Our model improves the performance of a Support Vector Machine (SVM) classifier for monosyllabic, disyllabic and trisyllabic word classification tasks in the presence of environmental disturbances such as white noise, reverberation, and pitch and voice variations. Furthermore, our approach emphasizes potential self-organizing cortical principles achieving improvement without any kind of optimization guidance which could minimize hypothetical loss functions by means of, for example, backpropagation. Thus, our computational model outperforms multiresolution spectro-temporal auditory feature representations using only the statistical sequential structure immersed in the phonotactic rules of the input stream.
Affiliation(s)
- Dario Dematties
- Universidad de Buenos Aires, Facultad de Ingeniería, Instituto de Ingeniería Biomédica, Ciudad Autónoma de Buenos Aires, Argentina
- Silvio Rizzi
- Argonne National Laboratory, Lemont, Illinois, United States of America
- George K. Thiruvathukal
- Argonne National Laboratory, Lemont, Illinois, United States of America
- Computer Science Department, Loyola University Chicago, Chicago, Illinois, United States of America
- Alejandro Wainselboim
- Instituto de Ciencias Humanas, Sociales y Ambientales, Centro Científico Tecnológico-CONICET, Ciudad de Mendoza, Mendoza, Argentina
- B. Silvano Zanutto
- Universidad de Buenos Aires, Facultad de Ingeniería, Instituto de Ingeniería Biomédica, Ciudad Autónoma de Buenos Aires, Argentina
- Instituto de Biología y Medicina Experimental-CONICET, Ciudad Autónoma de Buenos Aires, Argentina
10. Going with the Flow: The Neural Mechanisms Underlying Illusions of Complex-Flow Motion. J Neurosci 2019;39:2664-2685. PMID: 30777886; DOI: 10.1523/jneurosci.2112-18.2019
Abstract
Studying the mismatch between perception and reality helps us better understand the constructive nature of the visual brain. The Pinna-Brelstaff motion illusion is a compelling example illustrating how a complex moving pattern can generate an illusory motion perception. When an observer moves toward (expansion) or away (contraction) from the Pinna-Brelstaff figure, the figure appears to rotate. The neural mechanisms underlying the illusory complex-flow motion of rotation, expansion, and contraction remain unknown. We studied this question at both perceptual and neuronal levels in behaving male macaques by using carefully parametrized Pinna-Brelstaff figures that induce the above motion illusions. We first demonstrate that macaques perceive illusory motion in a manner similar to that of human observers. Neurophysiological recordings were subsequently performed in the middle temporal area (MT) and the dorsal portion of the medial superior temporal area (MSTd). We find that subgroups of MSTd neurons encoding a particular global pattern of real complex-flow motion (rotation, expansion, contraction) also represent illusory motion patterns of the same class. They require an extra 15 ms to reliably discriminate the illusion. In contrast, MT neurons encode both real and illusory local motions with similar temporal delays. These findings reveal that illusory complex-flow motion is first represented in MSTd by the same neurons that normally encode real complex-flow motion. However, the extraction of global illusory motion in MSTd from other classes of real complex-flow motion requires extra processing time. Our study illustrates a cascaded integration mechanism from MT to MSTd underlying the transformation from external physical to internal nonveridical flow-motion perception.
SIGNIFICANCE STATEMENT: The neural basis of the transformation from objective reality to illusory percepts of rotation, expansion, and contraction remains unknown.
We demonstrate psychophysically that macaques perceive these illusory complex-flow motions in a manner similar to that of human observers. At the neural level, we show that medial superior temporal (MSTd) neurons represent illusory flow motions as if they were real by globally integrating middle temporal area (MT) local motion signals. Furthermore, while MT neurons reliably encode real and illusory local motions with similar temporal delays, MSTd neurons take a significantly longer time to process the signals associated with illusory percepts. Our work extends previous complex-flow motion studies by providing the first detailed analysis of the neuron-specific mechanisms underlying complex forms of illusory motion integration from MT to MSTd.
11. Wang H, Wang Z, Zhou Y, Tzvetanov T. Near- and Far-Surround Suppression in Human Motion Discrimination. Front Neurosci 2018;12:206. PMID: 29651233; PMCID: PMC5884933; DOI: 10.3389/fnins.2018.00206
Abstract
The spatial context has strong effects on visual processing. Psychophysics and modeling studies have provided evidence that the surround context can systematically modulate the perception of center stimuli. For motion direction, these center-surround interactions are considered to come from spatio-directional interactions between direction of motion tuned neurons, which are attributed to the middle temporal (MT) area. Here, we used psychophysics experiments on human subjects to investigate how center-surround inhibition and motion direction interactions change with spatial separation. Center-surround motion repulsion effects were measured under near- and far-surround conditions. Using a simple physiological model of the repulsion effect, we extracted theoretical population parameters of surround inhibition strength and tuning width as a function of spatial distance. All 11 subjects showed clear motion repulsion effects under the near-surround condition, while only 10 subjects showed clear motion repulsion effects under the far-surround condition. The model predicted human performance well. Surround inhibition under the near-surround condition was significantly stronger than that under the far-surround condition, and the tuning widths were smaller under the near-surround condition. These results demonstrate that spatial separation can modulate both the surround inhibition strength and the surround-to-center tuning width.
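A toy version of such a center-surround repulsion model can make the effect concrete: direction-tuned responses to the centre are suppressed most strongly by units tuned near the surround direction, and a population-vector readout is then repelled away from the surround. The tuning widths and suppression gain `g` (which, per the results above, would shrink with surround distance) are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def decoded_direction(center_deg, surround_deg, g=0.5, k=2.0, ks=2.0):
    """Population-vector readout of direction-tuned units responding to
    a centre motion, with responses suppressed most by surround units
    tuned near the surround direction (all parameters illustrative)."""
    theta = np.deg2rad(np.arange(-180, 180))           # preferred directions, 1 deg apart
    c, s = np.deg2rad(center_deg), np.deg2rad(surround_deg)
    tuning = np.exp(k * (np.cos(theta - c) - 1.0))     # von Mises-like tuning to the centre
    suppression = 1.0 - g * np.exp(ks * (np.cos(theta - s) - 1.0))  # strongest near surround
    r = tuning * suppression
    return float(np.rad2deg(np.angle(np.sum(r * np.exp(1j * theta)))))
```

With a surround at +20 degrees the decoded centre direction shifts negative (repulsion), and weakening `g`, as would happen for a far surround, shrinks the repulsion, qualitatively matching the near- versus far-surround findings.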
Affiliation(s)
- Huan Wang
- Hefei National Laboratory for Physical Sciences at Microscale, School of Life Science, University of Science and Technology of China, Hefei, China
- Yifeng Zhou
- Hefei National Laboratory for Physical Sciences at Microscale, School of Life Science, University of Science and Technology of China, Hefei, China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Tzvetomir Tzvetanov
- Hefei National Laboratory for Physical Sciences at Microscale, School of Life Science, University of Science and Technology of China, Hefei, China
- Anhui Province Key Laboratory of Affective Computing and Advanced Intelligent Machine, and School of Computer and Information, Hefei University of Technology, Hefei, China
12. A Unifying Motif for Spatial and Directional Surround Suppression. J Neurosci 2017;38:989-999. PMID: 29229704; DOI: 10.1523/jneurosci.2386-17.2017
Abstract
In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas.
SIGNIFICANCE STATEMENT: Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1.
As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex.
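The SSN rate equation at the heart of the model above can be sketched in a few lines. The two-unit excitatory/inhibitory weight matrix and the constants below are illustrative choices, not the parameters fitted to the MT data:

```python
import numpy as np

def ssn_steady_state(ext, W, k=0.01, n=2.0, dt=0.002, tau=0.01, steps=6000):
    """Euler-integrate the stabilized supralinear network (SSN) rate
    equation  tau * dr/dt = -r + k * [W @ r + ext]_+ ** n  to steady state."""
    r = np.zeros_like(ext, dtype=float)
    for _ in range(steps):
        drive = np.maximum(W @ r + ext, 0.0)   # rectified total input
        r += (dt / tau) * (-r + k * drive ** n)
    return r

# A two-unit excitatory/inhibitory circuit with illustrative weights
# (columns are presynaptic: E excites both units, I inhibits both).
W = np.array([[1.0, -1.5],
              [1.2, -1.0]])
```

At weak input the rectified power law dominates and responses grow supralinearly; as input grows, recurrent inhibition increasingly stabilizes the network, which is the regime in which the SSN produces surround suppression along space or, as here, motion direction.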
13. Gheorghiu E, Kingdom FAA. Dynamics of contextual modulation of perceived shape in human vision. Sci Rep 2017;7:43274. PMID: 28230085; PMCID: PMC5322363; DOI: 10.1038/srep43274
Abstract
In biological vision, contextual modulation refers to the influence of a surround pattern on either the perception of, or the neural responses to, a target pattern. One studied form of contextual modulation deals with the effect of a surround texture on the perceived shape of a contour, in the context of the phenomenon known as the shape aftereffect. In the shape aftereffect, prolonged viewing of, or adaptation to, a particular contour's shape causes a shift in the perceived shape of a subsequently viewed contour. Shape aftereffects are suppressed when the adaptor contour is surrounded by a texture of similarly-shaped contours, a surprising result given that the surround contours are all potential adaptors. Here we determine the motion and temporal properties of this form of contextual modulation. We varied the relative motion directions, speeds and temporal phases between the central adaptor contour and the surround texture and measured for each manipulation the degree to which the shape aftereffect was suppressed. Results indicate that contextual modulation of shape processing is selective to motion direction, temporal frequency and temporal phase. These selectivities are consistent with one aim of vision being to segregate contours that define objects from those that form textured surfaces.
Affiliation(s)
- Elena Gheorghiu
- University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom
- Frederick A. A. Kingdom
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Qc, Canada
| |
Collapse
|
14.
Abstract
While the different sensory modalities are sensitive to different stimulus energies, they are often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems. A close look at the cortical areas that support vision and touch suggests that the brain uses similar computational strategies to handle different kinds of sensory inputs.
15. Simard F, Pack CC. Dissociation of sensory facilitation and decision bias in statistical learning. Visual Cognition 2015. DOI: 10.1080/13506285.2015.1085477