1
Ziemba CM, Goris RLT, Stine GM, Perez RK, Simoncelli EP, Movshon JA. Neuronal and Behavioral Responses to Naturalistic Texture Images in Macaque Monkeys. J Neurosci 2024; 44:e0349242024. [PMID: 39197942] [DOI: 10.1523/jneurosci.0349-24.2024]
Abstract
The visual world is richly adorned with texture, which can serve to delineate important elements of natural scenes. In anesthetized macaque monkeys, selectivity for the statistical features of natural texture is weak in V1, but substantial in V2, suggesting that neuronal activity in V2 might directly support texture perception. To test this, we investigated the relation between single cell activity in macaque V1 and V2 and simultaneously measured behavioral judgments of texture. We generated stimuli along a continuum between naturalistic texture and phase-randomized noise and trained two macaque monkeys to judge whether a sample texture more closely resembled one or the other extreme. Analysis of responses revealed that individual V1 and V2 neurons carried much less information about texture naturalness than behavioral reports. However, the sensitivity of V2 neurons, especially those preferring naturalistic textures, was significantly closer to that of behavior compared with V1. The firing of both V1 and V2 neurons predicted perceptual choices in response to repeated presentations of the same ambiguous stimulus in one monkey, despite low individual neural sensitivity. However, neither population predicted choice in the second monkey. We conclude that neural responses supporting texture perception likely continue to develop downstream of V2. Further, combined with neural data recorded while the same two monkeys performed an orientation discrimination task, our results demonstrate that choice-correlated neural activity in early sensory cortex is unstable across observers and tasks, untethered from neuronal sensitivity, and therefore unlikely to directly reflect the formation of perceptual decisions.
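The noise end of this stimulus continuum rests on phase randomization: keeping an image's Fourier amplitude spectrum while scrambling its phases. A minimal numpy sketch of that operation (function and variable names are mine; the study's actual continuum interpolates texture-model statistics rather than raw pixels):

```python
import numpy as np

def phase_randomize(img, rng):
    """Keep the Fourier amplitude spectrum of img but scramble its phases."""
    amp = np.abs(np.fft.fft2(img))
    # Borrow the (conjugate-symmetric) phase spectrum of a white-noise image,
    # so the inverse transform is real and the amplitudes are preserved exactly.
    phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.fft.ifft2(amp * np.exp(1j * phase)).real

rng = np.random.default_rng(0)
texture = rng.standard_normal((64, 64))  # stand-in for a texture image
noise = phase_randomize(texture, rng)
```

Borrowing the phase spectrum of white noise keeps the spectrum Hermitian, which is why the output needs no explicit symmetrization.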
Affiliation(s)
- Corey M Ziemba
- Center for Neural Science, New York University, New York, NY
- Robbe L T Goris
- Center for Neural Science, New York University, New York, NY
- Gabriel M Stine
- Center for Neural Science, New York University, New York, NY
- Richard K Perez
- Center for Neural Science, New York University, New York, NY
- Eero P Simoncelli
- Center for Neural Science, New York University, New York, NY
- Center for Computational Neuroscience, Flatiron Institute, New York, NY
2
Bao Y, Zhou B, Yu X, Mao L, Gutyrchik E, Paolini M, Logothetis N, Pöppel E. Conscious vision in blindness: A new perceptual phenomenon implemented on the "wrong" side of the brain. Psych J 2024. [PMID: 39019467] [DOI: 10.1002/pchj.787]
Abstract
Patients with lesions in the visual cortex are blind in corresponding regions of the visual field, but they may still process visual information, a phenomenon referred to as residual vision or "blindsight". Here we report behavioral and fMRI observations from a patient who reports conscious vision across an extended area of blindness for moving, but not for stationary, stimuli. This completion effect is shown to be of perceptual and not of conceptual origin, most likely mediated by spared representations of the visual field in the striate cortex. The neural output from deafferented regions of the striate cortex to extra-striate areas is apparently still intact, as indicated, for instance, by preserved size constancy of visually completed stimuli. Neural responses measured with fMRI reveal activation only for moving stimuli, and, importantly, on the ipsilateral side of the brain. In a conceptual model, this shift of activation to the "wrong" hemisphere is explained by an imbalance of excitatory and inhibitory interactions within and between the striate cortices due to the brain injury. The neuroplasticity indicated by this shift, together with the behavioral observations, provides important new insights into the functional architecture of the human visual system and into the concept of consciousness.
Affiliation(s)
- Yan Bao
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Bin Zhou
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xinchi Yu
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, USA
- Department of Linguistics, University of Maryland, College Park, Maryland, USA
- Lihua Mao
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Evgeny Gutyrchik
- Institute of Medical Psychology, Ludwig Maximilian University Munich, Munich, Germany
- Marco Paolini
- Department of Radiology, University Hospital, Ludwig Maximilian University Munich, Munich, Germany
- Nikos Logothetis
- International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- Ernst Pöppel
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Institute of Medical Psychology, Ludwig Maximilian University Munich, Munich, Germany
3
Van Grootel TJ, Raghavan RT, Kelly JG, Movshon JA, Kiorpes L. Responses to visual motion of neurons in the extrastriate visual cortex of macaque monkeys with experimental amblyopia. bioRxiv 2024:2024.07.01.601564. [PMID: 39005459] [PMCID: PMC11244960] [DOI: 10.1101/2024.07.01.601564]
Abstract
Amblyopia is a developmental disorder that results from abnormal visual experience in early life and typically reduces visual performance in one eye. We studied the representation of visual motion information in area MT and nearby extrastriate visual areas in two monkeys made amblyopic by creating an artificial strabismus in early life, and in a single age-matched control monkey. Tested monocularly, cortical responses to moving dot patterns, gratings, and plaids were qualitatively normal in awake, fixating amblyopic monkeys, with primarily subtle differences between the eyes. However, the number of binocularly driven neurons was substantially lower than normal; of the neurons driven predominantly by one eye, the great majority responded only to stimuli presented to the fellow eye. The small population driven by the amblyopic eye showed reduced coherence sensitivity and a preference for faster speeds, mirroring the monkeys' behavioral deficits. We conclude that, while we do find important differences between neurons driven by the two eyes, amblyopia does not lead to a large-scale reorganization of visual receptive fields in the dorsal stream when tested through the amblyopic eye, but rather creates a substantial shift in eye preference toward the fellow eye.
Affiliation(s)
- Tom J Van Grootel
- Center for Neural Science, New York University, New York, NY 10003, USA
- R T Raghavan
- Center for Neural Science, New York University, New York, NY 10003, USA
- Jenna G Kelly
- Center for Neural Science, New York University, New York, NY 10003, USA
- J Anthony Movshon
- Center for Neural Science, New York University, New York, NY 10003, USA
- Lynne Kiorpes
- Center for Neural Science, New York University, New York, NY 10003, USA
4
Boundy-Singer ZM, Ziemba CM, Hénaff OJ, Goris RLT. How does V1 population activity inform perceptual certainty? J Vis 2024; 24:12. [PMID: 38884544] [PMCID: PMC11185272] [DOI: 10.1167/jov.24.6.12]
Abstract
Neural population activity in sensory cortex informs our perceptual interpretation of the environment. Oftentimes, this population activity will support multiple alternative interpretations. The larger the spread of probability over different alternatives, the more uncertain the selected perceptual interpretation. We test the hypothesis that the reliability of perceptual interpretations can be revealed through simple transformations of sensory population activity. We recorded V1 population activity in fixating macaques while presenting oriented stimuli under different levels of nuisance variability and signal strength. We developed a decoding procedure to infer from V1 activity the most likely stimulus orientation as well as the certainty of this estimate. Our analysis shows that response magnitude, response dispersion, and variability in response gain all offer useful proxies for orientation certainty. Of these three metrics, the last one has the strongest association with the decoder's uncertainty estimates. These results clarify that the nature of neural population activity in sensory cortex provides downstream circuits with multiple options to assess the reliability of perceptual interpretations.
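As a toy illustration of how a downstream circuit might read out both an orientation estimate and its reliability from population activity, one can run a population-vector decode on the doubled angle and use the normalized resultant length as a certainty proxy. This is a generic sketch under assumed von Mises tuning, not the decoding procedure of the paper:

```python
import numpy as np

# Hypothetical population of 32 orientation-tuned units with von Mises tuning.
prefs = np.linspace(0, np.pi, 32, endpoint=False)  # preferred orientations (rad)

def responses(theta, gain, kappa=2.0):
    """Mean response of each unit to orientation theta; kappa sets tuning width."""
    return gain * np.exp(kappa * np.cos(2 * (theta - prefs)))

def decode(r):
    """Population-vector decode on the doubled angle; the normalized resultant
    length serves as a crude certainty proxy (closer to 1 = more certain)."""
    z = np.sum(r * np.exp(2j * prefs))
    estimate = (0.5 * np.angle(z)) % np.pi
    certainty = np.abs(z) / np.sum(r)
    return estimate, certainty

est, cert = decode(responses(np.pi / 3, gain=1.0))
```

Broader tuning (smaller kappa) spreads the population response and lowers the resultant length, mimicking lower certainty; note that overall gain cancels in this particular proxy, unlike the gain-variability metric the study highlights.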
Affiliation(s)
- Zoe M Boundy-Singer
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Corey M Ziemba
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
5
Kreyenmeier P, Kumbhani R, Movshon JA, Spering M. Shared Mechanisms Drive Ocular Following and Motion Perception. eNeuro 2024; 11:ENEURO.0204-24.2024. [PMID: 38834301] [PMCID: PMC11208981] [DOI: 10.1523/eneuro.0204-24.2024]
Abstract
How features of complex visual patterns are combined to drive perception and eye movements is not well understood. Here we simultaneously assessed human observers' perceptual direction estimates and ocular following responses (OFR) evoked by moving plaids made from two summed gratings with varying contrast ratios. When the gratings were of equal contrast, observers' eye movements and perceptual reports followed the motion of the plaid pattern. However, when the contrasts were unequal, eye movements and reports during early phases of the OFR were biased toward the direction of the high-contrast grating component; during later phases, both responses followed the plaid pattern direction. The shift from component- to pattern-driven behavior resembles the shift in tuning seen under similar conditions in neuronal responses recorded from monkey MT. Moreover, for some conditions, pattern tracking and perceptual reports were correlated on a trial-by-trial basis. The OFR may therefore provide a precise behavioral readout of the dynamics of neural motion integration for complex visual patterns.
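A plaid of this kind is simply the sum of two drifting sinusoidal gratings whose directions straddle the pattern direction, with independently set contrasts. A minimal sketch (parameter values are arbitrary choices of mine, not those used in the study):

```python
import numpy as np

def grating(x, y, t, direction_deg, sf=0.02, tf=2.0, contrast=0.5):
    """Drifting sinusoidal grating; direction in degrees, sf in cycles/pixel, tf in Hz."""
    theta = np.deg2rad(direction_deg)
    phase = 2 * np.pi * (sf * (x * np.cos(theta) + y * np.sin(theta)) - tf * t)
    return contrast * np.sin(phase)

def plaid(x, y, t, pattern_dir=0.0, cross_angle=120.0, contrasts=(0.5, 0.5)):
    """Sum of two component gratings straddling the pattern direction."""
    half = cross_angle / 2
    return (grating(x, y, t, pattern_dir - half, contrast=contrasts[0]) +
            grating(x, y, t, pattern_dir + half, contrast=contrasts[1]))

y, x = np.mgrid[0:128, 0:128]
frame = plaid(x, y, t=0.0, contrasts=(0.4, 0.2))  # one unequal-contrast frame
```

Setting one contrast to zero recovers a single component grating, which is the degenerate case of the contrast-ratio manipulation described above.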
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Romesh Kumbhani
- Center for Neural Science, New York University, New York, New York 10003
- J Anthony Movshon
- Center for Neural Science, New York University, New York, New York 10003
- Department of Psychology, New York University, New York, New York 10003
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
6
DePiero VJ, Deng Z, Chen C, Savier EL, Chen H, Wei W, Cang J. Transformation of Motion Pattern Selectivity from Retina to Superior Colliculus. J Neurosci 2024; 44:e1704232024. [PMID: 38569924] [PMCID: PMC11097260] [DOI: 10.1523/jneurosci.1704-23.2024]
Abstract
The superior colliculus (SC) is a prominent and conserved visual center in all vertebrates. In mice, the most superficial lamina of the SC is enriched with neurons that are selective for the moving direction of visual stimuli. Here, we study how these direction selective neurons respond to complex motion patterns known as plaids, using two-photon calcium imaging in awake male and female mice. The plaid pattern consists of two superimposed sinusoidal gratings moving in different directions, giving an apparent pattern direction that lies between the directions of the two component gratings. Most direction selective neurons in the mouse SC respond robustly to the plaids and show a high selectivity for the moving direction of the plaid pattern but not of its components. Pattern motion selectivity is seen in both excitatory and inhibitory SC neurons and is especially prevalent in response to plaids with large cross angles between the two component gratings. However, retinal inputs to the SC are ambiguous in their selectivity to pattern versus component motion. Modeling suggests that pattern motion selectivity in the SC can arise from a nonlinear transformation of converging retinal inputs. In contrast, the prevalence of pattern motion selective neurons is not seen in the primary visual cortex (V1). These results demonstrate an interesting difference between the SC and V1 in motion processing and reveal the SC as an important site for encoding pattern motion.
Affiliation(s)
- Victor J DePiero
- Department of Biology, University of Virginia, Charlottesville, Virginia 22904
- Department of Psychology, University of Virginia, Charlottesville, Virginia 22904
- Zixuan Deng
- Committee on Neurobiology, University of Chicago, Chicago, Illinois 60637
- Chen Chen
- Department of Psychology, University of Virginia, Charlottesville, Virginia 22904
- Elise L Savier
- Department of Biology, University of Virginia, Charlottesville, Virginia 22904
- Department of Physiology, University of Michigan, Ann Arbor, Michigan 48109
- Hui Chen
- Department of Biology, University of Virginia, Charlottesville, Virginia 22904
- Department of Psychology, University of Virginia, Charlottesville, Virginia 22904
- Wei Wei
- Department of Neurobiology, Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
- Jianhua Cang
- Department of Biology, University of Virginia, Charlottesville, Virginia 22904
- Department of Psychology, University of Virginia, Charlottesville, Virginia 22904
7
Magrou L, Joyce MKP, Froudist-Walsh S, Datta D, Wang XJ, Martinez-Trujillo J, Arnsten AFT. The meso-connectomes of mouse, marmoset, and macaque: network organization and the emergence of higher cognition. Cereb Cortex 2024; 34:bhae174. [PMID: 38771244] [PMCID: PMC11107384] [DOI: 10.1093/cercor/bhae174]
Abstract
The recent publications of the inter-areal connectomes for mouse, marmoset, and macaque cortex have allowed deeper comparisons between rodent and primate cortical organization. In general, these show that the mouse has very widespread, "all-to-all" inter-areal connectivity (i.e. a "highly dense" connectome in a graph theoretical framework), while primates have a more modular organization. In this review, we highlight the relevance of these differences to function, using the example of primary visual cortex (V1), which in the mouse is interconnected with all other areas, including other primary sensory and frontal areas. We argue that this dense inter-areal connectivity benefits multimodal associations, at the cost of reduced functional segregation. Conversely, primates have expanded cortices with a modular connectivity structure, in which V1 is almost exclusively interconnected with other visual cortices, themselves organized in relatively segregated streams, while hierarchically higher cortical areas such as prefrontal cortex provide top-down regulation that specifies precise information for working memory storage and manipulation. Increased complexity in cytoarchitecture, connectivity, dendritic spine density, and receptor expression additionally reveals a sharper hierarchical organization in primate cortex. Together, we argue that these primate specializations permit separable deconstruction and selective reconstruction of representations, which is essential to higher cognition.
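The "highly dense" versus modular contrast can be made concrete with graph density, the fraction of possible directed inter-areal connections that are present. A small sketch on toy adjacency matrices (illustrative only, not real connectome data):

```python
import numpy as np

def density(adj):
    """Directed graph density: fraction of possible off-diagonal edges present."""
    a = (np.asarray(adj) > 0).astype(int)
    np.fill_diagonal(a, 0)  # ignore self-connections
    n = a.shape[0]
    return a.sum() / (n * (n - 1))

# Toy adjacency matrices: an all-to-all "mouse-like" graph versus a
# two-module "primate-like" graph with no cross-module edges.
n = 10
dense_graph = np.ones((n, n))
modular_graph = np.zeros((n, n))
modular_graph[:5, :5] = 1
modular_graph[5:, 5:] = 1
```

The all-to-all graph has density 1.0, while the two-module graph keeps less than half of the possible edges, the kind of difference graph-theoretic connectome comparisons quantify.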
Affiliation(s)
- Loïc Magrou
- Department of Neural Science, New York University, New York, NY 10003, United States
- Mary Kate P Joyce
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
- Sean Froudist-Walsh
- School of Engineering Mathematics and Technology, University of Bristol, Bristol, BS8 1QU, United Kingdom
- Dibyadeep Datta
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Xiao-Jing Wang
- Department of Neural Science, New York University, New York, NY 10003, United States
- Julio Martinez-Trujillo
- Departments of Physiology and Pharmacology, and Psychiatry, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
- Amy F T Arnsten
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
8
Zarei Eskikand P, Grayden DB, Kameneva T, Burkitt AN, Ibbotson MR. Understanding visual processing of motion: completing the picture using experimentally driven computational models of MT. Rev Neurosci 2024; 35:243-258. [PMID: 37725397] [DOI: 10.1515/revneuro-2023-0052]
Abstract
Computational modeling helps neuroscientists to integrate and explain experimental data obtained through neurophysiological and anatomical studies, thus providing a mechanism by which we can better understand and predict the principles of neural computation. Computational modeling of the neuronal pathways of the visual cortex has been successful in developing theories of biological motion processing. This review describes a range of computational models that have been inspired by neurophysiological experiments. Theories of local motion integration and pattern motion processing are presented, together with suggested neurophysiological experiments designed to test those hypotheses.
Affiliation(s)
- Parvin Zarei Eskikand
- Department of Biomedical Engineering, The University of Melbourne, Parkville 3052, Australia
- David B Grayden
- Department of Biomedical Engineering, The University of Melbourne, Parkville 3052, Australia
- Tatiana Kameneva
- Department of Biomedical Engineering, The University of Melbourne, Parkville 3052, Australia
- Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn 3122, Australia
- Anthony N Burkitt
- Department of Biomedical Engineering, The University of Melbourne, Parkville 3052, Australia
- Michael R Ibbotson
- National Vision Research Institute, Australian College of Optometry, Carlton 3053, Australia
9
Ziemba CM, Goris RLT, Stine GM, Perez RK, Simoncelli EP, Movshon JA. Neuronal and behavioral responses to naturalistic texture images in macaque monkeys. bioRxiv 2024:2024.02.22.581645. [PMID: 38464304] [PMCID: PMC10925125] [DOI: 10.1101/2024.02.22.581645]
Abstract
The visual world is richly adorned with texture, which can serve to delineate important elements of natural scenes. In anesthetized macaque monkeys, selectivity for the statistical features of natural texture is weak in V1, but substantial in V2, suggesting that neuronal activity in V2 might directly support texture perception. To test this, we investigated the relation between single cell activity in macaque V1 and V2 and simultaneously measured behavioral judgments of texture. We generated stimuli along a continuum between naturalistic texture and phase-randomized noise and trained two macaque monkeys to judge whether a sample texture more closely resembled one or the other extreme. Analysis of responses revealed that individual V1 and V2 neurons carried much less information about texture naturalness than behavioral reports. However, the sensitivity of V2 neurons, especially those preferring naturalistic textures, was significantly closer to that of behavior compared with V1. The firing of both V1 and V2 neurons predicted perceptual choices in response to repeated presentations of the same ambiguous stimulus in one monkey, despite low individual neural sensitivity. However, neither population predicted choice in the second monkey. We conclude that neural responses supporting texture perception likely continue to develop downstream of V2. Further, combined with neural data recorded while the same two monkeys performed an orientation discrimination task, our results demonstrate that choice-correlated neural activity in early sensory cortex is unstable across observers and tasks, untethered from neuronal sensitivity, and thus unlikely to reflect a critical aspect of the formation of perceptual decisions.
Significance statement
As visual signals propagate along the cortical hierarchy, they encode increasingly complex aspects of the sensory environment and likely have a more direct relationship with perceptual experience. We replicate and extend previous results from anesthetized monkeys differentiating the selectivity of neurons along the first step in cortical vision from area V1 to V2. However, our results further complicate efforts to establish neural signatures that reveal the relationship between perception and the neuronal activity of sensory populations. We find that choice-correlated activity in V1 and V2 is unstable across different observers and tasks, and also untethered from neuronal sensitivity and other features of nonsensory response modulation.
10
Bogatova D, Smirnakis SM, Palagina G. Tug-of-Peace: Visual Rivalry and Atypical Visual Motion Processing in MECP2 Duplication Syndrome of Autism. eNeuro 2024; 11:ENEURO.0102-23.2023. [PMID: 37940561] [PMCID: PMC10792601] [DOI: 10.1523/eneuro.0102-23.2023]
Abstract
Extracting common patterns of neural circuit computations in the autism spectrum and confirming them as a cause of specific core traits of autism is the first step toward identifying cell-level and circuit-level targets for effective clinical intervention. Studies in humans with autism have identified functional links and common anatomic substrates between core restricted behavioral repertoire, cognitive rigidity, and overstability of visual percepts during visual rivalry. To study these processes with single-cell precision and comprehensive neuronal population coverage, we developed the visual bistable perception paradigm for mice based on ambiguous moving plaid patterns consisting of two transparent gratings drifting at an angle of 120°. This results in spontaneous reversals of the perception between local component motion (plaid perceived as two separate moving grating components) and integrated global pattern motion (plaid perceived as a fused moving texture). This robust paradigm does not depend on the explicit report of the mouse, since the direction of the optokinetic nystagmus (OKN) is used to infer the dominant percept. Using this paradigm, we found that the rate of perceptual reversals between global and local motion interpretations is reduced in the methyl-CpG-binding protein 2 duplication syndrome (MECP2-ds) mouse model of autism. Moreover, the stability of local motion percepts is greatly increased in MECP2-ds mice at the expense of global motion percepts. Thus, our model reproduces a subclass of the core features in human autism (reduced rate of visual rivalry and atypical perception of visual motion). This further offers a well-controlled approach for dissecting neuronal circuits underlying these core features.
Affiliation(s)
- Daria Bogatova
- Department of Neurology, Brigham and Women's Hospital, Boston, MA 02115
- Department of Biology, Boston University, Boston, MA 02115
- Harvard Medical School, Boston, MA 02115
- Stelios M Smirnakis
- Department of Neurology, Brigham and Women's Hospital, Boston, MA 02115
- Harvard Medical School, Boston, MA 02115
- Jamaica Plain Veterans Affairs Hospital, Boston, MA 02130
- Ganna Palagina
- Department of Neurology, Brigham and Women's Hospital, Boston, MA 02115
- Harvard Medical School, Boston, MA 02115
- Jamaica Plain Veterans Affairs Hospital, Boston, MA 02130
11
Thompson LW, Kim B, Rokers B, Rosenberg A. Hierarchical computation of 3D motion across macaque areas MT and FST. Cell Rep 2023; 42:113524. [PMID: 38064337] [PMCID: PMC10791528] [DOI: 10.1016/j.celrep.2023.113524]
Abstract
Computing behaviorally relevant representations of three-dimensional (3D) motion from two-dimensional (2D) retinal signals is critical for survival. To ascertain where and how the primate visual system performs this computation, we recorded from the macaque middle temporal (MT) area and its downstream target, the fundus of the superior temporal sulcus (area FST). Area MT is a key site of 2D motion processing, but its role in 3D motion processing is controversial. The functions of FST remain highly underexplored. To distinguish representations of 3D motion from those of 2D retinal motion, we contrast responses to multiple motion cues during a motion discrimination task. The results reveal a hierarchical transformation whereby many FST but not MT neurons are selective for 3D motion. Modeling results further show how generalized, cue-invariant representations of 3D motion in FST may be created by selectively integrating the output of 2D motion selective MT neurons.
Affiliation(s)
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
- Bas Rokers
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
12
Matteucci G, Bellacosa Marotti R, Zattera B, Zoccolan D. Truly pattern: Nonlinear integration of motion signals is required to account for the responses of pattern cells in rat visual cortex. Sci Adv 2023; 9:eadh4690. [PMID: 37939191] [PMCID: PMC10631736] [DOI: 10.1126/sciadv.adh4690]
Abstract
A key feature of advanced motion processing in the primate dorsal stream is the existence of pattern cells: specialized cortical neurons that integrate local motion signals into pattern-invariant representations of global direction. Pattern cells have also been reported in rodent visual cortex, but it is unknown whether the tuning of these neurons results from truly integrative, nonlinear mechanisms or trivially arises from linear receptive fields (RFs) with a peculiar geometry. Here, we show that pattern cells in rat primary (V1) and lateromedial (LM) visual cortex process motion direction in a way that cannot be explained by the linear spatiotemporal structure of their RFs. Instead, their tuning properties are consistent with and well explained by those of units in a state-of-the-art neural network model of the dorsal stream. This suggests that similar cortical processes underlie motion representation in primates and rodents. The latter could thus serve as powerful model systems to unravel the underlying circuit-level mechanisms.
13
Singer Y, Taylor L, Willmore BDB, King AJ, Harper NS. Hierarchical temporal prediction captures motion processing along the visual pathway. eLife 2023; 12:e52599. [PMID: 37844199] [PMCID: PMC10629830] [DOI: 10.7554/elife.52599]
Abstract
Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction - representing features that predict future sensory input from past input (Singer et al., 2018). Here, we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.
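The core idea of temporal prediction, representing features of past input that best predict future input, can be illustrated with a linear toy version: fit weights that predict the next sample of a signal from the preceding k samples. This sketch is my own simplification, not the hierarchical network model of Singer et al.:

```python
import numpy as np

# Learn weights that predict the next sample of a signal from the past k samples.
rng = np.random.default_rng(1)
t = np.arange(2000)
signal = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)  # noisy sinusoid

k = 8
X = np.stack([signal[i:i + k] for i in range(signal.size - k)])  # past windows
y_next = signal[k:]                                              # future targets
w, *_ = np.linalg.lstsq(X, y_next, rcond=None)                   # least squares

mse = np.mean((X @ w - y_next) ** 2)  # prediction error on the training signal
```

For a predictable signal the learned filter drives the error far below the signal variance; in the hierarchical version of the paper, such prediction-optimized filters are stacked, with each stage predicting the future of the stage below.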
Affiliation(s)
- Yosef Singer
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Luke Taylor
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben DB Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
14
Kreyenmeier P, Kumbhani R, Movshon JA, Spering M. Shared mechanisms drive ocular following and motion perception. bioRxiv 2023:2023.10.02.560543. [PMID: 37873151] [PMCID: PMC10592915] [DOI: 10.1101/2023.10.02.560543]
Abstract
How features of complex visual patterns combine to drive perception and eye movements is not well understood. We simultaneously assessed human observers' perceptual direction estimates and ocular following responses (OFR) evoked by moving plaids made from two summed gratings with varying contrast ratios. When the gratings were of equal contrast, observers' eye movements and perceptual reports followed the motion of the plaid pattern. However, when the contrasts were unequal, eye movements and reports during early phases of the OFR were biased toward the direction of the high-contrast grating component; during later phases, both responses more closely followed the plaid pattern direction. The shift from component- to pattern-driven behavior resembles the shift in tuning seen under similar conditions in neuronal responses recorded from monkey MT. Moreover, for some conditions, pattern tracking and perceptual reports were correlated on a trial-by-trial basis. The OFR may therefore provide a precise behavioral read-out of the dynamics of neural motion integration for complex visual patterns.
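For reference, the plaid pattern direction discussed above is conventionally computed with the intersection-of-constraints rule: each grating constrains the pattern velocity v to satisfy v · n = s, where n is the unit vector normal to that grating's orientation and s its drift speed. A sketch with illustrative numbers:

```python
import numpy as np

def ioc_velocity(theta1, theta2, s1, s2):
    """Pattern velocity of a two-grating plaid via intersection of
    constraints. theta1/theta2: drift directions (deg) normal to each
    grating; s1/s2: drift speeds. Solves v . n_i = s_i for v."""
    n = np.array([[np.cos(np.deg2rad(theta1)), np.sin(np.deg2rad(theta1))],
                  [np.cos(np.deg2rad(theta2)), np.sin(np.deg2rad(theta2))]])
    return np.linalg.solve(n, np.array([s1, s2]))

# Symmetric plaid: components drifting 30 deg either side of rightward
v = ioc_velocity(+30.0, -30.0, 1.0, 1.0)
assert abs(v[1]) < 1e-9      # pattern moves straight rightward
assert v[0] > 1.0            # pattern speed exceeds component speed (1 / cos 30 deg)
```

With unequal contrasts, as in the experiment, early responses are biased toward the high-contrast component's direction rather than this IOC solution, which is what makes the OFR a useful read-out of the integration dynamics.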
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Romesh Kumbhani
- Center for Neural Science, New York University, New York, NY 10003, USA
- J. Anthony Movshon
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Psychology, New York University, New York, NY 10003, USA
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
15
Ladret HJ, Cortes N, Ikan L, Chavane F, Casanova C, Perrinet LU. Cortical recurrence supports resilience to sensory variance in the primary visual cortex. Commun Biol 2023; 6:667. [PMID: 37353519] [PMCID: PMC10290066] [DOI: 10.1038/s42003-023-05042-3]
Abstract
Our daily endeavors occur in a complex visual environment, whose intrinsic variability challenges the way we integrate information to make decisions. By processing myriad parallel sensory inputs, our brain is theoretically able to compute the variance of its environment, a cue known to guide our behavior. Yet the neurobiological and computational bases of such variance computations are still poorly understood. Here, we quantify the dynamics of sensory variance modulations of cat primary visual cortex neurons. We report two archetypal neuronal responses, one of which is resilient to changes in variance and co-encodes the sensory feature and its variance, improving the population encoding of orientation. The existence of these variance-specific responses can be accounted for by a model of intracortical recurrent connectivity. We thus propose that local recurrent circuits process uncertainty as a generic computation, advancing our understanding of how the brain handles naturalistic inputs.
Affiliation(s)
- Hugo J Ladret
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
- School of Optometry, Université de Montréal, Montréal, Canada
- Nelson Cortes
- School of Optometry, Université de Montréal, Montréal, Canada
- Lamyae Ikan
- School of Optometry, Université de Montréal, Montréal, Canada
- Frédéric Chavane
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
16
Sachse EM, Snyder AC. Dynamic attention signalling in V4: Relation to fast-spiking/non-fast-spiking cell class and population coupling. Eur J Neurosci 2023; 57:918-939. [PMID: 36732934] [DOI: 10.1111/ejn.15928]
Abstract
The computational role of a neuron during attention depends on its firing properties, neurotransmitter expression and functional connectivity. Neurons in the visual cortical area V4 are reliably engaged by selective attention but exhibit diversity in the effect of attention on firing rates and correlated variability. It remains unclear what specific neuronal properties shape these attention effects. In this study, we quantitatively characterised the distribution of attention modulation of firing rates across populations of V4 neurons. Neurons exhibited a continuum of time-varying attention effects. At one end of the continuum, neurons' spontaneous firing rates were slightly depressed with attention (compared to when unattended), whereas their stimulus responses were enhanced with attention. The other end of the continuum showed the converse pattern: attention depressed stimulus responses but increased spontaneous activity. We tested whether the particular pattern of time-varying attention effects that a neuron exhibited was related to the shape of its action potentials (so-called 'fast-spiking' [FS] neurons have been linked to inhibition) and the strength of its coupling to the overall population. We found an interdependence among neural attention effects, neuron type and population coupling. In particular, we found that neurons for which attention enhanced spontaneous activity but suppressed stimulus responses were less likely to be fast-spiking (more likely to be non-fast-spiking) and tended to have stronger population coupling, compared to neurons with other types of attention effects. These results add important information to our understanding of visual attention circuits at the cellular level.
Affiliation(s)
- Elizabeth M Sachse
- Psychiatry, University of Minnesota, Minneapolis, Minnesota, USA
- Neuroscience, University of Minnesota, Minneapolis, Minnesota, USA
- Adam C Snyder
- Brain and Cognitive Sciences, University of Rochester, Rochester, New York, USA
- Neuroscience, University of Rochester, Rochester, New York, USA
- Center for Visual Sciences, University of Rochester, Rochester, New York, USA
17
Korai Y, Miura K. A dynamical model of visual motion processing for arbitrary stimuli including type II plaids. Neural Netw 2023; 162:46-68. [PMID: 36878170] [DOI: 10.1016/j.neunet.2023.02.039]
Abstract
To explore the operating principle of visual motion processing in the brain underlying perception and eye movements, we model, at the algorithmic level, how the visual system estimates the velocity of a visual stimulus, using a dynamical systems approach. In this study, we formulate the model as an optimization process over an appropriately defined objective function. The model is applicable to arbitrary visual stimuli. We find that our theoretical predictions qualitatively agree with the time evolution of eye movements reported in previous work across various types of stimulus. Our results suggest that the brain implements the present framework as the internal model of motion vision. We anticipate our model to be a promising building block for a deeper understanding of visual motion processing, as well as for the development of robotics.
Affiliation(s)
- Yusuke Korai
- Integrated Clinical Education Center, Kyoto University Hospital, Kyoto University, Kyoto 606-8507, Japan
- Kenichiro Miura
- Graduate School of Medicine, Kyoto University, Kyoto 606-8501, Japan
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Tokyo 187-8551, Japan
18
Chunharas C, Rademaker RL, Brady TF, Serences JT. An adaptive perspective on visual working memory distortions. J Exp Psychol Gen 2022; 151:2300-2323. [PMID: 35191726] [PMCID: PMC9392817] [DOI: 10.1037/xge0001191]
Abstract
When holding multiple items in visual working memory, representations of individual items are often attracted to, or repelled from, each other. While this is empirically well-established, existing frameworks do not account for both types of distortions, which appear to be in opposition. Here, we demonstrate that both types of memory distortion may confer functional benefits under different circumstances. When there are many items to remember and subjects are near their capacity to accurately remember each item individually, memories for each item become more similar (attraction). However, when remembering smaller sets of highly similar but discernible items, memory for each item becomes more distinct (repulsion), possibly to support better discrimination. Importantly, this repulsion grows stronger with longer delays, suggesting that it dynamically evolves in memory and is not just a differentiation process that occurs during encoding. Furthermore, both attraction and repulsion occur even in tasks designed to mitigate response bias concerns, suggesting they are genuine changes in memory representations. Together, these results are in line with the theory that attraction biases act to stabilize memory signals by capitalizing on information about an entire group of items, whereas repulsion biases reflect a tradeoff in which some accuracy is sacrificed to keep similar items distinct. Both biases suggest that human memory systems may sacrifice veridical representations in favor of representations that better support specific behavioral goals.
Affiliation(s)
- Chaipat Chunharas
- Department of Psychology, University of California San Diego, La Jolla, California, USA
- Department of Medicine, King Chulalongkorn Memorial Hospital, Chulalongkorn University, Bangkok, Thailand
- Chulalongkorn Cognitive, Clinical & Computational Neuroscience Research Group, Chulalongkorn University, Bangkok, Thailand
- Rosanne L. Rademaker
- Department of Psychology, University of California San Diego, La Jolla, California, USA
- Ernst Strüngmann Institute for Neuroscience in cooperation with the Max Planck Society, Frankfurt, Germany
- Timothy F. Brady
- Department of Psychology, University of California San Diego, La Jolla, California, USA
- John T. Serences
- Department of Psychology, University of California San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California San Diego, La Jolla, California, USA
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, California, USA
19
Falconbridge M, Hewitt K, Haille J, Badcock DR, Edwards M. The induced motion effect is a high-level visual phenomenon: Psychophysical evidence. i-Perception 2022; 13:20416695221118111. [PMID: 36092511] [PMCID: PMC9459461] [DOI: 10.1177/20416695221118111]
Abstract
Induced motion is the illusory motion of a target away from the direction of motion of the unattended background. If, as suggested by the flow-parsing hypothesis, it results from assigning background motion to self-motion and judging target motion relative to the scene, then the effect must be mediated in higher levels of the visual motion pathway, where self-motion is assessed. We provide evidence for a high-level mechanism in two broad ways. Firstly, we show that the effect is insensitive to a set of low-level spatial aspects of the scene, namely the spatial arrangement, the spatial frequency content, and the orientation content of the background relative to the target. Secondly, we show that the effect is the same whether the target and background are composed of the same kind of local elements, one-dimensional (1D) or two-dimensional (2D), or one is composed of one kind and the other of the other. The latter finding is significant because 1D and 2D local elements are integrated by two different mechanisms, so the induced motion effect is likely to be mediated in a visual motion processing area that follows the two separate integration mechanisms. The medial superior temporal area in monkeys, and its human equivalent, is suggested as a viable site. We present a simple flow-parsing-inspired model and demonstrate a good fit to our data and to data from a previous induced motion study.
20
A neural correlate of perceptual segmentation in macaque middle temporal cortical area. Nat Commun 2022; 13:4967. [PMID: 36002445] [PMCID: PMC9402536] [DOI: 10.1038/s41467-022-32555-y]
Abstract
High-resolution vision requires fine retinal sampling followed by integration to recover object properties. Importantly, accuracy is lost if local samples from different objects are intermixed. Thus, segmentation, the grouping of image regions for separate processing, is crucial for perception. Previous work has used bi-stable plaid patterns, which can be perceived as either a single or multiple moving surfaces, to study this process. Here, we report a relationship between activity in a mid-level site in the primate visual pathways and segmentation judgments. Specifically, we find that direction-selective middle temporal neurons are sensitive to texturing cues used to bias the perception of bi-stable plaids and exhibit a significant trial-by-trial correlation with subjective perception of a constant stimulus. This correlation is greater in units that signal global motion in patterns with multiple local orientations. Thus, we conclude that the middle temporal area contains a signal for segmenting complex scenes into constituent objects and surfaces.
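The trial-by-trial correlation with perception reported above is typically quantified as a choice probability: the area under the ROC curve separating a unit's firing-rate distributions on trials grouped by the reported percept of a constant stimulus. A sketch of that measure, using simulated spike counts rather than data from the study:

```python
import numpy as np

def choice_probability(rates_choice_a, rates_choice_b):
    """ROC-area estimate: P(a random choice-A rate exceeds a random
    choice-B rate), with ties counted as half. 0.5 means no relation
    between firing and the reported percept."""
    a = np.asarray(rates_choice_a, dtype=float)[:, None]
    b = np.asarray(rates_choice_b, dtype=float)[None, :]
    return float(np.mean((a > b) + 0.5 * (a == b)))

# Simulated spike counts for the two percepts of a bi-stable plaid
rng = np.random.default_rng(2)
coherent = rng.poisson(12, 200)      # trials reported as one coherent surface
transparent = rng.poisson(10, 200)   # trials reported as two transparent surfaces
cp = choice_probability(coherent, transparent)
assert 0.5 < cp < 1.0                              # firing weakly predicts the percept
assert abs(choice_probability(coherent, coherent) - 0.5) < 1e-9  # self-comparison is chance
```

Values significantly above 0.5, as found here for MT units preferring global motion, are what license the claim of a trial-by-trial link between the neural signal and the segmentation judgment.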
21
Barthélemy FV, Fleuriet J, Perrinet LU, Masson GS. A behavioral receptive field for ocular following in monkeys: Spatial summation and its spatial frequency tuning. eNeuro 2022; 9:ENEURO.0374-21.2022. [PMID: 35760525] [PMCID: PMC9275147] [DOI: 10.1523/eneuro.0374-21.2022]
Abstract
In human and non-human primates, reflexive tracking eye movements can be initiated at very short latency in response to a rapid shift of the image. Previous studies in humans have shown that only a part of the central visual field is optimal for driving ocular following responses. Here, we investigated spatial summation of motion information across a wide range of spatial frequencies and speeds of drifting gratings by recording short-latency ocular following responses in macaque monkeys. We show that the optimal stimulus size for driving ocular responses covers a small (<20° diameter), central part of the visual field that shrinks with higher spatial frequency. This signature of linear motion integration remains invariant with speed and temporal frequency. For low and medium spatial frequencies, we found a strong suppressive influence from surround motion, evidenced by a decrease of response amplitude for stimulus sizes larger than optimal. Such suppression disappears with gratings at high frequencies. The contribution of peripheral motion was investigated by presenting grating annuli of increasing eccentricity. We observed an exponential decay of response amplitude with grating eccentricity, the decrease being faster for higher spatial frequencies. Weaker surround suppression can thus be explained by sparser eccentric inputs at high frequencies. A Difference-of-Gaussians model best renders the antagonistic contributions of peripheral and central motions. Its best-fit parameters coincide with several well-known spatial properties of area MT neuronal populations. These results describe the mechanism by which central motion information is automatically integrated in a context-dependent manner to drive ocular responses.
Significance statement: Ocular following is driven by visual motion at ultra-short latency in both humans and monkeys. Its dynamics reflect the properties of low-level motion integration. Here, we show that a strong center-surround suppression mechanism modulates initial eye velocity. Its spatial properties depend upon the spatial frequency of the visual input but are insensitive to either its temporal frequency or speed. These properties are best described with a Difference-of-Gaussians model of spatial integration. The model parameters reflect many spatial characteristics of motion-sensitive neuronal populations in monkey area MT. Our results further outline the computational properties of the behavioral receptive field underpinning automatic, context-dependent motion integration.
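A Difference-of-Gaussians size-tuning model of the kind invoked here has a convenient closed form: in one common parameterization from the size-tuning literature, summing an excitatory center Gaussian and subtracting a broader suppressive surround Gaussian over a patch of diameter d yields a difference of error functions. The parameter values below are illustrative, not the best-fit values from the study.

```python
import math

def dog_response(d, ke=1.0, a=5.0, ki=0.6, b=15.0):
    """Response to a grating patch of diameter d (deg): excitatory center
    (gain ke, spatial extent a) minus suppressive surround (gain ki,
    broader extent b). A sketch; all parameters are illustrative."""
    return ke * math.erf(d / a) - ki * math.erf(d / b)

sizes = [2, 5, 10, 20, 40, 80]
resp = [dog_response(d) for d in sizes]
best = sizes[resp.index(max(resp))]

assert best < 80                       # an intermediate optimal size exists
assert resp[-1] < max(resp)            # larger-than-optimal patches are suppressed
# With no surround (ki = 0), the response grows monotonically: no suppression,
# as observed at high spatial frequencies in the study
assert dog_response(80, ki=0.0) >= dog_response(10, ki=0.0)
```

Shrinking the center extent `a` mimics the smaller optimal size at high spatial frequency, and shrinking the surround gain `ki` mimics the loss of suppression the authors attribute to sparser eccentric inputs.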
Affiliation(s)
- Frédéric V Barthélemy
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Jérome Fleuriet
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Assistance Publique-Hôpitaux de Paris, Intensive Care Unit, Raymond Poincaré Hospital, Garches, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
22
Meso AI, Gekas N, Mamassian P, Masson GS. Speed Estimation for Visual Tracking Emerges Dynamically from Nonlinear Frequency Interactions. eNeuro 2022; 9:ENEURO.0511-21.2022. [PMID: 35470228] [PMCID: PMC9113919] [DOI: 10.1523/eneuro.0511-21.2022]
Abstract
Sensing the movement of fast objects within our visual environments is essential for controlling actions. It requires online estimation of motion direction and speed. We probed human speed representation using ocular tracking of stimuli with different statistics. First, we compared ocular responses to single drifting gratings (DGs) with a given set of spatiotemporal frequencies against broadband motion clouds (MCs) of matched mean frequencies. In Fourier space, the motion energy distribution of a grating is point-like, whereas that of a cloud is an ellipse oriented along the constant-speed axis. Across frequency space, MCs elicited stronger, less variable, and speed-tuned responses, whereas DGs yielded weaker, more frequency-tuned responses. Second, we measured responses to patterns made of two or three components covering a range of orientations within Fourier space. Early tracking initiation of the patterns was best predicted by a linear combination of components before nonlinear interactions emerged to shape later dynamics. Inputs are supralinearly integrated along an iso-velocity line and sublinearly integrated away from it. A dynamical probabilistic model characterizes these interactions as excitatory pooling along the iso-velocity line and inhibition along the orthogonal "scale" axis. Such crossed patterns of interaction would appropriately integrate or segment moving objects. This study supports the novel idea that speed estimation is better framed as a dynamic channel interaction organized along speed and scale axes.
Affiliation(s)
- Andrew Isaac Meso
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College, London SE5 8AF, United Kingdom
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille 13005, France
- Nikos Gekas
- Department of Psychology, Edinburgh Napier University, Edinburgh EH11 4BN, United Kingdom
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, Paris 75005, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille 13005, France
23
A Stable Population Code for Attention in Prefrontal Cortex Leads a Dynamic Attention Code in Visual Cortex. J Neurosci 2021; 41:9163-9176. [PMID: 34583956] [DOI: 10.1523/jneurosci.0608-21.2021]
Abstract
Attention often requires maintaining a stable mental state over time while simultaneously improving perceptual sensitivity. These requirements place conflicting demands on neural populations, as sensitivity implies a robust response to perturbation by incoming stimuli, which is antithetical to stability. Functional specialization of cortical areas provides one potential mechanism to resolve this conflict. We reasoned that attention signals in executive control areas might be highly stable over time, reflecting maintenance of the cognitive state, thereby freeing up sensory areas to be more sensitive to sensory input (i.e., unstable), which would be reflected by more dynamic attention signals in those areas. To test these predictions, we simultaneously recorded neural populations in prefrontal cortex (PFC) and visual cortical area V4 in rhesus macaque monkeys performing an endogenous spatial selective attention task. Using a decoding approach, we found that the neural code for attention states in PFC was substantially more stable over time compared with the attention code in V4 on a moment-by-moment basis, in line with our guiding thesis. Moreover, attention signals in PFC predicted the future attention state of V4 better than vice versa, consistent with a top-down role for PFC in attention. These results suggest a functional specialization of attention mechanisms across cortical areas with a division of labor. PFC signals the cognitive state and maintains this state stably over time, whereas V4 responds to sensory input in a manner dynamically modulated by that cognitive state.
SIGNIFICANCE STATEMENT: Attention requires maintaining a stable mental state while simultaneously improving perceptual sensitivity. We hypothesized that these two demands (stability and sensitivity) are distributed between prefrontal and visual cortical areas, respectively. Specifically, we predicted that attention signals in visual cortex would be less stable than in prefrontal cortex, and, furthermore, that prefrontal cortical signals would predict attention signals in visual cortex, in line with the hypothesized role of prefrontal cortex in top-down executive control. Our results are consistent with suggestions deriving from previous work using separate recordings in the two brain areas in different animals performing different tasks, and represent the first direct evidence in support of this hypothesis with simultaneous multiarea recordings within individual animals.
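The stability comparison described above can be sketched as cross-temporal decoding: train a decoder on population activity at one time point and test it at another. A stable code (the PFC result) generalizes across time; a dynamic code (the V4 result) does not. The populations below are simulated and all sizes are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
trials, neurons, times = 200, 40, 5
labels = rng.integers(0, 2, trials)            # attended location on each trial

# Orthonormal candidate coding axes, scaled to a common signal strength
q, _ = np.linalg.qr(rng.standard_normal((neurons, times)))
axes = q.T * np.sqrt(neurons)

def population(axis_per_time):
    """Trials x neurons x times: noise plus a label signal along a
    (possibly time-varying) coding axis."""
    x = rng.standard_normal((trials, neurons, times))
    for t, axis in enumerate(axis_per_time):
        x[:, :, t] += np.outer(2 * labels - 1, axis)
    return x

stable = population([axes[0]] * times)         # same coding axis at every time
dynamic = population(list(axes))               # a different (orthogonal) axis at each time

def cross_accuracy(x, t_train, t_test):
    """Train a mean-difference decoder at t_train, test it at t_test."""
    w = x[labels == 1, :, t_train].mean(0) - x[labels == 0, :, t_train].mean(0)
    return np.mean((x[:, :, t_test] @ w > 0) == labels)

assert cross_accuracy(stable, 0, 4) > 0.8      # stable code transfers across time
assert cross_accuracy(dynamic, 0, 4) < 0.7     # dynamic code does not
```

Comparing the full matrix of train-time by test-time accuracies, area by area, is the moment-by-moment stability analysis the abstract describes.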
24
Primary visual cortex straightens natural video trajectories. Nat Commun 2021; 12:5982. [PMID: 34645787] [PMCID: PMC8514453] [DOI: 10.1038/s41467-021-25939-z]
Abstract
Many sensory-driven behaviors rely on predictions about future states of the environment. Visual input typically evolves along complex temporal trajectories that are difficult to extrapolate. We test the hypothesis that spatial processing mechanisms in the early visual system facilitate prediction by constructing neural representations that follow straighter temporal trajectories. We recorded V1 population activity in anesthetized macaques while presenting static frames taken from brief video clips, and developed a procedure to measure the curvature of the associated neural population trajectory. We found that V1 populations straighten naturally occurring image sequences, but entangle artificial sequences that contain unnatural temporal transformations. We show that these effects arise in part from computational mechanisms that underlie the stimulus selectivity of V1 cells. Together, our findings reveal that the early visual system uses a set of specialized computations to build representations that can support prediction in the natural environment.
Many behaviours depend on predictions about the environment. Here, the authors find that neural populations in primary visual cortex straighten the temporal trajectories of natural video clips, facilitating the extrapolation of past observations.
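The curvature of a discrete trajectory, whether of video frames or of population responses, can be summarized as the mean turning angle between successive displacement vectors: zero for a straight path, large for a contorted one. A sketch of this measure, not the authors' full estimation procedure (which additionally accounts for response noise):

```python
import numpy as np

def mean_curvature(traj):
    """traj: frames x dimensions. Mean angle (degrees) between successive
    displacement vectors; 0 for a perfectly straight trajectory."""
    diffs = np.diff(traj, axis=0)                              # displacement vectors
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cos_angles = np.sum(diffs[:-1] * diffs[1:], axis=1)        # cosine of each turn
    return np.degrees(np.mean(np.arccos(np.clip(cos_angles, -1.0, 1.0))))

t = np.linspace(0, 1, 10)[:, None]
straight = t * np.array([[1.0, 2.0, 3.0]])            # linear path: (near-)zero curvature
curved = np.hstack([np.cos(6 * t), np.sin(6 * t)])    # arc: constant nonzero turning angle

assert mean_curvature(straight) < 0.01
assert mean_curvature(curved) > 30
```

"Straightening" in the study's sense means that this curvature, computed on the V1 population response to a natural frame sequence, is lower than the curvature of the same sequence in pixel space.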
25
Matteucci G, Zattera B, Bellacosa Marotti R, Zoccolan D. Rats spontaneously perceive global motion direction of drifting plaids. PLoS Comput Biol 2021; 17:e1009415. [PMID: 34520476] [PMCID: PMC8462730] [DOI: 10.1371/journal.pcbi.1009415]
Abstract
Computing global motion direction of extended visual objects is a hallmark of primate high-level vision. Although neurons selective for global motion have also been found in mouse visual cortex, it remains unknown whether rodents can combine multiple motion signals into global, integrated percepts. To address this question, we trained two groups of rats to discriminate either gratings (G group) or plaids (i.e., superpositions of gratings with different orientations; P group) drifting horizontally along opposite directions. After the animals learned the task, we applied a visual priming paradigm, where presentation of the target stimulus was preceded by the brief presentation of either a grating or a plaid. The extent to which rat responses to the targets were biased by such prime stimuli provided a measure of the spontaneous, perceived similarity between primes and targets. We found that gratings and plaids, when used as primes, were equally effective at biasing the perception of plaid direction for the rats of the P group. Conversely, for the G group, only the gratings acted as effective prime stimuli, while the plaids failed to alter the perception of grating direction. To interpret these observations, we simulated a decision neuron reading out the representations of gratings and plaids, as conveyed by populations of either component or pattern cells (i.e., local or global motion detectors). We concluded that the findings for the P group are highly consistent with the existence of a population of pattern cells, playing a functional role similar to that demonstrated in primates. We also explored different scenarios that could explain the failure of the plaid stimuli to elicit a sizable priming magnitude for the G group. These simulations yielded testable predictions about the properties of motion representations in rodent visual cortex at the single-cell and circuitry level, thus paving the way to future neurophysiology experiments.
Affiliation(s)
- Giulio Matteucci
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Benedetta Zattera
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
26
Chow A, Silva AE, Tsang K, Ng G, Ho C, Thompson B. Binocular Integration of Perceptually Suppressed Visual Information in Amblyopia. Invest Ophthalmol Vis Sci 2021; 62:11. [PMID: 34515731] [PMCID: PMC8444466] [DOI: 10.1167/iovs.62.12.11]
Abstract
Purpose: The purpose of this study was to assess whether motion information from suppressed amblyopic eyes can influence visual perception.
Methods: Participants with normal vision (n = 20) and with amblyopia (n = 20; 11 anisometropic and 9 strabismic/mixed) viewed dichoptic, orthogonal drifting gratings through a mirror stereoscope. Participants continuously reported form and motion percepts as gratings rivaled for 60 seconds. Responses were binned into categories ranging from binocular integration to complete suppression. Periods when the grating presented to the nondominant/amblyopic eye was suppressed were analyzed further to determine the extent of binocular integration of motion.
Results: Individuals with amblyopia experienced longer periods of non-preferred eye suppression than controls. When the non-preferred eye grating was suppressed, binocular integration of motion occurred 48.1 ± 6.2% and 31.2 ± 5.8% of the time in control and amblyopic participants, respectively. Periods of motion integration from the suppressed eye were significantly non-zero for both groups.
Conclusions: Visual information seen only by a suppressed amblyopic eye can be binocularly integrated and influence the overall visual percept. These findings reveal that visual information subjected to interocular suppression can still contribute to binocular vision and suggest the use of appropriate optical correction for the amblyopic eye to improve image quality for binocular combination.
Affiliation(s)
- Amy Chow: Department of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
- Andrew E. Silva: Department of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
- Katelyn Tsang: Department of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
- Gabriel Ng: Mount Pleasant Optometry Centre, Vancouver, British Columbia, Canada
- Cindy Ho: Mount Pleasant Optometry Centre, Vancouver, British Columbia, Canada
- Benjamin Thompson: Department of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada; Center for Eye and Vision Research, 17W Science Park, Hong Kong; Liggins Institute, University of Auckland, Auckland, New Zealand
27. Kozak RA, Corneil BD. High-contrast, moving targets in an emerging target paradigm promote fast visuomotor responses during visually guided reaching. J Neurophysiol 2021; 126:68-81. [PMID: 34077283 DOI: 10.1152/jn.00057.2021]
Abstract
Humans have a remarkable capacity to rapidly interact with the surrounding environment, often by transforming visual input into motor output on a moment-to-moment basis. But what visual features promote rapid reaching? High-contrast, fast-moving targets elicit strong responses in the superior colliculus (SC), a structure associated with express saccades and implicated in rapid electromyographic (EMG) responses on upper limb muscles. To test the influence of stimulus properties on rapid reaches, we had human subjects perform visually guided reaches to moving targets varied by speed (experiment 1) or speed and contrast (experiment 2) in an emerging target paradigm that has recently been shown to robustly elicit fast visuomotor responses. Our analysis focused on stimulus-locked responses (SLRs) on upper limb muscles. SLRs appear within 100 ms of target presentation, and as the first wave of muscle recruitment they have been hypothesized to arise from the SC. Across 32 subjects studied in both experiments, 97% expressed SLRs in the emerging target paradigm, whereas only 69% expressed SLRs in an immediate response paradigm toward static targets. Faster-moving targets (experiment 1) evoked large-magnitude SLRs, whereas high-contrast, fast-moving targets (experiment 2) evoked short-latency, large-magnitude SLRs. In some instances, SLR magnitude exceeded the magnitude of movement-aligned activity. Both large-magnitude and short-latency SLRs were correlated with short-latency reach reaction times.

NEW & NOTEWORTHY: How does the brain rapidly transform vision into action? Here, by recording upper limb muscle activity, we find that high-contrast and fast-moving targets are highly effective at evoking rapid visually guided reaches.
We surmise that a brain stem circuit originating in the superior colliculus contributes to the most rapid reaching responses. When time is of the essence, cortical areas may serve to prime this circuit and elaborate subsequent phases of recruitment.
Affiliation(s)
- Rebecca A Kozak: Graduate Program in Neuroscience, Western University, London, Ontario, Canada; Robarts Research Institute, London, Ontario, Canada
- Brian D Corneil: Graduate Program in Neuroscience, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada; Department of Physiology and Pharmacology, Western University, London, Ontario, Canada; Robarts Research Institute, London, Ontario, Canada
28. Lempel AA, Nielsen KJ. Development of visual motion integration involves coordination of multiple cortical stages. eLife 2021; 10:e59798. [PMID: 33749595 PMCID: PMC7984838 DOI: 10.7554/elife.59798]
Abstract
A central feature of cortical function is hierarchical processing of information. Little is currently known about how cortical processing cascades develop. Here, we investigate the joint development of two nodes of the ferret’s visual motion pathway: primary visual cortex (V1) and higher-level area PSS. In adult animals, motion processing transitions from local to global computations between these areas. We now show that PSS global motion signals emerge a week after the development of V1 and PSS direction selectivity. Crucially, V1 responses to more complex motion stimuli change in parallel, in a manner consistent with supporting increased PSS motion integration. At the same time, these V1 responses depend on feedback from PSS. Our findings suggest that development does not just proceed in parallel in different visual areas; it is coordinated across network nodes. This has important implications for understanding how visual experience and developmental disorders can influence the developing visual system.
Affiliation(s)
- Augusto A Lempel: Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States; Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Kristina J Nielsen: Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States; Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
29. Martin CZ, Lapierre P, Haché S, Lucien D, Green AM. Vestibular contributions to online reach execution are processed via mechanisms with knowledge about limb biomechanics. J Neurophysiol 2021; 125:1022-1045. [PMID: 33502952 DOI: 10.1152/jn.00688.2019]
Abstract
Studies of reach control with the body stationary have shown that proprioceptive and visual feedback signals contributing to rapid corrections during reaching are processed by neural circuits that incorporate knowledge about the physical properties of the limb (an internal model). However, among the most common spatial and mechanical perturbations to the limb are those caused by our body's own motion, suggesting that processing of vestibular signals for online reach control may reflect a similar level of sophistication. We investigated this hypothesis using galvanic vestibular stimulation (GVS) to selectively activate the vestibular sensors, simulating body rotation, as human subjects reached to remembered targets in different directions (forward, leftward, rightward). If vestibular signals contribute to purely kinematic/spatial corrections for body motion, GVS should evoke reach trajectory deviations of similar size in all directions. In contrast, biomechanical modeling predicts that if vestibular processing for online reach control takes into account knowledge of the physical properties of the limb and the forces applied on it by body motion, then GVS should evoke trajectory deviations that are significantly larger during forward and leftward reaches as compared with rightward reaches. When GVS was applied during reaching, the observed deviations were on average consistent with this prediction. In contrast, when GVS was instead applied before reaching, evoked deviations were similar across directions, as predicted for a purely spatial correction mechanism. 
These results suggest that vestibular signals, like proprioceptive and visual feedback, are processed for online reach control via sophisticated neural mechanisms that incorporate knowledge of limb biomechanics.

NEW & NOTEWORTHY: Studies examining proprioceptive and visual contributions to rapid corrections for externally applied mechanical and spatial perturbations during reaching have provided evidence for flexible processing of sensory feedback that accounts for musculoskeletal system dynamics. Notably, however, such perturbations commonly arise from our body's own motion. In line with this, we provide compelling evidence that, similar to proprioceptive and visual signals, vestibular signals are processed for online reach control via sophisticated mechanisms that incorporate knowledge of limb biomechanics.
Affiliation(s)
- Christophe Z Martin: Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Philippe Lapierre: Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Simon Haché: Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Diderot Lucien: Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Andrea M Green: Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
30. Zarei Eskikand P, Kameneva T, Burkitt AN, Grayden DB, Ibbotson MR. Adaptive Surround Modulation of MT Neurons: A Computational Model. Front Neural Circuits 2020; 14:529345. [PMID: 33192335 PMCID: PMC7649322 DOI: 10.3389/fncir.2020.529345]
Abstract
The classical receptive field (CRF) of a spiking visual neuron is defined as the region of the visual field in which a visual stimulus can generate spikes. Many visual neurons also have an extra-classical receptive field (ECRF) that surrounds the CRF. A stimulus in the ECRF does not itself generate spikes but modulates the response to a stimulus in the neuron's CRF. Neurons in the primate Middle Temporal (MT) area, a motion-specialist region, can have directionally antagonistic or facilitatory surrounds, and the surround's effect switches between the two depending on the stimulus: antagonistic when there are directional discontinuities, facilitatory when there is directional coherence. Here, we present a computational model of neurons in area MT that replicates this observation and uses computational building blocks that correlate with observed cell types in the visual pathways to explain the mechanism of this modulatory effect. The model shows that the categorization of MT neurons based on the effect of their surround depends on the input stimulus rather than being a property of the neurons. Also, in agreement with neurophysiological findings, the ECRFs of the modeled MT neurons alter their center-surround interactions depending on image contrast.
Affiliation(s)
- Parvin Zarei Eskikand: Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia
- Tatiana Kameneva: Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia; Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Anthony N Burkitt: Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia
- David B Grayden: Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia
- Michael R Ibbotson: National Vision Research Institute, Australian College of Optometry, Carlton, VIC, Australia
31. Khanna SB, Scott JA, Smith MA. Dynamic shifts of visual and saccadic signals in prefrontal cortical regions 8Ar and FEF. J Neurophysiol 2020; 124:1774-1791. [PMID: 33026949 DOI: 10.1152/jn.00669.2019]
Abstract
Active vision is a fundamental process by which primates gather information about the external world. Multiple brain regions have been studied in the context of simple active vision tasks in which a visual target's appearance is temporally separated from saccade execution. Most neurons have tight spatial registration between visual and saccadic signals, and in areas such as prefrontal cortex (PFC), some neurons show persistent delay activity that links visual and motor epochs and has been proposed as a basis for spatial working memory. Many PFC neurons also show rich dynamics, which have been attributed to alternative working memory codes and the representation of other task variables. Our study investigated the transition between processing a visual stimulus and generating an eye movement in populations of PFC neurons in macaque monkeys performing a memory-guided saccade task. We found that neurons in two subregions of PFC, the frontal eye fields (FEF) and area 8Ar, differed in their dynamics and spatial response profiles. These dynamics could be attributed largely to shifts in the spatial profile of visual and motor responses in individual neurons. This led to visual and motor codes for particular spatial locations that were instantiated by different mixtures of neurons, which could be important in PFC's flexible role in multiple sensory, cognitive, and motor tasks.

NEW & NOTEWORTHY: A central question in neuroscience is how the brain transitions from sensory representations to motor outputs. The prefrontal cortex contains neurons that have long been implicated as important in this transition and in working memory. We found evidence for rich and diverse tuning in these neurons, which was often spatially misaligned between visual and saccadic responses. This feature may play an important role in flexible working memory capabilities.
Affiliation(s)
- Sanjeev B Khanna: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Jonathan A Scott: Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Matthew A Smith: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania; Carnegie Mellon Neuroscience Institute, Pittsburgh, Pennsylvania
32. Foik AT, Scholl LR, Lean GA, Lyon DC. Visual Response Characteristics in Lateral and Medial Subdivisions of the Rat Pulvinar. Neuroscience 2020; 441:117-130. [PMID: 32599121 PMCID: PMC7398122 DOI: 10.1016/j.neuroscience.2020.06.030]
Abstract
The pulvinar is a higher-order thalamic relay and a central component of the extrageniculate visual pathway, with input from the superior colliculus and visual cortex and output to all of visual cortex. Rodent pulvinar, more commonly called the lateral posterior nucleus (LP), consists of three highly conserved subdivisions and offers the advantage of simplicity in its study compared to the more subdivided primate pulvinar. Little is known about receptive field properties of LP, let alone whether functional differences exist between different LP subdivisions, making it difficult to understand what visual information is relayed and what kinds of computations the pulvinar might support. Here, we characterized single-cell response properties in two V1-recipient subdivisions of rat pulvinar, the rostromedial (LPrm) and lateral (LPl), and found that a fourth of the cells were selective for orientation, compared to half in V1, and that LP tuning widths were significantly broader. Response latencies were also significantly longer and preferred sizes more than three times larger on average than in V1; the latter suggests the pulvinar as a source of spatial context to V1. Between subdivisions, LPl cells preferred higher temporal frequencies, whereas LPrm showed a greater degree of direction selectivity and pattern motion detection. Taken together with known differences in connectivity patterns, these results suggest two separate visual feature processing channels in the pulvinar: one in LPl, related to higher-speed processing and likely derived from superior colliculus input, and the other in LPrm, for motion processing derived through input from visual cortex.

SIGNIFICANCE STATEMENT: The pulvinar has a perplexing role in visual cognition, as no clear link has been found between the functional properties of its neurons and the behavioral deficits that arise when it is damaged.
The pulvinar, called the lateral posterior nucleus (LP) in rats, is a higher order thalamic relay with input from the superior colliculus and visual cortex and output to all of visual cortex. By characterizing single-cell response properties in anatomically distinct subdivisions we found two separate visual feature processing channels in the pulvinar, one in lateral LP related to higher speed processing which likely derives from superior colliculus input, and the other in rostromedial LP for motion processing derived through input from visual cortex.
Affiliation(s)
- Andrzej T Foik: Department of Anatomy and Neurobiology, School of Medicine, University of California, Irvine, United States
- Leo R Scholl: Department of Anatomy and Neurobiology, School of Medicine, University of California, Irvine, United States; Department of Cognitive Sciences, School of Social Sciences, University of California, Irvine, United States
- Georgina A Lean: Department of Anatomy and Neurobiology, School of Medicine, University of California, Irvine, United States; Department of Cognitive Sciences, School of Social Sciences, University of California, Irvine, United States
- David C Lyon: Department of Anatomy and Neurobiology, School of Medicine, University of California, Irvine, United States
33. Hu J, Ma H, Zhu S, Li P, Xu H, Fang Y, Chen M, Han C, Fang C, Cai X, Yan K, Lu HD. Visual Motion Processing in Macaque V2. Cell Rep 2018; 25:157-167.e5. [PMID: 30282025 DOI: 10.1016/j.celrep.2018.09.014]
Abstract
In the primate visual system, direction-selective (DS) neurons are critical for visual motion perception. While DS neurons in the dorsal visual pathway have been well characterized, the response properties of DS neurons in other major visual areas are largely unexplored. Recent optical imaging studies in monkey visual cortex area 2 (V2) revealed clusters of DS neurons. This imaging method facilitates targeted recordings from these neurons. Using optical imaging and single-cell recording, we characterized detailed response properties of DS neurons in macaque V2. Compared with DS neurons in the dorsal areas (e.g., middle temporal area [MT]), V2 DS neurons have a smaller receptive field and a stronger antagonistic surround. They do not code speed or plaid motion but are sensitive to motion contrast. Our results suggest that V2 DS neurons play an important role in figure-ground segregation. The clusters of V2 DS neurons are likely specialized functional systems for detecting motion contrast.
Affiliation(s)
- Jiaming Hu: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Interdisciplinary Institute of Neuroscience and Technology, Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
- Heng Ma: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Shude Zhu: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Peichao Li: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Haoran Xu: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yang Fang: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Ming Chen: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Chao Han: Institute of Neuroscience, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai 200031, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Chen Fang: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Xingya Cai: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Kun Yan: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Haidong D Lu: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Interdisciplinary Institute of Neuroscience and Technology, Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
34. Compound Stimuli Reveal the Structure of Visual Motion Selectivity in Macaque MT Neurons. eNeuro 2019; 6:ENEURO.0258-19.2019. [PMID: 31604815 PMCID: PMC6868477 DOI: 10.1523/eneuro.0258-19.2019]
Abstract
Motion selectivity in primary visual cortex (V1) is approximately separable in orientation, spatial frequency, and temporal frequency (“frequency-separable”). Models for area MT neurons posit that their selectivity arises by combining direction-selective V1 afferents whose tuning is organized around a tilted plane in the frequency domain, specifying a particular direction and speed (“velocity-separable”). This construction explains “pattern direction-selective” MT neurons, which are velocity-selective but relatively invariant to spatial structure, including spatial frequency, texture and shape. We designed a set of experiments to distinguish frequency-separable and velocity-separable models and executed them with single-unit recordings in macaque V1 and MT. Surprisingly, when tested with single drifting gratings, most MT neurons’ responses are fit equally well by models with either form of separability. However, responses to plaids (sums of two moving gratings) tend to be better described as velocity-separable, especially for pattern neurons. We conclude that direction selectivity in MT is primarily computed by summing V1 afferents, but pattern-invariant velocity tuning for complex stimuli may arise from local, recurrent interactions.
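The contrast between the two tuning models in the abstract above can be sketched numerically. A minimal illustration (function names and parameter values are ours, for illustration only, not from the paper): under a frequency-separable model the preferred temporal frequency is a fixed constant, whereas under a velocity-separable model tuning is organized around the tilted plane tf = speed × sf, so preferred temporal frequency scales with spatial frequency.

```python
import numpy as np

# Illustrative contrast between the two tuning models (parameter
# values are assumptions for illustration, not from the paper).

def pref_tf_frequency_separable(sf, pref_tf=8.0):
    """Frequency-separable: preferred temporal frequency (Hz) is a
    fixed constant, independent of spatial frequency."""
    return np.full_like(np.asarray(sf, dtype=float), pref_tf)

def pref_tf_velocity_separable(sf, pref_speed=4.0):
    """Velocity-separable: tuning lies on the tilted plane
    tf = speed * sf, so the preferred temporal frequency scales
    with spatial frequency (constant preferred speed, deg/s)."""
    return pref_speed * np.asarray(sf, dtype=float)

sfs = np.array([0.5, 1.0, 2.0, 4.0])          # spatial freq, cycles/deg
tf_freq = pref_tf_frequency_separable(sfs)    # constant: [8, 8, 8, 8] Hz
tf_vel = pref_tf_velocity_separable(sfs)      # speed-constant: [2, 4, 8, 16] Hz
```

This is why plaids distinguish the models better than single gratings: a single drifting grating samples only one point of the spatiotemporal frequency surface, while a plaid's two components probe whether tuning follows the constant-frequency or the constant-velocity surface.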
35. Wallisch P, Movshon JA. Responses of neurons in macaque MT to unikinetic plaids. J Neurophysiol 2019; 122:1937-1945. [PMID: 31509468 DOI: 10.1152/jn.00486.2019]
Abstract
Response properties of MT neurons are often studied with "bikinetic" plaid stimuli, which consist of two superimposed sine wave gratings moving in different directions. Oculomotor studies using "unikinetic" plaids, in which only one of the two superimposed gratings moves, suggest that the eyes first move reflexively in the direction of the moving grating and only later converge on the perceived direction of the moving pattern. MT has been implicated as the source of the visual signals that drive these responses. We wanted to know whether stationary gratings, which have little effect on MT cells when presented alone, would influence MT responses when paired with a moving grating. We recorded extracellularly from neurons in area MT and measured responses to stationary and moving gratings, and to their sums: bikinetic and unikinetic plaids. As expected, stationary gratings presented alone had a very modest influence on the activity of MT neurons. Responses to moving gratings and bikinetic plaids were similar to those previously reported and revealed cells selective for the motion of plaid patterns and of their components (pattern and component cells). When these neurons were probed with unikinetic plaids, pattern cells shifted their direction preferences in a way that revealed the influence of the static grating. Component cell preferences shifted little or not at all. These results support the notion that pattern-selective neurons in area MT integrate component motions that differ widely in speed, and that they do so in a way that is consistent with an intersection-of-constraints model.

NEW & NOTEWORTHY: Human perceptual and eye movement responses to moving gratings are influenced by adding a second, static grating to create a "unikinetic" plaid. Cells in MT do not respond to static gratings, but those gratings still influence the direction selectivity of some MT cells.
The cells influenced by static gratings are those tuned for the motion of global patterns, but not those tuned only for the individual components of moving targets.
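The intersection-of-constraints computation invoked above has a simple geometric form. A minimal sketch (function and variable names are ours, for illustration): each component grating drifting at speed s_i along its normal direction n_i constrains the 2-D pattern velocity v by v · n_i = s_i, and solving the two constraints as a linear system yields the pattern velocity. For a unikinetic plaid the static grating contributes the zero-speed constraint v · n = 0, which forces the pattern motion to be parallel to the static grating's bars.

```python
import numpy as np

def ioc_velocity(theta1_deg, speed1, theta2_deg, speed2):
    """Intersection-of-constraints (IOC) pattern velocity for a plaid.

    Each component grating drifting at speed_i along its normal
    direction theta_i (degrees) constrains the 2-D pattern velocity v
    to satisfy v . n_i = speed_i. Solving the two constraints as a
    2x2 linear system gives the unique velocity consistent with both.
    """
    n1 = np.array([np.cos(np.radians(theta1_deg)), np.sin(np.radians(theta1_deg))])
    n2 = np.array([np.cos(np.radians(theta2_deg)), np.sin(np.radians(theta2_deg))])
    return np.linalg.solve(np.stack([n1, n2]), np.array([speed1, speed2]))

# Bikinetic plaid: components drift at 1 deg/s along 45 and 135 deg;
# the pattern moves straight up (90 deg) at sqrt(2) deg/s.
v_bi = ioc_velocity(45.0, 1.0, 135.0, 1.0)

# Unikinetic plaid: the second grating is static (speed 0), so its
# constraint v . n2 = 0 forces the pattern velocity to lie parallel
# to the static grating's bars.
v_uni = ioc_velocity(0.0, 1.0, 90.0, 0.0)
```

Under this construction a static grating still shapes the IOC solution, consistent with the finding that pattern cells, but not component cells, shift their direction preferences when a static grating is added.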
Affiliation(s)
- Pascal Wallisch: Center for Neural Science, New York University, New York, New York
36. Zarei Eskikand P, Kameneva T, Burkitt AN, Grayden DB, Ibbotson MR. Pattern Motion Processing by MT Neurons. Front Neural Circuits 2019; 13:43. [PMID: 31293393 PMCID: PMC6598444 DOI: 10.3389/fncir.2019.00043]
Abstract
Based on stimulation with plaid patterns, neurons in the Middle Temporal (MT) area of primate visual cortex are divided into two types: pattern and component cells. The prevailing theory suggests that pattern selectivity results from the summation of the outputs of component cells as part of a hierarchical visual pathway. We present a computational model of the visual pathway from primary visual cortex (V1) to MT that suggests an alternate model where the progression from component to pattern selectivity is not required. Using standard orientation-selective V1 cells, end-stopped V1 cells, and V1 cells with extra-classical receptive fields (RFs) as inputs to MT, the model shows that the degree of pattern or component selectivity in MT could arise from the relative strengths of the three V1 input types. Dominance of end-stopped V1 neurons in the model leads to pattern selectivity in MT, while dominance of V1 cells with extra-classical RFs result in component selectivity. This model may assist in designing experiments to further understand motion processing mechanisms in primate MT.
Affiliation(s)
- Parvin Zarei Eskikand: NeuroEngineering Laboratory, Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia
- Tatiana Kameneva: NeuroEngineering Laboratory, Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia; Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Anthony N Burkitt: NeuroEngineering Laboratory, Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia
- David B Grayden: NeuroEngineering Laboratory, Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC, Australia
- Michael R Ibbotson: National Vision Research Institute, Australian College of Optometry, Carlton, VIC, Australia
37. Going with the Flow: The Neural Mechanisms Underlying Illusions of Complex-Flow Motion. J Neurosci 2019; 39:2664-2685. [PMID: 30777886 DOI: 10.1523/jneurosci.2112-18.2019]
Abstract
Studying the mismatch between perception and reality helps us better understand the constructive nature of the visual brain. The Pinna-Brelstaff motion illusion is a compelling example illustrating how a complex moving pattern can generate an illusory motion perception. When an observer moves toward (expansion) or away from (contraction) the Pinna-Brelstaff figure, the figure appears to rotate. The neural mechanisms underlying the illusory complex-flow motion of rotation, expansion, and contraction remain unknown. We studied this question at both perceptual and neuronal levels in behaving male macaques by using carefully parametrized Pinna-Brelstaff figures that induce the above motion illusions. We first demonstrate that macaques perceive illusory motion in a manner similar to that of human observers. Neurophysiological recordings were subsequently performed in the middle temporal area (MT) and the dorsal portion of the medial superior temporal area (MSTd). We find that subgroups of MSTd neurons encoding a particular global pattern of real complex-flow motion (rotation, expansion, contraction) also represent illusory motion patterns of the same class. They require an extra 15 ms to reliably discriminate the illusion. In contrast, MT neurons encode both real and illusory local motions with similar temporal delays. These findings reveal that illusory complex-flow motion is first represented in MSTd by the same neurons that normally encode real complex-flow motion. However, the extraction of global illusory motion in MSTd from other classes of real complex-flow motion requires extra processing time. Our study illustrates a cascaded integration mechanism from MT to MSTd underlying the transformation from external physical to internal nonveridical flow-motion perception.

SIGNIFICANCE STATEMENT: The neural basis of the transformation from objective reality to illusory percepts of rotation, expansion, and contraction remains unknown.
We demonstrate psychophysically that macaques perceive these illusory complex-flow motions in a manner similar to that of human observers. At the neural level, we show that medial superior temporal (MSTd) neurons represent illusory flow motions as if they were real by globally integrating middle temporal area (MT) local motion signals. Furthermore, while MT neurons reliably encode real and illusory local motions with similar temporal delays, MSTd neurons take a significantly longer time to process the signals associated with illusory percepts. Our work extends previous complex-flow motion studies by providing the first detailed analysis of the neuron-specific mechanisms underlying complex forms of illusory motion integration from MT to MSTd.
|
38
|
Contrast-dependent phase sensitivity in area MT of macaque visual cortex. Neuroreport 2019; 30:195-201. [PMID: 30614909 DOI: 10.1097/wnr.0000000000001183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
In the primary visual cortex (V1) of primates, about half the neurons are sensitive to the spatial phases of grating stimuli and generate highly modulated responses to drifting gratings (simple cells). The remaining cells show far less phase sensitivity and relatively unmodulated responses to moving gratings (complex cells). In the second visual area (V2) and the motion-processing area MT (or V5), the majority of cells have unmodulated responses to drifting gratings: they are phase invariant. At just-detectable contrasts, however, 44% of V1 complex cells show highly modulated responses, whereas this contrast-dependent phase sensitivity is found in only 7% of V2 complex cells. We recorded from 149 cells in macaque MT, 142 of which were classed as complex cells at high contrast. Approximately 14% (20/142) of MT complex cells showed significantly modulated responses to drifting gratings at just-detectable contrasts. MT cells can generally be divided into pattern- and component-selective types, but we found no correlation between this classification and contrast-dependent phase sensitivity. Phase sensitivity in MT is discussed in relation to MT's input structure.
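The simple/complex distinction invoked here is conventionally quantified by the modulation index F1/F0: the amplitude of the response component at the grating's drift frequency (F1) divided by the mean firing rate (F0). A minimal sketch with synthetic rate profiles (the numbers are illustrative, not data from the study):

```python
import numpy as np

def f1_f0(rate, t, drift_hz):
    """Modulation index: response amplitude at the drift frequency (F1) over the mean rate (F0)."""
    f0 = rate.mean()
    # F1: amplitude of the Fourier component at the grating's temporal frequency
    f1 = 2.0 * np.abs(np.mean(rate * np.exp(-2j * np.pi * drift_hz * t)))
    return f1 / f0

t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # 1 s of response to a 4 Hz drifting grating
simple_like = np.clip(10.0 * np.sin(2 * np.pi * 4 * t), 0.0, None)  # half-wave rectified: modulated
complex_like = 5.0 + 0.5 * np.sin(2 * np.pi * 4 * t)                # elevated but mostly unmodulated

mi_simple = f1_f0(simple_like, t, 4.0)    # > 1: phase-sensitive, simple-like
mi_complex = f1_f0(complex_like, t, 4.0)  # < 1: phase-invariant, complex-like
```

Cells with F1/F0 above 1 are conventionally classed as simple (phase-sensitive), below 1 as complex; the contrast-dependent effect described above corresponds to this index crossing 1 as contrast is lowered.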
|
39
|
Lempel AA, Nielsen KJ. Ferrets as a Model for Higher-Level Visual Motion Processing. Curr Biol 2018; 29:179-191.e5. [PMID: 30595516 DOI: 10.1016/j.cub.2018.11.017] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2018] [Revised: 11/03/2018] [Accepted: 11/05/2018] [Indexed: 10/27/2022]
Abstract
Ferrets are a major developmental animal model due to their early parturition. Here we show for the first time that ferrets can be used to study the development of higher-level visual processes previously identified in primates. In primates, complex motion processing involves primary visual cortex (V1), which generates local motion signals, and the higher-level visual area MT, which integrates these signals over more global spatial regions. Our data show similar transformations in motion signals between ferret V1 and the higher-level visual area PSS, located in the posterior bank of the suprasylvian sulcus. We found that PSS neurons, like MT neurons, were tuned for stimulus motion and showed strong suppression between opposing direction inputs. Most strikingly, PSS, like MT, exhibited robust global motion signals when tested with coherent plaids, the classic test for motion integration across multiple moving elements. These PSS responses were well described by computational models developed for MT. Our findings establish the ferret as a strong animal model for the development of higher-level visual processing.
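The plaid test mentioned here is standardly analyzed with partial correlations: a cell's plaid direction tuning is compared against an ideal "pattern" prediction (the grating tuning itself) and an ideal "component" prediction (the sum of responses to the two component directions), each correlation corrected for the other. A sketch with a simulated, mostly pattern-like cell (all tuning curves are hypothetical):

```python
import numpy as np

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation of x and y with the influence of z removed."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

dirs = np.deg2rad(np.arange(0, 360, 15))
tuning = lambda d, mu: np.exp(3.0 * (np.cos(d - mu) - 1))  # von Mises-like direction tuning

grating = tuning(dirs, np.pi)               # responses to a single drifting grating
pattern_pred = grating                      # pattern model: plaid tuning equals grating tuning
component_pred = (tuning(dirs, np.pi - np.pi / 3)
                  + tuning(dirs, np.pi + np.pi / 3))  # component model for a 120-deg plaid

plaid_resp = 0.8 * pattern_pred + 0.2 * component_pred  # simulated, mostly pattern-like cell

r_p = np.corrcoef(plaid_resp, pattern_pred)[0, 1]
r_c = np.corrcoef(plaid_resp, component_pred)[0, 1]
r_pc = np.corrcoef(pattern_pred, component_pred)[0, 1]
R_p = partial_corr(r_p, r_c, r_pc)   # pattern partial correlation
R_c = partial_corr(r_c, r_p, r_pc)   # component partial correlation
```

In practice the partial correlations are Fisher z-transformed and compared against significance criteria to assign each cell to the pattern, component, or unclassed zone.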
Affiliation(s)
- Augusto A Lempel
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA; Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Kristina J Nielsen
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA; Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
|
40
|
De Franceschi G, Solomon SG. Visual response properties of neurons in the superficial layers of the superior colliculus of awake mouse. J Physiol 2018; 596:6307-6332. [PMID: 30281795 PMCID: PMC6292807 DOI: 10.1113/jp276964] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Accepted: 09/24/2018] [Indexed: 12/28/2022] Open
Abstract
KEY POINTS
- In rodents, including mice, the superior colliculus is the major target of the retina, but its visual response properties are not well characterized.
- In the present study, extracellular recordings from single nerve cells in the superficial layers of the superior colliculus were made in awake, head-restrained mice, and their responses to visual stimuli were measured.
- These neurons show brisk, highly sensitive, short-latency visual responses, a preference for black over white stimuli, and diverse responses to moving patterns.
- At least five broad classes can be defined by variation in functional properties among units.
- Eye movements have a measurable impact on visual responses in awake animals, and the results show how this impact may be mitigated in analyses.
ABSTRACT
The mouse is an increasingly important animal model of visual function in health and disease. In mice, most retinal signals are routed through the superficial layers of the midbrain superior colliculus, and it is well established that much of the visual behaviour of mice relies on activity in the superior colliculus. The functional organization of visual signals in the mouse superior colliculus is, however, not well established in awake animals. We therefore made extracellular recordings from the superficial layers of the superior colliculus in awake mice while the animals were viewing visual stimuli, including flashed spots and drifting gratings. We find that neurons in the superficial layers of the superior colliculus of the awake mouse generally show short-latency, brisk responses. Receptive fields are usually 'ON-OFF' with a preference for black stimuli, and are weakly non-linear in response to gratings and other forms of luminance modulation. Population responses to drifting gratings are highly contrast sensitive, with a robust response to spatial frequencies above 0.3 cycles/degree and temporal frequencies above 15 Hz.
The receptive fields are also often speed-tuned or direction-selective. Analysis of the response across multiple stimulus dimensions reveals at least five functionally distinct groups of units. We also find that eye movements affect measurements of receptive field properties in awake animals, and show how these may be mitigated in analyses. Qualitatively similar responses were obtained in urethane-anaesthetized animals, although receptive fields in awake animals had higher contrast sensitivity, shorter visual latency and a stronger response to high temporal frequencies.
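Direction selectivity of the kind reported here is typically summarized by a direction selectivity index, DSI = (Rpref - Rnull)/(Rpref + Rnull), where Rnull is the response to motion opposite the preferred direction. A minimal sketch with made-up tuning curves:

```python
import numpy as np

def dsi(rates, dirs_deg):
    """Direction selectivity index from a direction tuning curve (directions in degrees)."""
    pref_i = int(np.argmax(rates))
    null_dir = (dirs_deg[pref_i] + 180) % 360          # direction opposite the preferred one
    null_i = int(np.argmin(np.abs(dirs_deg - null_dir)))
    r_pref, r_null = rates[pref_i], rates[null_i]
    return (r_pref - r_null) / (r_pref + r_null)

dirs = np.arange(0, 360, 45.0)
selective = np.array([2, 4, 20, 4, 2, 1, 1, 1.0])      # peaks at 90 deg, weak at 270 deg
unselective = np.array([5, 6, 5, 6, 5, 6, 5, 6.0])     # no consistent direction preference
```

A DSI near 1 marks a strongly direction-selective unit, near 0 an unselective one; criteria such as DSI > 0.5 are commonly used to split populations into functional groups.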
Affiliation(s)
- Gioia De Franceschi
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
- Samuel G. Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
|
41
|
A video-driven model of response statistics in the primate middle temporal area. Neural Netw 2018; 108:424-444. [PMID: 30312959 DOI: 10.1016/j.neunet.2018.09.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2018] [Revised: 07/20/2018] [Accepted: 09/06/2018] [Indexed: 11/23/2022]
Abstract
Neurons in the primate middle temporal area (MT) encode information about visual motion and binocular disparity. MT has been studied intensively for decades, so there is a great deal of information in the literature about MT neuron tuning. In this study, our goal is to consolidate some of this information into a statistical model of the MT population response. The model accepts arbitrary stereo video as input. It uses computer-vision methods to calculate known correlates of the responses (such as motion velocity), and then predicts activity using a combination of tuning functions that have previously been used to describe data in various experiments. To construct the population response, we also estimate the distributions of many model parameters from data in the electrophysiology literature. We show that the model accounts well for a separate dataset of MT speed tuning that was not used in developing the model. The model may be useful for studying relationships between MT activity and behavior in ethologically relevant tasks. As an example, we show that the model can provide regression targets for internal activity in a deep convolutional network that performs a visual odometry task, so that its representations become more physiologically realistic.
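One of the tuning functions commonly used in the MT literature this model draws on is a Gaussian on a logarithmic speed axis, with a small offset so the argument stays finite at zero speed. A sketch (parameter values are illustrative, not fitted to any dataset):

```python
import numpy as np

def log_gaussian_speed_tuning(speed, pref_speed, sigma, r_max, offset=0.3):
    """Gaussian tuning on a logarithmic speed axis; `offset` (deg/s) avoids log(0)."""
    q = np.log((speed + offset) / (pref_speed + offset))
    return r_max * np.exp(-q**2 / (2 * sigma**2))

speeds = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])  # stimulus speeds, deg/s
resp = log_gaussian_speed_tuning(speeds, pref_speed=8.0, sigma=1.0, r_max=40.0)
```

In a population model of the kind described, the preferred speed, bandwidth, and peak rate would each be drawn from distributions estimated from the electrophysiology literature rather than fixed as here.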
|
42
|
Goris RLT, Ziemba CM, Movshon JA, Simoncelli EP. Slow gain fluctuations limit benefits of temporal integration in visual cortex. J Vis 2018; 18:8. [PMID: 30140890 PMCID: PMC6107324 DOI: 10.1167/18.8.8] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
Sensory neurons represent stimulus information with sequences of action potentials that differ across repeated measurements. This variability limits the information that can be extracted from momentary observations of a neuron's response. It is often assumed that integrating responses over time mitigates this limitation. However, temporal response correlations can reduce the benefits of temporal integration. We examined responses of individual orientation-selective neurons in the primary visual cortex of two macaque monkeys performing an orientation-discrimination task. The signal-to-noise ratio of temporally integrated responses increased for durations up to a few hundred milliseconds but saturated for longer durations. This was true even when cells exhibited little or no adaptation in their response levels. These observations are well explained by a statistical response model in which spikes arise from a Poisson process whose stimulus-dependent rate is modulated by slow, stimulus-independent fluctuations in gain. The response variability arising from the Poisson process is reduced by temporal integration, but the slow modulatory nature of variability due to gain fluctuations is not. Slow gain fluctuations therefore impose a fundamental limit on the benefits of temporal integration.
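The response model described, a Poisson process whose rate is scaled by a slow, stimulus-independent gain, can be simulated directly to see why integration saturates. Here the gain is held fixed within each trial to mimic fluctuations slower than the integration window (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
rate_hz, dt, T, n_trials = 50.0, 0.001, 1.0, 2000
sigma_g = 0.3  # SD of the log multiplicative gain (constant within a trial: "slow")

# per-trial gains with mean ~1; counts are Poisson given the gain-scaled rate
gains = rng.lognormal(mean=-sigma_g**2 / 2, sigma=sigma_g, size=n_trials)
nbins = int(T / dt)
counts = rng.poisson(gains[:, None] * rate_hz * dt, size=(n_trials, nbins))

def snr(window_s):
    """Signal-to-noise ratio of the spike count integrated over a window."""
    k = counts[:, : int(window_s / dt)].sum(axis=1)
    return k.mean() / k.std()

snr_short, snr_mid, snr_long = snr(0.05), snr(0.2), snr(1.0)
```

For short windows the Poisson component dominates and SNR grows roughly with the square root of duration; for long windows the trial-to-trial gain variance sets a ceiling near 1/std(gain), well below the pure-Poisson prediction, which is the saturation the abstract describes.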
Affiliation(s)
- Robbe L T Goris
- Center for Neural Science, New York University, New York, NY, USA; Howard Hughes Medical Institute, New York University, New York, NY, USA; Present address: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Corey M Ziemba
- Center for Neural Science, New York University, New York, NY, USA; Howard Hughes Medical Institute, New York University, New York, NY, USA; Present address: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Eero P Simoncelli
- Center for Neural Science, New York University, New York, NY, USA; Howard Hughes Medical Institute, New York University, New York, NY, USA
|
43
|
Ziemba CM, Freeman J, Simoncelli EP, Movshon JA. Contextual modulation of sensitivity to naturalistic image structure in macaque V2. J Neurophysiol 2018; 120:409-420. [PMID: 29641304 PMCID: PMC6139455 DOI: 10.1152/jn.00900.2017] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The stimulus selectivity of neurons in V1 is well known, as is the finding that their responses can be affected by visual input to areas outside of the classical receptive field. Less well understood are the ways selectivity is modified as signals propagate to visual areas beyond V1, such as V2. We recently proposed a role for V2 neurons in representing the higher-order statistical dependencies found in images of naturally occurring visual texture. V2 neurons, but not V1 neurons, respond more vigorously to "naturalistic" images that contain these dependencies than to "noise" images that lack them. In this work, we examine the dependence of these effects on stimulus size. For most V2 neurons, the preference for naturalistic over noise stimuli was modest when stimuli were presented in small patches and gradually strengthened with increasing size, suggesting that the mechanisms responsible for this enhanced sensitivity operate over regions of the visual field that are larger than the classical receptive field. Indeed, we found that surround suppression was stronger for noise than for naturalistic stimuli and that the preference for large naturalistic stimuli developed over a delayed time course consistent with lateral or feedback connections. These findings are compatible with a spatially broad facilitatory mechanism that is absent in V1 and suggest that a distinct role for the receptive field surround emerges in V2 along with sensitivity to more complex image structure.
NEW & NOTEWORTHY
The responses of neurons in visual cortex are often affected by visual input delivered to regions of the visual field outside of the conventionally defined receptive field, but the significance of such contextual modulation is not well understood outside of area V1. We studied the importance of regions beyond the receptive field in establishing a novel form of selectivity for the statistical dependencies contained in natural visual textures that first emerges in area V2.
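Surround suppression of the kind compared here for naturalistic versus noise stimuli is commonly summarized by a suppression index computed from the size-tuning curve: the drop from the peak response to the response at the largest size, as a fraction of the peak. A sketch with hypothetical size-tuning curves (not data from the study):

```python
import numpy as np

def suppression_index(resp):
    """(peak response - response at the largest size) / peak, from a size-tuning curve."""
    peak = resp.max()
    return (peak - resp[-1]) / peak

sizes = np.array([0.5, 1, 2, 4, 8])            # stimulus diameter, deg (hypothetical)
r_naturalistic = np.array([10, 22, 30, 33, 31.0])  # little suppression at large sizes
r_noise = np.array([11, 23, 28, 22, 16.0])         # stronger surround suppression

si_nat = suppression_index(r_naturalistic)
si_noise = suppression_index(r_noise)
```

An index of 0 means no suppression by the surround, 1 means complete suppression; the asymmetry sketched here (si_noise > si_nat) is the direction of the effect the abstract reports.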
Affiliation(s)
- Corey M Ziemba
- Center for Neural Science, New York University, New York, New York; Howard Hughes Medical Institute, New York University, New York, New York
- Jeremy Freeman
- Center for Neural Science, New York University, New York, New York
- Eero P Simoncelli
- Center for Neural Science, New York University, New York, New York; Howard Hughes Medical Institute, New York University, New York, New York
|
44
|
Flexible egocentric and allocentric representations of heading signals in parietal cortex. Proc Natl Acad Sci U S A 2018; 115:E3305-E3312. [PMID: 29555744 DOI: 10.1073/pnas.1715625115] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
By systematically manipulating head position relative to the body and eye position relative to the head, previous studies have shown that vestibular tuning curves of neurons in the ventral intraparietal (VIP) area remain invariant when expressed in body-/world-centered coordinates. However, body orientation relative to the world was not manipulated; thus, an egocentric, body-centered representation could not be distinguished from an allocentric, world-centered reference frame. We manipulated the orientation of the body relative to the world such that we could distinguish whether vestibular heading signals in VIP are organized in body- or world-centered reference frames. We found a hybrid representation, depending on gaze direction. When gaze remained fixed relative to the body, the vestibular heading tuning of VIP neurons shifted systematically with body orientation, indicating an egocentric, body-centered reference frame. In contrast, when gaze remained fixed relative to the world, this representation changed to be intermediate between body- and world-centered. We conclude that the neural representation of heading in posterior parietal cortex is flexible, depending on gaze and possibly attentional demands.
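The reference-frame analysis described here is often quantified with a displacement index: the shift of a tuning curve (estimated from the peak of the circular cross-correlation between curves measured at two body orientations) divided by the amount the body was rotated, so that 1 indicates tuning that follows the body and 0 tuning that stays fixed in the world. A sketch with synthetic tuning curves (all numbers hypothetical):

```python
import numpy as np

def tuning_shift_deg(tc_ref, tc_test, step_deg):
    """Shift of tc_test relative to tc_ref, from the peak of the circular cross-correlation."""
    a = tc_ref - tc_ref.mean()
    b = tc_test - tc_test.mean()
    corr = [np.dot(a, np.roll(b, -k)) for k in range(len(a))]
    lag = int(np.argmax(corr))
    if lag > len(a) // 2:          # wrap to a signed shift
        lag -= len(a)
    return lag * step_deg

dirs = np.arange(0, 360, 10.0)
vm = lambda center: np.exp(3.0 * (np.cos(np.deg2rad(dirs - center)) - 1))

body_rotation = 40.0
tc_before = vm(90.0)
tc_full = vm(90.0 + body_rotation)          # tuning follows the body fully
tc_partial = vm(90.0 + body_rotation / 2)   # intermediate reference frame

di_full = tuning_shift_deg(tc_before, tc_full, 10.0) / body_rotation
di_partial = tuning_shift_deg(tc_before, tc_partial, 10.0) / body_rotation
```

Intermediate values of the index, like the 0.5 of the partially shifted cell here, correspond to the hybrid, gaze-dependent representations the abstract describes.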
|
45
|
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. [PMID: 29487123 DOI: 10.1523/jneurosci.2116-17.2018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 02/01/2018] [Accepted: 02/20/2018] [Indexed: 11/21/2022] Open
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching.
SIGNIFICANCE STATEMENT
Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame.
Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
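The claim that partially transformed responses permit linear decoding of body-centered motion can be illustrated with a classic gain-field construction (an illustration of the general principle, not the authors' model or data): units tuned to direction in head coordinates, multiplicatively modulated by head-on-body position, span the product terms needed to build the cosine and sine of the body-centered direction, so an ordinary least-squares readout of a handful of cells recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 7                                   # echoes the "small groups of 5-7 cells"
pref = rng.uniform(0, 2 * np.pi, n_cells)     # preferred direction, head coordinates
gain_ax = rng.uniform(0, 2 * np.pi, n_cells)  # axis of the head-on-body gain field
gain_k = rng.uniform(0.3, 0.8, n_cells)       # gain-field strength

def responses(theta_head, phi):
    """Direction tuning in head coordinates, gain-modulated by head-on-body angle phi."""
    th, ph = theta_head[:, None], phi[:, None]
    return (1 + gain_k * np.cos(ph - gain_ax)) * np.cos(th - pref)

# train a linear readout of the body-centered direction (theta_head + phi)
theta = rng.uniform(0, 2 * np.pi, 500)
phi = rng.uniform(-np.pi / 2, np.pi / 2, 500)
Y = np.column_stack([np.cos(theta + phi), np.sin(theta + phi)])
W, *_ = np.linalg.lstsq(responses(theta, phi), Y, rcond=None)

# evaluate on held-out combinations of direction and head position
theta_t = rng.uniform(0, 2 * np.pi, 200)
phi_t = rng.uniform(-np.pi / 2, np.pi / 2, 200)
pred = responses(theta_t, phi_t) @ W
target = np.column_stack([np.cos(theta_t + phi_t), np.sin(theta_t + phi_t)])
max_err = float(np.max(np.abs(pred - target)))
```

The decode is essentially exact here because the gain-modulated population spans all six product terms of the angle-addition identity; with purely head-centered (unmodulated) cells the same readout would fail.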
|
46
|
Rasmussen R, Yonehara K. Circuit Mechanisms Governing Local vs. Global Motion Processing in Mouse Visual Cortex. Front Neural Circuits 2017; 11:109. [PMID: 29311845 PMCID: PMC5743699 DOI: 10.3389/fncir.2017.00109] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2017] [Accepted: 12/14/2017] [Indexed: 11/21/2022] Open
Abstract
A longstanding question in neuroscience is how neural circuits encode representations and perceptions of the external world. A particularly well-defined visual computation is the representation of global object motion by pattern direction-selective (PDS) cells, built from the convergence of local motion signals represented by component direction-selective (CDS) cells. However, how PDS and CDS cells develop their distinct response properties is still unresolved. The visual cortex of the mouse is an attractive model for experimentally resolving this issue because of the large molecular and genetic toolbox available. Although mouse visual cortex lacks the highly ordered orientation columns of primates, it is organized in functional sub-networks and contains striate and extrastriate areas like its primate counterparts. In this Perspective article, we provide an overview of the experimental and theoretical literature on global motion processing based on work in primates and mice. Lastly, we propose experiments that could illuminate which circuit mechanisms govern cortical global visual motion processing. We propose that PDS cells in mouse visual cortex are an ideal arena for delineating how individual sensory features extracted by neural circuits in peripheral brain areas are integrated to build our rich, cohesive sensory experiences.
Affiliation(s)
- Rune Rasmussen
- The Danish Research Institute of Translational Neuroscience-DANDRITE, Nordic EMBL Partnership for Molecular Medicine, Department of Biomedicine, Aarhus University, Aarhus, Denmark
- Keisuke Yonehara
- The Danish Research Institute of Translational Neuroscience-DANDRITE, Nordic EMBL Partnership for Molecular Medicine, Department of Biomedicine, Aarhus University, Aarhus, Denmark
|
47
|
A Unifying Motif for Spatial and Directional Surround Suppression. J Neurosci 2017; 38:989-999. [PMID: 29229704 DOI: 10.1523/jneurosci.2386-17.2017] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2017] [Revised: 11/13/2017] [Accepted: 12/02/2017] [Indexed: 11/21/2022] Open
Abstract
In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas.
SIGNIFICANCE STATEMENT
Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1.
As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex.
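The SSN's core mechanism, a supralinear (power-law) transfer function embedded in an excitatory-inhibitory loop so that recurrently driven inhibition grows faster than the feedforward drive, can be illustrated with a two-population toy model (a sketch in the spirit of the SSN; parameters are invented for stability, not taken from the paper):

```python
import numpy as np

def ssn_rate(c, k=0.01, n=2.0, w_ee=1.0, w_ei=1.5, w_ie=1.25, w_ii=0.75,
             tau_e=0.02, tau_i=0.01, dt=0.0005, steps=20000):
    """Euler-integrate a 2-population rate model with a supralinear transfer function."""
    r_e = r_i = 0.0
    for _ in range(steps):
        drive_e = max(w_ee * r_e - w_ei * r_i + c, 0.0)
        drive_i = max(w_ie * r_e - w_ii * r_i + c, 0.0)
        r_e += dt / tau_e * (-r_e + k * drive_e ** n)
        r_i += dt / tau_i * (-r_i + k * drive_i ** n)
    return r_e

r1, r10, r40 = ssn_rate(1.0), ssn_rate(10.0), ssn_rate(40.0)
# effective exponent of the contrast-response relation over each range
exp_low = np.log(r10 / r1) / np.log(10.0)
exp_high = np.log(r40 / r10) / np.log(4.0)
```

At weak drive the loop barely engages and the response grows roughly as the power n of the input; at strong drive, inhibition catches up and the effective exponent falls. This drive-dependent shift from amplification toward effective suppression is the regime in which the SSN predicts integration at low contrast and suppression at high contrast, in space or, as tested here, in the direction domain.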
|
48
|
Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6. [PMID: 29134944 PMCID: PMC5685470 DOI: 10.7554/elife.29809] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Accepted: 11/03/2017] [Indexed: 11/17/2022] Open
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion signals, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals come to share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are shifted further towards head-centered coordinates than those in MSTd. These results are robust, being largely independent of (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) the behavioral context of active heading estimation, indicating that visual and vestibular heading signals are represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
|
49
|
Medathati NVK, Rankin J, Meso AI, Kornprobst P, Masson GS. Recurrent network dynamics reconciles visual motion segmentation and integration. Sci Rep 2017; 7:11270. [PMID: 28900120 PMCID: PMC5595847 DOI: 10.1038/s41598-017-11373-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2017] [Accepted: 08/18/2017] [Indexed: 11/09/2022] Open
Abstract
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint, based on a linear-nonlinear feed-forward cascade, does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime, the network can switch from motion integration to segmentation, able either to compute a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
Affiliation(s)
- James Rankin
- College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- Center for Neural Science, New York University, New York, USA
- Andrew I Meso
- Institut de Neurosciences de la Timone, CNRS and Aix-Marseille Université, Marseille, France
- Psychology, Faculty of Science and Technology, Bournemouth University, Bournemouth, UK
- Pierre Kornprobst
- Université Côte d'Azur, Inria, Biovision team, Sophia Antipolis, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone, CNRS and Aix-Marseille Université, Marseille, France
|
50
|
Joint Encoding of Object Motion and Motion Direction in the Salamander Retina. J Neurosci 2017; 36:12203-12216. [PMID: 27903729 DOI: 10.1523/jneurosci.1971-16.2016] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2016] [Revised: 09/17/2016] [Accepted: 09/23/2016] [Indexed: 11/21/2022] Open
Abstract
The processing of motion in visual scenes is important for detecting and tracking moving objects as well as for monitoring self-motion through the induced optic flow. Specialized neural circuits have been identified in the vertebrate retina for detecting motion direction or for distinguishing between object motion and self-motion, although little is known about how information about these distinct features of visual motion is combined. The salamander retina, which is a widely used model system for analyzing retinal function, contains object-motion-sensitive (OMS) ganglion cells, which strongly respond to local motion signals but are suppressed by global image motion. Yet, direction-selective (DS) ganglion cells have been conspicuously absent from characterizations of the salamander retina, despite their ubiquity in other model systems. We here show that the retina of axolotl salamanders contains at least two distinct classes of DS ganglion cells. For one of these classes, the cells display a strong preference for local over global motion in addition to their direction selectivity (OMS-DS cells) and thereby combine sensitivity to two distinct motion features. The OMS-DS cells are further distinct from standard (non-OMS) DS cells by their smaller receptive fields and different organization of preferred motion directions. Our results suggest that the two classes of DS cells specialize to encode motion direction of local and global motion stimuli, respectively, even for complex composite motion scenes. Furthermore, although the salamander DS cells are OFF-type, there is a strong analogy to the systems of ON and ON-OFF DS cells in the mammalian retina.
SIGNIFICANCE STATEMENT
The retina contains specialized cells for motion processing.
Among the retinal ganglion cells, which form the output neurons of the retina, some are known to report the direction of a moving stimulus (direction-selective cells), and others distinguish the motion of an object from a moving background. But little is known about how information about local object motion and information about motion direction interact. Here, we report that direction-selective ganglion cells can be identified in the salamander retina, where their existence had been unclear. Furthermore, there are two independent systems of direction-selective cells, and one of these combines direction selectivity with sensitivity to local motion. The output of these cells could assist in tracking moving objects and estimating their future position.
|