1. Kupers ER, Kim I, Grill-Spector K. Rethinking simultaneous suppression in visual cortex via compressive spatiotemporal population receptive fields. Nat Commun 2024; 15:6885. PMID: 39128923; PMCID: PMC11317513; DOI: 10.1038/s41467-024-51243-7.
Abstract
When multiple visual stimuli are presented simultaneously in the receptive field, the neural response is suppressed compared to presenting the same stimuli sequentially. The prevailing hypothesis suggests that this suppression is due to competition among multiple stimuli for limited resources within receptive fields, governed by task demands. However, it is unknown how stimulus-driven computations may give rise to simultaneous suppression. Using fMRI, we find simultaneous suppression in single voxels, which varies with both stimulus size and timing, and progressively increases up the visual hierarchy. Using population receptive field (pRF) models, we find that compressive spatiotemporal summation rather than compressive spatial summation predicts simultaneous suppression, and that increased simultaneous suppression is linked to larger pRF sizes and stronger compressive nonlinearities. These results necessitate a rethinking of simultaneous suppression as the outcome of stimulus-driven compressive spatiotemporal computations within pRFs, and open new opportunities to study visual processing capacity across space and time.
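The core intuition behind compressive spatiotemporal summation can be illustrated with a toy calculation (this is a sketch, not the authors' fitted pRF model; the drive D, exponent n, and epoch length T are made-up values): applying a compressive nonlinearity to the instantaneous pooled drive makes the response to two stimuli presented together smaller than the summed response to the same stimuli presented one after the other, while a purely linear pRF predicts no difference.

```python
import numpy as np

# Toy numbers: pooled pRF drive per stimulus (D), compressive exponent (n),
# and frames per stimulus epoch (T). None of these come from the paper.
D, n, T = 1.0, 0.5, 100

def total_response(drive_timecourse, n):
    """Apply a static compressive nonlinearity to the instantaneous pooled
    drive, then integrate over the trial (a crude proxy for the fMRI response)."""
    return np.sum(np.asarray(drive_timecourse, dtype=float) ** n)

sequential = total_response([D] * T + [D] * T, n)   # stimulus A, then stimulus B
simultaneous = total_response([2 * D] * T, n)       # A and B shown together
print(simultaneous / sequential)   # 2**(n-1) ≈ 0.71: simultaneous suppression

# A purely linear pRF (n = 1) predicts identical responses in both conditions.
print(total_response([2 * D] * T, 1.0) / total_response([D] * (2 * T), 1.0))  # 1.0
```

The suppression ratio works out to 2^(n-1), so a stronger compressive nonlinearity (smaller n) produces deeper suppression, consistent with the abstract's link between compressive nonlinearities and increased suppression up the hierarchy.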
Affiliation(s)
- Eline R Kupers
- Department of Psychology, Stanford University, Stanford, CA, USA.
- Insub Kim
- Department of Psychology, Stanford University, Stanford, CA, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
2. Intoy J, Li YH, Bowers NR, Victor JD, Poletti M, Rucci M. Consequences of eye movements for spatial selectivity. Curr Biol 2024; 34:3265-3272.e4. PMID: 38981478; PMCID: PMC11348862; DOI: 10.1016/j.cub.2024.06.016.
Abstract
What determines spatial tuning in the visual system? Standard views rely on the assumption that spatial information is directly inherited from the relative position of photoreceptors and shaped by neuronal connectivity.1,2 However, human eyes are always in motion during fixation,3,4,5,6 so retinal neurons receive temporal modulations that depend on the interaction of the spatial structure of the stimulus with eye movements. It has long been hypothesized that these modulations might contribute to spatial encoding,7,8,9,10,11,12 a proposal supported by several recent observations.13,14,15,16 A fundamental, yet untested, consequence of this encoding strategy is that spatial tuning is not hard-wired in the visual system but critically depends on how the fixational motion of the eye shapes the temporal structure of the signals impinging onto the retina. Here we used high-resolution techniques for eye-tracking17 and gaze-contingent display control18 to quantitatively test this distinctive prediction. We examined how contrast sensitivity, a hallmark of spatial vision, is influenced by fixational motion, both during normal active fixation and when the spatiotemporal stimulus on the retina is altered to mimic changes in fixational control. We showed that visual sensitivity closely follows the strength of the luminance modulations delivered within a narrow temporal bandwidth, so changes in fixational motion have opposite visual effects at low and high spatial frequencies. By identifying a key role for oculomotor activity in spatial selectivity, these findings have important implications for the perceptual consequences of abnormal eye movements, the sources of perceptual variability, and the function of oculomotor control.
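The premise that fixational motion converts spatial structure into temporal modulations can be sketched numerically (an idealized constant-velocity drift rather than the Brownian-like motion of real fixation; all values are illustrative): a receptor sweeping across a grating of spatial frequency k at retinal speed v sees its luminance modulated at f = k·v, so the temporal frequency content delivered to the retina depends jointly on the stimulus and on how the eye moves.

```python
import numpy as np

# One receptor viewing a sinusoidal grating while the eye drifts at constant
# speed: the spatial pattern becomes a temporal luminance modulation at
# f = spatial_frequency * drift_speed. Values below are illustrative.
k = 8.0      # grating spatial frequency, cycles/deg
v = 0.5      # ocular drift speed, deg/s
fs = 1000.0  # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)

lum = np.sin(2 * np.pi * k * v * t)        # luminance seen by the receptor
spec = np.abs(np.fft.rfft(lum))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[np.argmax(spec)])              # ≈ k * v = 4.0 Hz
```

Because the product k·v sets the temporal frequency, the same change in drift statistics shifts modulations into or out of a fixed temporal band in opposite directions for low versus high spatial frequencies, which is the intuition behind the opposite perceptual effects the abstract reports.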
Affiliation(s)
- Janis Intoy
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Yuanhao H Li
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Norick R Bowers
- Department of Psychology, Justus-Liebig University, Giessen, Germany
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA
- Martina Poletti
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Michele Rucci
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
3. Phangwiwat T, Phunchongharn P, Wongsawat Y, Chatnuntawech I, Wang S, Chunharas C, Sprague TC, Woodman GF, Itthipuripat S. Sustained attention operates via dissociable neural mechanisms across different eccentric locations. Sci Rep 2024; 14:11188. PMID: 38755251; PMCID: PMC11099062; DOI: 10.1038/s41598-024-61171-7.
Abstract
In primates, foveal and peripheral vision have distinct neural architectures and functions. However, it has been debated whether selective attention operates via the same or different neural mechanisms across eccentricities. We tested these alternative accounts by examining the effects of selective attention on the steady-state visually evoked potential (SSVEP) and a fronto-parietal EEG signal (SND) recorded from human subjects performing a sustained visuospatial attention task. With a negligible level of eye movements, both the SSVEP and the SND exhibited heterogeneous patterns of attentional modulation across eccentricities. Specifically, the attentional modulations of these signals peaked at parafoveal locations and fell off as visual stimuli appeared closer to the fovea or further out toward the periphery. With a relatively higher level of eye movements, however, these heterogeneous patterns of attentional modulation were less robust. These data demonstrate that the top-down influence of covert visuospatial attention on early sensory processing in human cortex depends on eccentricity and the level of saccadic responses. Taken together, the results suggest that sustained visuospatial attention operates differently across eccentric locations, providing new understanding of how attention augments sensory representations regardless of where the attended stimulus appears.
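For readers unfamiliar with the SSVEP readout, the basic analysis can be sketched on synthetic data (the tag frequencies, amplitudes, and attentional gain below are invented for illustration): each stimulus flickers at its own tag frequency, and the attentional modulation of a stimulus is indexed by the FFT amplitude of the EEG at that stimulus's frequency.

```python
import numpy as np

# Synthetic EEG: two stimuli flicker at different "tag" frequencies, and
# attending to one of them is assumed to boost its response gain.
fs, dur = 500.0, 4.0
t = np.arange(0, dur, 1 / fs)
f_attended, f_ignored = 12.0, 15.0
eeg = (1.5 * np.sin(2 * np.pi * f_attended * t)   # attention boosts gain
       + 1.0 * np.sin(2 * np.pi * f_ignored * t)
       + np.random.default_rng(0).normal(0, 0.5, t.size))

spec = np.abs(np.fft.rfft(eeg)) * 2 / t.size      # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def ssvep_amplitude(f):
    """Amplitude of the spectrum at the bin closest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(ssvep_amplitude(f_attended) > ssvep_amplitude(f_ignored))  # True
```

Comparing the tagged amplitudes across attended and ignored conditions, and across stimulus eccentricities, is what yields the eccentricity-dependent modulation profiles described above.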
Affiliation(s)
- Tanagrit Phangwiwat
- Neuroscience Center for Research and Innovation (NX), Learning Institute, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand
- Big Data Experience Center (BX), King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10600, Thailand
- Department of Computer Engineering, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand
- Phond Phunchongharn
- Big Data Experience Center (BX), King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10600, Thailand
- Department of Computer Engineering, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand
- Yodchanan Wongsawat
- Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, 73170, Thailand
- Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, 12120, Thailand
- Sisi Wang
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Department of Psychology, Vanderbilt University, Nashville, TN, 37235, USA
- Chaipat Chunharas
- Cognitive Clinical and Computational Neuroscience Center of Excellence, Department of Internal Medicine, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Chula Neuroscience Center, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, 10330, Thailand
- Thomas C Sprague
- Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, 93106, USA
- Geoffrey F Woodman
- Department of Psychology, Vanderbilt University, Nashville, TN, 37235, USA
- Sirawaj Itthipuripat
- Neuroscience Center for Research and Innovation (NX), Learning Institute, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand
- Big Data Experience Center (BX), King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10600, Thailand
- Department of Psychology, Vanderbilt University, Nashville, TN, 37235, USA
4. Song J, Breitmeyer BG, Brown JM. Further examination of the pulsed- and steady-pedestal paradigms under hypothetical parvocellular- and magnocellular-biased conditions. Vision (Basel) 2024; 8:28. PMID: 38804349; PMCID: PMC11130818; DOI: 10.3390/vision8020028.
Abstract
The pulsed- and steady-pedestal paradigms were designed to track increment thresholds (ΔC) as a function of pedestal contrast (C) for the parvocellular (P) and magnocellular (M) systems, respectively. These paradigms produce contrasting results: linear relationships between ΔC and C are observed in the pulsed-pedestal paradigm, indicative of the P system's processing, while the steady-pedestal paradigm yields nonlinear functions, characteristic of the M system's response. However, we recently found that the P model fits better than the M model for both paradigms, using Gabor stimuli biased towards the M or P systems based on their sensitivity to color and spatial frequency. Here, we used two-square pedestals under green vs. red light in the lower-left vs. upper-right visual fields to bias processing towards the M vs. P system, respectively. Based on our previous findings, we predicted the following: (1) steeper ΔC vs. C functions with the pulsed than the steady pedestal, due to different task demands; (2) lower ΔCs in the upper-right than the lower-left quadrant, due to the P-system bias there; (3) no effect of color, since both paradigms track the P system; and, most importantly, (4) contrast gain should not be higher for the steady than for the pulsed pedestal. In general, our predictions were confirmed, replicating our previous findings and providing further evidence questioning the general validity of using the pulsed- and steady-pedestal paradigms to differentiate the P and M systems.
Affiliation(s)
- Jaeseon Song
- Department of Psychology, University of Georgia, Athens, GA 30602, USA
- James M. Brown
- Department of Psychology, University of Georgia, Athens, GA 30602, USA
5. Song J, Breitmeyer BG, Brown JM. Examining increment thresholds as a function of pedestal contrast under hypothetical parvo- and magnocellular-biased conditions. Atten Percept Psychophys 2024; 86:213-220. PMID: 38030820; DOI: 10.3758/s13414-023-02819-w.
Abstract
Theoretically, the pulsed- and steady-pedestal paradigms are thought to track contrast-increment thresholds (ΔC) as a function of pedestal contrast (C) for the parvocellular (P) and magnocellular (M) systems, respectively, yielding linear ΔC versus C functions for the pulsed- and nonlinear functions for the steady-pedestal paradigm. A recent study utilizing these paradigms to isolate the P and M systems reported no evidence of the M system being suppressed by red light, contrary to previous physiological and psychophysical findings. Curious as to why this may have occurred, we examined how ΔC varies with C for the P and M systems using the pulsed- and steady-pedestal paradigms and stimuli biased towards the P or M systems based on their sensitivity to spatial frequency (SF) and color. We found no effect of color and little influence of SF. To explain this lack of color effects, we fit a quantitative model of ΔC as a function of C to obtain Csat and contrast-gain values. The contrast-gain values contradicted the hypothesis that the steady-pedestal paradigm tracks the M-system response, and the obtained Csat values indicated strongly that both the pulsed- and steady-pedestal paradigms track primarily the P-system response.
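One common formalization of this kind of analysis (a sketch, not necessarily the exact model the authors fit; Rmax, Csat, and the threshold criterion are invented) treats the pedestal response as a saturating contrast-response function R(C) = Rmax·C/(C + Csat) and defines the increment threshold as the smallest ΔC that changes R by a fixed criterion. A small Csat (early saturation, M-like) makes ΔC grow steeply with pedestal contrast, while a large Csat (quasi-linear over the tested range, P-like) yields nearly linear ΔC versus C.

```python
import numpy as np

def R(C, Rmax=100.0, Csat=0.3):
    """Saturating (Naka-Rushton-style) contrast-response function."""
    return Rmax * C / (C + Csat)

def increment_threshold(C, delta=1.0, **kw):
    """Smallest dC such that R(C + dC) - R(C) reaches the criterion delta."""
    dCs = np.linspace(1e-4, 1.0, 20000)
    ok = R(C + dCs, **kw) - R(C, **kw) >= delta
    return dCs[np.argmax(ok)] if ok.any() else np.nan

pedestals = np.array([0.05, 0.1, 0.2, 0.4])
m_like = [increment_threshold(C, Csat=0.1) for C in pedestals]  # saturates early
p_like = [increment_threshold(C, Csat=2.0) for C in pedestals]  # quasi-linear

# M-like thresholds rise much faster across the pedestal range than P-like ones.
print(m_like[-1] / m_like[0] > p_like[-1] / p_like[0])  # True
```

Fitting such a function to measured ΔC versus C data yields exactly the Csat and contrast-gain parameters discussed in the abstract; thresholds that stay near-linear across pedestal contrast are the signature of a large Csat, i.e., P-like behavior.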
Affiliation(s)
- Jaeseon Song
- Department of Psychology, University of Georgia, Athens, GA, 30602-3013, USA.
- Bruno G Breitmeyer
- Department of Psychology, University of Houston, Houston, TX, 77204-5022, USA
- James M Brown
- Department of Psychology, University of Georgia, Athens, GA, 30602-3013, USA
6. Zemon V, Butler PD, Legatt ME, Gordon J. The spatial contrast sensitivity function and its neurophysiological bases. Vision Res 2023; 210:108266. PMID: 37247511; PMCID: PMC10527080; DOI: 10.1016/j.visres.2023.108266.
Abstract
Contrast processing is a fundamental function of the visual system, and contrast sensitivity as a function of spatial frequency (CSF) provides critical information about the integrity of the system. Here, we used a novel iPad-based instrument to collect CSFs and fitted the data with a difference-of-Gaussians model to investigate the neurophysiological bases of the spatial CSF. The reliability of repeat testing within and across sessions was evaluated in a sample of 22 adults for five spatial frequencies (0.41-13 cycles/degree) and two temporal durations (33 and 500 ms). Results demonstrate that the shape of the CSF, lowpass versus bandpass, depends on the temporal stimulus condition. Comparisons with previous psychophysical studies and with single-cell data from macaques and humans indicate that the major portion of the CSF, spatial frequencies >1.5 cycles/degree regardless of temporal condition, is determined by a 'sustained' mechanism (presumably parvocellular input to primary visual cortex [V1]). Contrast sensitivity to the lowest spatial frequency tested appears to be generated by a 'transient' mechanism (presumably magnocellular input to V1). The model fits support the hypothesis that the high spatial frequency limb of the CSF reflects the receptive field profile of the center mechanism of the smallest cells in the parvocellular pathway. These findings enhance the value of contrast sensitivity testing in general and increase the accessibility of this technique for use by clinicians through implementation on a commercially available device.
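The difference-of-Gaussians description of the CSF can be sketched as follows (parameters are illustrative, not the fitted values from the study): sensitivity is modeled as a broad, sensitive center term minus a low-frequency surround term, and the strength of the subtractive term determines whether the resulting CSF is lowpass or bandpass. In the study the shape change is tied to the temporal condition (sustained vs. transient mechanisms); here the surround weight simply stands in for whatever attenuates low spatial frequencies.

```python
import numpy as np

def dog_csf(f, a_c=200.0, f_c=8.0, a_s=0.0, f_s=1.0):
    """Difference-of-Gaussians CSF in the frequency domain: a broad,
    sensitive center term minus a low-frequency surround term."""
    return a_c * np.exp(-(f / f_c) ** 2) - a_s * np.exp(-(f / f_s) ** 2)

f = np.linspace(0.1, 20, 400)          # spatial frequency, cycles/deg
lowpass = dog_csf(f)                   # no subtractive term: lowpass CSF
bandpass = dog_csf(f, a_s=150.0)       # strong subtractive term: bandpass CSF

print(np.argmax(lowpass) == 0)         # lowpass peaks at the lowest frequency
print(f[np.argmax(bandpass)] > 1.0)    # bandpass peak shifts to mid frequencies
```

Fitting the two Gaussian terms to measured CSFs is what lets the authors relate the high-frequency limb to the center mechanism of small parvocellular receptive fields.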
Affiliation(s)
- Vance Zemon
- Ferkauf Graduate School of Psychology, Yeshiva University, 1165 Morris Park Ave., Bronx, NY 10461, USA; Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd., Orangeburg, NY 10962, USA.
- Pamela D Butler
- Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd., Orangeburg, NY 10962, USA; Department of Psychiatry, New York University School of Medicine, One Park Ave, New York, NY 10016, USA
- James Gordon
- Department of Psychology, Hunter College, City University of New York, 695 Park Ave., New York, NY 10065, USA
7. Tursini K, Remy I, Le Cam S, Louis-Dorr V, Malka-Mahieu H, Schwan R, Gross G, Laprévote V, Schwitzer T. Subsequent and simultaneous electrophysiological investigation of the retina and the visual cortex in neurodegenerative and psychiatric diseases: what are the forecasts for the medicine of tomorrow? Front Psychiatry 2023; 14:1167654. PMID: 37333926; PMCID: PMC10272854; DOI: 10.3389/fpsyt.2023.1167654.
Abstract
Visual electrophysiological deficits have been reported in neurodegenerative disorders as well as in mental disorders. Such alterations have been described in both the retina and the cortex, notably affecting the photoreceptors, retinal ganglion cells (RGCs), and the primary visual cortex, and they underscore the functional involvement of the visual system in these diseases. The present study therefore reviews the existing literature with the aim of identifying key alterations in electroretinograms (ERGs) and visual evoked potential electroencephalograms (VEP-EEGs) of subjects with neurodegenerative and psychiatric disorders. We focused on psychiatric and neurodegenerative diseases because of similarities in their neuropathophysiological mechanisms. Our research focuses on decoupled and coupled ERG/VEP-EEG results obtained with black-and-white checkerboards or low-level visual stimuli. In the decoupled approach, the ERG and then the VEP-EEG are recorded in the same subject with the same visual stimuli; in the coupled approach, the ERG and VEP-EEG are recorded simultaneously in the same participant with the same visual stimuli. Both coupled and decoupled results were found, indicating deficits mainly in the N95 ERG wave and the P100 VEP-EEG wave in Parkinson's disease, Alzheimer's disease, and major depressive disorder. Such results reinforce the link between the retina and the visual cortex for the diagnosis of psychiatric and neurodegenerative diseases. With that in mind, medical devices using coupled ERG/VEP-EEG measurements are being developed in order to further investigate the relationship between the retina and the visual cortex. These new techniques outline future challenges in mental health and the use of machine learning for the diagnosis of mental disorders, which would be a crucial step toward precision psychiatry.
Affiliation(s)
- Katelyne Tursini
- Pôle Hospitalo-Universitaire de Psychiatrie d’Adultes et d’Addictologie du Grand Nancy, Centre Psychothérapique de Nancy, Laxou, France
- BioSerenity, Paris, France
- INSERM U1254, Université de Lorraine, IADI, Nancy, France
- Irving Remy
- Pôle Hospitalo-Universitaire de Psychiatrie d’Adultes et d’Addictologie du Grand Nancy, Centre Psychothérapique de Nancy, Laxou, France
- BioSerenity, Paris, France
- INSERM U1114, Université de Strasbourg, Strasbourg, France
- Steven Le Cam
- CRAN, CNRS UMR 7039, Université de Lorraine, Nancy, France
- Raymund Schwan
- Pôle Hospitalo-Universitaire de Psychiatrie d’Adultes et d’Addictologie du Grand Nancy, Centre Psychothérapique de Nancy, Laxou, France
- INSERM U1254, Université de Lorraine, IADI, Nancy, France
- Faculté de Médecine, Université de Lorraine, Vandœuvre-lès-Nancy, France
- Grégory Gross
- Pôle Hospitalo-Universitaire de Psychiatrie d’Adultes et d’Addictologie du Grand Nancy, Centre Psychothérapique de Nancy, Laxou, France
- INSERM U1254, Université de Lorraine, IADI, Nancy, France
- Vincent Laprévote
- Pôle Hospitalo-Universitaire de Psychiatrie d’Adultes et d’Addictologie du Grand Nancy, Centre Psychothérapique de Nancy, Laxou, France
- INSERM U1114, Université de Strasbourg, Strasbourg, France
- Faculté de Médecine, Université de Lorraine, Vandœuvre-lès-Nancy, France
- Thomas Schwitzer
- Pôle Hospitalo-Universitaire de Psychiatrie d’Adultes et d’Addictologie du Grand Nancy, Centre Psychothérapique de Nancy, Laxou, France
- INSERM U1254, Université de Lorraine, IADI, Nancy, France
- Faculté de Médecine, Université de Lorraine, Vandœuvre-lès-Nancy, France
8. Lin YC, Intoy J, Clark AM, Rucci M, Victor JD. Cognitive influences on fixational eye movements. Curr Biol 2023; 33:1606-1612.e4. PMID: 37015221; PMCID: PMC10133196; DOI: 10.1016/j.cub.2023.03.026.
Abstract
We perceive the world based on visual information acquired via oculomotor control,1 an activity intertwined with ongoing cognitive processes.2,3,4 Cognitive influences have been primarily studied in the context of macroscopic movements, like saccades and smooth pursuits. However, our eyes are never still, even during periods of fixation. One type of fixational eye movement, ocular drift, shifts the stimulus over hundreds of receptors on the retina, a motion that has been argued to enhance the processing of spatial detail by translating spatial into temporal information.5 Despite their apparent randomness, ocular drifts are under neural control.6,7,8 However, little is known about the control of drift beyond the brainstem circuitry of the vestibulo-ocular reflex.9,10 Here, we investigated the cognitive control of ocular drifts with a letter discrimination task. The experiment was designed to reveal open-loop effects, i.e., cognitive oculomotor control driven by specific prior knowledge of the task, independent of incoming sensory information. Open-loop influences were isolated by randomly presenting pure noise fields (no letters) while subjects engaged in discriminating specific letter pairs. Our results show open-loop control of drift direction in human observers.
Affiliation(s)
- Yen-Chu Lin
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, USA.
- Janis Intoy
- Department of Brain & Cognitive Sciences, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA
- Ashley M Clark
- Department of Brain & Cognitive Sciences, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA
- Michele Rucci
- Department of Brain & Cognitive Sciences, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, USA
9. St-Amand D, Baker CL. Model-based approach shows ON pathway afferents elicit a transient decrease of V1 responses. J Neurosci 2023; 43:1920-1932. PMID: 36759194; PMCID: PMC10027028; DOI: 10.1523/jneurosci.1220-22.2023.
Abstract
Neurons in the primary visual cortex (V1) receive excitation and inhibition from distinct parallel pathways processing lightness (ON) and darkness (OFF). V1 neurons overall respond more strongly to dark than light stimuli, consistent with a preponderance of darker regions in natural images, as well as human psychophysics. However, it has been unclear whether this "dark-dominance" is because of more excitation from the OFF pathway or more inhibition from the ON pathway. To understand the mechanisms behind dark-dominance, we recorded electrophysiological responses of individual simple-type V1 neurons to natural image stimuli and then trained biologically inspired convolutional neural networks to predict the neurons' responses. Analyzing a sample of 71 neurons (in anesthetized, paralyzed cats of either sex) revealed their responses to be more driven by dark than light stimuli, consistent with previous investigations. We show that this asymmetry is predominantly because of slower inhibition to dark stimuli rather than stronger excitation from the thalamocortical OFF pathway. Consistent with dark-dominant neurons having faster responses than light-dominant neurons, we find dark-dominance to occur solely in the early latencies of neurons' responses. Neurons that are strongly dark-dominated also tend to be less orientation-selective. This approach gives us new insight into the dark-dominance phenomenon and provides an avenue to address new questions about excitatory and inhibitory integration in cortical neurons.

SIGNIFICANCE STATEMENT: Neurons in the early visual cortex respond on average more strongly to dark than to light stimuli, but the mechanisms behind this bias have been unclear. Here we address this issue by combining single-unit electrophysiology with a novel machine learning model to analyze neurons' responses to natural image stimuli in primary visual cortex. Using these techniques, we find slower inhibition to dark than to light stimuli to be the leading mechanism behind stronger dark responses. This delayed inhibition might help explain other empirical findings, such as why orientation selectivity is weaker at earlier response latencies. These results demonstrate how imbalances in excitation versus inhibition can give rise to response asymmetries in cortical neuron responses.
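The proposed timing mechanism can be caricatured with a simple linear-nonlinear simulation (toy filters, gains, and delays that loosely follow the paper's conclusion rather than its fitted model): if inhibition lags excitation more for dark than for light stimuli, the net dark response escapes suppression at early latencies, producing dark-dominance confined to the early part of the response.

```python
import numpy as np

def alpha_kernel(t, tau):
    """Simple alpha-function temporal filter with time constant tau (ms)."""
    k = (t / tau) * np.exp(1 - t / tau)
    return np.where(t >= 0, k, 0.0)

t = np.arange(0, 200.0, 1.0)   # time, ms
exc = alpha_kernel(t, 20.0)    # same excitatory drive for both polarities

# Toy assumption, loosely following the paper's conclusion: inhibition
# arrives later for dark stimuli than for light stimuli.
inh_dark = 0.8 * alpha_kernel(t - 30.0, 20.0)
inh_light = 0.8 * alpha_kernel(t - 10.0, 20.0)

resp_dark = np.maximum(exc - inh_dark, 0.0)    # half-rectified net response
resp_light = np.maximum(exc - inh_light, 0.0)

early = t < 40.0   # dark-dominance is confined to early response latencies
print(resp_dark[early].sum() > resp_light[early].sum())  # True
```

With identical excitation, the only asymmetry in this sketch is the inhibitory delay, which is enough to reproduce an early-latency dark advantage.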
Affiliation(s)
- David St-Amand
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec H3G 1A4, Canada
- Curtis L Baker
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec H3G 1A4, Canada
10. In vivo chromatic and spatial tuning of foveolar retinal ganglion cells in Macaca fascicularis. PLoS One 2022; 17:e0278261. PMID: 36445926; PMCID: PMC9707781; DOI: 10.1371/journal.pone.0278261.
Abstract
The primate fovea is specialized for high acuity chromatic vision, with the highest density of cone photoreceptors and a disproportionately large representation in visual cortex. The unique visual properties conferred by the fovea are conveyed to the brain by retinal ganglion cells, the somas of which lie at the margin of the foveal pit. Microelectrode recordings of these centermost retinal ganglion cells have been challenging due to the fragility of the fovea in the excised retina. Here we overcome this challenge by combining high resolution fluorescence adaptive optics ophthalmoscopy with calcium imaging to optically record functional responses of foveal retinal ganglion cells in the living eye. We use this approach to study the chromatic responses and spatial transfer functions of retinal ganglion cells using spatially uniform fields modulated in different directions in color space and monochromatic drifting gratings. We recorded from over 350 cells across three Macaca fascicularis primates over a time period of weeks to months. We find that the majority of the L vs. M cone opponent cells serving the most central foveolar cones have spatial transfer functions that peak at high spatial frequencies (20-40 c/deg), reflecting strong surround inhibition that sacrifices sensitivity at low spatial frequencies but preserves the transmission of fine detail in the retinal image. In addition, we fit to the drifting grating data a detailed model of how ganglion cell responses draw on the cone mosaic to derive receptive field properties of L vs. M cone opponent cells at the very center of the foveola. The fits are consistent with the hypothesis that foveal midget ganglion cells are specialized to preserve information at the resolution of the cone mosaic. By recording through adaptive optics, we thus characterize the response properties of these centermost retinal ganglion cells in situ, in the living eye.
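The reported bandpass tuning can be sketched with a center-surround (difference-of-Gaussians) spatial transfer function (the center and surround sizes and weights below are invented to be roughly foveal-midget-like, not fits to these data): a tiny center combined with a strong subtractive surround sacrifices low-spatial-frequency sensitivity and pushes the peak of the transfer function out to tens of cycles per degree.

```python
import numpy as np

def gauss_ft(f, sigma):
    """Amplitude spectrum of a spatial Gaussian with std sigma (deg)."""
    return np.exp(-2 * (np.pi * sigma * f) ** 2)

def transfer(f, w_surround, sigma_c=0.004, sigma_s=0.02):
    """Center-surround (difference-of-Gaussians) spatial transfer function."""
    return gauss_ft(f, sigma_c) - w_surround * gauss_ft(f, sigma_s)

f = np.linspace(0.5, 60, 600)           # spatial frequency, cycles/deg
strong = transfer(f, w_surround=0.95)   # strong surround inhibition
weak = transfer(f, w_surround=0.30)     # weak surround inhibition

peak = f[np.argmax(strong)]
print(peak)                  # bandpass peak in the tens of cycles/deg
print(strong[0] < weak[0])   # low-SF sensitivity is sacrificed by the surround
```

The trade-off is explicit here: increasing the surround weight lowers sensitivity near 0 c/deg while leaving transmission of fine detail near the peak largely intact.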
11. Méndez CA, Celeghin A, Diano M, Orsenigo D, Ocak B, Tamietto M. A deep neural network model of the primate superior colliculus for emotion recognition. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210512. PMID: 36126660; PMCID: PMC9489290; DOI: 10.1098/rstb.2021.0512.
Abstract
Although sensory processing is pivotal to nearly every theory of emotion, the evaluation of the visual input as 'emotional' (e.g. a smile as signalling happiness) has been traditionally assumed to take place in supramodal 'limbic' brain regions. Accordingly, subcortical structures of ancient evolutionary origin that receive direct input from the retina, such as the superior colliculus (SC), are traditionally conceptualized as passive relay centres. However, mounting evidence suggests that the SC is endowed with the necessary infrastructure and computational capabilities for the innate recognition and initial categorization of emotionally salient features from retinal information. Here, we built a neurobiologically inspired convolutional deep neural network (DNN) model that approximates physiological, anatomical and connectional properties of the retino-collicular circuit. This enabled us to characterize and isolate the initial computations and discriminations that the DNN model of the SC can perform on facial expressions, based uniquely on the information it directly receives from the virtual retina. Trained to discriminate facial expressions of basic emotions, our model matches human error patterns and above chance, yet suboptimal, classification accuracy analogous to that reported in patients with V1 damage, who rely on retino-collicular pathways for non-conscious vision of emotional attributes. When presented with gratings of different spatial frequencies and orientations never 'seen' before, the SC model exhibits spontaneous tuning to low spatial frequencies and reduced orientation discrimination, as can be expected from the prevalence of the magnocellular (M) over parvocellular (P) projections. 
Likewise, face manipulation that biases processing towards the M or P pathway affects expression recognition in the SC model accordingly, an effect that dovetails with variations of activity in the human SC purposely measured with ultra-high field functional magnetic resonance imaging. Lastly, the DNN generates saliency maps and extracts visual features, demonstrating that certain face parts, like the mouth or the eyes, provide higher discriminative information than other parts as a function of emotional expressions like happiness and sadness. The present findings support the contention that the SC possesses the necessary infrastructure to analyse the visual features that define facial emotional stimuli also without additional processing stages in the visual cortex or in 'limbic' areas. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.
Affiliation(s)
- Carlos Andrés Méndez
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Alessia Celeghin
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Matteo Diano
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Davide Orsenigo
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Brian Ocak
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Section of Cognitive Neurophysiology and Imaging, National Institute of Mental Health, 49 Convent Drive, Bethesda, MD 20892, USA
- Marco Tamietto
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Department of Medical and Clinical Psychology, and CoRPS - Center of Research on Psychology in Somatic diseases, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands
|
12
|
Bowen EFW, Rodriguez AM, Sowinski DR, Granger R. Visual stream connectivity predicts assessments of image quality. J Vis 2022; 22:4. [PMID: 36219145 PMCID: PMC9580224 DOI: 10.1167/jov.22.11.4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Despite extensive study of early vision, new and unexpected mechanisms continue to be identified. We introduce a novel formal treatment of the psychophysics of image similarity, derived directly from straightforward connectivity patterns in early visual pathways. The resulting differential-geometry formulation is shown to provide accurate and explanatory accounts of human perceptual similarity judgments. The direct formal predictions are then shown to be further improved via simple regression on human behavioral reports, which in turn are used to construct more elaborate hypothesized neural connectivity patterns. The predictive approaches introduced here outperform a standard, successful published measure of perceived image fidelity; moreover, the approach provides clear explanatory principles for these similarity findings.
Affiliation(s)
- Elijah F W Bowen
- Brain Engineering Laboratory, Department of Psychological and Brain Sciences, Dartmouth, Hanover, NH, USA
- Antonio M Rodriguez
- Brain Engineering Laboratory, Department of Psychological and Brain Sciences, Dartmouth, Hanover, NH, USA
- Damian R Sowinski
- Brain Engineering Laboratory, Department of Psychological and Brain Sciences, Dartmouth, Hanover, NH, USA
- Richard Granger
- Brain Engineering Laboratory, Department of Psychological and Brain Sciences, Dartmouth, Hanover, NH, USA
|
13
|
Gardiner SK, Mansberger SL. Moving Stimulus Perimetry: A New Functional Test for Glaucoma. Transl Vis Sci Technol 2022; 11:9. [PMID: 36201198 PMCID: PMC9554223 DOI: 10.1167/tvst.11.10.9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 08/30/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose: Static pointwise perimetric sensitivities below approximately 19 dB are unreliable in glaucoma owing to excessive variability. We propose using moving stimuli to increase detectability, decrease variability, and hence extend this dynamic range. Methods: A moving stimulus was designed to travel parallel to the average nerve fiber bundle orientation at each location and was compared against an otherwise identical static stimulus. To assess dynamic range, psychometric functions were measured at 4 locations in each of 10 subjects. To assess clinically realistic test-retest variability, 34 locations in 94 subjects with glaucoma or suspected glaucoma were tested twice, 6 months apart. Pointwise sensitivity estimates were compared using generalized estimating equation regression models. The test-retest limits of agreement for each stimulus were assessed, adjusted for within-eye clustering. Results: Using static stimuli, 9 of the 40 psychometric functions had a maximum response probability below 90%, suggesting that sensitivity was beyond the dynamic range. Eight of those locations had an asymptotic maximum above 90% with moving stimuli. Sensitivities were higher for moving stimuli (P < 0.001), and the difference increased as sensitivity decreased (P < 0.001). Test-retest limits of agreement were narrower for moving stimuli (-6.35 to +6.48 dB) than for static stimuli (-12.7 to +7.81 dB). Sixty-two percent of subjects preferred moving stimuli versus 19% who preferred static stimuli. Conclusions: Using a moving stimulus increases perimetric sensitivities in regions of glaucomatous loss. This extends the effective dynamic range, allowing reliable testing later into the disease. Results are more repeatable, and most subjects prefer the test. Translational Relevance: Moving stimuli allow reliable testing in patients with more severe glaucoma than is currently possible.
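The test-retest limits of agreement quoted above are Bland-Altman-style intervals. A minimal sketch of the computation follows, assuming simple paired dB sensitivities; it omits the within-eye clustering adjustment the authors apply, and the example values are made up, not data from the study:

```python
import numpy as np

def limits_of_agreement(test_db, retest_db):
    """Bland-Altman 95% limits of agreement for paired test-retest
    sensitivities (dB). Simplified: no within-eye clustering adjustment."""
    diff = np.asarray(retest_db, dtype=float) - np.asarray(test_db, dtype=float)
    bias = diff.mean()                 # systematic test-retest offset
    sd = diff.std(ddof=1)              # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired pointwise sensitivities (dB), not study data
lo, hi = limits_of_agreement([28, 25, 19, 31, 22, 26],
                             [27, 26, 18, 32, 21, 27])
```

Narrower limits (as reported for the moving stimulus) mean a retest value is expected to fall within a tighter band around the original measurement.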
|
14
|
Leung TW, Cheong AMY, Chan HHL. Deficits in the Magnocellular Pathway of People with Reading Difficulties. CURRENT DEVELOPMENTAL DISORDERS REPORTS 2022. [DOI: 10.1007/s40474-022-00248-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
15
|
Phung TH, Spieringhs RM, Smet KAG, Leloup FB, Hanselaer P. Towards an image-based brightness model for self-luminous stimuli. OPTICS EXPRESS 2022; 30:9035-9052. [PMID: 35299342 DOI: 10.1364/oe.451265] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 02/22/2022] [Indexed: 06/14/2023]
Abstract
Brightness is one of the most important perceptual correlates of color appearance models (CAMs) when self-luminous stimuli are targeted. However, the vast majority of existing CAMs assume a uniform background surrounding the stimulus, which severely limits their practical application in lighting. In this paper, a study on the brightness perception of a neutral circular stimulus surrounded by a non-uniform background, consisting of a neutral ring-shaped luminous area and a dark surround, is presented. The ring-shaped luminous area was presented at 3 thicknesses (0.33 cm, 0.67 cm and 1.00 cm), at 4 angular distances to the edge of the central stimulus (1.2°, 6.4°, 11.3° and 16.1°) and at 3 luminance levels (90 cd/m2, 335 cd/m2, 1200 cd/m2). In line with the literature, the results of the visual matching experiments show that perceived brightness decreases in the presence of a ring, and the effect is maximal at the highest ring luminance, the largest thickness and the closest distance. Based on the observed results, an image-based model inspired by the physiology of the retina is proposed. The model includes the calculation of cone-fundamental-weighted spectral radiance, scattering in the eye, cone compression and post-receptor receptive field organization. The wide receptive field assures an adaptive shift determined by adaptation to both the stimulus and the background. The model performs well in predicting the matching experiments, including the impact of the thickness, distance and intensity of the ring, showing its potential to become the basic framework of a Lighting Appearance Model.
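The cone-compression stage mentioned in the model pipeline is commonly described with a Naka-Rushton-style saturating nonlinearity. The sketch below is a generic illustration, not the paper's implementation; the semi-saturation constant and exponent are assumptions:

```python
def cone_compression(luminance, semi_sat=100.0, n=0.74):
    """Naka-Rushton-style compressive nonlinearity (illustrative parameters).
    luminance: cd/m^2; returns a normalized response in [0, 1)."""
    return luminance ** n / (luminance ** n + semi_sat ** n)

# The three ring luminances used in the experiment compress strongly:
responses = [cone_compression(L) for L in (90.0, 335.0, 1200.0)]
```

The compression means the 90 -> 1200 cd/m2 luminance range (a factor of ~13) maps onto a much smaller range of normalized responses, consistent with the modest brightness shifts the nonlinearity is meant to capture.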
|
16
|
Pamplona D, Hilgen G, Hennig MH, Cessac B, Sernagor E, Kornprobst P. Receptive field estimation in large visual neuron assemblies using a super-resolution approach. J Neurophysiol 2022; 127:1334-1347. [PMID: 35235437 DOI: 10.1152/jn.00076.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Computing the spike-triggered average (STA) is a simple method to estimate the sensory neurons' linear receptive fields (RFs). For random, uncorrelated stimuli the STA provides an unbiased RF estimate, but in practice, white noise is not a feasible stimulus as it usually evokes only weak responses. Therefore, for a visual stimulus, it is often used images of randomly modulated blocks of pixels. This solution naturally limits the resolution at which an RF can be obtained. Here we show that this limitation can be overcome by using a simple super-resolution technique. We define a novel type of stimulus, the Shifted White Noise (SWN), by introducing random spatial shifts in the usual stimulus in order to increase the resolution of the measurements. In simulated data we show that the average error using the SWN was 1.7 times smaller than when using the classical stimulus, with successful mapping of 2.3 times more neurons, covering a broader range of RF sizes. Moreover, successful RF mapping was achieved with short recordings of about one minute of activity, more than 10 times more efficient than the classical white noise stimulus. In recordings from mouse retinal ganglion cells with large scale microelectrode arrays, we could map 18 times more RFs covering a broader range of sizes. In summary, here we show that randomly shifting the usual white noise stimulus significantly improves RFs estimation, and requires only short recordings. It is straight forward to extend this method into the time dimension and adapt it to other sensory modalities.
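The shifted-white-noise idea can be illustrated with a small simulation: coarse checkerboard frames are jittered by random sub-block shifts, so the STA accumulates information at the fine pixel grid rather than the coarse block grid. This is a hedged sketch, not the authors' code; the grid/block sizes, the Gaussian receptive field, and the rectified-linear response are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def shifted_white_noise(n_frames, grid=8, block=4):
    """Coarse binary checkerboards, each jittered by a random sub-block
    shift and rendered on the fine pixel grid (resolution = grid * block)."""
    res = grid * block
    frames = np.empty((n_frames, res, res))
    for i in range(n_frames):
        coarse = rng.choice([-1.0, 1.0], size=(grid, grid))
        fine = np.kron(coarse, np.ones((block, block)))   # upsample blocks
        dy, dx = rng.integers(0, block, size=2)           # random spatial shift
        frames[i] = np.roll(np.roll(fine, dy, axis=0), dx, axis=1)
    return frames

def spike_triggered_average(frames, spike_counts):
    """RF estimate at fine resolution: spike-count-weighted mean frame."""
    w = np.asarray(spike_counts, dtype=float)
    return np.tensordot(w, frames, axes=1) / w.sum()

# Toy linear-nonlinear neuron with a fine-scale Gaussian RF; firing rates
# stand in for spike counts.
frames = shifted_white_noise(4000)
yy, xx = np.mgrid[:32, :32]
rf = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / (2 * 2.0 ** 2))
rates = np.maximum(frames.reshape(len(frames), -1) @ rf.ravel(), 0.0)
sta = spike_triggered_average(frames, rates)
```

With a classical (unshifted) block stimulus, the STA could only localize this RF to a 4x4-pixel block; with the shifts, the recovered peak lands on the fine grid near the true RF center.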
Affiliation(s)
- Daniela Pamplona
- Ecole Nationale Supérieure de Techniques Avancées, Institut Polytechnique de Paris, Palaiseau, France
- Université Côte d'Azur, Inria, France
- Gerrit Hilgen
- Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
- Applied Sciences, Health and Life Sciences, Northumbria University, Newcastle upon Tyne, United Kingdom
- Matthias Helge Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Evelyne Sernagor
- Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
|
17
|
Yang Y, Wang T, Li Y, Dai W, Yang G, Han C, Wu Y, Xing D. Coding strategy for surface luminance switches in the primary visual cortex of the awake monkey. Nat Commun 2022; 13:286. [PMID: 35022404 PMCID: PMC8755737 DOI: 10.1038/s41467-021-27892-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 12/22/2021] [Indexed: 11/17/2022] Open
Abstract
Both surface luminance and edge contrast of an object are essential features for object identification. However, cortical processing of surface luminance remains unclear. In this study, we aim to understand how the primary visual cortex (V1) processes surface luminance information across its different layers. We report that edge-driven responses are stronger than surface-driven responses in V1 input layers, but luminance information is coded more accurately by surface responses. In V1 output layers, the advantage of edge over surface responses increased eightfold, and luminance information was coded more accurately at edges. Further analysis of neural dynamics shows that these substantial changes in neural responses and luminance coding are mainly due to non-local cortical inhibition in V1's output layers. Our results suggest that non-local cortical inhibition modulates the responses elicited by the surfaces and edges of objects, and that switching the coding strategy in V1 promotes efficient coding of luminance. How brightness is encoded in the visual cortex remains incompletely understood. By recording from macaque V1, the authors revealed a switch from surface to edge encoding that is mediated by widespread inhibition in the output layers of the cortex.
Affiliation(s)
- Yi Yang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Tian Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, College of Life Sciences, Beijing Normal University, Beijing, China
- Yang Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Weifeng Dai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Guanzhong Yang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Chuanliang Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Yujie Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Dajun Xing
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
|
18
|
Baumgarten S, Hoberg T, Lohmann T, Mazinani B, Walter P, Koutsonas A. Fullfield and extrafoveal visual evoked potentials in healthy eyes: reference data for a curved OLED display. Doc Ophthalmol 2022; 145:247-262. [PMID: 36087163 PMCID: PMC9653365 DOI: 10.1007/s10633-022-09897-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Accepted: 08/24/2022] [Indexed: 12/29/2022]
Abstract
PURPOSE Visual evoked potentials (VEPs) are an important diagnostic tool in various ophthalmologic and neurologic diseases. Quantitative response data vary among patients but also depend on the recording and stimulating equipment. We established VEP reference values for our setting, which was recently modified by using a curved OLED display as the visual stimulator. A distinction is made between fullfield (FF) and extrafoveal (EF) stimulation, and the effects of sex, age and lens status were determined. METHODS This prospective cross-sectional study included 162 healthy eyes of 162 participants older than 10 years. A fullfield pattern-reversal visual evoked potential (FF-PR-VEP) with two stimulus sizes (ss) (20.4' and 1.4°) as well as an extrafoveal pattern onset-offset VEP (EF-P-ON/OFF-VEP) (ss 1.4° and 2.8°) was recorded in accordance with the International Society for Clinical Electrophysiology of Vision guidelines. Amplitudes and latencies were recorded, and means and standard deviations were calculated. Age- and sex-dependent influences and the difference between phakic and pseudophakic eyes were examined. A subanalysis of EF-P-ON/OFF-VEP and fullfield pattern onset-offset VEP (FF-P-ON/OFF-VEP) was performed. A 55-inch curved OLED display (LG55EC930V, LG Electronics Inc., Seoul, South Korea) was used as the visual stimulator. RESULTS Mean P100 latency of the FF-PR-VEP was 103.81 ± 7.77 ms (ss 20.4') and 102.58 ± 7.26 ms (ss 1.4°), and mean C2 latency of the EF-P-ON/OFF-VEP was 102.95 ± 11.84 ms (ss 1.4°) and 113.58 ± 9.87 ms (ss 2.8°). For all stimulation settings (FF-PR-VEP, EF-P-ON/OFF-VEP), we observed a significant effect of age, with longer latencies and smaller amplitudes in older subjects, and higher amplitudes in women. We saw no significant difference in latency or amplitude between phakic and pseudophakic eyes or between EF-P-ON/OFF-VEP and FF-P-ON/OFF-VEP.
CONCLUSIONS A curved OLED visual stimulator is well suited to obtaining VEP response curves with reasonable interindividual variability. We found significant effects of age and sex in our responses but no effect of lens status. EF-P-ON/OFF-VEP tends to show smaller amplitudes.
Affiliation(s)
- Sabine Baumgarten
- Department of Ophthalmology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Tabea Hoberg
- Department of Ophthalmology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Tibor Lohmann
- Department of Ophthalmology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Babac Mazinani
- Department of Ophthalmology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Peter Walter
- Department of Ophthalmology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Antonis Koutsonas
- Department of Ophthalmology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
|
19
|
Kupers ER, Benson NC, Carrasco M, Winawer J. Asymmetries around the visual field: From retina to cortex to behavior. PLoS Comput Biol 2022; 18:e1009771. [PMID: 35007281 PMCID: PMC8782511 DOI: 10.1371/journal.pcbi.1009771] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 01/21/2022] [Accepted: 12/19/2021] [Indexed: 11/29/2022] Open
Abstract
Visual performance varies around the visual field. It is best near the fovea and declines toward the periphery, and at iso-eccentric locations it is best on the horizontal meridian, intermediate on the lower meridian, and poorest on the upper meridian. The fovea-to-periphery performance decline is linked to the decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) as eccentricity increases. The origins of polar angle asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can account for only a small portion of these behavioral asymmetries. Here, we investigate how visual processing beyond the cone photon absorptions contributes to polar angle asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC density, and V1 CMF. We find that both polar angle asymmetries and eccentricity gradients increase from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of phototransduction by the cones and spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effect of human optics, cone isomerizations, phototransduction, and mRGC spatial filtering. The model performs a forced-choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows that asymmetries in a decision maker's performance across polar angle are greater when assessing the photocurrents than when assessing isomerizations, and are greater still when assessing mRGC signals. Nonetheless, the polar angle asymmetries of the mRGC outputs are still considerably smaller than those observed in human performance. We conclude that cone isomerizations, phototransduction, and the spatial filtering properties of mRGCs contribute to polar angle performance differences, but that a full account of these differences will entail additional contributions from cortical representations.
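The model's final readout stage, a linear SVM performing a forced-choice discrimination on simulated population responses, can be sketched as follows. This is an illustrative stand-in, not the authors' pipeline: Gaussian response clouds replace the model's mRGC outputs, and a Pegasos-style subgradient trainer replaces a library SVM:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_responses(n_trials, n_units=60, separation=2.0):
    """Stand-in for mRGC population responses on a two-alternative
    orientation task: two Gaussian clouds separated along a signal axis."""
    labels = rng.integers(0, 2, n_trials)                # 0 = CW, 1 = CCW
    signal = np.zeros(n_units)
    signal[:10] = separation / (2 * np.sqrt(10))         # spread over 10 units
    X = rng.normal(size=(n_trials, n_units)) + np.outer(2 * labels - 1, signal)
    return X, labels

def train_linear_svm(X, y, lam=1e-2, epochs=20):
    """Pegasos-style stochastic subgradient descent on the L2-regularized
    hinge loss, i.e. a linear SVM without an intercept."""
    w = np.zeros(X.shape[1])
    s = 2 * y - 1                                        # labels in {-1, +1}
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = s[i] * (X[i] @ w)
            w *= (1 - eta * lam)                         # regularization shrink
            if margin < 1:                               # hinge subgradient
                w += eta * s[i] * X[i]
    return w

X, y = simulate_responses(2000)
w = train_linear_svm(X[:1000], y[:1000])
accuracy = ((X[1000:] @ w > 0).astype(int) == y[1000:]).mean()
```

In the paper's framework, the classifier's held-out accuracy at each polar angle location is what quantifies the asymmetry carried by each processing stage.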
Affiliation(s)
- Eline R. Kupers
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
- Noah C. Benson
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
- Marisa Carrasco
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
- Jonathan Winawer
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
|
20
|
Jung H, Wager TD, Carter RM. Novel Cognitive Functions Arise at the Convergence of Macroscale Gradients. J Cogn Neurosci 2021; 34:381-396. [PMID: 34942643 DOI: 10.1162/jocn_a_01803] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Functions in higher-order brain regions are the source of extensive debate. Although past trends have been to describe the brain, especially posterior cortical areas, in terms of a set of functional modules, a newly emerging paradigm focuses on the integration of proximal functions. In this review, we synthesize emerging evidence that a variety of novel functions in higher-order brain regions are due to convergence: convergence of macroscale gradients brings feature-rich representations into close proximity, presenting an opportunity for novel functions to arise. Using the temporoparietal junction (TPJ) as an example, we demonstrate that convergence is enabled via three properties of the brain: (1) hierarchical organization, (2) abstraction, and (3) equidistance. As gradients travel from primary sensory cortices to higher-order brain regions, information becomes abstracted and hierarchical, and eventually gradients meet at a point maximally and equally distant from their sensory origins. This convergence, which produces multifaceted combinations, such as mentalizing another person's thought or projecting into a future space, parallels evolutionary and developmental characteristics in such regions, resulting in new cognitive and affective faculties.
Affiliation(s)
- Heejung Jung
- University of Colorado Boulder
- Dartmouth College
- Tor D Wager
- University of Colorado Boulder
- Dartmouth College
|
21
|
Lukanov H, König P, Pipa G. Biologically Inspired Deep Learning Model for Efficient Foveal-Peripheral Vision. Front Comput Neurosci 2021; 15:746204. [PMID: 34880741 PMCID: PMC8645638 DOI: 10.3389/fncom.2021.746204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 10/27/2021] [Indexed: 11/13/2022] Open
Abstract
While abundant in biology, foveated vision is nearly absent from computational models, and especially from deep learning architectures. Despite considerable hardware improvements, training deep neural networks still presents a challenge and constrains model complexity. Here we propose an end-to-end neural model of foveal-peripheral vision, inspired by the retino-cortical mapping in primates and humans. Our model uses an efficient sampling technique for compressing the visual signal, such that a small portion of the scene is perceived in high resolution while a large field of view is maintained in low resolution. An attention mechanism for performing "eye movements" assists the agent in collecting detailed information incrementally from the observed scene. Our model achieves results comparable to a similar neural architecture trained on full-resolution data for image classification, and outperforms it on video classification tasks. At the same time, because of the smaller size of its input, it can reduce computational effort tenfold and uses several times less memory. Moreover, we present an easy-to-implement bottom-up and top-down attention mechanism which relies on task-relevant features and is therefore a convenient byproduct of the main architecture. Apart from its computational efficiency, the presented work provides a means for exploring active vision for agent training in simulated environments and in anthropomorphic robotics.
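The foveal-peripheral compression idea can be sketched with a simple log-polar resampler: sampling is dense at the center and exponentially sparser toward the periphery, so a large field of view fits into a small input tensor. This is a generic illustration with assumed parameters; the paper's actual sampling scheme may differ in detail:

```python
import numpy as np

def foveate(img, n_rings=32, n_angles=64):
    """Log-polar resampling of a square grayscale image: dense near the
    center ("fovea"), sparse in the periphery, loosely mimicking primate
    retino-cortical magnification."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Ring radii grow geometrically, so each ring advances a constant
    # "cortical" step while peripheral rings span many more pixels.
    radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    ys = np.rint(cy + radii[:, None] * np.sin(thetas)).astype(int)
    xs = np.rint(cx + radii[:, None] * np.cos(thetas)).astype(int)
    return img[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]

img = np.arange(256 * 256, dtype=float).reshape(256, 256)
compressed = foveate(img)
ratio = img.size / compressed.size     # input reduction factor
```

With these illustrative settings a 256x256 image is reduced to a 32x64 ring-by-angle map, a 32x reduction in input size, which is the kind of saving the abstract attributes to the model's smaller input.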
Affiliation(s)
- Hristofor Lukanov
- Department of Neuroinformatics, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Department of Neurobiopsychology, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gordon Pipa
- Department of Neuroinformatics, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
|
22
|
Gu Y, Chen ZS, Wang C, Song XM, Lu S, Cai YC. Spatial suppression of chromatic motion. Vision Res 2021; 188:227-233. [PMID: 34385078 DOI: 10.1016/j.visres.2021.07.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 07/03/2021] [Accepted: 07/20/2021] [Indexed: 10/20/2022]
Abstract
Center-surround antagonism, a ubiquitous feature of visual processing, usually leads to inferior perception for a large stimulus compared to a small one. For example, it is more difficult to judge the motion direction of a large high-contrast pattern than that of a small one. However, this spatial suppression in the motion dimension has only been reported for luminance motion, and was not found for chromatic motion. Given that center-surround suppression only occurs for strong visual inputs, we hypothesized that the previous failure to find spatial suppression of chromatic motion might be due to weak chromatic motion being induced by stimuli with limited parameter ranges. In this study, we used phase-shift discrimination and motion-direction discrimination tasks to measure motion spatial suppression induced by stimuli of two spatial frequencies (0.5 and 2 cpd) and two contrasts (low and high). We found that spatial suppression of chromatic motion was stably observed for stimuli of high spatial frequency (2 cpd) and high contrast, whereas spatial summation occurred for stimuli of low spatial frequency (0.5 cpd). Intriguingly, there were no correlations between the motion spatial suppression of luminance motion and that of chromatic motion, implying that the two types of spatial suppression do not originate from the same neural processing. Our findings indicate that spatial suppression also exists for chromatic motion, and that the mechanisms underlying the spatial suppression of chromatic motion differ from those of luminance motion.
Affiliation(s)
- Ye Gu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang 310028, China
- Zhang-Shan Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang 310028, China
- Ci Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang 310028, China
- Xue-Mei Song
- Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China
- Shena Lu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang 310028, China
- Yong-Chun Cai
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang 310028, China
|
23
|
Archer DR, Alitto HJ, Usrey WM. Stimulus Contrast Affects Spatial Integration in the Lateral Geniculate Nucleus of Macaque Monkeys. J Neurosci 2021; 41:6246-6256. [PMID: 34103362 PMCID: PMC8287990 DOI: 10.1523/jneurosci.2946-20.2021] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 05/30/2021] [Accepted: 06/02/2021] [Indexed: 11/21/2022] Open
Abstract
Gain-control mechanisms adjust neuronal responses to accommodate the wide range of stimulus conditions in the natural environment. Contrast gain control and extraclassical surround suppression are two manifestations of gain control that govern the responses of neurons in the early visual system. Understanding how these two forms of gain control interact has important implications for the detection and discrimination of stimuli across a range of contrast conditions. Here, we report that stimulus contrast affects spatial integration in the lateral geniculate nucleus of alert macaque monkeys (male and female), whereby neurons exhibit a reduction in the strength of extraclassical surround suppression and an expansion in the preferred stimulus size with low-contrast stimuli compared with high-contrast stimuli. Effects were greater for magnocellular neurons than for parvocellular neurons, indicating stream-specific interactions between stimulus contrast and stimulus size. Within the magnocellular pathway, contrast-dependent effects were comparable for ON-center and OFF-center neurons, despite ON neurons having larger receptive fields, less pronounced surround suppression, and more pronounced contrast gain control than OFF neurons. Together, these findings suggest that the parallel streams delivering visual information from retina to primary visual cortex serve not only to broaden the range of signals delivered to cortex, but also to provide a substrate for differential interactions between stimulus contrast and stimulus size that may improve stimulus detection and stimulus discrimination under pathway-specific lower- and higher-contrast conditions, respectively. SIGNIFICANCE STATEMENT Stimulus contrast is a salient feature of visual scenes. Here we examine the influence of stimulus contrast on spatial integration in the lateral geniculate nucleus (LGN). Our results demonstrate that increases in contrast generally increase extraclassical suppression and decrease the size of optimal stimuli, indicating a reduction in the extent of visual space over which LGN neurons integrate signals. Differences between magnocellular and parvocellular neurons are noteworthy and further demonstrate that the feedforward parallel pathways to cortex increase the range of information conveyed for downstream cortical processing, a range broadened by diversity in the ON and OFF pathways. These results have important implications for more complex visual processing that underlies the detection and discrimination of stimuli under varying natural conditions.
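A common descriptive model of such contrast-dependent size tuning, though not necessarily the analysis used in this study, is the ratio-of-Gaussians fit, in which the response to a grating of a given diameter is a saturating center drive divided by one plus a saturating surround drive. In the sketch below, lowering the surround gain (a stand-in for the low-contrast condition; all parameter values are illustrative) shifts the preferred stimulus size upward, as the abstract describes:

```python
import numpy as np
from math import erf

def rog_response(diam, k_c=1.0, r_c=1.0, k_s=0.8, r_s=3.0):
    """Ratio-of-Gaussians size-tuning curve: center drive divided by
    (1 + surround drive), each an integrated Gaussian mechanism."""
    center = erf(diam / (2 * r_c)) ** 2
    surround = erf(diam / (2 * r_s)) ** 2
    return k_c * center / (1 + k_s * surround)

diams = np.linspace(0.1, 12.0, 600)
# Weaker surround gain stands in for the low-contrast condition.
pref_high = diams[np.argmax([rog_response(d, k_s=0.8) for d in diams])]
pref_low = diams[np.argmax([rog_response(d, k_s=0.2) for d in diams])]
```

Because the surround term grows with diameter, a stronger surround gain pulls the response peak toward smaller sizes, reproducing the shrinking of the optimal stimulus at high contrast.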
Affiliation(s)
- Darlene R Archer
- Center for Neuroscience, University of California, Davis, Davis, California 95616
- SUNY College of Optometry, New York, New York 10036
- Center for Neural Science, New York University, New York, New York 10003
- Henry J Alitto
- Center for Neuroscience, University of California, Davis, Davis, California 95616
- W Martin Usrey
- Center for Neuroscience, University of California, Davis, Davis, California 95616
|
24
|
David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021; 21:3. [PMID: 34251433 PMCID: PMC8287039 DOI: 10.1167/jov.21.7.3] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 05/09/2021] [Indexed: 11/24/2022] Open
Abstract
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. This segregation of the visual fields has been established largely through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the established roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
Affiliation(s)
- Erwan Joël David
  - Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Julia Beitner
  - Department of Psychology, Goethe-Universität, Frankfurt, Germany
25. Horwitz GD. Temporal filtering of luminance and chromaticity in macaque visual cortex. iScience 2021; 24:102536. [PMID: 34189430] [PMCID: PMC8219838] [DOI: 10.1016/j.isci.2021.102536]
Abstract
Contrast sensitivity peaks near 10 Hz for luminance modulations and at lower frequencies for modulations between equiluminant lights. This difference is rooted in retinal filtering, but additional filtering occurs in the cerebral cortex. To measure the cortical contributions to luminance and chromatic temporal contrast sensitivity, signals in the lateral geniculate nucleus (LGN) were compared to the behavioral contrast sensitivity of macaque monkeys. Long wavelength-sensitive (L) and medium wavelength-sensitive (M) cones were modulated in phase to produce a luminance modulation (L + M) or in counterphase to produce a chromatic modulation (L - M). The sensitivity of LGN neurons was well matched to behavioral sensitivity at low temporal frequencies but was approximately 7 times greater at high temporal frequencies. Similar results were obtained for L + M and L - M modulations. These results show that differences in the shapes of the luminance and chromatic temporal contrast sensitivity functions are due almost entirely to pre-cortical mechanisms.
Affiliation(s)
- Gregory D. Horwitz
  - Department of Physiology and Biophysics, Washington National Primate Research Center, University of Washington, 1959 N.E. Pacific Street, HSB I-714, Box 357290, Seattle, WA 98195, USA
26. De Cesarei A, Cavicchi S, Cristadoro G, Lippi M. Do Humans and Deep Convolutional Neural Networks Use Visual Information Similarly for the Categorization of Natural Scenes? Cogn Sci 2021; 45:e13009. [PMID: 34170027] [PMCID: PMC8365760] [DOI: 10.1111/cogs.13009]
Abstract
The investigation of visual categorization has recently been aided by the introduction of deep convolutional neural networks (CNNs), which achieve unprecedented accuracy in picture classification after extensive training. Even though the architecture of CNNs is inspired by the organization of the visual brain, the similarity between CNN and human visual processing remains unclear. Here, we investigated this issue by engaging humans and CNNs in a two-class visual categorization task. To this end, pictures containing animals or vehicles were modified to contain only low or high spatial frequency (HSF) information, or were scrambled in the phase of the spatial frequency spectrum. For all types of degradation, accuracy increased as degradation was reduced for both humans and CNNs; however, the thresholds for accurate categorization varied between humans and CNNs. More pronounced differences were observed for HSF information than for the other two types of degradation, both in terms of overall accuracy and image-level agreement between humans and CNNs. The difficulty CNNs showed in categorizing high-pass-filtered natural scenes was reduced by picture whitening, a procedure inspired by how visual systems process natural images. The results are discussed in terms of adaptation to regularities in the visual environment (scene statistics): if the visual characteristics of the environment are not learned by CNNs, their visual categorization may depend on only a subset of the visual information on which humans rely, for example, low spatial frequency information.
Affiliation(s)
- Marco Lippi
  - Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia
27. Schottdorf M, Lee BB. A quantitative description of macaque ganglion cell responses to natural scenes: the interplay of time and space. J Physiol 2021; 599:3169-3193. [PMID: 33913164] [DOI: 10.1113/jp281200]
Abstract
KEY POINTS
- Responses to natural scenes are the business of the retina. We find primate ganglion cell responses to such scenes consistent with those to simpler stimuli.
- A biophysical model confirmed this and predicted ganglion cell responses with close to retinal reliability.
- Primate ganglion cell responses to natural scenes were driven by temporal variations in colour and luminance over the receptive field centre caused by eye movements, and little influenced by interaction of centre and surround with structure in the scene.
- We discuss implications in the context of efficient coding of the visual environment. Much information in a higher spatiotemporal frequency band is concentrated in the magnocellular pathway.
ABSTRACT
Responses of visual neurons to natural scenes provide a link between classical descriptions of receptive field structure and visual perception of the natural environment. A natural scene video with a movement pattern resembling that of primate eye movements was used to evoke responses from macaque ganglion cells. Cell responses were well described through known properties of cell receptive fields. Different analyses converge to show that responses primarily derive from the temporal pattern of stimulation caused by eye movements, rather than spatial receptive field structure beyond centre size and position. This was confirmed using a model that predicted ganglion cell responses close to retinal reliability, with only a small contribution of the surround relative to the centre. We also found that the spatiotemporal spectrum of the stimulus is modified in ganglion cell responses, and this can reduce redundancy in the retinal signal. This is more pronounced in the magnocellular pathway, which is much better suited to transmit the detailed structure of natural scenes than the parvocellular pathway. Whitening is less important for chromatic channels. Taken together, this shows how a complex interplay across space, time and spectral content sculpts ganglion cell responses.
Affiliation(s)
- Manuel Schottdorf
  - Max Planck Institute for Dynamics and Self-Organization, Göttingen, D-37077, Germany
  - Max Planck Institute of Experimental Medicine, Göttingen, D-37075, Germany
  - Princeton Neuroscience Institute, Princeton, NJ, 08544, USA
- Barry B Lee
  - Graduate Center for Vision Research, Department of Biological Sciences, SUNY College of Optometry, 33 West 42nd St., New York, NY, 10036, USA
  - Department of Neurobiology, Max Planck Institute for Biophysical Chemistry, Göttingen, D-37077, Germany
28. Segmenting surface boundaries using luminance cues. Sci Rep 2021; 11:10074. [PMID: 33980899] [PMCID: PMC8115076] [DOI: 10.1038/s41598-021-89277-2]
Abstract
Segmenting scenes into distinct surfaces is a basic visual perception task, and luminance differences between adjacent surfaces often provide an important segmentation cue. However, mean luminance differences between two surfaces may arise without any sharp change in albedo at their boundary, instead reflecting differences in the proportion of small light and dark areas within each surface (e.g., texture elements), which we refer to as a luminance texture boundary. Here we investigate the performance of human observers segmenting luminance texture boundaries. We demonstrate that a simple model involving a single stage of filtering cannot explain observer performance unless it incorporates contrast normalization. Performing additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model, even with contrast normalization, cannot explain performance. We then present a Filter-Rectify-Filter model positing two cascaded stages of filtering, which fits our data well and explains observers' ability to segment luminance texture boundary stimuli in the presence of interfering luminance step boundaries. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges which do not correspond to surface boundaries.
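As a rough illustration of the two-stage idea described in this abstract (not the authors' fitted model: the filter sizes, spatial frequencies, and the use of full-wave rectification below are illustrative assumptions), a Filter-Rectify-Filter cascade can be sketched as:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, freq, theta, sigma):
    """Odd-symmetric Gabor filter; all parameters are illustrative."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.sin(2.0 * np.pi * freq * xr)

def frf_response(image, theta=0.0):
    """Filter-Rectify-Filter cascade:
    1) a fine-scale filter extracts local texture structure,
    2) full-wave rectification converts contrast modulation into a
       difference in mean response level,
    3) a coarse-scale filter detects the boundary in the rectified map."""
    stage1 = gabor(15, freq=0.25, theta=theta, sigma=3.0)   # fine scale
    stage2 = gabor(63, freq=0.02, theta=theta, sigma=16.0)  # coarse scale
    rectified = np.abs(fftconvolve(image, stage1, mode="same"))
    return fftconvolve(rectified, stage2, mode="same")
```

Because the rectified first-stage output has a higher mean on the high-contrast side of a texture boundary, the coarse second-stage filter responds to the boundary even when mean luminance is identical on both sides, which a single stage of linear filtering cannot do.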
29. Jang J, Song M, Paik SB. Retino-Cortical Mapping Ratio Predicts Columnar and Salt-and-Pepper Organization in Mammalian Visual Cortex. Cell Rep 2020; 30:3270-3279.e3. [PMID: 32160536] [DOI: 10.1016/j.celrep.2020.02.038]
Abstract
In the mammalian primary visual cortex, neural tuning to stimulus orientation is organized in either columnar or salt-and-pepper patterns across species. For decades, this sharp contrast has spawned fundamental questions about the origin of functional architectures in visual cortex. However, it is unknown whether these patterns reflect disparate developmental mechanisms across mammalian taxa or simply originate from variation of biological parameters under a universal development process. In this work, after the analysis of data from eight mammalian species, we show that cortical organization is predictable by a single factor, the retino-cortical mapping ratio. Groups of species with or without columnar clustering are distinguished by the feedforward sampling ratio, and model simulations with controlled mapping conditions reproduce both types of organization. Prediction from the Nyquist theorem explains this parametric division of the patterns with high accuracy. Our results imply that evolutionary variation of physical parameters may induce development of distinct functional circuitry.
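The Nyquist-style criterion described in this abstract can be caricatured in a few lines. In this sketch, the function names and the example numbers in the usage note are invented for illustration and are not taken from the study:

```python
def cortical_samples_per_mosaic_period(rgc_spacing_deg,
                                       magnification_mm_per_deg,
                                       v1_cell_spacing_mm):
    """How many cortical cells sample one period of the retinal
    ganglion cell mosaic after retino-cortical magnification."""
    mosaic_period_on_cortex_mm = rgc_spacing_deg * magnification_mm_per_deg
    return mosaic_period_on_cortex_mm / v1_cell_spacing_mm

def predicts_columnar(rgc_spacing_deg, magnification_mm_per_deg,
                      v1_cell_spacing_mm):
    """Nyquist-style criterion: a smooth (columnar) map requires more
    than two cortical samples per mosaic period; coarser sampling
    leaves the feedforward map undersampled (salt-and-pepper)."""
    ratio = cortical_samples_per_mosaic_period(
        rgc_spacing_deg, magnification_mm_per_deg, v1_cell_spacing_mm)
    return ratio > 2.0
```

For example, with a high cortical magnification (many cortical cells per retinal sample) `predicts_columnar` returns `True`, and with a low magnification it returns `False`, mirroring the parametric division of species described above.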
Affiliation(s)
- Jaeson Jang
  - Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Min Song
  - Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Se-Bum Paik
  - Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
  - Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
30. A Data-Driven and Biologically Inspired Preprocessing Scheme to Improve Visual Object Recognition. Comput Intell Neurosci 2021. [DOI: 10.1155/2021/6699335]
Abstract
Autonomous object recognition in images is one of the most critical topics in security and commercial applications. Due to recent advances in visual neuroscience, researchers tend to extend biologically plausible schemes to improve the accuracy of object recognition. Preprocessing is one part of the visual recognition system that has received much less attention. In this paper, we propose a new, simple, and biologically inspired preprocessing technique that uses the data-driven mechanism of visual attention. First, the responses of Retinal Ganglion Cells (RGCs) are simulated and an efficient threshold is selected from them. The points of the raw image carrying the most information are then extracted according to this threshold, new images are created from these points, and, by combining these images with entropy coefficients, the most salient object is located. After extraction of appropriate features, the classifier categorizes the initial image into one of the predefined object categories. Our system was evaluated on the Caltech-101 dataset. Experimental results demonstrate the efficacy and effectiveness of this novel preprocessing method.
31. Solomon SG. Retinal ganglion cells and the magnocellular, parvocellular, and koniocellular subcortical visual pathways from the eye to the brain. Handb Clin Neurol 2021; 178:31-50. [PMID: 33832683] [DOI: 10.1016/b978-0-12-821377-3.00018-0]
Abstract
In primates including humans, most retinal ganglion cells send signals to the lateral geniculate nucleus (LGN) of the thalamus. The anatomical and functional properties of the two major pathways through the LGN, the parvocellular (P) and magnocellular (M) pathways, are now well understood. Neurones in these pathways appear to convey a filtered version of the retinal image to primary visual cortex for further analysis. The properties of the P-pathway suggest it is important for high spatial acuity and red-green color vision, while those of the M-pathway suggest it is important for achromatic visual sensitivity and motion vision. Recent work has sharpened our understanding of how these properties are built in the retina, and described subtle but important nonlinearities that shape the signals that cortex receives. In addition to the P- and M-pathways, other retinal ganglion cells also project to the LGN. These ganglion cells are larger than those in the P- and M-pathways, have different retinal connectivity, and project to distinct regions of the LGN, together forming heterogeneous koniocellular (K) pathways. Recent work has started to reveal the properties of these K-pathways, in the retina and in the LGN. The functional properties of K-pathways are more complex than those in the P- and M-pathways, and the K-pathways are likely to have a distinct contribution to vision. They provide a complementary pathway to the primary visual cortex, but can also send signals directly to extrastriate visual cortex. At the level of the LGN, many neurones in the K-pathways seem to integrate retinal with non-retinal inputs, and some may provide an early site of binocular convergence.
Affiliation(s)
- Samuel G Solomon
  - Department of Experimental Psychology, University College London, London, United Kingdom
32. Song M, Jang J, Kim G, Paik SB. Projection of Orthogonal Tiling from the Retina to the Visual Cortex. Cell Rep 2021; 34:108581. [PMID: 33406438] [DOI: 10.1016/j.celrep.2020.108581]
Abstract
In higher mammals, the primary visual cortex (V1) is organized into diverse tuning maps of visual features. The topography of these maps intersects orthogonally, but it remains unclear how such a systematic relationship can develop. Here, we show that the orthogonal organization already exists in retinal ganglion cell (RGC) mosaics, providing a blueprint of the organization in V1. From analysis of the RGC mosaics data in monkeys and cats, we find that the ON-OFF RGC distance and ON-OFF angle of neighboring RGCs are organized into a topographic tiling across mosaics, analogous to the orthogonal intersection of cortical tuning maps. Our model simulation shows that the ON-OFF distance and angle in RGC mosaics correspondingly initiate ocular dominance/spatial frequency tuning and orientation tuning, resulting in the orthogonal intersection of cortical tuning maps. These findings suggest that the regularly structured ON-OFF patterns mirrored from the retina initiate the uniform representation of combinations of map features over the visual space.
Affiliation(s)
- Min Song
  - Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
  - Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Jaeson Jang
  - Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Gwangsu Kim
  - Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Se-Bum Paik
  - Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
  - Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
33. David E, Beitner J, Võ MLH. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sci 2020; 10:E841. [PMID: 33198116] [PMCID: PMC7696943] [DOI: 10.3390/brainsci10110841]
Abstract
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6 deg. of radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we segregated eye, head, and general gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades to explore, and the verification phase by long fixations and short saccades to analyze. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information concerning how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
Affiliation(s)
- Erwan David
  - Scene Grammar Lab, Department of Psychology, Theodor-W.-Adorno-Platz 6, Johann Wolfgang-Goethe-Universität, 60323 Frankfurt, Germany
34. Masri RA, Grünert U, Martin PR. Analysis of Parvocellular and Magnocellular Visual Pathways in Human Retina. J Neurosci 2020; 40:8132-8148. [PMID: 33009001] [PMCID: PMC7574660] [DOI: 10.1523/jneurosci.1671-20.2020]
Abstract
Two main subcortical pathways serving conscious visual perception are the midget-parvocellular (P) and the parasol-magnocellular (M) pathways. It is generally accepted that the P pathway serves red-green color vision, but the relative contribution of P and M pathways to spatial vision is a long-standing and unresolved issue. Here, we mapped the spatial sampling properties of P and M pathways across the human retina. Data were obtained from immunolabeled vertical sections of six postmortem male and female human donor retinas and imaged using high-resolution microscopy. Cone photoreceptors, OFF-midget bipolar cells (P pathway), OFF-diffuse bipolar (DB) types DB3a and DB3b (M pathway), and ganglion cells were counted along the temporal horizontal meridian, taking foveal spatial distortions (postreceptoral displacements) into account. We found that the density of OFF-midget bipolar and OFF-midget ganglion cells can support one-to-one connections to 1.05-mm (3.6°) eccentricity. One-to-one connections of cones to OFF-midget bipolar cells are present to at least 10-mm (35°) eccentricity. The OFF-midget ganglion cell array acuity is well-matched to photopic spatial acuity measures throughout the central 35°, but the OFF-parasol array acuity is well below photopic spatial acuity, supporting the view that the P pathway underlies high-acuity spatial vision. Outside the fovea, array acuity of both OFF-midget and OFF-DB cells exceeds psychophysical measures of photopic spatial acuity. We conclude that parasol and midget pathway bipolar cells deliver high-acuity spatial signals to the inner plexiform layer, but outside the fovea, this spatial resolution is lost at the level of ganglion cells.
SIGNIFICANCE STATEMENT: We make accurate maps of the spatial density and distribution of neurons in the human retina to aid in understanding human spatial vision, interpretation of diagnostic tests, and the implementation of therapies for retinal diseases. Here, we map neurons involved with the midget-parvocellular (P) and parasol-magnocellular (M) pathways through the human retina. We find that P-type bipolar cells outnumber M-type bipolar cells at all eccentricities. We show that cone photoreceptors and P-type pathway bipolar cells are tightly connected throughout the retina, but that spatial resolution is lost at the level of the ganglion cells. Overall, the results support the view that the P pathway is specialized to serve both high-acuity vision and red-green color vision.
Affiliation(s)
- Rania A Masri
  - Faculty of Medicine and Health, Save Sight Institute and Discipline of Clinical Ophthalmology, The University of Sydney, Sydney, New South Wales 2000, Australia
  - Australian Research Council Center of Excellence for Integrative Brain Function, The University of Sydney, Sydney, New South Wales 2000, Australia
- Ulrike Grünert
  - Faculty of Medicine and Health, Save Sight Institute and Discipline of Clinical Ophthalmology, The University of Sydney, Sydney, New South Wales 2000, Australia
  - Australian Research Council Center of Excellence for Integrative Brain Function, The University of Sydney, Sydney, New South Wales 2000, Australia
- Paul R Martin
  - Faculty of Medicine and Health, Save Sight Institute and Discipline of Clinical Ophthalmology, The University of Sydney, Sydney, New South Wales 2000, Australia
  - Australian Research Council Center of Excellence for Integrative Brain Function, The University of Sydney, Sydney, New South Wales 2000, Australia
35. Shoshina II, Sosnina IS, Zelenskiy KA, Karpinskaya VY, Lyakhovetskii VA, Pronin SV. The Contrast Sensitivity of the Visual System in "Dry" Immersion Conditions. Biophysics (Nagoya-shi) 2020. [DOI: 10.1134/s0006350920040211]
36. Liu R, Kwon M. Increased Equivalent Input Noise in Glaucomatous Central Vision: Is it Due to Undersampling of Retinal Ganglion Cells? Invest Ophthalmol Vis Sci 2020; 61:10. [PMID: 32645132] [PMCID: PMC7425734] [DOI: 10.1167/iovs.61.8.10]
Abstract
Purpose: Recent evidence shows that macular damage is common even in early stages of glaucoma. Here we investigated whether contrast sensitivity loss in the central vision of glaucoma patients is due to an increase in equivalent input noise (Neq), a decrease in calculation efficiency, or both. We also examined how retinal undersampling resulting from loss of retinal ganglion cells (RGCs) may affect Neq and calculation efficiency.
Methods: This study included 21 glaucoma patients and 23 age-matched normally sighted individuals. Threshold contrast for orientation discrimination was measured with a sine-wave grating embedded in varying levels of external noise. Data were fitted to the linear amplifier model (LAM) to factor contrast sensitivity into Neq and calculation efficiency. We also correlated macular RGC counts estimated from structural (spectral-domain optical coherence tomography) and functional (standard automated perimetry Swedish interactive thresholding algorithm 10-2) data with either Neq or efficiency. Furthermore, using an analytical and computer-simulation approach, the relative effect of retinal undersampling on Neq and efficiency was evaluated by adding an RGC sampling module to the LAM.
Results: Compared with normal controls, glaucoma patients exhibited a significantly larger Neq without a significant difference in efficiency. Neq was significantly correlated with Pelli-Robson contrast sensitivity and macular RGC counts. The results from analytical derivation and model simulation further demonstrated that Neq can be expressed as a function of internal noise and retinal sampling.
Conclusions: Our results showed that equivalent input noise is significantly elevated in glaucomatous vision, thereby impairing foveal contrast sensitivity. Our findings further elucidate how undersampling at the retinal level may increase equivalent input noise.
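The factoring of contrast sensitivity into Neq and efficiency can be sketched with the textbook form of the linear amplifier model; the exact threshold-energy equation and d' handling below are generic simplifications, not the authors' fitted implementation:

```python
import numpy as np

def lam_threshold_energy(n_ext, n_eq, efficiency, d_prime=1.0):
    """Linear amplifier model (LAM): threshold contrast energy grows
    linearly with external noise power,
        E_t = d'^2 * (N_ext + N_eq) / efficiency."""
    return d_prime**2 * (np.asarray(n_ext, dtype=float) + n_eq) / efficiency

def fit_lam(n_ext, thresholds, d_prime=1.0):
    """Recover equivalent input noise (N_eq) and calculation efficiency
    from a straight-line fit of threshold energy vs. external noise:
    the slope gives efficiency, the x-intercept gives N_eq."""
    slope, intercept = np.polyfit(np.asarray(n_ext, dtype=float),
                                  np.asarray(thresholds, dtype=float), 1)
    efficiency = d_prime**2 / slope
    n_eq = intercept / slope
    return n_eq, efficiency
```

Under this model, an elevated Neq shifts the whole threshold-vs-noise line upward at low external noise while leaving its slope (efficiency) unchanged, which is the pattern reported for the glaucoma group.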
Affiliation(s)
- Rong Liu
  - Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- MiYoung Kwon
  - Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
37. Soto F, Hsiang JC, Rajagopal R, Piggott K, Harocopos GJ, Couch SM, Custer P, Morgan JL, Kerschensteiner D. Efficient Coding by Midget and Parasol Ganglion Cells in the Human Retina. Neuron 2020; 107:656-666.e5. [PMID: 32533915] [DOI: 10.1016/j.neuron.2020.05.030]
Abstract
In humans, midget and parasol ganglion cells account for most of the input from the eyes to the brain. Yet, how they encode visual information is unknown. Here, we perform large-scale multi-electrode array recordings from retinas of treatment-naive patients who underwent enucleation surgery for choroidal malignant melanomas. We identify robust differences in the function of midget and parasol ganglion cells, consistent asymmetries between their ON and OFF types (that signal light increments and decrements, respectively) and divergence in the function of human versus non-human primate retinas. Our computational analyses reveal that the receptive fields of human midget and parasol ganglion cells divide naturalistic movies into adjacent spatiotemporal frequency domains with equal stimulus power, while the asymmetric response functions of their ON and OFF types simultaneously maximize stimulus coverage and information transmission and minimize metabolic cost. Thus, midget and parasol ganglion cells in the human retina efficiently encode our visual environment.
Affiliation(s)
- Florentina Soto
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Jen-Chun Hsiang
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
  - Graduate Program in Neuroscience, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Rithwick Rajagopal
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Kisha Piggott
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- George J Harocopos
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Steven M Couch
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Philip Custer
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Josh L Morgan
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Daniel Kerschensteiner
  - John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
  - Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO 63110, USA
  - Department of Biomedical Engineering, Washington University School of Medicine, Saint Louis, MO 63110, USA
  - Hope Center for Neurological Disorders, Washington University School of Medicine, Saint Louis, MO 63110, USA
38
|
Abstract
To model the responses of neurons in the early visual system, at least three basic components are required: a receptive field, a normalization term, and a specification of encoding noise. Here, we examine how the receptive field, the normalization factor, and the encoding noise affect the drive to model-neuron responses when stimulated with natural images. We show that when these components are modeled appropriately, the response drives elicited by natural stimuli are Gaussian-distributed and scale invariant, and very nearly maximize the sensitivity (d') for natural-image discrimination. We discuss the statistical models of natural stimuli that can account for these response statistics, and we show how some commonly used modeling practices may distort these results. Finally, we show that normalization can equalize important properties of neural response across different stimulus types. Specifically, narrowband (stimulus- and feature-specific) normalization causes model neurons to yield Gaussian response-drive statistics when stimulated with natural stimuli, 1/f noise stimuli, and white-noise stimuli. The current work makes recommendations for best practices and lays a foundation, grounded in the response statistics to natural stimuli, upon which to build principled models of more complex visual tasks.
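The three components named in this abstract (a receptive field, a normalization term, and encoding noise) can be sketched as a toy model neuron. This is an illustrative sketch only, not the paper's implementation; the function name, the stimulus values, and the constant `sigma` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_neuron_response(stimulus, rf, norm_pool, sigma=0.1, noise_sd=0.05):
    """Toy early-vision neuron: linear receptive-field drive, divisive
    normalization, and additive Gaussian encoding noise (all illustrative)."""
    drive = float(np.sum(rf * stimulus))  # receptive-field dot product
    norm = float(np.sqrt(sigma**2 + np.mean(norm_pool * stimulus**2)))  # normalization factor
    return drive / norm + rng.normal(0.0, noise_sd)  # normalized drive plus encoding noise

# A tiny 1-D "image" patch and a center-surround receptive field
stimulus = np.array([0.2, 0.8, 1.0, 0.8, 0.2])
rf = np.array([-0.5, 0.5, 1.0, 0.5, -0.5])
norm_pool = np.ones_like(stimulus)
r = model_neuron_response(stimulus, rf, norm_pool)
print(r)
```

With stronger stimulus energy the normalization factor grows, so the response scales sublinearly with contrast, which is the property the abstract's "narrowband normalization" discussion builds on.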
Affiliation(s)
- Arvind Iyer: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Johannes Burge: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA; Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA; Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, PA, USA

39
Horwitz GD. Temporal information loss in the macaque early visual system. PLoS Biol 2020; 18:e3000570. [PMID: 31971946 PMCID: PMC6977937 DOI: 10.1371/journal.pbio.3000570] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2019] [Accepted: 12/05/2019] [Indexed: 01/09/2023] Open
Abstract
Stimuli that modulate neuronal activity are not always detectable, indicating a loss of information between the modulated neurons and perception. To identify where in the macaque visual system information about periodic light modulations is lost, signal-to-noise ratios were compared across simulated cone photoreceptors, lateral geniculate nucleus (LGN) neurons, and perceptual judgements. Stimuli were drifting, threshold-contrast Gabor patterns on a photopic background. The sensitivity of LGN neurons, extrapolated to populations, was similar to the monkeys' at low temporal frequencies. At high temporal frequencies, LGN sensitivity exceeded the monkeys' and approached the upper bound set by cone photocurrents. These results confirm a loss of high-frequency information downstream of the LGN. However, this loss accounted for only about 5% of the total. Phototransduction accounted for essentially all of the rest. Together, these results show that low temporal frequency information is lost primarily between the cones and the LGN, whereas high-frequency information is lost primarily within the cones, with a small additional loss downstream of the LGN.
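The signal-to-noise comparison this abstract describes can be illustrated with a standard d' computation on simulated spike counts. The Poisson rates and trial counts below are made-up numbers for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def dprime(stim_responses, blank_responses):
    """Signal-to-noise ratio (d'): mean response difference divided by the
    pooled standard deviation across stimulus-present and blank trials."""
    mu_s, mu_b = np.mean(stim_responses), np.mean(blank_responses)
    pooled_sd = np.sqrt(0.5 * (np.var(stim_responses) + np.var(blank_responses)))
    return float((mu_s - mu_b) / pooled_sd)

# Simulated spike counts for stimulus-present vs. blank trials (illustrative rates)
stim = rng.poisson(12.0, 500)
blank = rng.poisson(10.0, 500)
d = dprime(stim, blank)
print(d)
```

Comparing such d' values stage by stage (cone photocurrents, LGN populations, behavior) is what localizes where information is lost.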
Affiliation(s)
- Gregory D. Horwitz: Department of Physiology and Biophysics, Washington National Primate Research Center, University of Washington, Seattle, Washington, United States of America

40
Lieber JD, Bensmaia SJ. Emergence of an Invariant Representation of Texture in Primate Somatosensory Cortex. Cereb Cortex 2019; 30:3228-3239. [PMID: 31813989 PMCID: PMC7197205 DOI: 10.1093/cercor/bhz305] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Revised: 11/08/2019] [Accepted: 11/12/2019] [Indexed: 01/13/2023] Open
Abstract
A major function of sensory processing is to achieve neural representations of objects that are stable across changes in context and perspective. Small changes in exploratory behavior can lead to large changes in signals at the sensory periphery, thus resulting in ambiguous neural representations of objects. Overcoming this ambiguity is a hallmark of human object recognition across sensory modalities. Here, we investigate how the perception of tactile texture remains stable across exploratory movements of the hand, including changes in scanning speed, despite the concomitant changes in afferent responses. To this end, we scanned a wide range of everyday textures across the fingertips of rhesus macaques at multiple speeds and recorded the responses evoked in tactile nerve fibers and somatosensory cortical neurons (from Brodmann areas 3b, 1, and 2). We found that individual cortical neurons exhibit a wider range of speed-sensitivities than do nerve fibers. The resulting representations of speed and texture in cortex are more independent than are their counterparts in the nerve and account for speed-invariant perception of texture. We demonstrate that this separation of speed and texture information is a natural consequence of previously described cortical computations.
Affiliation(s)
- Justin D Lieber: Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA
- Sliman J Bensmaia: Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA

41
2-D Peripheral image quality metrics with different types of multifocal contact lenses. Sci Rep 2019; 9:18487. [PMID: 31811185 PMCID: PMC6898319 DOI: 10.1038/s41598-019-54783-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2019] [Accepted: 11/18/2019] [Indexed: 12/19/2022] Open
Abstract
To evaluate the impact of multifocal contact lens wear on image quality metrics across the visual field in the context of eye growth and myopia control, two-dimensional cross-correlation coefficients were estimated by comparing a reference image against the computed retinal image at every location. Retinal images were simulated from the measured optical aberrations of the naked eye and a set of multifocal contact lenses (centre-near and centre-distance designs), and images were spatially filtered to match the resolution limit at each eccentricity. Value maps showing the reduction in image quality under each optical condition were obtained by subtracting the optical image quality from the theoretical physiological limits. Results indicate that multifocal contact lenses degrade image quality independently of their optical design, though this result depends on the type of analysis conducted. Analysis of image quality across the visual field should not be oversimplified to a single number but split into regions and groups, as this provides more insightful information and can avoid misinterpretation of the results. The decay in image quality caused by the multifocal contact lenses alone cannot explain the translation of peripheral defocus into protection against myopia progression, and a different explanation needs to be found.
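The two-dimensional cross-correlation coefficient used as the quality metric here reduces, at zero lag, to the Pearson correlation of the two images. A minimal sketch (image size and noise level are arbitrary stand-ins for an aberrated retinal image):

```python
import numpy as np

def xcorr2d_coeff(ref, img):
    """Normalized 2-D cross-correlation coefficient at zero lag,
    i.e. the Pearson correlation of the two images."""
    ref = ref - ref.mean()
    img = img - img.mean()
    return float(np.sum(ref * img) / np.sqrt(np.sum(ref**2) * np.sum(img**2)))

rng = np.random.default_rng(1)
reference = rng.random((32, 32))
degraded = reference + 0.2 * rng.random((32, 32))  # stand-in for an aberrated retinal image

c_same = xcorr2d_coeff(reference, reference)  # identical images give 1.0
c_deg = xcorr2d_coeff(reference, degraded)    # degraded image gives less than 1.0
print(c_same, c_deg)
```

Mapping this coefficient at every visual-field location is what produces the value maps the abstract describes.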
42
Abstract
In primates and carnivores, the main laminae of the dorsal lateral geniculate nucleus (LGN) receive monocular excitatory input in an eye-alternating fashion. There is also evidence that nondominant eye stimulation can reduce responses to dominant eye stimulation and that a subset of LGN cells in the koniocellular (K) layers receives convergent binocular excitatory input from both eyes. What is not known is how the two eye inputs summate in the K layers of LGN. Here, we aimed to answer this question by making extracellular array electrode recordings targeted to K layers in the marmoset (Callithrix jacchus) LGN, as visual stimuli (flashed 200 ms temporal square-wave pulses or drifting gratings) were presented to each eye independently or to both eyes simultaneously. We found that when the flashed stimulus was presented to both eyes, compared to the dominant eye, the peak firing rate of most cells (61%, 14/23) was reduced. The remainder showed response facilitation (17%) or partial summation (22%). A greater degree of facilitation was seen when the total number of spikes across the stimulus time window (200 ms) rather than peak firing rates was measured. A similar pattern of results was seen for contrast-varying gratings and for small numbers of parvocellular (n = 12) and magnocellular (n = 3) cells recorded. Our findings show that binocular summation in the marmoset LGN is weak and predominantly sublinear in nature.
43
Fujimoto K, Ashida H. Larger Head Displacement to Optic Flow Presented in the Lower Visual Field. Iperception 2019; 10:2041669519886903. [PMID: 31803463 PMCID: PMC6876183 DOI: 10.1177/2041669519886903] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Accepted: 10/14/2019] [Indexed: 11/15/2022] Open
Abstract
Optic flow that simulates self-motion often produces postural adjustment. Although the literature suggests that human postural control depends largely on visual inputs from the lower field of the environment, the effects of the vertical location of optic flow on postural responses are not well investigated. Here, we examined whether optic flow presented in the lower visual field produces stronger responses than optic flow in the upper visual field. Either expanding or contracting optic flow was presented in the upper, lower, or full visual field through an Oculus Rift head-mounted display. Head displacement and vection strength were measured. Results showed larger head displacement under optic flow presentation in the full visual field and the lower visual field than in the upper visual field during the early period of presentation of the contracting optic flow. Vection was strongest in the full visual field and weakest in the upper visual field. Our findings of lower-field superiority in head displacement and vection support the notion that ecologically relevant information plays a particularly important role in human postural control and self-motion perception.
Affiliation(s)
- Kanon Fujimoto: Department of Psychology, Graduate School of Letters, Kyoto University, Japan
- Hiroshi Ashida: Department of Psychology, Graduate School of Letters, Kyoto University, Japan

44
Rucci M, Ahissar E, Burr D. Temporal Coding of Visual Space. Trends Cogn Sci 2019; 22:883-895. [PMID: 30266148 DOI: 10.1016/j.tics.2018.07.009] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Revised: 07/16/2018] [Accepted: 07/16/2018] [Indexed: 11/20/2022]
Abstract
Establishing a representation of space is a major goal of sensory systems. Spatial information, however, is not always explicit in the incoming sensory signals. In most modalities it needs to be actively extracted from cues embedded in the temporal flow of receptor activation. Vision, on the other hand, starts with a sophisticated optical imaging system that explicitly preserves spatial information on the retina. This may lead to the assumption that vision is predominantly a spatial process: all that is needed is to transmit the retinal image to the cortex, like uploading a digital photograph, to establish a spatial map of the world. However, this deceptively simple analogy is inconsistent with theoretical models and experiments that study visual processing in the context of normal motor behavior. We argue here that, as with other senses, vision relies heavily on temporal strategies and temporal neural codes to extract and represent spatial information.
Affiliation(s)
- Michele Rucci: Center for Visual Science, University of Rochester, Rochester, NY 14627, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Ehud Ahissar: Department of Neurobiology, Weizmann Institute, Rehovot, Israel
- David Burr: Department of Neuroscience, University of Florence, Florence 50125, Italy; School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia

45
Patterson SS, Neitz M, Neitz J. Reconciling Color Vision Models With Midget Ganglion Cell Receptive Fields. Front Neurosci 2019; 13:865. [PMID: 31474825 PMCID: PMC6707431 DOI: 10.3389/fnins.2019.00865] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2019] [Accepted: 08/02/2019] [Indexed: 11/13/2022] Open
Abstract
Midget retinal ganglion cells (RGCs) make up the majority of foveal RGCs in the primate retina. The receptive fields of midget RGCs exhibit both spectral and spatial opponency and are implicated in both color and achromatic form vision, yet the exact mechanisms linking their responses to visual perception remain unclear. Efforts to develop color vision models that accurately predict all the features of human color and form vision based on midget RGCs provide a case study connecting experimental and theoretical neuroscience, drawing on diverse research areas such as anatomy, physiology, psychophysics, and computer vision. Recent technological advances have allowed researchers to test some predictions of color vision models in new and precise ways, producing results that challenge traditional views. Here, we review the progress in developing models of color-coding receptive fields that are consistent with human psychophysics, the biology of the primate visual system and the response properties of midget RGCs.
Affiliation(s)
- Sara S Patterson: Department of Ophthalmology, University of Washington, Seattle, WA, United States; Neuroscience Graduate Program, University of Washington, Seattle, WA, United States
- Maureen Neitz: Department of Ophthalmology, University of Washington, Seattle, WA, United States
- Jay Neitz: Department of Ophthalmology, University of Washington, Seattle, WA, United States

46
Kupers ER, Carrasco M, Winawer J. Modeling visual performance differences 'around' the visual field: A computational observer approach. PLoS Comput Biol 2019; 15:e1007063. [PMID: 31125331 PMCID: PMC6553792 DOI: 10.1371/journal.pcbi.1007063] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 06/06/2019] [Accepted: 05/02/2019] [Indexed: 01/25/2023] Open
Abstract
Visual performance depends on polar angle, even when eccentricity is held constant; on many psychophysical tasks observers perform best when stimuli are presented on the horizontal meridian, worst on the upper vertical, and intermediate on the lower vertical meridian. This variation in performance 'around' the visual field can be as pronounced as that of doubling the stimulus eccentricity. The causes of these asymmetries in performance are largely unknown. Some factors in the eye, e.g. cone density, are positively correlated with the reported variations in visual performance with polar angle. However, the question remains whether these correlations can quantitatively explain the perceptual differences observed 'around' the visual field. To investigate the extent to which the earliest stages of vision-optical quality and cone density-contribute to performance differences with polar angle, we created a computational observer model. The model uses the open-source software package ISETBIO to simulate an orientation discrimination task for which visual performance differs with polar angle. The model starts from the photons emitted by a display, which pass through simulated human optics with fixational eye movements, followed by cone isomerizations in the retina. Finally, we classify stimulus orientation using a support vector machine to learn a linear classifier on the photon absorptions. To account for the 30% increase in contrast thresholds for upper vertical compared to horizontal meridian, as observed psychophysically on the same task, our computational observer model would require either an increase of ~7 diopters of defocus or a reduction of 500% in cone density. These values far exceed the actual variations as a function of polar angle observed in human eyes. Therefore, we conclude that these factors in the eye only account for a small fraction of differences in visual performance with polar angle. 
Substantial additional asymmetries must arise in later retinal and/or cortical processing.
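The pipeline this abstract walks through (display photons, simulated optics with fixational eye movements, cone isomerizations, linear SVM read-out via ISETBIO) can be caricatured in a few lines: noisy "absorption" images for two grating orientations, classified with a least-squares linear read-out standing in for the SVM. Every number below (grating period, mean absorption rate, the ±15° orientations) is an assumption for illustration, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def oriented_template(theta_deg, n=16, period=8.0):
    """Tiny oriented grating, a stand-in for the mean cone-absorption image."""
    y, x = np.mgrid[0:n, 0:n]
    theta = np.deg2rad(theta_deg)
    return np.sin(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / period)

def simulate_trials(theta_deg, n_trials=200):
    """Poisson-noisy absorptions around an oriented mean image."""
    base = 10.0 + oriented_template(theta_deg)  # mean absorptions per "cone"
    trials = rng.poisson(base, size=(n_trials,) + base.shape)
    return trials.reshape(n_trials, -1).astype(float)

# Two-class orientation discrimination (+/- 15 deg) with a linear read-out
X = np.vstack([simulate_trials(+15.0), simulate_trials(-15.0)])
y = np.hstack([np.ones(200), -np.ones(200)])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # z-score each cone
w, *_ = np.linalg.lstsq(X, y, rcond=None)          # linear classifier weights
accuracy = float(np.mean(np.sign(X @ w) == y))      # training accuracy, illustration only
print(accuracy)
```

Degrading the front end (more defocus blur, fewer cones) lowers this accuracy, which is the logic the paper uses to ask how much optics and cone density can explain.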
Affiliation(s)
- Eline R. Kupers: Department of Psychology, New York University, New York, New York, United States of America
- Marisa Carrasco: Department of Psychology, New York University, New York, New York, United States of America; Center for Neural Science, New York University, New York, New York, United States of America
- Jonathan Winawer: Department of Psychology, New York University, New York, New York, United States of America; Center for Neural Science, New York University, New York, New York, United States of America

47
Wallis TS, Funke CM, Ecker AS, Gatys LA, Wichmann FA, Bethge M. Image content is more important than Bouma's Law for scene metamers. eLife 2019; 8:42512. [PMID: 31038458 PMCID: PMC6491040 DOI: 10.7554/elife.42512] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Accepted: 03/09/2019] [Indexed: 11/16/2022] Open
Abstract
We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling. As you read this digest, your eyes move to follow the lines of text. But now try to hold your eyes in one position, while reading the text on either side and below: it soon becomes clear that peripheral vision is not as good as we tend to assume. It is not possible to read text far away from the center of your line of vision, but you can see ‘something’ out of the corner of your eye. You can see that there is text there, even if you cannot read it, and you can see where your screen or page ends. So how does the brain generate peripheral vision, and why does it differ from what you see when you look straight ahead? One idea is that the visual system averages information over areas of the peripheral visual field. This gives rise to texture-like patterns, as opposed to images made up of fine details. Imagine looking at an expanse of foliage, gravel or fur, for example. Your eyes cannot make out the individual leaves, pebbles or hairs. 
Instead, you perceive an overall pattern in the form of a texture. Our peripheral vision may also consist of such textures, created when the brain averages information over areas of space. Wallis, Funke et al. have now tested this idea using an existing computer model that averages visual input in this way. By giving the model a series of photographs to process, Wallis, Funke et al. obtained images that should in theory simulate peripheral vision. If the model mimics the mechanisms that generate peripheral vision, then healthy volunteers should be unable to distinguish the processed images from the original photographs. But in fact, the participants could easily discriminate the two sets of images. This suggests that the visual system does not solely use textures to represent information in the peripheral visual field. Wallis, Funke et al. propose that other factors, such as how the visual system separates and groups objects, may instead determine what we see in our peripheral vision. This knowledge could ultimately benefit patients with eye diseases such as macular degeneration, a condition that causes loss of vision in the center of the visual field and forces patients to rely on their peripheral vision.
Affiliation(s)
- Thomas Sa Wallis: Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Christina M Funke: Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Alexander S Ecker: Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Leon A Gatys: Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Felix A Wichmann: Neural Information Processing Group, Faculty of Science, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Matthias Bethge: Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany

48
Gardiner SK. Differences in the Relation Between Perimetric Sensitivity and Variability Between Locations Across the Visual Field. Invest Ophthalmol Vis Sci 2019; 59:3667-3674. [PMID: 30029253 PMCID: PMC6054428 DOI: 10.1167/iovs.18-24303] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Purpose: Perimetric sensitivities become more variable with glaucomatous functional loss. This study examines the extent to which this relation varies between locations, and whether this can be predicted by eccentricity-related differences in spatial summation. Methods: Longitudinal series of visual fields from standard automated perimetry were obtained from participants with suspected or extant glaucoma. For each location in the 24-2 visual field, heterogeneous fixed-effects models were fit to the data, assuming that variability increased exponentially as sensitivity decreased. The predicted variability at each location was calculated when sensitivity was either 30 dB or 25 dB. Results: Variability significantly increased with damage at all 52 locations. When sensitivity was 30 dB, variability increased with eccentricity (P = 0.0003). The average SD was 1.54 dB at the four most central locations, versus 1.74 dB at the most peripheral locations. When sensitivity was 25 dB, variability did not vary predictably with eccentricity (P = 0.340). The average SD was 2.36 dB at the four central locations, versus 2.24 dB at the most peripheral locations. Conclusions: The relation between sensitivity and variability differed by eccentricity. Among healthy locations, variability was lower centrally, where the stimulus size is larger than Ricco's area, than peripherally. Among damaged locations, variability did not vary systematically with eccentricity. This could be because Ricco's area expands in glaucoma, such that stimuli were smaller than this area at all locations.
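The modeling assumption here, that variability grows exponentially as sensitivity falls, is a one-line function. The coefficients below are illustrative assumptions chosen only so the outputs land near the SD ranges reported, not values fit in the study.

```python
import numpy as np

def predicted_sd(sensitivity_db, a=3.5, b=-0.1):
    """Response variability (SD, in dB) modeled as increasing exponentially
    as perimetric sensitivity decreases. Coefficients a and b are
    illustrative assumptions, not fitted values from the paper."""
    return float(np.exp(a + b * sensitivity_db))

sd_healthy = predicted_sd(30.0)  # near-normal sensitivity
sd_damaged = predicted_sd(25.0)  # glaucomatous damage
print(sd_healthy, sd_damaged)
```

A 5 dB sensitivity loss multiplies the predicted SD by exp(0.5), so variability compounds quickly as damage progresses.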
Affiliation(s)
- Stuart K Gardiner: Devers Eye Institute, Legacy Health, Portland, Oregon, United States

49
Martínez-Cañada P, Morillas C, Pelayo F. A Neuronal Network Model of the Primate Visual System: Color Mechanisms in the Retina, LGN and V1. Int J Neural Syst 2019; 29:1850036. [DOI: 10.1142/s0129065718500363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Color plays a key role in human vision but the neural machinery that underlies the transformation from stimulus to perception is not well understood. Here, we implemented a two-dimensional network model of the first stages in the primate parvocellular pathway (retina, lateral geniculate nucleus and layer 4Cβ in V1) consisting of conductance-based point neurons. Model parameters were tuned based on physiological and anatomical data from primate foveal and parafoveal vision, the most relevant visual field areas for color vision. We exhaustively benchmarked the model against well-established chromatic and achromatic visual stimuli, showing spatial and temporal responses of the model to disk- and ring-shaped light flashes, spatially uniform squares and sine-wave gratings of varying spatial frequency. The spatiotemporal patterns of parvocellular cells and cortical cells are consistent with their classification into chromatically single-opponent and double-opponent groups, and nonopponent cells selective for luminance stimuli. The model was implemented in the widely used neural simulation tool NEST and released as open source software. The aim of our modeling is to provide a biologically realistic framework within which a broad range of neuronal interactions can be examined at several different levels, with a focus on understanding how color information is processed.
Affiliation(s)
- Pablo Martínez-Cañada: Department of Computer Architecture and Technology, University of Granada, Granada, Spain; Centro de Investigación en Tecnologías de la Información y de las Comunicaciones (CITIC), University of Granada, Granada, Spain
- Christian Morillas: Department of Computer Architecture and Technology, University of Granada, Granada, Spain; Centro de Investigación en Tecnologías de la Información y de las Comunicaciones (CITIC), University of Granada, Granada, Spain
- Francisco Pelayo: Department of Computer Architecture and Technology, University of Granada, Granada, Spain; Centro de Investigación en Tecnologías de la Información y de las Comunicaciones (CITIC), University of Granada, Granada, Spain

50
Casile A, Victor JD, Rucci M. Contrast sensitivity reveals an oculomotor strategy for temporally encoding space. eLife 2019; 8:40924. [PMID: 30620333 PMCID: PMC6324884 DOI: 10.7554/elife.40924] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Accepted: 12/03/2018] [Indexed: 11/23/2022] Open
Abstract
The contrast sensitivity function (CSF), which describes how sensitivity varies with the frequency of the stimulus, is a fundamental assessment of visual performance. The CSF is generally assumed to be determined by low-level sensory processes. However, the spatial sensitivities of neurons in the early visual pathways, as measured in experiments with immobilized eyes, diverge from psychophysical CSF measurements in primates. Under natural viewing conditions, as in typical psychophysical measurements, humans continually move their eyes even when looking at a fixed point. Here, we show that the resulting transformation of the spatial scene into temporal modulations on the retina constitutes a processing stage that reconciles the human CSF and the response characteristics of retinal ganglion cells under a broad range of conditions. Our findings suggest a fundamental integration between perception and action: eye movements work synergistically with the spatiotemporal sensitivities of retinal neurons to encode spatial information.
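The core idea, that fixational drift maps spatial frequency into temporal frequency on the retina, reduces to a one-line relation for motion perpendicular to a grating. The drift speed below is an assumed round number, not a value from the paper.

```python
import numpy as np

# Rigid retinal image motion at drift speed v (deg/s), perpendicular to the
# bars of a grating of spatial frequency fs (cycles/deg), modulates each
# receptor at temporal frequency ft = v * fs (cycles/s).
def drift_temporal_frequency(spatial_freq_cpd, drift_speed_dps):
    return np.asarray(spatial_freq_cpd) * drift_speed_dps

fs = np.array([0.5, 2.0, 8.0, 16.0])   # spatial frequencies, cycles/deg
v = 0.75                               # assumed ocular-drift speed, deg/s
ft = drift_temporal_frequency(fs, v)   # temporal frequencies at the retina
print(ft)
```

Because high spatial frequencies land at high temporal frequencies, drift preferentially converts fine spatial detail into fast temporal modulations, which is the "temporal encoding of space" the abstract argues for.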
Affiliation(s)
- Antonino Casile: Center for Translational Neurophysiology, Istituto Italiano di Tecnologia, Ferrara, Italy; Center for Neuroscience and Cognitive Systems, Rovereto, Italy; Department of Neurobiology, Harvard Medical School, Boston, United States
- Jonathan D Victor: Brain and Mind Research Institute, Weill Cornell Medical College, New York, United States; Department of Neurology, Weill Cornell Medical College, New York, United States
- Michele Rucci: Brain and Cognitive Sciences, University of Rochester, Rochester, United States; Center for Visual Science, University of Rochester, Rochester, United States