1. Manenti GL, Dizaji AS, Schwiedrzik CM. Variability in training unlocks generalization in visual perceptual learning through invariant representations. Curr Biol 2023; 33:817-826.e3. doi: 10.1016/j.cub.2023.01.011. PMID: 36724782.
Abstract
Stimulus and location specificity have long been considered hallmarks of visual perceptual learning. This renders visual perceptual learning distinct from other forms of learning, where generalization is more easily attained, and unsuitable for practical applications, where generalization is key. Based on hypotheses derived from the structure of the visual system, we test whether stimulus variability can unlock generalization in perceptual learning. We train subjects in orientation discrimination while varying the amount of variability in a task-irrelevant feature, spatial frequency. We find that, independently of task difficulty, this manipulation enables generalization of learning to new stimuli and locations without reducing the overall amount of learning on the task. We then use deep neural networks to investigate how variability unlocks generalization. Networks trained with variable inputs develop invariance to the task-irrelevant feature, and the degree of learned invariance strongly predicts generalization. A reliance on invariant representations can thus explain variability-induced generalization in visual perceptual learning. This suggests new targets for understanding the neural basis of perceptual learning in higher-order visual cortex and presents an easy-to-implement modification of common training paradigms that may benefit practical applications.
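The abstract's key quantity, the degree of learned invariance to a task-irrelevant feature, can be operationalized in several ways. The following is a minimal sketch of one plausible operationalization, not the authors' analysis: invariance is scored as the mean cosine similarity of simulated unit activations to the same orientation across spatial-frequency variants. Array shapes and noise levels are illustrative assumptions.

```python
import numpy as np

def invariance_index(acts):
    """acts: activations with shape [n_orientations, n_sf_variants, n_units].
    Returns the mean cosine similarity between population responses to the
    same orientation shown at different spatial frequencies (1 = fully
    invariant to the task-irrelevant feature)."""
    n_ori, n_sf, _ = acts.shape
    sims = []
    for o in range(n_ori):
        for i in range(n_sf):
            for j in range(i + 1, n_sf):
                a, b = acts[o, i], acts[o, j]
                sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return float(np.mean(sims))

# Toy comparison (assumed data): "variable training" yields responses dominated
# by a shared orientation code plus small SF-dependent perturbations, whereas
# "fixed training" yields responses dominated by SF-specific components.
rng = np.random.default_rng(0)
orientation_code = rng.standard_normal((8, 1, 64))
variable_training = orientation_code + 0.2 * rng.standard_normal((8, 4, 64))
fixed_training = 0.3 * orientation_code + rng.standard_normal((8, 4, 64))
print(invariance_index(variable_training))   # close to 1: invariant code
print(invariance_index(fixed_training))      # much lower: SF-specific code
```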
2. Du Y, Zhang G, Li W, Zhang E. Many Roads Lead to Rome: Differential Learning Processes for the Same Perceptual Improvement. Psychol Sci 2023; 34:313-325. doi: 10.1177/09567976221134481. PMID: 36473146.
Abstract
Repeatedly exercising a perceptual ability usually leads to improvement, yet it is unclear whether the mechanisms supporting the same perceptual learning could be flexibly adjusted according to the training settings. Here, we trained adult observers in an orientation-discrimination task at either a single (focused) retinal location or multiple (distributed) retinal locations. We examined the observers' discriminability (N = 52) and bias (N = 20) in orientation perception at the trained and untrained locations. The focused and distributed training enhanced orientation discriminability by the same amount and induced a bias in perceived orientation at the trained locations. Nevertheless, the distributed training promoted location generalization of both practice effects, whereas the focused training resulted in specificity. The two training tactics also differed in long-term retention of the training effects. Our results suggest that, depending on the training settings of the same task, the same discrimination learning could differentially engage location-specific and location-invariant representations of the learned stimulus feature.
3. Rummens K, Sayim B. Multidimensional feature interactions in visual crowding: When configural cues eliminate the polarity advantage. J Vis 2022; 22:2. doi: 10.1167/jov.22.6.2. PMID: 35503508; PMCID: PMC9078080.
Abstract
Crowding occurs when surrounding objects (flankers) impair target perception. A key property of crowding is weaker interference when target and flankers differ strongly on a given dimension. For instance, identification of a target letter is usually superior with flankers of opposite rather than the same contrast polarity as the target (the "polarity advantage"). High performance when target-flanker similarity is low has been attributed to the ungrouping of target and flankers. Here, we show that configural cues can override the usual advantage of low target-flanker similarity, and that strong target-flanker grouping can reduce, rather than exacerbate, crowding. In Experiment 1, observers were presented with line triplets in the periphery and reported the tilt (left or right) of the central line. Target and flankers had the same (uniform condition) or opposite contrast polarity (alternating condition). Flanker configurations were either upright (||), unidirectionally tilted (\\ or //), or bidirectionally tilted (\/ or /\). Upright flankers yielded stronger crowding than unidirectional flankers, and weaker crowding than bidirectional flankers. Importantly, our results revealed a clear interaction between contrast polarity and flanker configuration. Triplets with upright and bidirectional flankers, but not unidirectional flankers, showed the polarity advantage. In Experiments 2 and 3, we showed that emergent features and redundancy masking (i.e., the reduction of the number of perceived items in repeating configurations) made it easier to discriminate between uniform triplets when flanker tilts were unidirectional (but not when bidirectional). We propose that the spatial configurations of uniform triplets with unidirectional flankers provided sufficient task-relevant information to enable performance similar to that with alternating triplets: strong target-flanker grouping alleviated crowding. We suggest that features which modulate crowding strength can interact non-additively, limiting the validity of typical crowding rules to contexts where only single, independent dimensions determine the effects of target-flanker similarity.
4. Lee RJ, Reuther J, Chakravarthi R, Martinovic J. Emergence of crowding: The role of contrast and orientation salience. J Vis 2021; 21:20. doi: 10.1167/jov.21.11.20. PMID: 34709355; PMCID: PMC8556554.
Abstract
Crowding causes difficulties in judging attributes of an object surrounded by other objects. We investigated crowding for stimuli that isolated either S-cone or luminance mechanisms or combined them. By targeting different retinogeniculate mechanisms with contrast-matched stimuli, we aimed to determine the earliest site at which crowding emerges. Discrimination was measured in an orientation judgment task in which Gabor targets were presented parafoveally among flankers. In the first experiment, we assessed flanked and unflanked orientation discrimination thresholds for pure S-cone and achromatic stimuli and their combinations. In the second experiment, to capture individual differences, we measured unflanked detection and orientation sensitivity, along with performance under flanker interference, for stimuli containing luminance only or combined with S-cone contrast. We confirmed that orientation sensitivity was lower for unflanked S-cone stimuli. When flanked, the pattern of results for S-cone stimuli was the same as for achromatic stimuli with comparable (i.e., low) contrast levels. We also found that flanker interference exhibited a genuine signature of crowding only when the orientation discrimination threshold was reliably surpassed. Crowding, therefore, emerges at a stage that operates on signals representing task-relevant featural (here, orientation) information. Because luminance and S-cone mechanisms have very different spatial tuning properties, it is most parsimonious to conclude that crowding takes place at a neural processing stage after they have been combined.
5. Lyamzin DR, Aoki R, Abdolrahmani M, Benucci A. Probabilistic discrimination of relative stimulus features in mice. Proc Natl Acad Sci U S A 2021; 118:e2103952118. doi: 10.1073/pnas.2103952118. PMID: 34301903; PMCID: PMC8325293.
Abstract
During perceptual decision-making, the brain encodes the upcoming decision and the stimulus information in a mixed representation. Paradigms suitable for studying decision computations in isolation rely on stimulus comparisons, with choices depending on relative rather than absolute properties of the stimuli. The adoption of tasks requiring relative perceptual judgments in mice would be advantageous in view of the powerful tools available for the dissection of brain circuits. However, whether and how mice can perform a relative visual discrimination task has not yet been fully established. Here, we show that mice can solve a complex orientation discrimination task in which the choices are decoupled from the orientation of individual stimuli. Moreover, we demonstrate a typical discrimination acuity of 9°, challenging the common belief that mice are poor visual discriminators. We reached these conclusions by introducing a probabilistic choice model that explained behavioral strategies in 40 mice and demonstrated that the circularity of the stimulus space is an additional source of choice variability for trials with fixed difficulty. Furthermore, history biases in the model changed with task engagement, demonstrating behavioral sensitivity to the availability of cognitive resources. In conclusion, our results reveal that mice adopt a diverse set of strategies in a task that decouples decision-relevant information from stimulus-specific information, thus demonstrating their usefulness as an animal model for studying neural representations of relative categories in perceptual decision-making research.
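A hedged sketch of the kind of probabilistic choice model described here, not the authors' fitted model: the choice depends on the circular difference between the two orientations plus bias and history terms, so decisions are decoupled from the individual stimulus orientations. The functional form and parameter values are assumptions for illustration.

```python
import numpy as np

def circ_diff(a, b, period=180.0):
    """Signed circular difference between two orientations (deg)."""
    return (a - b + period / 2) % period - period / 2

def p_choose_first(theta1, theta2, sensitivity=0.3, bias=0.0, history=0.0):
    """Logistic choice model: the decision variable depends only on the
    relative orientation, plus a fixed bias and a trial-history term."""
    dv = sensitivity * circ_diff(theta1, theta2) + bias + history
    return 1.0 / (1.0 + np.exp(-dv))

# The same nominal difficulty (|difference| = 9 deg) can arise from stimulus
# pairs anywhere on the orientation circle; wrap-around near the circle's
# "seam" is one extra source of choice variability at fixed difficulty.
print(p_choose_first(40, 31))    # difference = +9 deg -> high probability of "first"
print(p_choose_first(175, 4))    # difference = -9 deg across the 0/180 wrap -> low probability
```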
6. Sriram B, Li L, Cruz-Martín A, Ghosh A. A Sparse Probabilistic Code Underlies the Limits of Behavioral Discrimination. Cereb Cortex 2021; 30:1040-1055. doi: 10.1093/cercor/bhz147. PMID: 31403676; PMCID: PMC7132908.
Abstract
The cortical code that underlies perception must enable subjects to perceive the world at time scales relevant for behavior. We find that mice can integrate visual stimuli very quickly (<100 ms) to reach plateau performance in an orientation discrimination task. To define features of cortical activity that underlie performance at these time scales, we measured single-unit responses in the mouse visual cortex at time scales relevant to this task. In contrast to high-contrast stimuli of longer duration, which elicit reliable activity in individual neurons, stimuli at the threshold of perception elicit extremely sparse and unreliable responses in the primary visual cortex such that the activity of individual neurons does not reliably report orientation. Integrating information across neurons, however, quickly improves performance. Using a linear decoding model, we estimate that integrating information over 50–100 neurons is sufficient to account for behavioral performance. Thus, at the limits of visual perception, the visual system integrates information encoded in the probabilistic firing of unreliable single units to generate reliable behavior.
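The pooling claim (50-100 neurons suffice) can be illustrated with a toy linear-decoder simulation. This is a hedged sketch rather than the authors' analysis; the per-neuron signal and noise values are assumptions chosen only to show how accuracy grows with pool size.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000
signal = 0.1      # assumed mean response difference per neuron between the two orientations
noise_sd = 1.0    # assumed trial-to-trial variability of each unit

def pooled_accuracy(n_pool):
    """Accuracy of a linear (summing) decoder reading out n_pool unreliable units."""
    resp_a = rng.normal(+signal, noise_sd, size=(n_trials, n_pool))  # orientation A trials
    resp_b = rng.normal(-signal, noise_sd, size=(n_trials, n_pool))  # orientation B trials
    dv_a = resp_a.sum(axis=1)   # decision variable = unweighted sum across the pool
    dv_b = resp_b.sum(axis=1)
    return ((dv_a > 0).mean() + (dv_b < 0).mean()) / 2

for n_pool in (1, 10, 50, 100):
    print(f"{n_pool:4d} neurons pooled -> proportion correct {pooled_accuracy(n_pool):.2f}")
```

With these assumed values, single units perform near chance while pools of 50-100 units approach reliable performance, mirroring the qualitative argument in the abstract.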
7. Ma H, Li P, Hu J, Cai X, Song Q, Lu HD. Processing of motion boundary orientation in macaque V2. eLife 2021; 10:e61317. doi: 10.7554/eLife.61317. PMID: 33759760; PMCID: PMC8026216.
Abstract
Human and nonhuman primates are good at identifying an object based on its motion, a task believed to be carried out by the ventral visual pathway. However, the neural mechanisms underlying this ability remain unclear. We trained macaque monkeys to perform orientation discrimination for motion boundaries (MBs) and recorded neuronal responses in area V2 with microelectrode arrays. We found that 10.9% of V2 neurons exhibited robust orientation selectivity to MBs, and their responses correlated with the monkeys' orientation-discrimination performance. Furthermore, simultaneously recorded V2 direction-selective neurons showed activity correlated with that of MB neurons for particular MB stimuli, suggesting that these motion-sensitive neurons made specific functional contributions to MB discrimination. Our findings support the view that V2 plays a critical role in MB analysis and may achieve this through a neural circuit within area V2.
8. Profiles on the Orientation Discrimination Processing of Human Faces. Int J Environ Res Public Health 2020; 17:5772. doi: 10.3390/ijerph17165772. PMID: 32785010; PMCID: PMC7460380.
Abstract
Face recognition is a crucial topic for public health, as socialization is a core component of full participation in society. Good recognizers are distinguished not only by the number of faces they correctly identify but also by the number of stimuli they correctly reject as unfamiliar. In face recognition, the position of the face should, to some extent, not entail a high cognitive cost, unlike other processes carried out in similar brain areas. The aim of this paper was to examine participants' recognition profiles according to face position. A recognition task was carried out using the Karolinska Directed Emotional Faces, with reaction times and accuracy as dependent variables, and a cluster analysis was performed. Two performance profiles were identified; they differed across face positions in reaction times but not in accuracy. The results suggest, first, that it is possible to identify performance profiles in visual face recognition that differ by position in reaction times but not accuracy, and second, a bias towards the left. At the applied level, this could be of interest for designing training programs in face recognition.
9. V1 neurons encode the perceptual compensation of false torsion arising from Listing's law. Proc Natl Acad Sci U S A 2020; 117:18799-18809. doi: 10.1073/pnas.2007644117. PMID: 32680968.
Abstract
We try to deploy the retinal fovea to optimally scrutinize an object of interest by directing our eyes to it. The horizontal and vertical components of the eye positions acquired by goal-directed saccades are determined by the object's location. However, eccentric eye positions also involve a torsional component, which according to Donders' law is fully determined by the two-dimensional (2D) eye position acquired. According to von Helmholtz, knowledge of the amount of torsion provided by Listing's law, an extension of Donders' law, facilitates the perceptual interpretation of the image tilt that changes with 2D eye position, a view supported by the psychophysical experiments he pioneered. We address the question of where and how Listing's law is implemented in the visual system and show that neurons in monkey area V1 use knowledge of eye torsion to compensate for the image tilt associated with specific eye positions as set by Listing's law.
10. Dosher BA, Liu J, Chu W, Lu ZL. Roving: The causes of interference and re-enabled learning in multi-task visual training. J Vis 2020; 20:9. doi: 10.1167/jov.20.6.9. PMID: 32543649; PMCID: PMC7416889.
Abstract
People routinely perform multiple visual judgments in the real world, yet intermixing tasks or task variants during training can damage or even prevent learning. This paper explores why. We challenged theories of visual perceptual learning focused on plastic retuning of low-level retinotopic cortical representations by placing different task variants in different retinal locations, and tested theories of perceptual learning through reweighting (changes in readout) by varying task similarity. Discriminating different (but equivalent) or similar orientations in separate retinal locations interfered with learning, whereas training either with identical orientations or with sufficiently different ones in different locations re-enabled rapid learning. This location crosstalk during learning renders it unlikely that the primary substrate of learning is retuning in early retinotopic visual areas; instead, learning likely involves reweighting from location-independent representations to a decision. We developed an Integrated Reweighting Theory (IRT), which has both V1-like location-specific representations and higher-level (V4/IT or higher) location-invariant representations and learns via reweighting the readout to a decision unit, to predict the order of learning rates in different conditions. With suitable parameters, this model successfully fit the behavioral data, as well as some microstructure of learning performance in a new trial-by-trial analysis.
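A minimal sketch of the reweighting idea behind the IRT, not the authors' implementation: two banks of noisy orientation channels, one location-specific and one location-invariant, feed a decision variable whose readout weights are learned with a delta rule. Training at one location also updates the shared weights, which is what supports transfer. Tuning widths, noise levels, and the learning rate are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_locations, lr = 20, 2, 0.05            # assumed model sizes / learning rate

def channel_response(theta, noise=0.3):
    """Noisy population response of orientation channels tuned across 0-180 deg."""
    prefs = np.linspace(0, 180, n_channels, endpoint=False)
    diff = (theta - prefs + 90) % 180 - 90            # circular orientation difference
    return np.exp(-0.5 * (diff / 15) ** 2) + noise * rng.standard_normal(n_channels)

w_loc = np.zeros((n_locations, n_channels))           # location-specific readout weights
w_inv = np.zeros(n_channels)                          # location-invariant readout weights

def train_trial(theta, loc, label):
    r = channel_response(theta)
    decision = w_loc[loc] @ r + w_inv @ r             # pooled decision variable
    err = label - np.tanh(decision)                   # delta-rule error signal
    w_loc[loc] += lr * err * r                        # retinotopic reweighting
    w_inv += lr * err * r                             # shared reweighting -> transfer

def accuracy(loc, n=1000):
    hits = 0
    for _ in range(n):
        tilt = rng.choice([-5.0, 5.0])
        r = channel_response(55 + tilt)
        hits += np.sign(w_loc[loc] @ r + w_inv @ r) == np.sign(tilt)
    return hits / n

for _ in range(2000):                                 # train only at location 0
    tilt = rng.choice([-5.0, 5.0])
    train_trial(55 + tilt, loc=0, label=np.sign(tilt))

print("trained location:", accuracy(0), "untrained location:", accuracy(1))
```

Because the shared weights receive the same updates as the location-specific ones, performance at the untrained location rises above chance in this toy version, illustrating how readout reweighting rather than early retuning can produce transfer.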
11. Benvenuti G, Chen Y, Ramakrishnan C, Deisseroth K, Geisler WS, Seidemann E. Scale-Invariant Visual Capabilities Explained by Topographic Representations of Luminance and Texture in Primate V1. Neuron 2018; 100:1504-1512.e4. doi: 10.1016/j.neuron.2018.10.020. PMID: 30392796.
Abstract
Humans have remarkable scale-invariant visual capabilities. For example, our orientation discrimination sensitivity is largely constant over more than two orders of magnitude of variations in stimulus spatial frequency (SF). Orientation-selective V1 neurons are likely to contribute to orientation discrimination. However, because at any V1 location neurons have a limited range of receptive field (RF) sizes, we predict that at low SFs V1 neurons will carry little orientation information. If this were the case, what could account for the high behavioral sensitivity at low SFs? Using optical imaging in behaving macaques, we show that, as predicted, V1 orientation-tuned responses drop rapidly with decreasing SF. However, we reveal a surprising coarse-scale signal that corresponds to the projection of the luminance layout of low-SF stimuli to V1's retinotopic map. This homeomorphic and distributed representation, which carries high-quality orientation information, is likely to contribute to our striking scale-invariant visual capabilities.
12. Donovan I, Carrasco M. Endogenous spatial attention during perceptual learning facilitates location transfer. J Vis 2018; 18:7. doi: 10.1167/18.11.7. PMID: 30347094; PMCID: PMC6181190.
Abstract
Covert attention and perceptual learning enhance perceptual performance. The relation between these two mechanisms is largely unknown. Previously, we showed that manipulating involuntary, exogenous spatial attention during training improved performance at trained and untrained locations, thus overcoming the typical location specificity. Notably, attention-induced transfer only occurred for high stimulus contrasts, at the upper asymptote of the psychometric function (i.e., via response gain). Here, we investigated whether and how voluntary, endogenous attention, the top-down and goal-based type of covert visual attention, influences perceptual learning. Twenty-six participants trained in an orientation discrimination task at two locations: half of participants received valid endogenous spatial precues (attention group), while the other half received neutral precues (neutral group). Before and after training, all participants were tested with neutral precues at two trained and two untrained locations. Within each session, stimulus contrast varied on a trial basis from very low (2%) to very high (64%). Performance was fit by a Weibull psychometric function separately for each day and location. Performance improved for both groups at the trained location, and unlike training with exogenous attention, at the threshold level (i.e., via contrast gain). The neutral group exhibited location specificity: Thresholds decreased at the trained locations, but not at the untrained locations. In contrast, participants in the attention group showed significant location transfer: Thresholds decreased to the same extent at both trained and untrained locations. These results indicate that, similar to exogenous spatial attention, endogenous spatial attention induces location transfer, but influences contrast gain instead of response gain.
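The contrast-gain versus response-gain distinction used here can be made concrete by fitting a Weibull psychometric function to proportion correct as a function of contrast: contrast gain shifts the threshold parameter leftward, whereas response gain raises the upper asymptote. The sketch below uses made-up data points and scipy; it is an illustration of the fitting approach, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(c, alpha, beta, lapse):
    """2AFC Weibull: 0.5 guess rate, alpha = threshold, beta = slope, lapse = lapse rate."""
    return 0.5 + (0.5 - lapse) * (1 - np.exp(-(c / alpha) ** beta))

contrast = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
p_pre  = np.array([0.52, 0.55, 0.66, 0.82, 0.90, 0.93])   # illustrative pre-training data
p_post = np.array([0.55, 0.65, 0.80, 0.90, 0.93, 0.94])   # threshold shifts left after training

(alpha_pre, *_), _ = curve_fit(weibull, contrast, p_pre, p0=[0.1, 2, 0.02],
                               bounds=([0.001, 0.5, 0], [1, 10, 0.1]))
(alpha_post, *_), _ = curve_fit(weibull, contrast, p_post, p0=[0.1, 2, 0.02],
                                bounds=([0.001, 0.5, 0], [1, 10, 0.1]))

# Contrast gain: the threshold (alpha) decreases after training while the upper
# asymptote (1 - lapse) stays roughly constant; response gain would instead
# raise the asymptote.
print(f"threshold pre = {alpha_pre:.3f}, post = {alpha_post:.3f}")
```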
13. Peel HJ, Sperandio I, Laycock R, Chouinard PA. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking. Front Integr Neurosci 2018; 12:13. doi: 10.3389/fnint.2018.00013. PMID: 29725292; PMCID: PMC5917041.
Abstract
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.
14. Rhodes LJ, Ruiz A, Ríos M, Nguyen T, Miskovic V. Differential aversive learning enhances orientation discrimination. Cogn Emot 2017; 32:885-891. doi: 10.1080/02699931.2017.1347084. PMID: 28683593.
Abstract
A number of recent studies have documented rapid changes in behavioural sensory acuity induced by aversive learning in the olfactory and auditory modalities. The effect of aversive learning on the discrimination of low-level features in the human visual system remains unclear. Here, we used a psychophysical staircase procedure to estimate discrimination thresholds for oriented grating stimuli before and after differential aversive learning. We found that pairing a target grating orientation with an aversive loud noise subsequently improved discrimination acuity in nearly all subjects. However, no such change was observed in a control group conditioned to an orientation shifted by ±90° from the target. Our findings cannot be explained by contextual learning or sensitisation factors. The results converge with those reported in the olfactory modality and provide further evidence that early sensory systems can be rapidly modified by recently experienced reinforcement histories.
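A psychophysical staircase of the sort mentioned here can be sketched with a simulated observer: a 3-down/1-up rule that converges near 79% correct, with the threshold taken as the mean of the last reversals. The observer model, step size, and stopping rule are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def observer_correct(delta_deg, jnd=3.0):
    """Simulated observer: probability correct rises with the orientation difference."""
    p = 0.5 + 0.5 * (1 - np.exp(-(delta_deg / jnd) ** 2))
    return rng.random() < p

delta = 10.0            # starting orientation difference (deg), assumed
step = 1.0              # fixed step size (deg), assumed
reversals, correct_run, last_dir = [], 0, 0

while len(reversals) < 12:
    if observer_correct(delta):
        correct_run += 1
        if correct_run == 3:                 # 3-down: make the task harder
            correct_run = 0
            if last_dir == +1:
                reversals.append(delta)      # direction change -> reversal
            delta = max(delta - step, 0.1)
            last_dir = -1
    else:
        correct_run = 0                      # 1-up: make the task easier
        if last_dir == -1:
            reversals.append(delta)
        delta += step
        last_dir = +1

threshold = np.mean(reversals[-8:])          # average of the last reversals
print(f"estimated discrimination threshold ≈ {threshold:.1f} deg")
```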
15. Jabar SB, Anderson B. Orientation Probability and Spatial Exogenous Cuing Improve Perceptual Precision and Response Speed by Different Mechanisms. Front Psychol 2017; 8:183. doi: 10.3389/fpsyg.2017.00183. PMID: 28228744; PMCID: PMC5296305.
Abstract
We are faster and more accurate at detecting frequently occurring objects than infrequent ones, just as we are faster and more accurate at detecting objects that have been spatially cued. Does this behavioral similarity reflect similar processes? To evaluate this question we manipulated orientation probability and exogenous spatial cuing within a single perceptual estimation task. Both increased target probability and spatial cuing led to shorter response initiation times and more precise perceptual reports, but these effects were additive. Further, target probability changed the shape of the distribution of errors while spatial cuing did not. Different routes and independent mechanisms could lead to changes in behavioral measures that look similar to each other and to ‘attentional’ effects.
16. Mice Can Use Second-Order, Contrast-Modulated Stimuli to Guide Visual Perception. J Neurosci 2016; 36:4457-69. doi: 10.1523/jneurosci.4595-15.2016. PMID: 27098690.
Abstract
Visual processing along the primate ventral stream takes place in a hierarchy of areas, characterized by an increase in both complexity of neuronal preferences and invariance to changes of low-level stimulus attributes. A basic type of invariance is form-cue invariance, where neurons have similar preferences in response to first-order stimuli, defined by changes in luminance, and to global features of second-order stimuli, defined by changes in texture or contrast. Whether visual perception in mice, a now popular model system for early visual processing, can be guided by second-order stimuli is currently unknown. Here, we probed mouse visual perception and neural responses in areas V1 and LM using various types of second-order, contrast-modulated gratings with static noise carriers. These gratings differ in their spatial frequency composition and thus in their ability to invoke first-order mechanisms exploiting local luminance features. We show that mice can transfer learning of a coarse orientation discrimination task involving first-order, luminance-modulated gratings to the contrast-modulated gratings, albeit with markedly reduced discrimination performance. Consistent with these behavioral results, we demonstrate that neurons in areas V1 and LM are less responsive and less selective to contrast-modulated than to luminance-modulated gratings, but respond with broadly similar preferred orientations. We conclude that mice can, at least in a rudimentary form, use second-order stimuli to guide visual perception.
Significance Statement: To extract object boundaries in natural scenes, the primate visual system does not only rely on differences in local luminance but can also take into account differences in texture or contrast. Whether the mouse, which has a much simpler visual system, can use such second-order information to guide visual perception is unknown. Here we tested mouse perception of second-order, contrast-defined stimuli and measured their neural representations in two areas of visual cortex. We find that mice can use contrast-defined stimuli to guide visual perception, although behavioral performance and neural representations were less robust than for luminance-defined stimuli. These findings shed light on basic steps of feature extraction along the mouse visual cortical hierarchy, which may ultimately lead to object recognition.
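The difference between the first-order (luminance-modulated) and second-order (contrast-modulated) gratings described here can be illustrated by building both from the same oriented sinusoidal envelope. This is a hedged sketch with assumed image size, spatial frequency, and contrast values, not the authors' stimulus code.

```python
import numpy as np

size, cycles, orientation_deg = 256, 4, 45        # assumed stimulus parameters
rng = np.random.default_rng(0)

y, x = np.mgrid[0:size, 0:size] / size
theta = np.deg2rad(orientation_deg)
envelope = np.sin(2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta)))

# Static binary noise carrier, as used for the second-order stimuli.
carrier = rng.choice([-1.0, 1.0], size=(size, size))

# First-order grating: luminance itself follows the sinusoid.
luminance_modulated = 0.5 + 0.5 * envelope

# Second-order grating: mean luminance is flat everywhere; only the local
# contrast of the noise carrier follows the sinusoid, so the orientation is
# defined by texture/contrast rather than luminance. Averaging over the
# carrier leaves a uniform field, which is why purely luminance-based
# (first-order) mechanisms see little of the orientation signal.
local_contrast = (1 + envelope) / 2               # 0..1 modulation of carrier contrast
contrast_modulated = 0.5 * (1 + 0.5 * local_contrast * carrier)
```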
17. Das A, Tadin D, Huxlin KR. Beyond blindsight: properties of visual relearning in cortically blind fields. J Neurosci 2014; 34:11652-64. doi: 10.1523/jneurosci.1076-14.2014. PMID: 25164661; PMCID: PMC4145170.
Abstract
Damage to the primary visual cortex (V1) or its immediate afferents results in a dense scotoma, termed cortical blindness (CB). CB subjects have residual visual abilities, or blindsight, which allow them to detect and sometimes discriminate stimuli with high temporal and low spatial frequency content. Recent work showed that with training, discriminations in the blind field can become more reliable, and even reach consciousness. However, the narrow spatiotemporal bandwidth of blindsight limits its functional usefulness in everyday vision. Here, we asked whether visual training can induce recovery outside the spatiotemporal bandwidth of blindsight. Specifically, could human CB subjects learn to discriminate static, nonflickering stimuli? Can such learning transfer to untrained stimuli and tasks, and does double training with moving and static stimuli provide additional advantages relative to static training alone? We found CB subjects capable of relearning static orientation discriminations following single as well as double training. However, double training with complex, moving stimuli in a separate location was necessary to recover complex motion thresholds at locations trained with static stimuli. Subjects trained on static stimuli alone could only discriminate simple motion. Finally, both groups had approximately equivalent, incomplete recovery of fine orientation and direction discrimination thresholds, as well as contrast sensitivity. These results support two conclusions: (1) from a practical perspective, complex moving stimuli and double training may be superior training tools for inducing visual recovery in CB, and (2) the cortically blind visual system can relearn to perform a wider range of visual discriminations than predicted by blindsight alone.
18. Decision-related activity in sensory neurons may depend on the columnar architecture of cerebral cortex. J Neurosci 2014; 34:3579-85. doi: 10.1523/jneurosci.2340-13.2014. PMID: 24599457.
Abstract
Many studies have reported correlations between the activity of sensory neurons and animals' judgments in discrimination tasks. Here, we suggest that such neuron-behavior correlations may require a cortical map for the task-relevant features. This would explain why studies using discrimination tasks based on disparity in area V1 have not found these correlations: V1 contains no map for disparity. This scheme predicts that the activity of V1 neurons correlates with decisions in an orientation-discrimination task. To test this prediction, we trained two macaque monkeys in a coarse orientation discrimination task using band-pass-filtered dynamic noise. The two orientations were always 90° apart, and task difficulty was controlled by varying the orientation bandwidth of the filter. While the trained animals performed this task, we recorded from orientation-selective V1 neurons (n = 82; n = 31 for Monkey 1, n = 51 for Monkey 2). For both monkeys, we observed significant correlation (quantified as "choice probabilities") of V1 activity with the monkeys' perceptual judgments (mean choice probability 0.54, p = 10⁻⁵). In one of these animals, we had previously measured choice probabilities in a disparity discrimination task in V1, which had been at chance (0.49, not significantly different from 0.5). The choice probabilities in this monkey for the orientation discrimination task were significantly larger than those for the disparity discrimination task (p = 0.032). These results are predicted by our suggestion that choice probabilities are only observed for cortical sensory neurons that are organized in maps for the task-relevant feature.
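Choice probabilities of the kind reported here are conventionally computed as the area under the ROC curve comparing a neuron's responses grouped by the animal's choice (equivalently, a normalized Mann-Whitney U statistic). The sketch below runs on simulated spike counts; it illustrates the standard metric, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(3)

def choice_probability(rates_choice_a, rates_choice_b):
    """Area under the ROC curve separating firing rates by the animal's choice.
    0.5 = no relation to choice; >0.5 = higher rates precede choice A."""
    a = np.asarray(rates_choice_a, dtype=float)
    b = np.asarray(rates_choice_b, dtype=float)
    # The Mann-Whitney U statistic divided by n_a * n_b equals the ROC area.
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

# Simulated spike counts on zero-signal trials, weakly coupled to the choice (assumed rates).
rates_a = rng.poisson(11, 200)     # trials on which the monkey chose orientation A
rates_b = rng.poisson(10, 200)     # trials on which the monkey chose orientation B
print(f"choice probability = {choice_probability(rates_a, rates_b):.2f}")
```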
19. Zhang E, Zhang GL, Li W. Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping. Eur J Neurosci 2013; 38:3758-67. doi: 10.1111/ejn.12379. PMID: 24118649.
Abstract
Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training.
20. Kempgens C, Loffler G, Orbach HS. Set-size effects for sampled shapes: experiments and model. Front Comput Neurosci 2013; 7:67. doi: 10.3389/fncom.2013.00067. PMID: 23755007; PMCID: PMC3664879.
Abstract
The location of imperfections or heterogeneities in shapes and contours often correlates with points of interest in a visual scene. Investigating the detection of such heterogeneities provides clues as to the mechanisms processing simple shapes and contours. We determined set-size effects (e.g., sensitivity to single target detection as distractor number increases) for sampled contours to investigate how the visual system combines information across space. Stimuli were shapes sampled by oriented Gabor patches: circles and high-amplitude RF4 and RF8 radial frequency patterns with Gabor orientations tangential to the shape. Subjects had to detect a deviation in orientation of one element ("heterogeneity"). Heterogeneity detection sensitivity was measured for a range (7-40) of equally spaced (2.3-0.4°) elements. In a second condition, performance was measured when elements sampled a part of the shapes. We either varied partial contour length for a fixed (7) set-size, co-varying inter-element spacing, or set-size for a fixed spacing (0.7°), co-varying partial contour length. Surprisingly, set-size effects (poorer performance with more elements) are rarely seen. Set-size effects only occur for shapes containing concavities (RF4 and RF8) and when spacing is fixed. When elements are regularly spaced, detection performance improves with set-size for all shapes. When set-size is fixed and spacing varied, performance improves with decreasing spacing. Thus, when an increase in set-size and a decrease in spacing co-occur, the effect of spacing dominates, suggesting that inter-element spacing, not set-size, is the critical parameter for sampled shapes. We propose a model for the processing of simple shapes based on V4 curvature units with late noise, incorporating spacing, average shape curvature, and the number of segments with constant sign of curvature contained in the shape, which accurately accounts for our experimental results, making testable predictions for a variety of simple shapes.
21. Le Dantec CC, Seitz AR. High resolution, high capacity, spatial specificity in perceptual learning. Front Psychol 2012; 3:222. doi: 10.3389/fpsyg.2012.00222. PMID: 22848203; PMCID: PMC3404551.
Abstract
Research on perceptual learning has received significant interest due to findings that training on perceptual tasks can yield learning effects that are specific to the stimulus features of that task. However, recent studies have demonstrated that while training a single stimulus at a single location can yield a high degree of stimulus specificity, training multiple features or multiple locations can reveal broad transfer of learning to untrained features or stimulus locations. We devised a high-resolution, high-capacity perceptual learning procedure with the goal of testing whether spatial specificity can be found in cases where observers are highly trained to discriminate stimuli in many different locations in the visual field. We found a surprising degree of location-specific learning: performance was significantly better when target stimuli were presented at 1 of the 24 trained locations than when they were placed in 1 of the 12 untrained locations. This result is particularly impressive given that untrained locations were within a couple of degrees of visual angle of those that were trained. Given the large number of trained locations, the fact that trained and untrained locations were interspersed, and the high degree of spatial precision of the learning, we suggest that these results are difficult to account for using attention or decision strategies and instead suggest that learning may have taken place for each location separately in retinotopically organized visual cortex.
22. Baldassi S, Simoncini C. Reward sharpens orientation coding independently of attention. Front Neurosci 2011; 5:13. doi: 10.3389/fnins.2011.00013. PMID: 21369356; PMCID: PMC3037789.
Abstract
It has long been known that reward improves performance. However, it is unclear whether this is due to high-level modulations in the output modules of associated neural systems or to low-level mechanisms favoring more "generous" inputs. Some recent studies suggest that primary sensory areas, including V1 and A1, may form part of the circuitry of reward-based modulations, but there are no data indicating whether reward can be dissociated from attention or from cross-trial forms of perceptual learning. Here we address this issue with a psychophysical dual task, to control attention, while perceptual performance on oriented targets associated with different levels of reward is assessed by measuring both orientation discrimination thresholds and behavioral tuning functions for tilt values near threshold. We found that reward at any rate improved performance. However, higher reward rates improved orientation discrimination thresholds by about 50% across conditions and sharpened behavioral tuning functions. The data were unaffected by changing the attentional load and by dissociating the feature of the reward cue from the task-relevant feature. These results suggest that reward may act within the span of a single trial, independently of attention, by modulating the activity of early sensory stages through an improvement of the signal-to-noise ratio of task-relevant channels.
23. Saylor SA, Olzak LA. Contextual effects on fine orientation discrimination tasks. Vision Res 2006; 46:2988-97. doi: 10.1016/j.visres.2006.03.007. PMID: 16650451; PMCID: PMC1664710.
Abstract
We examined the influence of context on fine orientation discrimination performance using sinusoidal grating patterns. Discrimination performance was impaired in the presence of modulated surrounds of the same spatial frequency, orientation, and contrast as the center. When center and surround were out-of-phase, separated by a gap of mean luminance, or very different in spatial frequency, performance remained at control levels. When center and surround were in-phase but mismatched in mean luminance, suppression was reduced or eliminated and performance was equivalent to luminance-mismatched control conditions. We speculate that lateral interactions in fine orientation discrimination tasks do not occur between objects that are perceptually distinct.