1. Liu J, Lu ZL, Dosher B. Transfer of visual perceptual learning over a task-irrelevant feature through feature-invariant representations: Behavioral experiments and model simulations. J Vis 2024;24:17. [PMID: 38916886; PMCID: PMC11205231; DOI: 10.1167/jov.24.6.17]
Abstract
A large body of literature has examined specificity and transfer of perceptual learning, suggesting a complex picture. Here, we distinguish between transfer over variations in a "task-relevant" feature (e.g., transfer of a learned orientation task to a different reference orientation) and transfer over a "task-irrelevant" feature (e.g., transfer of a learned orientation task to a different retinal location or different spatial frequency), and we focus on the mechanism for the latter. Experimentally, we assessed whether learning a judgment of one feature (such as orientation) using one value of an irrelevant feature (e.g., spatial frequency) transfers to another value of the irrelevant feature. Experiment 1 examined whether learning in eight-alternative orientation identification with one or multiple spatial frequencies transfers to stimuli at five different spatial frequencies. Experiment 2 paralleled Experiment 1, examining whether learning in eight-alternative spatial-frequency identification at one or multiple orientations transfers to stimuli with five different orientations. Training the orientation task with a single spatial frequency transferred widely to all other spatial frequencies, with a tendency to specificity when training with the highest spatial frequency. Training the spatial frequency task fully transferred across all orientations. Computationally, we extended the identification integrated reweighting theory (I-IRT) to account for the transfer data (Dosher, Liu, & Lu, 2023; Liu, Dosher, & Lu, 2023). Just as location-invariant representations in the original IRT explain transfer over retinal locations, incorporating feature-invariant representations effectively accounted for the observed transfer. Taken together, we suggest that feature-invariant representations can account for transfer of learning over a "task-irrelevant" feature.
Affiliation(s)
- Jiajuan Liu: Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu: Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Sciences and Department of Psychology, New York University, New York, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Barbara Dosher: Department of Cognitive Sciences, University of California, Irvine, CA, USA
2. Zhu JP, Zhang JY. Feature variability determines specificity and transfer in multiorientation feature detection learning. J Vis 2024;24:2. [PMID: 38691087; PMCID: PMC11079675; DOI: 10.1167/jov.24.5.2]
Abstract
Historically, in many perceptual learning experiments, only a single stimulus is practiced, and learning is often specific to the trained feature. Our prior work has demonstrated that multi-stimulus learning (e.g., the training-plus-exposure procedure) has the potential to achieve generalization. Here, we investigated two important characteristics of multi-stimulus learning, namely roving and feature variability, and their impacts on multi-stimulus learning and generalization. We adopted a feature detection task in which an oddly oriented target bar differed by 16° from the background bars. The stimulus onset asynchrony threshold between the target and the mask was measured with a staircase procedure. Observers were trained with four target orientation search stimuli, either with a 5° deviation (30°-35°-40°-45°) or with a 45° deviation (30°-75°-120°-165°), and the four reference stimuli were presented in a roving manner. The transfer of learning to the swapped target-background orientations was evaluated after training. We found that multi-stimulus training with a 5° deviation resulted in significant learning improvement, but learning failed to transfer to the swapped target-background orientations. In contrast, training with a 45° deviation slowed learning but produced significant generalization to the swapped orientations. Furthermore, a modified training-plus-exposure procedure, in which observers were trained with four orientation search stimuli with a 5° deviation and simultaneously passively exposed to orientations with high feature variability (45° deviation), led to significant generalization of orientation learning. Learning transfer also occurred when the four orientation search stimuli with a 5° deviation were presented in separate blocks. These results help to specify the conditions under which multi-stimulus learning produces generalization, which holds potential for real-world applications of perceptual learning, such as vision rehabilitation and expert training.
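As a methodological aside, the stimulus-onset-asynchrony (SOA) threshold tracking mentioned in this abstract can be sketched with a simple adaptive staircase run against a simulated observer. This is an illustrative toy, not the authors' procedure: the 3-down-1-up rule, the 10-ms step, and the logistic psychometric function are all assumptions made here for the sketch.

```python
import math
import random

def p_correct(soa_ms, true_threshold=120.0, slope=20.0):
    """Assumed logistic psychometric function: performance rises from
    chance (0.5) toward 1.0 as the target-mask SOA grows."""
    return 0.5 + 0.5 / (1.0 + math.exp(-(soa_ms - true_threshold) / slope))

def three_down_one_up(n_trials=400, start_soa=300.0, step=10.0, seed=1):
    """3-down-1-up staircase: three consecutive correct responses shorten
    the SOA (harder), one error lengthens it (easier); it converges near
    the ~79%-correct SOA. Returns the mean of the last eight reversals."""
    rng = random.Random(seed)
    soa, run, reversals, last_dir = start_soa, 0, [], 0
    for _ in range(n_trials):
        if rng.random() < p_correct(soa):       # simulated response
            run += 1
            if run == 3:                        # harder: shorten SOA
                run = 0
                if last_dir == +1:
                    reversals.append(soa)
                soa, last_dir = max(soa - step, 0.0), -1
        else:                                   # easier: lengthen SOA
            run = 0
            if last_dir == -1:
                reversals.append(soa)
            soa, last_dir = soa + step, +1
    tail = reversals[-8:]
    return sum(tail) / len(tail)
```

With the assumed observer, the staircase settles near the observer's ~79%-correct point; a blocked-versus-roving comparison would wrap a tracker like this in different trial schedules.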
Affiliation(s)
- Jun-Ping Zhu: School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Jun-Yun Zhang: School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
3. Heald JB, Wolpert DM, Lengyel M. The Computational and Neural Bases of Context-Dependent Learning. Annu Rev Neurosci 2023;46:233-258. [PMID: 36972611; PMCID: PMC10348919; DOI: 10.1146/annurev-neuro-092322-100402]
Abstract
Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
Affiliation(s)
- James B Heald: Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Daniel M Wolpert: Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Máté Lengyel: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
4. Ning R, Wright BA. Evidence that anterograde learning interference depends on the stage of learning of the interferer: blocked versus interleaved training. Learn Mem 2023;30:101-109. [PMID: 37419679; PMCID: PMC10353258; DOI: 10.1101/lm.053710.122]
Abstract
Training on one task (task A) can disrupt learning on a subsequently trained task (task B), illustrating anterograde learning interference. We asked whether the induction of anterograde learning interference depends on the learning stage that task A has reached when the training on task B begins. To do so, we drew on previous observations in perceptual learning in which completing all training on one task before beginning training on another task (blocked training) yielded markedly different learning outcomes than alternating training between the same two tasks for the same total number of trials (interleaved training). Those blocked versus interleaved contrasts suggest that there is a transition between two differentially vulnerable learning stages that is related to the number of consecutive training trials on each task, with interleaved training presumably tapping acquisition and blocked training tapping consolidation. Here, we used the blocked versus interleaved paradigm in auditory perceptual learning in a case in which blocked training generated anterograde, but not its converse, retrograde, learning interference (A→B, not B←A). We report that anterograde learning interference of training on task A (interaural time difference discrimination) on learning on task B (interaural level difference discrimination) occurred with blocked training and diminished with interleaved training, with faster rates of interleaving leading to less interference. This pattern held for across-day, within-session, and offline learning. Thus, anterograde learning interference only occurred when the number of consecutive training trials on task A surpassed some critical value, consistent with other recent evidence that anterograde learning interference arises only when learning on task A has entered the consolidation stage.
Affiliation(s)
- Ruijing Ning: Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
- Beverly A Wright: Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA; Knowles Hearing Center, Northwestern University, Evanston, Illinois 60208, USA; Northwestern University Institute for Neuroscience, Northwestern University, Evanston, Illinois 60208, USA
5.
Abstract
Vision and learning have long been considered to be two areas of research linked only distantly. However, recent developments in vision research have changed the conceptual definition of vision from a signal-evaluating process to a goal-oriented interpreting process, and this shift binds learning, together with the resulting internal representations, intimately to vision. In this review, we consider various types of learning (perceptual, statistical, and rule/abstract) associated with vision in the past decades and argue that they represent differently specialized versions of the fundamental learning process, which must be captured in its entirety when applied to complex visual processes. We show why the generalized version of statistical learning can provide the appropriate setup for such a unified treatment of learning in vision, what computational framework best accommodates this kind of statistical learning, and what plausible neural scheme could feasibly implement this framework. Finally, we list the challenges that the field of statistical learning faces in fulfilling the promise of being the right vehicle for advancing our understanding of vision in its entirety. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- József Fiser: Department of Cognitive Science, Center for Cognitive Computation, Central European University, Vienna 1100, Austria
- Gábor Lengyel: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
6. Simple contextual cueing prevents retroactive interference in short-term perceptual training of orientation detection tasks. Atten Percept Psychophys 2022;84:2540-2551. [PMID: 35676554; DOI: 10.3758/s13414-022-02520-4]
Abstract
Perceptual training of multiple tasks suffers from interference between the trained tasks. Here, we conducted five psychophysical experiments with separate groups of participants to investigate the possibility of preventing this interference in short-term perceptual training. We trained participants to detect two orientations of Gabor stimuli on two consecutive days at the same retinal location and examined the interference of training effects between the two orientations. The results showed significant retroactive interference from the second orientation to the first (Experiments 1 and 2). Introducing a 6-h interval between the pre-test and the training of the second orientation did not eliminate the interference effect, ruling out the interpretation that the pre-test of the second orientation reactivated and destabilized the representation of the first orientation and thereby disrupted its reconsolidation (Experiment 3). Finally, the training of the two orientations was accompanied by fixations in two colors, each serving as a contextual cue for one orientation. Retroactive interference was not evident when participants passively perceived the contextual cues during the training and test sessions (Experiment 4). Importantly, this facilitation was also observed when the contextual cues appeared only during training, demonstrating the robustness of the effect (Experiment 5). Our findings suggest that the retroactive interference in short-term perceptual training of orientation detection tasks likely resulted from higher-level factors such as shared contextual cues embedded in the tasks. Training on multiple perceptual tasks could thus be made more efficient by associating the trained tasks with distinct contextual cues.
7.
Abstract
Sensory systems often suppress self-generated sensations in order to discriminate them from those arising in the environment. The suppression of visual sensitivity during rapid eye movements is well established, and although functionally beneficial most of the time, it can limit the performance of certain tasks. Here, we show that with repeated practice, mechanisms that suppress visual signals during eye movements can be modified. People trained to detect brief visual patterns learn to turn off suppression around the expected time of the target. These findings demonstrate an elegant form of plasticity, capable of improving the visibility of behaviorally relevant stimuli without compromising the wider functional benefits of suppression.

Perceptual stability is facilitated by a decrease in visual sensitivity during rapid eye movements, called saccadic suppression. While a large body of evidence demonstrates that saccadic programming is plastic, little is known about whether the perceptual consequences of saccades can be modified. Here, we demonstrate that saccadic suppression is attenuated during learning on a standard visual detection-in-noise task, to the point that it is effectively silenced. Across a period of 7 days, 44 participants were trained to detect brief, low-contrast stimuli embedded within dynamic noise, while eye position was tracked. Although instructed to fixate, participants regularly made small fixational saccades. Data were accumulated over a large number of trials, allowing us to assess changes in performance as a function of the temporal proximity of stimuli and saccades. This analysis revealed that improvements in sensitivity over the training period were accompanied by a systematic change in the impact of saccades on performance—robust saccadic suppression on day 1 declined gradually over subsequent days until its magnitude became indistinguishable from zero. This silencing of suppression was not explained by learning-related changes in saccade characteristics and generalized to an untrained retinal location and stimulus orientation. Suppression was restored when learned stimulus timing was perturbed, consistent with the operation of a mechanism that temporarily reduces or eliminates saccadic suppression, but only when it is behaviorally advantageous to do so. Our results indicate that learning can circumvent saccadic suppression to improve performance, without compromising its functional benefits in other viewing contexts.
8. Xu R, Church RM, Sasaki Y, Watanabe T. Effects of stimulus and task structure on temporal perceptual learning. Sci Rep 2021;11:668. [PMID: 33436842; PMCID: PMC7804100; DOI: 10.1038/s41598-020-80192-6]
Abstract
Our ability to discriminate temporal intervals can be improved with practice. This learning is generally thought to reflect an enhancement in the representation of a trained interval, which leads to interval-specific improvements in temporal discrimination. In the present study, we asked whether temporal perceptual learning (TPL) is further constrained by context-specific factors dictated by the trained stimulus and task structure. Two groups of participants were trained using a single-interval auditory discrimination task over 5 days. Training intervals were either one of eight predetermined values (FI group) or random from trial to trial (RI group). Before and after the training period, we measured discrimination performance using an untrained two-interval temporal comparison task. Our results revealed a selective improvement in the FI group, but not the RI group. However, this learning did not generalize between the trained and untrained tasks. These results highlight the sensitivity of TPL to stimulus and task structure, suggesting that mechanisms of temporal learning rely on processes beyond changes in interval representation.
Affiliation(s)
- Rannie Xu: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Russell M Church: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Yuka Sasaki: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Takeo Watanabe: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
9. Dosher BA, Liu J, Chu W, Lu ZL. Roving: The causes of interference and re-enabled learning in multi-task visual training. J Vis 2020;20:9. [PMID: 32543649; PMCID: PMC7416889; DOI: 10.1167/jov.20.6.9]
Abstract
People routinely perform multiple visual judgments in the real world, yet intermixing tasks or task variants during training can damage or even prevent learning. This paper explores why. We challenged theories of visual perceptual learning focused on plastic retuning of low-level retinotopic cortical representations by placing different task variants in different retinal locations, and tested theories of perceptual learning through reweighting (changes in readout) by varying task similarity. Discriminating different (but equivalent) and similar orientations in separate retinal locations interfered with learning, whereas training either with identical orientations or sufficiently different ones in different locations released rapid learning. This location crosstalk during learning renders it unlikely that the primary substrate of learning is retuning in early retinotopic visual areas; instead, learning likely involves reweighting from location-independent representations to a decision. We developed an Integrated Reweighting Theory (IRT), which has both V1-like location-specific representations and higher-level (V4/IT or higher) location-invariant representations and learns via reweighting the readout to a decision unit, to predict the order of learning rates in different conditions. This model with suitable parameters successfully fit the behavioral data, as well as some microstructure of learning performance in a new trial-by-trial analysis.
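The reweighting logic summarized in this abstract can be caricatured in a few lines of code: fixed location-specific and location-invariant representations feed a single decision unit, and only the readout weights learn. Everything concrete here (feature counts, noise level, the tanh decision unit, the delta-rule learning rate) is an invented toy, not the fitted IRT; it only illustrates why training at one location partially transfers to another through shared invariant weights.

```python
import math
import random

def make_stimulus(orientation, location, noise, rng):
    """2 location-specific features per location plus 2 location-invariant
    features; specific features respond only at their own location."""
    s = 1.0 if orientation == +1 else -1.0
    spec_a = [s + rng.gauss(0, noise), -s + rng.gauss(0, noise)] if location == "A" else [0.0, 0.0]
    spec_b = [s + rng.gauss(0, noise), -s + rng.gauss(0, noise)] if location == "B" else [0.0, 0.0]
    invar = [s + rng.gauss(0, noise), -s + rng.gauss(0, noise)]
    return spec_a + spec_b + invar

def train_and_test(n_train=2000, noise=1.0, lr=0.05, seed=0):
    """Train the readout only at location A, then test at A and B."""
    rng = random.Random(seed)
    w = [0.0] * 6
    for _ in range(n_train):
        ori = rng.choice([+1, -1])
        x = make_stimulus(ori, "A", noise, rng)
        y = math.tanh(sum(wi * xi for wi, xi in zip(w, x)))
        err = ori - y                    # delta rule on the readout weights
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    def accuracy(location):
        hits = 0
        for _ in range(1000):
            ori = rng.choice([+1, -1])
            x = make_stimulus(ori, location, noise, rng)
            hits += (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (ori > 0)
        return hits / 1000
    return accuracy("A"), accuracy("B")
```

Training at location A leaves the location-B-specific weights at zero, so performance at B rests entirely on the location-invariant weights: above chance (transfer) but below the trained location (partial specificity).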
Affiliation(s)
- Barbara Anne Dosher: Cognitive Science Department, University of California, Irvine, Irvine, CA, USA
- Jiajuan Liu: Cognitive Science Department, University of California, Irvine, Irvine, CA, USA
- Wilson Chu: Cognitive Science Department, University of California, Irvine, Irvine, CA, USA; Department of Psychology, Los Angeles Valley College, Valley Glen, CA, USA
- Zhong-Lin Lu: Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Sciences and Department of Psychology, New York University, New York, NY, USA
10. Xie XY, Yu C. A new format of perceptual learning based on evidence abstraction from multiple stimuli. J Vis 2020;20:5. [PMID: 32097482; PMCID: PMC7343432; DOI: 10.1167/jov.20.2.5]
Abstract
Perceptual learning, which improves stimulus discrimination, typically results from training with a single stimulus condition. Two major learning mechanisms, early cortical neural plasticity and response reweighting, have been proposed. Here we report a new format of perceptual learning that by design may have bypassed these mechanisms. Instead, it is more likely based on abstracted stimulus evidence from multiple stimulus conditions. Specifically, we had observers practice orientation discrimination with Gabors or symmetric dot patterns at up to 47 random or rotating location × orientation conditions. Although each condition received sparse trials (12 trials/session), the practice produced significant orientation learning. Learning also transferred to a Gabor at a single untrained condition with two to three times lower orientation thresholds. Moreover, practicing a single stimulus condition with matched trial frequency (12 trials/session) failed to produce significant learning. These results suggest that learning with multiple stimulus conditions may not come from early cortical plasticity or response reweighting within each particular condition. Rather, it may materialize through a new format of perceptual learning, in which orientation evidence invariant to particular orientations and locations is first abstracted from multiple stimulus conditions and then reweighted by later learning mechanisms. The coarse-to-fine transfer of orientation learning from multiple Gabors or symmetric dot patterns to a single Gabor also suggests the involvement of orientation concept learning by the learning mechanisms.
11. Interfering with a memory without erasing its trace. Neural Netw 2019;121:339-355. [PMID: 31593840; DOI: 10.1016/j.neunet.2019.09.027]
Abstract
Previous research has shown that performance of a novice skill can be easily interfered with by subsequent training of another skill. We address the open questions whether extensively trained skills show the same vulnerability to interference as novice skills and which memory mechanism regulates interference between expert skills. We developed a recurrent neural network model of V1 able to learn from feedback experienced over the course of a long-term orientation discrimination experiment. After first exposing the model to one discrimination task for 3480 consecutive trials, we assessed how its performance was affected by subsequent training in a second, similar task. Training the second task strongly interfered with the first (highly trained) discrimination skill. The magnitude of interference depended on the relative amounts of training devoted to the different tasks. We used these and other model outcomes as predictions for a perceptual learning experiment in which human participants underwent the same training protocol as our model. Specifically, over the course of three months participants underwent baseline training in one orientation discrimination task for 15 sessions before being trained for 15 sessions on a similar task and finally undergoing another 15 sessions of training on the first task (to assess interference). Across all conditions, the pattern of interference observed empirically closely matched model predictions. According to our model, behavioral interference can be explained by antagonistic changes in neuronal tuning induced by the two tasks. Remarkably, this did not stem from erasing connections due to earlier learning but rather from a reweighting of lateral inhibition.
12. Tan Q, Wang Z, Sasaki Y, Watanabe T. Category-Induced Transfer of Visual Perceptual Learning. Curr Biol 2019;29:1374-1378.e3. [PMID: 30930042; PMCID: PMC6482054; DOI: 10.1016/j.cub.2019.03.003]
Abstract
Visual perceptual learning (VPL) refers to a long-term enhancement of visual task performance as a result of visual experience [1-6]. VPL is generally specific to the trained visual feature, meaning that training on a feature enhances performance only on that feature and those in its close vicinity. At the same time, visual perception is often categorical [7-10]. This may be partly because the ecological importance of a stimulus is usually determined by the category to which the stimulus belongs (e.g., snake, lightning, and fish) [11]. Thus, it would be advantageous to an observer if encountering or working on a feature from a category increased sensitivity to features under the same category. However, previous studies of VPL have used uncategorized features. Here, we found a category-induced transfer of VPL, in which VPL of an orientation transferred to untrained orientations within the same category as the trained orientation, but not to orientations from a different category. Furthermore, we found that, although category learning transferred to other locations in the visual field, the category-induced transfer of VPL occurred only when the visual stimuli for category learning and those for VPL training were presented at the same location. Altogether, these results suggest that feature specificity in VPL is greatly influenced by cognitive processing, such as categorization, in a top-down fashion. In an environment where features are categorically organized, VPL may be more generalized across features under the same category. Such generalization implies that VPL is of greater ecological significance than has been thought.
Affiliation(s)
- Qingleng Tan: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA; Key Laboratory of Bio-Resource and Eco-Environment of Ministry of Education, College of Life Sciences, Sichuan University, Chengdu 610065, Sichuan, PRC
- Zhiyan Wang: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Yuka Sasaki: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Takeo Watanabe: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
13. Sotiropoulos G, Seitz AR, Seriès P. Performance-monitoring integrated reweighting model of perceptual learning. Vision Res 2018;152:17-39. [PMID: 29581060; PMCID: PMC6200663; DOI: 10.1016/j.visres.2018.01.010]
Abstract
Perceptual learning (PL) has been traditionally thought of as highly specific to stimulus properties, task and retinotopic position. This view is being progressively challenged, with accumulating evidence that learning can generalize (transfer) across various parameters under certain conditions. For example, retinotopic specificity can be diminished when the proportion of easy to hard trials is high, such as when multiple short staircases, instead of a single long one, are used during training. To date, there is a paucity of mechanistic explanations of what conditions affect transfer of learning. Here we present a model based on the popular Integrated Reweighting Theory model of PL but departing from its one-layer architecture by including a novel key feature: dynamic weighting of retinotopic-location-specific vs location-independent representations based on internal performance estimates of these representations. This dynamic weighting is closely related to gating in a mixture-of-experts architecture. Our dynamic performance-monitoring model (DPMM) unifies a variety of psychophysical data on transfer of PL, such as the short-vs-long staircase effect, as well as several findings from the double-training literature. Furthermore, the DPMM makes testable predictions and ultimately helps understand the mechanisms of generalization of PL, with potential applications to vision rehabilitation and enhancement.
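The dynamic weighting described in this abstract (gating between representations based on internal performance estimates) can be sketched as a tiny mixture-of-experts loop. The expert reliabilities, softmax gate temperature, and accuracy-tracking rate below are invented for illustration and are not the DPMM's fitted values.

```python
import math
import random

def gated_decision(n_trials=500, seed=0):
    """Two fixed 'experts' vote on each trial; a softmax gate driven by
    running accuracy estimates weights their votes, so the readout that
    performs better gradually dominates the decision."""
    rng = random.Random(seed)
    acc_est = {"specific": 0.5, "invariant": 0.5}  # internal performance monitors
    history = []
    for _ in range(n_trials):
        target = rng.choice([+1, -1])
        # Assumed reliabilities: the location-specific expert is better here.
        votes = {"specific": target if rng.random() < 0.85 else -target,
                 "invariant": target if rng.random() < 0.60 else -target}
        # Softmax gate over the running accuracy estimates (temperature 1/5).
        z = {k: math.exp(5.0 * v) for k, v in acc_est.items()}
        total = sum(z.values())
        decision = 1 if sum((z[k] / total) * votes[k] for k in votes) > 0 else -1
        history.append(decision == target)
        for k in votes:                            # leaky accuracy tracking
            acc_est[k] += 0.02 * ((votes[k] == target) - acc_est[k])
    return acc_est, sum(history[-100:]) / 100
```

Because the gate tracks running accuracy, the better-performing expert comes to dominate the combined decision, mirroring how the model shifts weight toward whichever representation is currently more reliable.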
Affiliation(s)
| | - Aaron R Seitz
- Department of Psychology, University of California, Riverside, Riverside, CA, USA.
| | - Peggy Seriès
- School of Informatics, University of Edinburgh, Edinburgh, UK.
| |
14. Deep Neural Networks for Modeling Visual Perceptual Learning. J Neurosci 2018;38:6028-6044. [PMID: 29793979; DOI: 10.1523/jneurosci.1620-17.2018]
Abstract
Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. Although existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known deep neural network (DNN), although not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analysis. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could transfer asymmetrically to coarse discriminations when the stimulus conditions varied. Consistent with the behavioral findings, the distribution of plasticity moved toward lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL, can serve as a test bed for theories, and assists in generating predictions for physiological investigations.

SIGNIFICANCE STATEMENT: Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. We found that training a deep neural network (DNN) on an orientation discrimination task produced behavioral and physiological patterns similar to those found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition, but was not designed specifically for VPL; however, it fulfilled predictions of existing theories regarding specificity and plasticity and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased, deep hierarchical model can provide new ways of studying VPL from behavior to physiology.
|
15
|
Levi A, Shaked D, Tadin D, Huxlin KR. Is improved contrast sensitivity a natural consequence of visual training? J Vis 2015; 15:4. [PMID: 26305736 DOI: 10.1167/15.10.4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s).
|
16
|
Harris H, Sagi D. Effects of spatiotemporal consistencies on visual learning dynamics and transfer. Vision Res 2015; 109:77-86. [DOI: 10.1016/j.visres.2015.02.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2014] [Revised: 02/20/2015] [Accepted: 02/23/2015] [Indexed: 11/24/2022]
|
17
|
Ritter P, Born J, Brecht M, Dinse HR, Heinemann U, Pleger B, Schmitz D, Schreiber S, Villringer A, Kempter R. State-dependencies of learning across brain scales. Front Comput Neurosci 2015; 9:1. [PMID: 25767445 PMCID: PMC4341560 DOI: 10.3389/fncom.2015.00001] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2014] [Accepted: 01/06/2015] [Indexed: 01/09/2023] Open
Abstract
Learning is a complex brain function operating on different time scales, from milliseconds to years, which induces enduring changes in brain dynamics. The brain also undergoes continuous “spontaneous” shifts in states, which, amongst others, are characterized by rhythmic activity of various frequencies. Besides the most obvious distinct modes of waking and sleep, wake-associated brain states comprise modulations of vigilance and attention. Recent findings show that certain brain states, particularly during sleep, are essential for learning and memory consolidation. Oscillatory activity plays a crucial role on several spatial scales, for example in plasticity at a synaptic level or in communication across brain areas. However, the underlying mechanisms and computational rules linking brain states and rhythms to learning, though relevant for our understanding of brain function and therapeutic approaches in brain disease, have not yet been elucidated. Here we review known mechanisms of how brain states mediate and modulate learning by their characteristic rhythmic signatures. To understand the critical interplay between brain states, brain rhythms, and learning processes, a wide range of experimental and theoretical work in animal models and human subjects from the single synapse to the large-scale cortical level needs to be integrated. By discussing results from experiments and theoretical approaches, we illuminate new avenues for utilizing neuronal learning mechanisms in developing tools and therapies, e.g., for stroke patients and to devise memory enhancement strategies for the elderly.
Affiliation(s)
- Petra Ritter
- Minerva Research Group BrainModes, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neurology, Charité University Medicine Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Mind and Brain & Mind and Brain Institute, Humboldt-Universität zu Berlin, Berlin, Germany
- Jan Born
- Department of Medical Psychology and Behavioral Neurobiology & Center for Integrative Neuroscience (CIN), University of Tübingen, Tübingen, Germany
- Michael Brecht
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
- Hubert R Dinse
- Neural Plasticity Lab, Institute for Neuroinformatics, Ruhr-University Bochum, Bochum, Germany; Department of Neurology, BG University Hospital Bergmannsheil, Ruhr-University Bochum, Bochum, Germany
- Uwe Heinemann
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany; NeuroCure Cluster of Excellence, Berlin, Germany
- Burkhard Pleger
- Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Dietmar Schmitz
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany; NeuroCure Cluster of Excellence, Berlin, Germany; Neuroscience Research Center NWFZ, Charité University Medicine Berlin, Berlin, Germany; Max-Delbrück Center for Molecular Medicine (MDC), Berlin, Germany; Center for Neurodegenerative Diseases (DZNE), Berlin, Germany
- Susanne Schreiber
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Biology, Institute for Theoretical Biology (ITB), Humboldt-Universität zu Berlin, Berlin, Germany
- Arno Villringer
- Berlin School of Mind and Brain & Mind and Brain Institute, Humboldt-Universität zu Berlin, Berlin, Germany; Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Richard Kempter
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Biology, Institute for Theoretical Biology (ITB), Humboldt-Universität zu Berlin, Berlin, Germany
|
18
|
|
19
|
Deveau J, Jaeggi SM, Zordan V, Phung C, Seitz AR. How to build better memory training games. Front Syst Neurosci 2015; 8:243. [PMID: 25620916 PMCID: PMC4288240 DOI: 10.3389/fnsys.2014.00243] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2014] [Accepted: 12/11/2014] [Indexed: 11/13/2022] Open
Abstract
Can we create engaging training programs that improve working memory (WM) skills? While there are numerous procedures that attempt to do so, there is a great deal of controversy regarding their efficacy. Nonetheless, recent meta-analytic evidence shows consistent improvements across studies on lab-based tasks generalizing beyond the specific training effects (Au et al., 2014; Karbach and Verhaeghen, 2014); however, there is little research into how WM training aids participants in their daily life. Here we propose that incorporating design principles from the fields of Perceptual Learning (PL) and Computer Science might augment the efficacy of WM training, and ultimately lead to greater learning and transfer. In particular, the field of PL has identified numerous mechanisms (including attention, reinforcement, multisensory facilitation and multi-stimulus training) that promote brain plasticity. Also, computer science has made great progress in the scientific approach to game design that can be used to create engaging environments for learning. We suggest that approaches integrating knowledge across these fields may lead to more effective WM interventions that better reflect real-world conditions.
Affiliation(s)
- Jenni Deveau
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
- Susanne M Jaeggi
- School of Education, University of California, Irvine, Irvine, CA, USA; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Victor Zordan
- Department of Computer Science, University of California, Riverside, Riverside, CA, USA
- Calvin Phung
- Department of Computer Science, University of California, Riverside, Riverside, CA, USA
- Aaron R Seitz
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
|
20
|
Training improves visual processing speed and generalizes to untrained functions. Sci Rep 2014; 4:7251. [PMID: 25431233 PMCID: PMC4246693 DOI: 10.1038/srep07251] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2014] [Accepted: 11/13/2014] [Indexed: 11/16/2022] Open
Abstract
Studies show that manipulating certain training features in perceptual learning determines the specificity of the improvement. The improvement in abnormal visual processing following training, and its generalization to visual acuity as measured on static clinical charts, can be explained by improved sensitivity or processing speed. Crowding, the inability to recognize objects in clutter, fundamentally limits conscious visual perception. Although it was largely considered absent in the fovea, earlier studies report foveal crowding upon very brief exposures or following spatial manipulations. Here we used GlassesOff's application for iDevices to train foveal vision of young participants. The training was performed at reading distance and was based on contrast detection tasks with Gabor patches under different spatial and temporal constraints, aimed at testing improvement of processing speed. We found several significant improvements in spatio-temporal visual functions, at near and also at untrained far distances. A remarkable transfer to visual acuity measured under crowded conditions was found: the processing time needed to achieve 6/6 acuity was reduced by 81 ms. Despite a subtle change in contrast sensitivity, a robust increase in processing speed was found. Thus, enhanced processing speed may lead to overcoming foveal crowding and might be the enabling factor for generalization to other visual functions.
|
21
|
Chen X, Sanayei M, Thiele A. Stimulus roving and flankers affect perceptual learning of contrast discrimination in Macaca mulatta. PLoS One 2014; 9:e109604. [PMID: 25340335 PMCID: PMC4207683 DOI: 10.1371/journal.pone.0109604] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2014] [Accepted: 09/11/2014] [Indexed: 11/18/2022] Open
Abstract
'Stimulus roving' refers to a paradigm in which the properties of the stimuli to be discriminated vary from trial to trial, rather than being kept constant throughout a block of trials. Rhesus monkeys have previously been shown to improve their contrast discrimination performance on a non-roving task, in which they had to report the contrast of a test stimulus relative to that of a fixed-contrast sample stimulus. Human psychophysics studies indicate that roving stimuli yield little or no perceptual learning. Here, we investigate how stimulus roving influences perceptual learning in macaque monkeys and how the addition of flankers alters performance under roving conditions. Animals were initially trained on a contrast discrimination task under non-roving conditions until their performance levels stabilized. The introduction of roving contrast conditions resulted in a pronounced drop in performance, which suggested that subjects initially failed to heed the sample contrast and performed the task using an internal memory reference. With training, significant improvements occurred, demonstrating that learning is possible under roving conditions. To investigate the notion of flanker-induced perceptual learning, flanker stimuli (30% fixed-contrast iso-oriented collinear gratings) were presented jointly with central (roving) stimuli. Presentation of flanker stimuli yielded substantial performance improvements in one subject, but deteriorations in the other. Finally, after the removal of flankers, performance levels returned to their pre-flanker state in both subjects, indicating that the flanker-induced changes were contingent upon the continued presentation of flankers.
Affiliation(s)
- Xing Chen
- Institute of Neuroscience, Newcastle University, Newcastle-upon-Tyne, United Kingdom
- Mehdi Sanayei
- Institute of Neuroscience, Newcastle University, Newcastle-upon-Tyne, United Kingdom
- Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle-upon-Tyne, United Kingdom
|
22
|
Abstract
Visual perceptual learning (VPL) is a long-term increase in performance resulting from visual perceptual experience. Task-relevant VPL of a feature results from training on a task for which that feature is relevant. Task-irrelevant VPL arises from mere exposure to a feature irrelevant to the trained task. At least two serious problems exist in the study of VPL. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in the representation of the learned feature, and task-based plasticity is a change in the processing of the trained task. Although both types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results can be explained.
Affiliation(s)
- Takeo Watanabe
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912
|
23
|
Neger TM, Rietveld T, Janse E. Relationship between perceptual learning in speech and statistical learning in younger and older adults. Front Hum Neurosci 2014; 8:628. [PMID: 25225475 PMCID: PMC4150448 DOI: 10.3389/fnhum.2014.00628] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2014] [Accepted: 07/28/2014] [Indexed: 11/30/2022] Open
Abstract
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
Affiliation(s)
- Thordis M Neger
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, Netherlands; International Max Planck Research School for Language Sciences, Nijmegen, Netherlands
- Toni Rietveld
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, Netherlands
- Esther Janse
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
|
24
|
Prolonged training at threshold promotes robust retinotopic specificity in perceptual learning. J Neurosci 2014; 34:8423-31. [PMID: 24948798 DOI: 10.1523/jneurosci.0745-14.2014] [Citation(s) in RCA: 90] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Human perceptual learning is classically thought to be highly specific to the trained stimuli's retinal location. Together with evidence that specific learning effects can result in corresponding changes in early visual cortex, researchers have theorized that specificity implies regionalization of learning in the brain. However, other research suggests that specificity can arise from learning of readout in decision areas or through top-down processes. Notably, recent research using a novel double-training paradigm reveals dramatic generalization of perceptual learning to untrained locations when multiple stimuli are trained. These data provoked significant controversy in the field and challenged extant models of perceptual learning. To resolve this controversy, we investigated mechanisms that account for retinotopic specificity in perceptual learning. We replicated findings of transfer after double training; however, we show that prolonged training at threshold, which leads to a greater number of difficult trials during training, preserves location specificity when double training occurred at the same location or sequentially at different locations. Likewise, we find that prolonged training at threshold determines the degree of transfer in single training of a peripheral orientation discrimination task. Together, these data show that retinotopic specificity depends heavily on the particulars of the training procedure. We suggest that perceptual learning can arise from decision rules, attention learning, or representational changes, and small differences in the training approach can emphasize some of these over the others.
|
25
|
Multisensory perceptual learning and sensory substitution. Neurosci Biobehav Rev 2014; 41:16-25. [DOI: 10.1016/j.neubiorev.2012.11.017] [Citation(s) in RCA: 73] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2012] [Revised: 11/19/2012] [Accepted: 11/28/2012] [Indexed: 11/23/2022]
|
26
|
Deveau J, Lovcik G, Seitz AR. Broad-based visual benefits from training with an integrated perceptual-learning video game. Vision Res 2014; 99:134-40. [PMID: 24406157 DOI: 10.1016/j.visres.2013.12.015] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2013] [Revised: 12/23/2013] [Accepted: 12/24/2013] [Indexed: 12/28/2022]
Abstract
Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke, or aging can have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude, and generality of learning into a perceptual-learning based video game. Our results demonstrate broad-based benefits to vision in a healthy adult population. Transfer from the game includes improvements in acuity (measured with self-paced standard eye charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. This custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits found from video-game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning, and it has great potential both as a scientific tool and as a therapy to help improve vision.
Affiliation(s)
- Jenni Deveau
- Department of Psychology, University of California - Riverside, Riverside, CA, USA
- Gary Lovcik
- Anaheim Hills Optometry Center, Anaheim, CA, USA
- Aaron R Seitz
- Department of Psychology, University of California - Riverside, Riverside, CA, USA
|
27
|
Deleterious effects of roving on learned tasks. Vision Res 2013; 99:88-92. [PMID: 24384405 DOI: 10.1016/j.visres.2013.12.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Revised: 11/08/2013] [Accepted: 12/19/2013] [Indexed: 11/21/2022]
Abstract
In typical perceptual learning experiments, one stimulus type (e.g., a bisection stimulus offset either to the left or right) is presented per trial. In roving, two different stimulus types (e.g., a 30' and a 20' wide bisection stimulus) are randomly interleaved from trial to trial. Roving can impair both perceptual learning and task sensitivity. Here, we investigate the relationship between the two. Using a bisection task, we found no effect of roving before training. We next trained subjects and they improved. A roving condition applied after training impaired sensitivity.
|
28
|
Wang F, Zhong X, Huang J, Ding Y, Song Y. Comparison of perceptual learning of real and virtual line orientations: An event-related potential study. Vision Res 2013; 93:1-9. [PMID: 24139921 DOI: 10.1016/j.visres.2013.10.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2013] [Revised: 07/09/2013] [Accepted: 10/04/2013] [Indexed: 11/18/2022]
Affiliation(s)
- Fang Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China
|
29
|
Cohen Y, Daikhin L, Ahissar M. Perceptual learning is specific to the trained structure of information. J Cogn Neurosci 2013; 25:2047-60. [PMID: 23915051 DOI: 10.1162/jocn_a_00453] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
What do we learn when we practice a simple perceptual task? Many studies have suggested that we learn to refine or better select the sensory representations of the task-relevant dimension. Here we show that learning is specific to the trained structural regularities. Specifically, when this structure is modified after training with a fixed temporal structure, performance regresses to pretraining levels, even when the trained stimuli and task are retained. This specificity raises key questions as to the importance of low-level sensory modifications in the learning process. We trained two groups of participants on a two-tone frequency discrimination task for several days. In one group, a fixed reference tone was consistently presented in the first interval (the second tone was higher or lower), and in the other group the same reference tone was consistently presented in the second interval. When, following training, these temporal protocols were switched between groups, performance of both groups regressed to pretraining levels, and further training was needed to attain postlearning performance. ERP measures, taken before and after training, indicated that participants implicitly learned the temporal regularity of the protocol and formed an attentional template that matched the trained structure of information. These results are consistent with Reverse Hierarchy Theory, which posits that even the learning of simple perceptual tasks progresses in a top-down manner, hence can benefit from temporal regularities at the trial level, albeit at the potential cost that learning may be specific to these regularities.
|
30
|
Abstract
Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system.
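The reweighting account summarized in this abstract can be illustrated with a toy simulation. This is a sketch, not the authors' fitted model: the two-location setup, Gaussian channel tuning, tanh decision rule, and all parameter values below are illustrative assumptions. A delta rule trains weights on location-specific and location-invariant orientation channels; after training at one location, the shared invariant weights alone support above-chance discrimination at the untrained location, which is the transfer mechanism the theory proposes.

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.arange(0, 180, 15.0)  # preferred orientations (deg) of 12 channels

def channel_responses(theta, bandwidth=20.0):
    """Gaussian orientation tuning; orientation wraps at 180 deg."""
    d = np.abs(theta - prefs)
    d = np.minimum(d, 180.0 - d)
    return np.exp(-d**2 / (2.0 * bandwidth**2))

def train(location, n_trials=2000, lr=0.05):
    """Delta-rule reweighting of location-specific and location-invariant channels."""
    w_spec = {"left": np.zeros(prefs.size), "right": np.zeros(prefs.size)}
    w_inv = np.zeros(prefs.size)  # shared across retinal locations
    for _ in range(n_trials):
        label = rng.choice((-1, 1))           # -1: 35 deg stimulus, +1: 55 deg
        r = channel_responses(45 + 10 * label)
        out = np.tanh(w_spec[location] @ r + w_inv @ r)
        err = label - out
        w_spec[location] += lr * err * r      # plastic only at the trained location
        w_inv += lr * err * r                 # plastic too, and carries the transfer
    return w_spec, w_inv

def accuracy(w_spec, w_inv, location, n=500, noise=0.5):
    """Proportion correct; each representation gets independent internal noise."""
    correct = 0
    for _ in range(n):
        label = rng.choice((-1, 1))
        r = channel_responses(45 + 10 * label)
        r_spec = r + rng.normal(0.0, noise, r.shape)
        r_inv = r + rng.normal(0.0, noise, r.shape)
        dec = w_spec[location] @ r_spec + w_inv @ r_inv
        correct += int(np.sign(dec) == label)
    return correct / n

w_spec, w_inv = train("left")
acc_trained = accuracy(w_spec, w_inv, "left")    # specific + invariant weights
acc_transfer = accuracy(w_spec, w_inv, "right")  # only the invariant weights
print(f"trained: {acc_trained:.2f}, transfer: {acc_transfer:.2f}")
```

At the untrained location the location-specific weights are still zero, so any above-chance performance there is carried entirely by the invariant weights, mirroring how the integrated reweighting theory explains location transfer.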
|
31
|
Sohn H, Lee SH. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course. J Neurophysiol 2013; 109:344-62. [DOI: 10.1152/jn.01201.2011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing.
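The Bayesian observer described in this abstract can be sketched with the standard Gaussian conjugate update, in which the posterior mean is a precision-weighted mix of the prior mean and the noisy measurement. This is a minimal illustration, not the paper's fitted model; the interval values and widths below are made-up numbers. It shows the paper's key point: shrinking the prior width reduces posterior spread (better precision) while pulling estimates toward the prior mean.

```python
import numpy as np

def bayes_interval_estimate(t_measured, mu_prior, sigma_prior, sigma_meas):
    """Posterior mean and sd for a Gaussian prior times a Gaussian likelihood."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)  # weight on measurement
    mu_post = w * t_measured + (1 - w) * mu_prior
    sigma_post = np.sqrt((sigma_prior**2 * sigma_meas**2) /
                         (sigma_prior**2 + sigma_meas**2))
    return mu_post, sigma_post

# Early in training: broad prior, so estimates track the noisy measurement.
mu_e, sd_e = bayes_interval_estimate(t_measured=900, mu_prior=700,
                                     sigma_prior=200, sigma_meas=100)
# Late in training: the prior has narrowed, so estimates are biased toward
# the prior mean but the posterior spread (timing precision) improves.
mu_l, sd_l = bayes_interval_estimate(t_measured=900, mu_prior=700,
                                     sigma_prior=50, sigma_meas=100)
print(mu_e, sd_e, mu_l, sd_l)
```

With these illustrative numbers, the early estimate sits near the measurement (about 860 ms) while the late estimate is pulled toward the prior (about 740 ms) with a smaller posterior sd, matching the paper's conclusion that a gradually narrowing prior drives the slow improvement in precision.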
Affiliation(s)
- Hansem Sohn
- Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea
- Sang-Hun Lee
- Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea; Department of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea
|
32
|
Censor N, Sagi D, Cohen LG. Common mechanisms of human perceptual and motor learning. Nat Rev Neurosci 2012; 13:658-64. [PMID: 22903222 DOI: 10.1038/nrn3315] [Citation(s) in RCA: 126] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
The adult mammalian brain has a remarkable capacity to learn in both the perceptual and motor domains through the formation and consolidation of memories. Such practice-enabled procedural learning results in perceptual and motor skill improvements. Here, we examine evidence supporting the notion that perceptual and motor learning in humans exhibit analogous properties, including similarities in temporal dynamics and the interactions between primary cortical and higher-order brain areas. These similarities may point to the existence of a common general mechanism for learning in humans.
Affiliation(s)
- Nitzan Censor
- Human Cortical Physiology and Stroke Neurorehabilitation Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892, USA.
33.
Abstract
While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on 'action video games' produces learning that transfers well beyond the training task. This has generated substantial interest in applications such as rehabilitation, for instance after stroke or to treat amblyopia, and training for precision-demanding jobs, such as endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step toward the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial); rather, the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning.
Affiliation(s)
- C S Green
- Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA.
34.
Rademaker RL, Pearson J. Training Visual Imagery: Improvements of Metacognition, but not Imagery Strength. Front Psychol 2012; 3:224. [PMID: 22787452] [PMCID: PMC3392841] [DOI: 10.3389/fpsyg.2012.00224]
Abstract
Visual imagery has been closely linked to brain mechanisms involved in perception. Can visual imagery, like visual perception, improve by means of training? Previous research has demonstrated that people can reliably evaluate the vividness of single episodes of imagination - might the metacognition of imagery also improve over the course of training? We had participants imagine colored Gabor patterns for an hour a day over the course of five consecutive days, and again 2 weeks after training. Participants rated the subjective vividness and effort of their mental imagery on each trial. The influence of imagery on subsequent binocular rivalry dominance was taken as our measure of imagery strength. We found no overall effect of training on imagery strength. Training did, however, improve participants' metacognition of imagery: trial-by-trial ratings of vividness gained predictive power over subsequent rivalry dominance as a function of training. These data suggest that, while imagery strength might be immune to training in the current context, people's metacognitive understanding of mental imagery can improve with practice.
Affiliation(s)
- Rosanne L Rademaker
- Cognitive Neuroscience Department, Maastricht University, Maastricht, Netherlands
35.
Pollmann S. Anterior prefrontal contributions to implicit attention control. Brain Sci 2012; 2:254-66. [PMID: 24962775] [PMCID: PMC4061792] [DOI: 10.3390/brainsci2020254]
Abstract
Prefrontal cortex function has traditionally been associated with explicit executive function. Recently, however, evidence has been presented that lateral prefrontal cortex is also involved in high-level cognitive processes such as task set selection or inhibition in the absence of awareness. Here, we discuss evidence that not only lateral prefrontal cortex, but also rostral prefrontal cortex is involved in such kinds of implicit control processes. Specifically, rostral prefrontal cortex activation changes have been observed when implicitly learned spatial contingencies in a search display become invalid, requiring a change of attentional settings for optimal guidance of visual search.
Affiliation(s)
- Stefan Pollmann
- Experimental Psychology Lab, Institute of Psychology II, Otto-von-Guericke-University, Postbox 4120, D-39016 Magdeburg, Germany.
36.
Herzog MH, Aberg KC, Frémaux N, Gerstner W, Sprekeler H. Perceptual learning, roving and the unsupervised bias. Vision Res 2012; 61:95-9. [DOI: 10.1016/j.visres.2011.11.001]
37.
Sasaki Y, Náñez JE, Watanabe T. Recent progress in perceptual learning research. Wiley Interdiscip Rev Cogn Sci 2012; 3:293-299. [PMID: 24179564] [DOI: 10.1002/wcs.1175]
Abstract
Perceptual learning is defined as long-term improvement in perceptual or sensory systems resulting from repeated practice or experience. As the number of perceptual learning studies has increased, controversies and questions have arisen regarding divergent aspects of perceptual learning, including: (1) the stages at which perceptual learning occurs, (2) effects of training type, (3) changes in neural processing during the time course of learning, (4) effects of feedback as to the correctness of a subject's responses, and (5) double training. Here we review each of these aspects and suggest fruitful directions for future perceptual learning research.
Affiliation(s)
- Yuka Sasaki
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Massachusetts General Hospital, Charlestown, MA, USA
- José E Náñez
- Division of Social and Behavioral Sciences, New College of Interdisciplinary Arts and Sciences, Arizona State University, Glendale, AZ, USA
- Takeo Watanabe
- Department of Psychology and Center for Neuroscience, Boston University, Boston, MA, USA
38.
Perry C, Felsen G. Rats can make relative perceptual judgments about sequential stimuli. Anim Cogn 2012; 15:473-81. [PMID: 22350084] [DOI: 10.1007/s10071-012-0471-4]
Abstract
In their natural environment, animals often make decisions based on abstract relationships among multiple stimulus representations. Humans and other primates can determine not only whether a sensory stimulus differs from a remembered sensory representation, but also how they differ along a particular dimension. However, much remains unknown about how such relative comparisons are made, and which species share this capacity, in part because most studies of sensory-guided decision making have utilized instrumental tasks in which choices are based on very simple stimulus-response associations. Here, we used a two-stimulus-interval discrimination task to test whether rats could determine how two sequentially presented stimuli were related along the dimension of odor quality (i.e., what the stimulus smells like). At a central port, rats sampled and compared two odor mixtures that consisted of spearmint and caraway in different ratios, separated by a 2-4-s interval, and then entered the left or right reward port. Water was delivered at the left if the first mixture consisted of more spearmint than the second did, and at the right otherwise. We found that the difference in mixture ratio predicted choice accuracy. Control experiments suggest that rats were indeed basing their choices on a comparison of odor quality across mixtures and were not using associative strategies. This study is the first demonstration of the use of a sequential "more than versus less than" rule in rats and provides a well-controlled paradigm for studying abstract comparisons in a rodent model system.
Affiliation(s)
- Clint Perry
- Department of Physiology and Biophysics, University of Colorado School of Medicine, 12800 E. 19th Ave., Aurora, CO 80045, USA.
39.
Aberg KC, Herzog MH. About similar characteristics of visual perceptual learning and LTP. Vision Res 2012; 61:100-6. [PMID: 22289647] [DOI: 10.1016/j.visres.2011.12.013]
Abstract
Perceptual learning is an implicit form of learning which induces long-lasting perceptual enhancements. Perceptual learning shows intriguing characteristics. For example, a minimal number of trials per session is needed for learning and the interleaved presentation of more than one stimulus type can hinder learning. Here, we show that these and other characteristics of perceptual learning are very similar to characteristics of long-term potentiation (LTP), the basic mechanism of memory formation. We outline these characteristics and discuss results of electrophysiological experiments which indirectly link LTP and perceptual learning.
Affiliation(s)
- Kristoffer C Aberg
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland.
40.
Banai K, Amitay S. Stimulus uncertainty in auditory perceptual learning. Vision Res 2012; 61:83-8. [PMID: 22289646] [DOI: 10.1016/j.visres.2012.01.009]
Abstract
Stimulus uncertainty, produced by variations in a target stimulus to be detected or discriminated, impedes perceptual learning under some, but not all, experimental conditions. To account for those discrepancies, it has been proposed that uncertainty is detrimental to learning when the interleaved stimuli or tasks are similar to each other but not when they are sufficiently distinct, or when it obstructs the downstream search required to gain access to fine-grained sensory information, as suggested by the Reverse Hierarchy Theory (RHT). The focus of the current review is on the effects of uncertainty on the perceptual learning of speech and non-speech auditory signals. Taken together, the findings from the auditory modality suggest that, in addition to the accounts already described, uncertainty may contribute to learning when categorization of stimuli into phonological or acoustic categories is involved. Therefore, it appears that the differences reported between the learning of non-speech and speech-related parameters are not an outcome of inherent differences between those two domains, but rather due to the nature of the tasks often associated with those different stimuli.
41.
Versatile perceptual learning of textures after variable exposures. Vision Res 2012; 61:89-94. [PMID: 22266193] [DOI: 10.1016/j.visres.2012.01.005]
Abstract
Perceptual learning of 10-AFC texture identification is stimulus specific: after practice, identification accuracy drops substantially when textures are rotated 180°, reversed in contrast polarity, or when a novel set of textures is presented. Here we asked if perceptual learning occurs without any repetition of items during training, and whether exposure to greater stimulus variation during training influences transfer of learning. We trained three groups of subjects in a 10-AFC texture identification task on 2 days. The Standard group viewed a fixed set of 10 textures throughout training. The Variable group viewed 840 novel sets of textures. The Switch group viewed different fixed sets of 10 textures on Days 1 and 2. In all groups, transfer of learning was tested by using fixed sets of textures on Days 3 and 4 and having half of the subjects from each group switch to a novel set on Day 4. During training, the most learning was obtained by the Standard group, and gradual but significant learning was obtained by the other two groups. On Day 4, performance of the Standard group was adversely affected by a switch to novel textures, whereas performance of the Variable and Switch groups remained intact. Hence, slight but significant learning occurred without repetition of items during training, and stimulus specificity was influenced significantly by the type of training. Increasing stimulus variability by reducing the number of times stimuli are repeated during practice may cause subjects to adopt strategies that increase generalization of learning to new stimuli. Alternatively, presenting new stimuli on each trial may prevent subjects from adopting strategies that result in stimulus specific learning.
42.
Abstract
Perceptual skills improve with daily practice (Fahle and Poggio, 2002; Fine and Jacobs, 2002). Practice induces plasticity in task-relevant brain regions during an "offline" consolidation period thought to last several hours, during which initially fragile memory traces become stable (Karni, 1996; Dudai, 2004). Impaired retention of a task if followed by training in another task is considered evidence for the instability of memory traces during consolidation (Dudai, 2004). However, it remains unknown when after training memory traces become stable and resistant against interference, where in the brain the neuronal mechanisms responsible for interference are localized, and how these mechanisms produce interference. Here, we show in human participants strong interference between two visual skill-learning tasks for surprisingly long time intervals between training periods (up to 24 h). Interference occurred during asymptotic learning, but only when stimuli were similar between tasks. This supports a strong contribution to interference of low-level visual cortical areas (Karni and Bertini, 1997; Ahissar and Hochstein, 2004), where similar stimuli recruit overlapping neuronal populations. Our finding of stimulus-dependent and time-independent interference reveals a fundamental limit in cortical plasticity that constrains the simultaneous representation of multiple skills in a single neuronal population, rather than a time-limited consolidation process.
43.
Li S, Mayhew SD, Kourtzi Z. Learning shapes spatiotemporal brain patterns for flexible categorical decisions. Cereb Cortex 2011; 22:2322-35. [PMID: 22079922] [DOI: 10.1093/cercor/bhr309]
Abstract
Learning is thought to facilitate our ability to perform complex perceptual tasks and optimize brain circuits involved in decision making. However, little is known about the experience-dependent mechanisms in the human brain that support our ability to make fine categorical judgments. Previous work has focused on identifying spatial brain patterns (i.e., areas) that change with learning. Here, we take advantage of the complementary high spatial and temporal resolution of simultaneous electroencephalography-functional magnetic resonance imaging (EEG-fMRI) to identify the spatiotemporal dynamics between cortical networks involved in flexible category learning. Observers were trained to use different decision criteria (i.e., category boundaries) when making fine categorical judgments on morphed stimuli (i.e., radial vs. concentric patterns). Our findings demonstrate that learning acts on a feedback-based circuit that supports fine categorical judgments. Experience-dependent changes in the behavioral decision criterion were associated with changes in later perceptual processes engaging higher occipitotemporal and frontoparietal circuits. In contrast, category learning did not modulate early processes in a medial frontotemporal network that are thought to support the coarse interpretation of visual scenes. These findings provide evidence that learning flexible criteria for fine categorical judgments acts on distinct spatiotemporal brain circuits and shapes the readout of sensory signals that provide evidence for categorical decisions.
Affiliation(s)
- Sheng Li
- Department of Psychology, Peking University, Beijing 100871, China
44.
Hung SC, Seitz AR. Retrograde interference in perceptual learning of a peripheral hyperacuity task. PLoS One 2011; 6:e24556. [PMID: 21931753] [PMCID: PMC3170339] [DOI: 10.1371/journal.pone.0024556]
Abstract
Consolidation, a process that stabilizes a memory trace after initial acquisition, has been studied for over a century. A number of studies have shown that a skill or memory must be consolidated after acquisition so that it becomes resistant to interference from new information. Previous research found that training on a peripheral 3-dot hyperacuity task could retrogradely interfere with earlier training on the same task with a mirrored stimulus configuration. However, a recent study failed to replicate this finding. Here we address the controversy by replicating both patterns of results under different experimental settings. We find that retrograde interference occurs when eye movements are tightly controlled using a gaze-contingent display, in which the peripheral stimuli were presented only while subjects maintained fixation. In contrast, no retrograde interference was found in a group of subjects who performed the task without this fixation control. Our results provide a plausible explanation of why divergent results were found for retrograde interference in perceptual learning on the 3-dot hyperacuity task and confirm that retrograde interference can occur in this type of low-level perceptual learning. Furthermore, our results demonstrate the importance of eye-movement controls in studies of perceptual learning in the peripheral visual field.
Affiliation(s)
- Shao-Chin Hung
- Department of Psychology, University of California Riverside, Riverside, California, United States of America
- Aaron R. Seitz
- Department of Psychology, University of California Riverside, Riverside, California, United States of America
46.
Sotiropoulos G, Seitz AR, Seriès P. Perceptual learning in visual hyperacuity: A reweighting model. Vision Res 2011; 51:585-99. [DOI: 10.1016/j.visres.2011.02.004]
47.
Aberg KC, Herzog MH. Does perceptual learning suffer from retrograde interference? PLoS One 2010; 5:e14161. [PMID: 21151868] [PMCID: PMC2998421] [DOI: 10.1371/journal.pone.0014161]
Abstract
In motor learning, training on a task B can disrupt improvements in performance of a previously learned task A, indicating that learning needs consolidation. An influential study suggested that this is also the case for visual perceptual learning [1]. Using the same paradigm, we failed to reproduce these results. Further experiments with bisection stimuli also showed no retrograde disruption from task B on task A. Hence, for the tasks tested here, perceptual learning does not suffer from retrograde interference.
Affiliation(s)
- Kristoffer C Aberg
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
48.
Perceptual learning in Vision Research. Vision Res 2010; 51:1552-66. [PMID: 20974167] [DOI: 10.1016/j.visres.2010.10.019]
Abstract
Reports published in Vision Research during the late years of the 20th century described surprising effects of long-term sensitivity improvement with some basic visual tasks as a result of training. These improvements, found in adult human observers, were highly specific to simple visual features, such as location in the visual field, spatial-frequency, local and global orientation, and in some cases even the eye of origin. The results were interpreted as arising from the plasticity of sensory brain regions that display those features of specificity within their constituting neuronal subpopulations. A new view of the visual cortex has emerged, according to which a degree of plasticity is retained at adult age, allowing flexibility in acquiring new visual skills when the need arises. Although this "sensory plasticity" interpretation is often questioned, it is commonly believed that learning has access to detailed low-level visual representations residing within the visual cortex. More recent studies during the last decade revealed the conditions needed for learning and the conditions under which learning can be generalized across stimuli and tasks. The results are consistent with an account of perceptual learning according to which visual processing is remodeled by the brain, utilizing sensory information acquired during task performance. The stability of the visual system is viewed as an adaptation to a stable environment and instances of perceptual learning as a reaction of the brain to abrupt changes in the environment. Training on a restricted stimulus set may lead to perceptual overfitting and over-specificity. The systematic methodology developed for perceptual learning, and the accumulated knowledge, allows us to explore issues related to learning and memory in general, such as learning rules, reinforcement, memory consolidation, and neural rehabilitation. A persistent open question is the neuro-anatomical substrate underlying these learning effects.
49.
Yao H, Lu H, Wang W. Visual neuroscience research in China. Sci China Life Sci 2010; 53:363-373. [DOI: 10.1007/s11427-010-0071-y]
50.
Hamid OH, Wendemuth A, Braun J. Temporal context and conditional associative learning. BMC Neurosci 2010; 11:45. [PMID: 20353575] [PMCID: PMC2873591] [DOI: 10.1186/1471-2202-11-45]
Abstract
BACKGROUND: We investigated how temporal context affects the learning of arbitrary visuo-motor associations. Human observers viewed highly distinguishable, fractal objects and learned to choose for each object the one motor response (of four) that was rewarded. Some objects were consistently preceded by specific other objects, while other objects lacked this task-irrelevant but predictive context. RESULTS: The results of five experiments showed that predictive context consistently and significantly accelerated associative learning. A simple model of reinforcement learning, in which three successive objects informed response selection, reproduced our behavioral results. CONCLUSIONS: Our results imply that not just the representation of a current event, but also the representations of past events, are reinforced during conditional associative learning. In addition, these findings are broadly consistent with the prediction of attractor network models of associative learning and their prophecy of a persistent representation of past objects.
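The reinforcement-learning model is described only at a high level; a minimal tabular sketch of the core idea — letting a short window of recent objects, not just the current one, define the state whose response values are reinforced — might look like the following (object count, learning rate, and exploration rate are all hypothetical, not the study's parameters):

```python
import random

def train(n_trials, context_len, seed=0):
    """Learn object -> response associations by reward-driven value
    updates, with the state defined as the current object plus the
    last `context_len` preceding objects."""
    rng = random.Random(seed)
    objects = list(range(8))                # hypothetical object set
    correct = {o: o % 4 for o in objects}   # hypothetical rewarded responses
    q, history, hits = {}, [], 0
    for _ in range(n_trials):
        obj = rng.choice(objects)
        state = (obj, *history[-context_len:]) if context_len else (obj,)
        values = q.setdefault(state, [0.0] * 4)
        # epsilon-greedy choice among the four motor responses
        if rng.random() < 0.1:
            resp = rng.randrange(4)
        else:
            resp = max(range(4), key=values.__getitem__)
        reward = 1.0 if resp == correct[obj] else 0.0
        values[resp] += 0.3 * (reward - values[resp])  # value update
        hits += reward
        history.append(obj)
    return hits / n_trials  # fraction of rewarded trials
```

Because the context-augmented state space grows multiplicatively, such a learner benefits from past objects only when they are actually predictive of the current one, which is the regime the study manipulates; the sketch just illustrates how representations of past events can enter what is reinforced.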
Affiliation(s)
- Oussama H Hamid
- Department of Cognitive Biology, Institute of Biology, Otto-von-Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany.