1
Long-term training reduces the responses to the sound-induced flash illusion. Atten Percept Psychophys 2021; 84:529-539. PMID: 34518970. DOI: 10.3758/s13414-021-02363-5.
Abstract
The sound-induced flash illusion (SiFI) is a robust, auditory-dominated multisensory phenomenon that is widely used as a reliable index of multisensory integration. Previous studies have indicated that the SiFI effect is correlated with perceptual sensitivity; however, there is no consensus on how the illusion changes with sensitivity under long-term training. The present study adopted the classic SiFI paradigm with feedback training to investigate the effect of a week of training on the SiFI. Both the training group and the control group completed a pretest and a posttest, but only the training group completed the intervening 7-day behavioral training. The results showed that (1) long-term training reduced responses to the fission and fusion illusions by improving perceptual sensitivity, and (2) a "plateau effect" emerged during the training stage, with performance tending to stabilize by the fifth day. These findings demonstrate that the SiFI can be modified by long-term training through improved perceptual sensitivity, especially for the fission illusion. The study thus extends perceptual training research to the SiFI domain and provides evidence that the SiFI could be used to assess, and intervene to improve, the efficiency of multisensory integration.
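The "perceptual sensitivity" the abstract refers to is conventionally quantified with the signal-detection measure d'. As a hedged illustration (a standard computation, not code from the paper; the trial counts below are made up), a sensitivity estimate of this kind is typically obtained as follows:

```python
# Minimal sketch (not from the paper): the standard signal-detection estimate of
# perceptual sensitivity (d'), often used to quantify illusion susceptibility in
# SiFI studies. Variable names and counts are illustrative assumptions.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' with a log-linear correction to avoid infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: one-flash trials reported correctly vs. fission-illusion trials
# (two beeps + one flash reported as two flashes).
print(d_prime(hits=70, misses=10, false_alarms=25, correct_rejections=55))
```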
2
Herwig A, Weiß K, Schneider WX. Feature prediction across eye movements is location specific and based on retinotopic coordinates. J Vis 2019; 18:13. PMID: 30372762. DOI: 10.1167/18.8.13.
Abstract
With each saccadic eye movement, internal object representations change their retinal position and spatial resolution. Recently, we suggested that the visual system deals with these saccade-induced changes by predicting visual features across saccades based on transsaccadic associations of peripheral and foveal input (Herwig & Schneider, 2014). Here we tested the specificity of feature prediction by asking (a) whether it is spatially restricted to the previous learning location or the saccade target location, and (b) whether it is based on retinotopic (eye-centered) or spatiotopic (world-centered) coordinates. In a preceding acquisition phase, objects systematically changed their spatial frequency during saccades. In the following test phases of two experiments, participants had to judge the frequency of briefly presented peripheral objects. These objects were presented either at the previous learning location or at new locations and were either the target of a saccadic eye movement or not (Experiment 1). Moreover, objects were presented either in the same or different retinotopic and spatiotopic coordinates (Experiment 2). Spatial frequency perception was biased toward previously associated foveal input indicating transsaccadic learning and feature prediction. Importantly, while this pattern was not bound to the saccade target location, it was seen only at the previous learning location in retinotopic coordinates, suggesting that feature prediction probably affects low- or mid-level perception.
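A minimal sketch of the coordinate-frame distinction the abstract turns on may help: a spatiotopic (world-centered) location maps onto a different retinotopic (eye-centered) location after each saccade. The names and values below are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (assumed names and values, not the authors' code): the standard
# relationship between world-centered (spatiotopic) and eye-centered (retinotopic)
# coordinates that the coordinate-frame manipulation in Experiment 2 relies on.
import numpy as np

def to_retinotopic(spatiotopic_xy, gaze_xy):
    """Eye-centered position = world-centered position minus current gaze position
    (positions in degrees of visual angle)."""
    return np.asarray(spatiotopic_xy, dtype=float) - np.asarray(gaze_xy, dtype=float)

object_on_screen = [8.0, 0.0]          # fixed spatiotopic location
gaze_before_saccade = [0.0, 0.0]
gaze_after_saccade = [6.0, 0.0]        # after a 6-degree rightward saccade

# The same screen location maps onto different retinotopic locations across the
# saccade, so a retinotopically bound learning effect does not follow the object.
print(to_retinotopic(object_on_screen, gaze_before_saccade))   # [8. 0.]
print(to_retinotopic(object_on_screen, gaze_after_saccade))    # [2. 0.]
```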
Affiliation(s)
- Arvid Herwig
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- Katharina Weiß
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- Werner X Schneider
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
3
Disentangling locus of perceptual learning in the visual hierarchy of motion processing. Sci Rep 2019; 9:1557. PMID: 30733535. PMCID: PMC6367332. DOI: 10.1038/s41598-018-37892-x.
Abstract
Visual perceptual learning (VPL) can lead to long-lasting perceptual improvements. One of the central topics in VPL studies is the locus of plasticity in the visual processing hierarchy. Here, we tackled this question in the context of motion processing. We took advantage of an established transition from component-dependent representations at the earliest level to pattern-dependent representations at the middle level of cortical motion processing. Two groups of participants were trained on the same motion direction identification task using either grating or plaid stimuli. A set of pre- and post-training tests was used to determine the degree of learning specificity and generalizability. This approach allowed us to disentangle the contributions of different processing stages to the behavioral improvements. We observed complete bidirectional transfer of learning between component and pattern stimuli moving in the same directions, indicating learning-induced plasticity at intermediate levels of motion processing. Moreover, we found that motion VPL is specific to the trained stimulus direction, speed, size, and contrast, arguing against non-sensory, decision-level enhancements. Taken together, these results indicate that, at least for the stimuli and task used here, motion VPL most likely alters visual computations at the middle stage of motion processing.
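For readers unfamiliar with the component/pattern distinction, the sketch below illustrates the standard intersection-of-constraints computation by which a plaid's pattern motion follows from its two component gratings; it is a generic illustration, not the study's stimulus code.

```python
# Minimal sketch (illustrative assumption, not the study's code): the
# intersection-of-constraints solution for the pattern motion of a plaid made of
# two component gratings, each defined by its drift direction and speed.
import numpy as np

def plaid_pattern_velocity(dir1_deg, speed1, dir2_deg, speed2):
    """Each grating constrains the pattern velocity v via v . n_i = s_i,
    where n_i is the unit vector along that grating's drift direction."""
    n = np.array([[np.cos(np.radians(dir1_deg)), np.sin(np.radians(dir1_deg))],
                  [np.cos(np.radians(dir2_deg)), np.sin(np.radians(dir2_deg))]])
    s = np.array([speed1, speed2])
    return np.linalg.solve(n, s)            # 2D pattern velocity (vx, vy)

# Two components drifting 60 degrees apart at equal speed yield pattern motion
# along the bisecting direction, faster than either component:
v = plaid_pattern_velocity(30, 2.0, 90, 2.0)
print(v, np.degrees(np.arctan2(v[1], v[0])))   # pattern direction ~60 degrees
```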
4
Liu LD, Pack CC. The Contribution of Area MT to Visual Motion Perception Depends on Training. Neuron 2017; 95:436-446.e3. PMID: 28689980. DOI: 10.1016/j.neuron.2017.06.024.
Abstract
Perceptual decisions require the transformation of raw sensory inputs into cortical representations suitable for stimulus discrimination. One of the best-known examples of this transformation involves the middle temporal area (MT) of the primate visual cortex. Area MT provides a robust representation of stimulus motion, and previous work has shown that it contributes causally to performance on motion discrimination tasks. Here we report that the strength of this contribution can be highly plastic: depending on the recent training history, pharmacological inactivation of MT can severely impair motion discrimination, or it can have little detectable influence. Further analysis of neural and behavioral data suggests that training moves the readout of motion information between MT and lower-level cortical areas. These results show that the contribution of individual brain regions to conscious perception can shift flexibly depending on sensory experience.
Affiliation(s)
- Liu D Liu
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Christopher C Pack
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada.
5
Bays BC, Visscher KM, Le Dantec CC, Seitz AR. Alpha-band EEG activity in perceptual learning. J Vis 2015; 15:7. PMID: 26370167. DOI: 10.1167/15.10.7.
Abstract
In studies of perceptual learning (PL), subjects are typically highly trained across many sessions to achieve perceptual benefits on the stimuli in those tasks. There is currently significant debate regarding what sources of brain plasticity underlie these PL-based learning improvements. Here we investigate the hypothesis that PL, among other mechanisms, leads to task automaticity, especially in the presence of the trained stimuli. To investigate this hypothesis, we trained participants for eight sessions to find an oriented target in a field of near-oriented distractors and examined alpha-band activity, which modulates with attention to visual stimuli, as a possible measure of automaticity. Alpha-band activity was acquired via electroencephalogram (EEG), before and after training, as participants performed the task with trained and untrained stimuli. Results show that participants underwent significant learning in this task (as assessed by threshold, accuracy, and reaction time improvements) and that alpha power increased during the pre-stimulus period and then underwent greater desynchronization at the time of stimulus presentation following training. However, these changes in alpha-band activity were not specific to the trained stimuli, with similar patterns of posttraining alpha power for trained and untrained stimuli. These data are consistent with the view that participants were more efficient at focusing resources at the time of stimulus presentation and are consistent with a greater automaticity of task performance. These findings have implications for PL, as transfer effects from trained to untrained stimuli may partially depend on differential effort of the individual at the time of stimulus processing.
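As a hedged illustration of the dependent measure, the sketch below shows a conventional way to compute alpha-band power (assumed here to be 8-12 Hz) and event-related desynchronization (ERD%); it is not the authors' analysis pipeline, and all variable names and parameters are assumptions.

```python
# Minimal sketch under stated assumptions (8-12 Hz band, generic names; not the
# authors' pipeline): band power via Welch's method and the conventional
# event-related desynchronization (ERD%) measure.
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Mean power spectral density of a 1-D EEG segment within the alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(prestim_segment, poststim_segment, fs):
    """ERD% = (post - pre) / pre * 100; negative values mean desynchronization."""
    pre = alpha_power(prestim_segment, fs)
    post = alpha_power(poststim_segment, fs)
    return (post - pre) / pre * 100.0

# Example with synthetic data: a 10 Hz oscillation that weakens after stimulus onset.
fs = 500
t = np.arange(0, 1.0, 1 / fs)
pre = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
post = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
print(erd_percent(pre, post, fs))   # strongly negative -> alpha desynchronization
```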
6
Prolonged training at threshold promotes robust retinotopic specificity in perceptual learning. J Neurosci 2014; 34:8423-31. PMID: 24948798. DOI: 10.1523/jneurosci.0745-14.2014.
Abstract
Human perceptual learning is classically thought to be highly specific to the retinal location of the trained stimuli. Together with evidence that specific learning effects can produce corresponding changes in early visual cortex, this has led researchers to theorize that specificity implies regionalization of learning in the brain. However, other research suggests that specificity can arise from learning of the readout in decision areas or through top-down processes. Notably, recent research using a novel double-training paradigm reveals dramatic generalization of perceptual learning to untrained locations when multiple stimuli are trained. These data provoked significant controversy in the field and challenged extant models of perceptual learning. To resolve this controversy, we investigated the mechanisms that account for retinotopic specificity in perceptual learning. We replicated findings of transfer after double training; however, we show that prolonged training at threshold, which yields a greater number of difficult trials during training, preserves location specificity when double training occurs at the same location or sequentially at different locations. Likewise, we find that prolonged training at threshold determines the degree of transfer in single training of a peripheral orientation discrimination task. Together, these data show that retinotopic specificity depends strongly on the particulars of the training procedure. We suggest that perceptual learning can arise from decision rules, attention learning, or representational changes, and that small differences in the training approach can emphasize some of these over others.
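"Training at threshold" is typically maintained with an adaptive staircase that keeps most trials difficult. The sketch below shows one common such rule (3-down/1-up, converging near 79% correct) as an assumed illustration; the paper's actual procedure may differ.

```python
# Illustrative sketch (an assumption, not necessarily the procedure used in the paper):
# a 3-down/1-up adaptive staircase, a common way to keep training near threshold
# (~79% correct) so that most trials remain difficult.
def staircase_update(difficulty, correct, streak, step=0.05, floor=0.01):
    """Return updated (difficulty, streak). A smaller 'difficulty' value means a
    harder trial (e.g., a smaller orientation offset in degrees)."""
    if correct:
        streak += 1
        if streak == 3:                      # three correct in a row -> make harder
            difficulty = max(floor, difficulty - step)
            streak = 0
    else:                                    # any error -> make easier
        difficulty += step
        streak = 0
    return difficulty, streak

# Example: simulate a short run of responses.
difficulty, streak = 1.0, 0
for correct in [True, True, True, False, True, True, True, True, True, True]:
    difficulty, streak = staircase_update(difficulty, correct, streak)
print(difficulty)
```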
7
Abstract
Training or exposure to a visual feature leads to a long-term improvement in performance on visual tasks that employ this feature. Such performance improvements and the processes that govern them are called visual perceptual learning (VPL). As an ever greater volume of research accumulates in the field, we have reached a point where a unifying model of VPL should be sought. A new wave of research findings has exposed diverging results along three major directions in VPL: specificity versus generalization of VPL, lower versus higher brain locus of VPL, and task-relevant versus task-irrelevant VPL. In this review, we propose a new theoretical model that suggests the involvement of two different stages in VPL: a low-level, stimulus-driven stage, and a higher-level stage dominated by task demands. If experimentally verified, this model would not only constructively unify the current divergent results in the VPL field, but would also lead to a significantly better understanding of visual plasticity, which may, in turn, lead to interventions to ameliorate diseases affecting vision and other pathological or age-related visual and nonvisual declines.
Affiliation(s)
- Kazuhisa Shibata
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island
8
Abstract
In the past decade, there has been an increasing interest in the effects of rewards on visual perception. Exogenous rewards have been shown to increase visual sensitivity and to affect attentional selection. Human beings, however, also feel rewarded by the correct execution of a task. It has been proposed that this form of endogenous reward triggers reinforcement signals in the brain, making the sensory system more sensitive to stimuli that have been extensively and repeatedly paired with the rewarding experiences and modulating long-term cortical plasticity. Here, we report the striking observation that a well-known visual illusion, the tilt aftereffect, which is due to a form of short-term cortical plasticity, is immediately enhanced by a concurrent and independent target-recognition process. Our results show that endogenous rewards can alter visual experience with virtually no delay.
Affiliation(s)
- David Pascucci
- Center for Mind/Brain Sciences, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy
9
Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Netw 2013; 37:1-47. PMID: 23149242. DOI: 10.1016/j.neunet.2012.09.017.
10
Sasaki Y, Náñez JE, Watanabe T. Recent progress in perceptual learning research. Wiley Interdiscip Rev Cogn Sci 2012; 3:293-299. PMID: 24179564. DOI: 10.1002/wcs.1175.
Abstract
Perceptual learning is defined as long-term improvement in perceptual or sensory systems resulting from repeated practice or experience. As the number of perceptual learning studies has increased, controversies and questions have arisen regarding divergent aspects of perceptual learning, including: (1) stages in which perceptual learning occurs, (2) effects of training type, (3) changes in neural processing during the time course of learning, (4) effects of feedback as to correctness of a subject's responses, and (5) double training. Here we review each of these aspects and suggest fruitful directions for future perceptual learning research.
Affiliation(s)
- Yuka Sasaki
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Massachusetts General Hospital, Charlestown, MA, USA
- José E Náñez
- Division of Social and Behavioral Sciences, New College of Interdisciplinary Arts and Sciences, Arizona State University, Glendale, AZ, USA
- Takeo Watanabe
- Department of Psychology and Center for Neuroscience, Boston University, Boston, MA, USA
11
Stemme A, Deco G, Lang EW. Perceptual learning with perceptions. Cogn Neurodyn 2012; 5:31-43. PMID: 22379494. DOI: 10.1007/s11571-010-9134-9.
Abstract
In this work we present an approach to understanding the neuronal mechanisms underlying perceptual learning. Experimental results obtained with stimulus patterns of coherently moving dots are used to build a simple neuronal model. The design of the model is made transparent and the underlying behavioral assumptions are made explicit. The key aspect of the suggested neuronal model is the learning algorithm used: we evaluated an implementation of Hebbian learning and are thus able to provide a straightforward model capable of explaining the neuronal dynamics underlying perceptual learning. Moreover, the simulation results suggest a very simple explanation for "sub-threshold" learning (Watanabe et al., Nature 413:844-848, 2001) as well as for the relearning of motion discrimination after damage to primary visual cortex, as recently reported (Huxlin et al., J Neurosci 29:3981-3991, 2009), and they at least indicate that perceptual learning might only occur when accompanied by conscious percepts.
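As a generic illustration of the Hebbian rule class the model builds on (not the authors' exact equations or parameters), a rate-based Hebbian update with weight normalization looks like the sketch below.

```python
# Minimal sketch of a rate-based Hebbian update with multiplicative normalization
# (a generic illustration of the rule class, not the authors' model).
import numpy as np

def hebbian_step(w, pre_rates, post_rates, lr=0.01):
    """dw_ij proportional to post_i * pre_j; weights renormalized to bound growth."""
    w = w + lr * np.outer(post_rates, pre_rates)
    return w / np.linalg.norm(w, axis=1, keepdims=True)

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(2, 8))          # 2 decision units, 8 sensory inputs
pattern = np.array([1, 1, 1, 0, 0, 0, 0, 0.0])  # repeatedly experienced stimulus

for _ in range(200):
    pre = pattern + 0.1 * rng.standard_normal(8)
    post = np.maximum(w @ pre, 0.0)              # simple rectified readout
    w = hebbian_step(w, pre, post)

print(np.round(w, 2))   # weights onto the trained input channels have strengthened
```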
12
Shams L, Wozny DR, Kim R, Seitz A. Influences of multisensory experience on subsequent unisensory processing. Front Psychol 2011; 2:264. PMID: 22028697. PMCID: PMC3198541. DOI: 10.3389/fpsyg.2011.00264.
Abstract
Multisensory perception has been the focus of intense investigation in recent years. It is now well established that crossmodal interactions are ubiquitous in perceptual processing and endow the system with improved precision, accuracy, and processing speed. While these findings have shed much light on principles and mechanisms of perception, it is ultimately not very surprising that multiple sources of information provide performance benefits compared with a single source. Here, we argue that the more surprising recent findings are those showing that multisensory experience also influences subsequent unisensory processing. For example, exposure to auditory-visual stimuli can change the way that auditory or visual stimuli are subsequently processed, even in isolation. We review three sets of findings that represent three different types of learning, ranging from perceptual learning to sensory recalibration to associative learning. In all of these cases, exposure to multisensory stimuli profoundly influences subsequent unisensory processing. This diversity of phenomena suggests that continuous modification of unisensory representations by multisensory relationships may be a general learning strategy employed by the brain.
Affiliation(s)
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
14
Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making. Neural Netw 2011; 24:417-26. PMID: 21377327. DOI: 10.1016/j.neunet.2011.01.005.
Abstract
Optimal performance and physically plausible mechanisms for achieving it have been completely characterized for a general class of two-alternative, free response decision making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when the number of alternatives is greater than two and subjects are free to respond at any time, partly due to the fact that there is no generally applicable statistical test for deciding optimally in such cases. However, here, too, analytical approximations to optimality that are physically and psychologically plausible have been analyzed. These analyses leave open questions that have begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons' broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multi-hypothesis sequential probability ratio test.
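For context, the decision rule that the learned network is said to approximate, the multi-hypothesis sequential probability ratio test (MSPRT), accumulates evidence for each alternative and stops as soon as one posterior crosses a threshold. The sketch below illustrates it under simple assumed conditions (Gaussian observations, uniform priors); it is not the paper's network implementation.

```python
# Minimal sketch (illustrative assumptions: Gaussian observations, uniform priors;
# not the paper's reward-modulated network): the multi-hypothesis sequential
# probability ratio test (MSPRT) that the learned network approximates.
import numpy as np

def msprt(sample_stream, means, sigma=1.0, threshold=0.99):
    """Accumulate log-likelihoods for each hypothesis about the stream's mean and
    stop as soon as one posterior probability exceeds the threshold."""
    log_lik = np.zeros(len(means))
    for t, x in enumerate(sample_stream, start=1):
        log_lik += -0.5 * ((x - np.asarray(means)) / sigma) ** 2
        posterior = np.exp(log_lik - log_lik.max())
        posterior /= posterior.sum()
        if posterior.max() >= threshold:
            return int(posterior.argmax()), t      # chosen alternative, decision time
    return int(posterior.argmax()), len(sample_stream)

rng = np.random.default_rng(1)
stream = 0.5 + rng.standard_normal(1000)           # true mean matches hypothesis 1
print(msprt(stream, means=[-0.5, 0.5, 1.5]))       # usually picks alternative 1 quickly
```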