1
Laamerad P, Awada A, Pack CC, Bakhtiari S. Asymmetric stimulus representations bias visual perceptual learning. J Vis 2024;24(1):10. PMID: 38285454; PMCID: PMC10829801; DOI: 10.1167/jov.24.1.10.
Abstract
The primate visual cortex contains various regions that exhibit specialization for different stimulus properties, such as motion, shape, and color. Within each region, there is often further specialization, such that particular stimulus features, such as horizontal and vertical orientations, are over-represented. These asymmetries are associated with well-known perceptual biases, but little is known about how they influence visual learning. Most theories would predict that learning is optimal, in the sense that it is unaffected by these asymmetries. However, other approaches to learning would result in specific patterns of perceptual biases. To distinguish between these possibilities, we trained human observers to discriminate between expanding and contracting motion patterns, which have a highly asymmetrical representation in the visual cortex. Observers exhibited biased percepts of these stimuli, and these biases were affected by training in ways that were often suboptimal. We simulated different neural network models and found that a learning rule that involved only adjustments to decision criteria, rather than connection weights, could account for our data. These results suggest that cortical asymmetries influence visual perception and that human observers often rely on suboptimal strategies for learning.
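The criterion-adjustment learning rule favoured by this abstract can be sketched in a few lines: the sensory readout is fixed (and asymmetric, mimicking the over-represented stimulus), and training moves only a decision criterion. All names and numbers below are illustrative assumptions, not the authors' model.

```python
# Criterion-only learning sketch: no connection weights are ever updated.
import random

random.seed(0)

def population_response(stimulus):
    """Scalar readout; 'expanding' (+1) evokes a stronger mean response."""
    mean = 1.0 if stimulus == +1 else -0.4   # asymmetric representation
    return random.gauss(mean, 1.0)

criterion = 0.0   # the only learned parameter
lr = 0.05
for _ in range(2000):
    stim = random.choice([+1, -1])
    r = population_response(stim)
    choice = +1 if r > criterion else -1
    if choice != stim:
        # A miss lowers the criterion, a false alarm raises it.
        criterion -= lr * stim
```

Because the representation is asymmetric, the criterion drifts away from zero toward the midpoint of the two response distributions, so the observer's percepts remain biased even after training.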
Affiliation(s)
- Pooya Laamerad: Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, Canada
- Asmara Awada: Department of Psychology, Université de Montréal, Montreal, Canada
- Christopher C Pack: Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, Canada
- Shahab Bakhtiari: Department of Psychology, Université de Montréal, Montreal, Canada; Mila - Quebec AI Institute, Montreal, Canada
2
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022;201:108131. PMID: 37139435; PMCID: PMC10151026; DOI: 10.1016/j.visres.2022.108131.
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Affiliation(s)
- Ruben Coen-Cagli: Department of Systems and Computational Biology, Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY
- Samuel G Solomon: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
3
Lu ZL, Dosher BA. Current directions in visual perceptual learning. Nat Rev Psychol 2022;1:654-668. PMID: 37274562; PMCID: PMC10237053; DOI: 10.1038/s44159-022-00107-2.
Abstract
The visual expertise of adult humans is jointly determined by evolution, visual development, and visual perceptual learning. Perceptual learning refers to performance improvements in perceptual tasks after practice or training in the task. It occurs in almost all visual tasks, ranging from simple feature detection to complex scene analysis. In this Review, we focus on key behavioral aspects of visual perceptual learning. We begin by describing visual perceptual learning tasks and manipulations that influence the magnitude of learning, and then discuss specificity of learning. Next, we present theories and computational models of learning and specificity. We then review applications of visual perceptual learning in visual rehabilitation. Finally, we summarize the general principles of visual perceptual learning, discuss the tension between plasticity and stability, and conclude with new research directions.
Affiliation(s)
- Zhong-Lin Lu: Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Center for Neural Science, New York University, New York, NY, USA; Department of Psychology, New York University, New York, NY, USA; Institute of Brain and Cognitive Science, New York University - East China Normal University, Shanghai, China

4
Dissecting the Roles of Supervised and Unsupervised Learning in Perceptual Discrimination Judgments. J Neurosci 2021;41:757-765. PMID: 33380471; PMCID: PMC7842757; DOI: 10.1523/jneurosci.0757-20.2020.
Abstract
Our ability to compare sensory stimuli is a fundamental cognitive function, which is known to be affected by two biases: choice bias, which reflects a preference for a given response, and contraction bias, which reflects a tendency to perceive stimuli as similar to previous ones. To test whether both reflect supervised processes, we designed feedback protocols aimed at modifying them and tested them in human participants. Choice bias was readily modifiable. However, contraction bias was not. To compare these results to those predicted from an optimal supervised process, we studied a noise-matched optimal linear discriminator (Perceptron). In this model, both biases were substantially modified, indicating that the “resilience” of contraction bias to feedback does not maximize performance. These results suggest that perceptual discrimination is a hierarchical, two-stage process. In the first, stimulus statistics are learned and integrated with representations in an unsupervised process that is impenetrable to external feedback. In the second, a binary judgment, learned in a supervised way, is applied to the combined percept. SIGNIFICANCE STATEMENT The seemingly effortless process of inferring physical reality from the sensory input is highly influenced by previous knowledge, leading to perceptual biases. Two common ones are contraction bias (the tendency to perceive stimuli as similar to previous ones) and choice bias (the tendency to prefer a specific response). Combining human psychophysical experiments with computational modeling, we show that they reflect two different learning processes. Contraction bias reflects unsupervised learning of stimuli statistics, whereas choice bias results from supervised or reinforcement learning. This dissociation reveals a hierarchical, two-stage process. The first, where stimuli statistics are learned and integrated with representations, is unsupervised. The second, where a binary judgment is applied to the combined percept, is learned in a supervised way.
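The supervised behaviour this study contrasts with human observers can be illustrated with a delta-rule perceptron on a two-stimulus comparison ("was the second stimulus larger?"): with trial-by-trial feedback, the bias term is driven toward zero just as readily as the weights. Parameters here are assumptions, not the authors' noise-matched model.

```python
# Delta-rule perceptron with an explicit, trainable bias term.
import random

random.seed(1)

w1, w2, b = -1.0, 1.0, 0.8      # start with a strong choice bias b
lr, noise = 0.02, 0.5
for _ in range(5000):
    f1, f2 = random.uniform(0, 1), random.uniform(0, 1)
    x1 = f1 + random.gauss(0, noise)    # noisy internal estimates
    x2 = f2 + random.gauss(0, noise)
    y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
    target = 1 if f2 > f1 else 0
    err = target - y                    # supervised feedback signal
    w1 += lr * err * x1
    w2 += lr * err * x2
    b += lr * err                       # the bias is as trainable as the weights
```

In such a model every parameter, including the bias, is penetrable to feedback, which is why the "resilience" of human contraction bias is informative about its unsupervised origin.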
5
Horsfall RP. Narrowing of the Audiovisual Temporal Binding Window Due To Perceptual Training Is Specific to High Visual Intensity Stimuli. Iperception 2021;12:2041669520978670. PMID: 33680418; PMCID: PMC7897829; DOI: 10.1177/2041669520978670.
Abstract
The temporal binding window (TBW), which reflects the range of temporal offsets in which audiovisual stimuli are combined to form a singular percept, can be reduced through training. Our research aimed to investigate whether training-induced reductions in TBW size transfer across stimulus intensities. A total of 32 observers performed simultaneity judgements at two visual intensities with a fixed auditory intensity, before and after receiving audiovisual TBW training at just one of these two intensities. We show that training individuals with a high visual intensity reduces the size of the TBW for bright stimuli, but this improvement did not transfer to dim stimuli. The reduction in TBW can be explained by shifts in decision criteria. Those trained with the dim visual stimuli, however, showed no reduction in TBW. Our main finding is that perceptual improvements following training are specific for high-intensity stimuli, potentially highlighting limitations of proposed TBW training procedures.
Affiliation(s)
- Ryan P. Horsfall: Division of Neuroscience & Experimental Psychology, University of Manchester, Manchester M13 9PL, United Kingdom

6
Asher JM, Hibbard PB. No effect of feedback, level of processing or stimulus presentation protocol on perceptual learning when easy and difficult trials are interleaved. Vision Res 2020;176:100-117. DOI: 10.1016/j.visres.2020.07.011.
7
Change deafness can be reduced, but not eliminated, using brief training interventions. Psychol Res 2019;85:423-438. PMID: 31493050; DOI: 10.1007/s00426-019-01239-6.
Abstract
Research on change deafness indicates there are substantial limitations to listeners' perception of which objects are present in complex auditory scenes, an ability that is important for many everyday situations. Experiment 1 examined the extent to which change deafness could be reduced by training with performance feedback compared to no training. Experiment 2 compared the efficacy of training with detailed feedback that identified the change and provided performance feedback on each trial, training without feedback, and no training. We further examined the timescale over which improvement unfolded by examining performance using an immediate post-test and a second post-test 12 h later. We were able to reduce, but not eliminate, change deafness for all groups, and determined that the practice content strongly impacted bias and response strategy. Training with simple performance feedback reduced change deafness but increased bias and false alarm rates, while more detailed feedback improved change detection without affecting bias. Together, these findings suggest that change deafness can be reduced if a relatively small amount of practice is completed. When bias did not impede performance during the first post-test, the majority of the learning following training occurred immediately, suggesting that fast within-session learning primarily supported improvement on the task.
8
Evered A. Criterion learning - a neglected aspect of training in cytopathology? Cytopathology 2018;29:569-573. PMID: 30007094; DOI: 10.1111/cyt.12611.
9
Zhang M, Tu J, Dong B, Chen C, Bao M. Preliminary evidence for a role of the personality trait in visual perceptual learning. Neurobiol Learn Mem 2017;139:22-27. DOI: 10.1016/j.nlm.2016.12.009.
10
The Impact of Feedback on the Different Time Courses of Multisensory Temporal Recalibration. Neural Plast 2017;2017:3478742. PMID: 28316841; PMCID: PMC5339631; DOI: 10.1155/2017/3478742.
Abstract
The capacity to rapidly adjust perceptual representations confers a fundamental advantage when confronted with a constantly changing world. Unexplored is how feedback regarding sensory judgments (top-down factors) interacts with sensory statistics (bottom-up factors) to drive long- and short-term recalibration of multisensory perceptual representations. Here, we examined the time course of both cumulative and rapid temporal perceptual recalibration for individuals completing an audiovisual simultaneity judgment task in which they were provided with varying degrees of feedback. We find that in the presence of feedback (as opposed to simple sensory exposure) temporal recalibration is more robust. Additionally, differential time courses are seen for cumulative and rapid recalibration dependent upon the nature of the feedback provided. Whereas cumulative recalibration effects relied more heavily on feedback that informs (i.e., negative feedback) rather than confirms (i.e., positive feedback) the judgment, rapid recalibration shows the opposite tendency. Furthermore, differential effects on rapid and cumulative recalibration were seen when the reliability of feedback was altered. Collectively, our findings illustrate that feedback signals promote and sustain audiovisual recalibration over the course of cumulative learning and enhance rapid trial-to-trial learning. Furthermore, given the differential effects seen for cumulative and rapid recalibration, these processes may function via distinct mechanisms.
11
Practice improves peri-saccadic shape judgment but does not diminish target mislocalization. Proc Natl Acad Sci U S A 2016;113:E7327-E7336. PMID: 27807142; DOI: 10.1073/pnas.1607051113.
Abstract
Visual sensitivity is markedly reduced during an eye movement. Peri-saccadic vision is also characterized by a mislocalization of the briefly presented stimulus closer to the saccadic target. These features are commonly viewed as obligatory elements of peri-saccadic vision. However, practice improves performance in many perceptual tasks performed at threshold conditions. We wondered if this could also be the case with peri-saccadic perception. To test this, we used a paradigm in which subjects reported the orientation (or location) of an ellipse briefly presented during a saccade. Practice on peri-saccadic orientation discrimination led to long-lasting gains in that task but did not alter the classical mislocalization of the visual stimulus. Shape discrimination gains were largely generalized to other untrained conditions when the same stimuli were used (discrimination during a saccade in the opposite direction or at a different stimulus location than previously trained). However, performance dropped to baseline level when participants shifted to a novel Vernier discrimination task under identical saccade conditions. Furthermore, practice on the location task did not induce better stimulus localization or discrimination. These results suggest that the limited visual information available during a saccade may be better used with practice, possibly by focusing attention on the specific target features or a better readout of the available information. Saccadic mislocalization, by contrast, is robust and resistant to top-down modulations, suggesting that it involves an automatic process triggered by the upcoming execution of a saccade (e.g., an efference copy signal).
12
Liu J, Dosher BA, Lu ZL. Augmented Hebbian reweighting accounts for accuracy and induced bias in perceptual learning with reverse feedback. J Vis 2015;15(10):10. PMID: 26418382; DOI: 10.1167/15.10.10.
Abstract
Using an asymmetrical set of vernier stimuli (-15″, -10″, -5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (-5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Ewald, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning.
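The reweighting mechanism described in this abstract can be caricatured in a toy model (an assumed simplification, not the published AHRM): representation units feed one decision unit, and external feedback, when present, replaces the unit's own output as the post-synaptic teaching signal, so reversed feedback on one stimulus writes a response bias directly into the weights.

```python
# Toy Hebbian reweighting with feedback as the teaching signal.
import random

random.seed(2)

n = 8
# Even units prefer rightward (+) offsets, odd units leftward (-);
# the initial weights implement the correct readout.
weights = [0.1 if i % 2 == 0 else -0.1 for i in range(n)]
lr = 0.01

def represent(offset):
    return [max(0.0, offset * (1 if i % 2 == 0 else -1)) + random.gauss(0, 0.1)
            for i in range(n)]

def decide(offset):
    r = represent(offset)
    return sum(w * a for w, a in zip(weights, r)), r

def train(offset, feedback=None):
    o, r = decide(offset)
    # Feedback, when given, overrides the unit's own output as "post".
    post = feedback if feedback is not None else (1.0 if o > 0 else -1.0)
    for i in range(n):
        weights[i] += lr * post * r[i]      # Hebbian pre * post update

before = sum(decide(-0.05)[0] for _ in range(200)) / 200
for _ in range(400):
    train(-0.05, feedback=+1.0)   # reversed feedback: leftward called "right"
after = sum(decide(-0.05)[0] for _ in range(200)) / 200
```

After reversed-feedback training the decision variable for the leftward stimulus drifts to the "rightward" side, the analogue of the induction effect the model explains.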
13
Amitay S, Moore DR, Molloy K, Halliday LF. Feedback valence affects auditory perceptual learning independently of feedback probability. PLoS One 2015;10:e0126412. PMID: 25946173; PMCID: PMC4422442; DOI: 10.1371/journal.pone.0126412.
Abstract
Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones: an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability.
Affiliation(s)
- Sygal Amitay: Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- David R. Moore: Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- Katharine Molloy: Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- Lorna F. Halliday: Developmental Science, Division of Psychology and Language Sciences, University College London, London, United Kingdom

14
Jones PR, Moore DR, Shub DE, Amitay S. The role of response bias in perceptual learning. J Exp Psychol Learn Mem Cogn 2015;41:1456-1470. PMID: 25867609; PMCID: PMC4562609; DOI: 10.1037/xlm0000111.
Abstract
Sensory judgments improve with practice. Such perceptual learning is often thought to reflect an increase in perceptual sensitivity. However, it may also represent a decrease in response bias, with unpracticed observers acting in part on a priori hunches rather than sensory evidence. To examine whether this is the case, 55 observers practiced making a basic auditory judgment (yes/no amplitude-modulation detection or forced-choice frequency/amplitude discrimination) over multiple days. With all tasks, bias was present initially, but decreased with practice. Notably, this was the case even on supposedly “bias-free” 2-alternative forced-choice tasks. In those tasks, observers did not favor the same response throughout (stationary bias), but did favor whichever response had been correct on previous trials (nonstationary bias). Means of correcting for bias are described. When applied, these showed that at least 13% of perceptual learning on a forced-choice task was due to reduction in bias. In other situations, changes in bias were shown to obscure the true extent of learning, with changes in estimated sensitivity increasing once bias was corrected for. The possible causes of bias and the implications for our understanding of perceptual learning are discussed.
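One standard way to separate sensitivity from bias in yes/no data, in the spirit of the corrections this abstract describes, is equal-variance signal detection theory: decompose hit and false-alarm rates into d' and criterion c. This is a sketch of the textbook method, not the authors' specific correction.

```python
# Equal-variance SDT decomposition of yes/no performance.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Return (d', c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# A symmetric observer and a liberal observer with similar sensitivity:
d_unbiased, c_unbiased = dprime_and_criterion(0.84, 0.16)
d_biased, c_biased = dprime_and_criterion(0.95, 0.39)
```

The two observers have nearly the same d' but very different criteria, so a drop in raw percent correct (or an apparent "improvement") can reflect bias rather than sensitivity, exactly the confound the paper warns about.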
Affiliation(s)
- Pete R Jones: Medical Research Council (MRC) Institute of Hearing Research
- David R Moore: Medical Research Council (MRC) Institute of Hearing Research
- Sygal Amitay: Medical Research Council (MRC) Institute of Hearing Research

15
Reference-frame specificity of perceptual learning: The effect of practice. Vision Res 2015;106:1-6. DOI: 10.1016/j.visres.2014.10.035.
16
Training improves visual processing speed and generalizes to untrained functions. Sci Rep 2014;4:7251. PMID: 25431233; PMCID: PMC4246693; DOI: 10.1038/srep07251.
Abstract
Studies show that manipulating certain training features in perceptual learning determines the specificity of the improvement. The improvement in abnormal visual processing following training and its generalization to visual acuity, as measured on static clinical charts, can be explained by improved sensitivity or processing speed. Crowding, the inability to recognize objects in a clutter, fundamentally limits conscious visual perception. Although it was largely considered absent in the fovea, earlier studies report foveal crowding upon very brief exposures or following spatial manipulations. Here we used GlassesOff's application for iDevices to train foveal vision of young participants. The training was performed at reading distance based on contrast detection tasks under different spatial and temporal constraints using Gabor patches aimed at testing improvement of processing speed. We found several significant improvements in spatio-temporal visual functions including near and also non-trained far distances. A remarkable transfer to visual acuity measured under crowded conditions was found: the processing time needed to achieve 6/6 acuity was reduced by 81 ms. Despite a subtle change in contrast sensitivity, a robust increase in processing speed was found. Thus, enhanced processing speed may lead to overcoming foveal crowding and might be the enabling factor for generalization to other visual functions.
17
Modeling trial by trial and block feedback in perceptual learning. Vision Res 2014;99:46-56. PMID: 24423783; DOI: 10.1016/j.visres.2014.01.001.
Abstract
Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions but not in others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but with different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweight Model (AHRM). Specifically, three major factors in the model advance performance improvement: the external trial-by-trial feedback when available, the self-generated output as an internal feedback when no external feedback is available, and the adaptive criterion control based on the block feedback. Through simulating a comprehensive feedback study (Herzog & Fahle, 1997), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning.
18
Amitay S, Zhang YX, Jones PR, Moore DR. Perceptual learning: top to bottom. Vision Res 2013;99:69-77. PMID: 24296314; DOI: 10.1016/j.visres.2013.11.006.
Abstract
Perceptual learning has traditionally been portrayed as a bottom-up phenomenon that improves encoding or decoding of the trained stimulus. Cognitive skills such as attention and memory are thought to drive, guide and modulate learning but are, with notable exceptions, not generally considered to undergo changes themselves as a result of training with simple perceptual tasks. Moreover, shifts in threshold are interpreted as shifts in perceptual sensitivity, with no consideration for non-sensory factors (such as response bias) that may contribute to these changes. Accumulating evidence from our own research and others shows that perceptual learning is a conglomeration of effects, with training-induced changes ranging from the lowest (noise reduction in the phase locking of auditory signals) to the highest (working memory capacity) level of processing, and includes contributions from non-sensory factors that affect decision making even on a "simple" auditory task such as frequency discrimination. We discuss our emerging view of learning as a process that increases the signal-to-noise ratio associated with perceptual tasks by tackling noise sources and inefficiencies that cause performance bottlenecks, and present some implications for training populations other than young, smart, attentive and highly-motivated college students.
Affiliation(s)
- Sygal Amitay: Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom
- Yu-Xuan Zhang: Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom
- Pete R Jones: Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom
- David R Moore: Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom

19
Sohn H, Lee SH. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course. J Neurophysiol 2013;109:344-362. DOI: 10.1152/jn.01201.2011.
Abstract
Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing.
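The Bayesian-observer idea in this abstract reduces to a standard precision-weighted combination: the perceived interval is the posterior mean of a Gaussian prior over intervals and a Gaussian likelihood from the noisy measurement, and shrinking the prior width (the model's account of learning) pulls estimates more strongly toward the prior mean. All numbers below are illustrative.

```python
# Precision-weighted posterior mean for Gaussian prior x Gaussian likelihood.
def posterior_mean(measurement, prior_mean, prior_sd, likelihood_sd):
    wp = 1.0 / prior_sd ** 2          # precision of the prior
    wl = 1.0 / likelihood_sd ** 2     # precision of the measurement
    return (wp * prior_mean + wl * measurement) / (wp + wl)

# A 900 ms measurement against an 800 ms prior:
early = posterior_mean(900, 800, prior_sd=200, likelihood_sd=100)  # broad prior
late = posterior_mean(900, 800, prior_sd=50, likelihood_sd=100)    # after training
```

Here `early` is 880 ms and `late` is 820 ms: the narrower trained prior contracts the estimate toward 800 ms, the kind of prior-driven recalibration the model attributes to learning.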
Affiliation(s)
- Hansem Sohn
- Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea; and
| | - Sang-Hun Lee
- Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea; and
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea
20
Abstract
Psychometric sensory discrimination functions are usually modeled by cumulative Gaussian functions with just two parameters, their central tendency (μ) and their slope (1/σ). These correspond to Fechner's "constant" and "variable" errors, respectively. Fechner pointed out that even the constant error could vary over space and time and could masquerade as variable error. We wondered whether observers could deliberately introduce a constant error into their performance without loss of precision. In three-dot vernier and bisection tasks with the method of single stimuli, observers were instructed to favour one of the two responses when unsure of their answer. The slope of the resulting psychometric function was not significantly changed, despite a significant change in central tendency. Similar results were obtained when altered feedback was used to induce bias. We inferred that observers can adopt artificial response criteria without any significant increase in criterion fluctuation. These findings have implications for some studies that have measured perceptual "illusions" by shifts in the psychometric functions of sophisticated observers.
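The cumulative Gaussian parameterization described above can be written out directly. This is a minimal sketch; the function name and example values are illustrative, not taken from the study.

```python
import math

def psychometric(x, mu, sigma):
    """Cumulative Gaussian psychometric function.

    mu is the central tendency (Fechner's "constant error"); 1/sigma is the
    slope (the inverse of the "variable error"). A criterion shift moves mu
    while leaving sigma, and hence the slope, unchanged.
    """
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Shifting mu moves the 50% point (a bias) without flattening the curve:
unbiased = psychometric(0.0, mu=0.0, sigma=1.0)  # exactly 0.5 at the old midpoint
biased = psychometric(0.0, mu=0.5, sigma=1.0)    # below 0.5 after the shift
```

Fitting mu and sigma separately is what lets the study distinguish a deliberate response bias (a change in mu) from a loss of precision (a change in sigma).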
21
Choi H, Watanabe T. Perceptual learning solely induced by feedback. Vision Res 2012; 61:77-82. [PMID: 22269189 PMCID: PMC3352973 DOI: 10.1016/j.visres.2012.01.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2011] [Revised: 01/06/2012] [Accepted: 01/07/2012] [Indexed: 01/27/2023]
Abstract
Although feedback is considered an important factor in perceptual learning (PL), its role is usually thought to be limited to facilitating, rather than directly inducing, PL. Recent studies, however, have suggested that feedback is more actively involved in inducing PL. The current study demonstrates an even more significant role for feedback: it can evoke PL of a feature without any bottom-up processing of that feature. We use a "fake feedback" method, in which feedback is tied to an arbitrarily chosen feature rather than to actual performance. We find evidence of PL with this method both when the learned feature is absent from the visual stimulus (Experiment 1) and when it conflicts with the visual stimulus (Experiment 2). We call this "feedback-based PL," in contrast with classical "exposure-based PL." Feedback-based PL and exposure-based PL can occur independently of each other, even within the same paradigm. These results suggest that feedback not only facilitates PL evoked by bottom-up information but can also directly induce PL, with such feedback-based PL occurring independently of exposure-based PL.
Affiliation(s)
- Hoon Choi
- Department of Psychology, Boston University, 64 Cummington St., Boston, MA 02215, USA.
22
Shibata K, Yamagishi N, Ishii S, Kawato M. Boosting perceptual learning by fake feedback. Vision Res 2009; 49:2574-85. [DOI: 10.1016/j.visres.2009.06.009] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2008] [Revised: 06/05/2009] [Accepted: 06/09/2009] [Indexed: 11/15/2022]
|