1
Lu ZL, Yang S, Dosher B. Hierarchical Bayesian Augmented Hebbian Reweighting Model of Perceptual Learning. bioRxiv 2024:2024.08.08.606902. PMID: 39149245; PMCID: PMC11326272; DOI: 10.1101/2024.08.08.606902.
Abstract
The Augmented Hebbian Reweighting Model (AHRM) has been effectively utilized to model the collective performance of observers in various perceptual learning studies. In this work, we have introduced a novel hierarchical Bayesian Augmented Hebbian Reweighting Model (HB-AHRM) to simultaneously model the learning curves of individual participants and the entire population within a single framework. We have compared its performance to that of a Bayesian Inference Procedure (BIP), which independently estimates the posterior distributions of model parameters for each individual subject without employing a hierarchical structure. To cope with the substantial computational demands, we developed an approach to approximate the likelihood function in the AHRM with feature engineering and linear regression, increasing the speed of the estimation procedure by 20,000 times. The HB-AHRM has enabled us to compute the joint posterior distribution of hyperparameters and parameters at the population, observer, and test levels, facilitating statistical inferences across these levels. While we have developed this methodology within the context of a single experiment, the HB-AHRM and the associated modeling techniques can be readily applied to analyze data from various perceptual learning experiments and provide predictions of human performance at both the population and individual levels. The likelihood approximation concept introduced in this study may have broader utility in fitting other stochastic models lacking analytic forms.
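The likelihood-approximation idea described in this abstract (run the expensive stochastic model over a parameter grid once, engineer features, and fit a fast linear-regression surrogate that can be queried cheaply during inference) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the simulator, the feature set, and every name here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(lr, trial, n_sim=2000):
    """Toy stand-in for an expensive stochastic learning model:
    expected accuracy rises with trial number at learning rate `lr`.
    Returns a noisy Monte Carlo estimate of percent correct."""
    p = 0.5 + 0.45 * (1.0 - np.exp(-lr * trial))
    return rng.binomial(n_sim, p) / n_sim

def features(lr, trial):
    """Hand-engineered regressors for the surrogate (illustrative only)."""
    x = lr * trial
    return [1.0, x, 1.0 - np.exp(-x), np.log1p(x)]

# Run the costly simulator once over a (parameter, trial) grid ...
X, y = [], []
for lr in np.linspace(0.01, 0.2, 20):
    for t in range(1, 51):
        X.append(features(lr, t))
        y.append(simulate_accuracy(lr, t))
X, y = np.asarray(X), np.asarray(y)

# ... then fit a linear-regression surrogate by ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate_accuracy(lr, trial):
    """Cheap approximation of simulate_accuracy; this is what a
    hierarchical Bayesian sampler would evaluate repeatedly."""
    return float(np.asarray(features(lr, trial)) @ coef)

pred = surrogate_accuracy(0.1, 25)
```

Once fitted, the surrogate replaces the simulator inside the sampler, so each likelihood evaluation costs one dot product instead of thousands of simulated trials, which is the kind of substitution that can yield speedups of the magnitude the abstract reports.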
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, USA; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Shanglin Yang
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Barbara Dosher
- Cognitive Sciences Department, University of California, Irvine, CA 92697-5100, USA
2
Wang M, McGraw PV, Ledgeway T. Collective plasticity of binocular interactions in the adult visual system. Sci Rep 2024; 14:10494. PMID: 38714660; PMCID: PMC11076462; DOI: 10.1038/s41598-024-57276-8.
Abstract
Binocular visual plasticity can be initiated via either bottom-up or top-down mechanisms, but it is unknown if these two forms of adult plasticity can be independently combined. In seven participants with normal binocular vision, sensory eye dominance was assessed using a binocular rivalry task, before and after a period of monocular deprivation and with and without selective attention directed towards one eye. On each trial, participants reported the dominant monocular target and the inter-ocular contrast difference between the stimuli was systematically altered to obtain estimates of ocular dominance. We found that both monocular light- and pattern-deprivation shifted dominance in favour of the deprived eye. However, this shift was completely counteracted if the non-deprived eye's stimulus was selectively attended. These results reveal that shifts in ocular dominance, driven by bottom-up and top-down selection, appear to act independently to regulate the relative contrast gain between the two eyes.
Affiliation(s)
- Mengxin Wang
- School of Psychology, University of Nottingham, Nottingham, NG7 2RD, UK.
- Paul V McGraw
- School of Psychology, University of Nottingham, Nottingham, NG7 2RD, UK
- Timothy Ledgeway
- School of Psychology, University of Nottingham, Nottingham, NG7 2RD, UK
3
Shen S, Sun Y, Lu J, Li C, Chen Q, Mo C, Fang F, Zhang X. Profiles of visual perceptual learning in feature space. iScience 2024; 27:109128. PMID: 38384835; PMCID: PMC10879700; DOI: 10.1016/j.isci.2024.109128.
Abstract
Visual perceptual learning (VPL), experience-induced gains in discriminating visual features, has been studied extensively and intensively for many years; its profile in feature space, however, remains unclear. Here, human subjects were trained over a long time course to perform either a simple low-level feature (grating orientation) or a complex high-level object (face view) discrimination task. During, immediately after, and one month after training, all results showed that in feature space VPL in grating orientation discrimination had a center-surround profile, whereas VPL in face view discrimination had a monotonic gradient profile. Importantly, these two profiles could be reproduced by modified AlexNet deep convolutional neural networks consisting of 7 and 12 layers, respectively. Altogether, our study reveals for the first time a feature hierarchy-dependent profile of VPL in feature space, placing a necessary constraint on our understanding of the neural computation of VPL.
Affiliation(s)
- Shiqi Shen
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Yueling Sun
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Jiachen Lu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Chu Li
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Qinglin Chen
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Ce Mo
- Department of Psychology, Sun Yat-sen University, Guangzhou, Guangdong 510275, China
- Fang Fang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Xilin Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
4
Duyar A, Ren S, Carrasco M. When temporal attention interacts with expectation. Sci Rep 2024; 14:4624. PMID: 38409235; PMCID: PMC10897459; DOI: 10.1038/s41598-024-55399-6.
Abstract
Temporal attention is voluntarily deployed at specific moments, whereas temporal expectation is deployed according to timing probabilities. When the target appears at an expected moment in a sequence, temporal attention improves performance at the attended moments, but the timing and the precision of the attentional window remain unknown. Here we independently and concurrently manipulated temporal attention (via behavioral relevance) and temporal expectation (via session-wise precision and trial-wise hazard rate) to investigate whether and how these mechanisms interact to improve perception. Our results reveal that temporal attention interacts with temporal expectation: the higher the precision, the stronger the attention benefit, but surprisingly this benefit decreased with delayed onset despite the increasing probability of stimulus appearance. When attention was suboptimally deployed to earlier than expected moments, it could not be reoriented to a later time point. These findings provide evidence that temporal attention and temporal expectation are different mechanisms, and highlight their interplay in optimizing visual performance.
Affiliation(s)
- Aysun Duyar
- Department of Psychology, New York University, New York, NY, USA.
- Shiyang Ren
- Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
5
Bi T, Luo W, Wu J, Shao B, Tan Q, Kou H. Effect of facial emotion recognition learning transfers across emotions. Front Psychol 2024; 15:1310101. PMID: 38312392; PMCID: PMC10834736; DOI: 10.3389/fpsyg.2024.1310101.
Abstract
Introduction: Perceptual learning of facial expressions has been shown to be specific to the trained expression, indicating separate encoding of the emotional content of different expressions. However, little is known about the specificity of emotion recognition training with the visual search paradigm, or about the sensitivity of learning to near-threshold stimuli.

Methods: In the present study, we adopted a visual search paradigm to measure the recognition of facial expressions. In Experiments 1, 2, and 3 (Exp1, Exp2, Exp3), subjects were trained for 8 days to search for a target expression in an array of faces presented for 950 ms, 350 ms, and 50 ms, respectively. In Experiment 4 (Exp4), we trained subjects to search for a triangle target and then tested them on the facial expression search task. Before and after training, subjects were tested on the trained and untrained facial expressions presented for 950 ms, 650 ms, 350 ms, or 50 ms.

Results: Training led to large improvements in the recognition of facial emotions only if the faces were presented long enough (Exp1: 85.89%; Exp2: 46.05%), and the training effect transferred to the untrained expression. When the faces were presented briefly (Exp3), however, the training effect was small (6.38%). In Exp4, the training effect did not transfer across categories.

Discussion: Our findings reveal cross-emotion transfer for facial expression recognition training in a visual search task. In addition, learning hardly affects the recognition of near-threshold expressions.
Affiliation(s)
- Taiyong Bi
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Wei Luo
- The Institute of Ethnology and Anthropology, Chinese Academy of Social Sciences, Beijing, China
- Jia Wu
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Boyao Shao
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Qingli Tan
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Hui Kou
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
6
Hung SC, Barbot A, Carrasco M. Visual perceptual learning modulates microsaccade rate and directionality. Sci Rep 2023; 13:16525. PMID: 37783775; PMCID: PMC10545683; DOI: 10.1038/s41598-023-42768-w.
Abstract
Microsaccades, incessant "fixational eye movements" (< 1°), are an important window into cognitive functions. Yet, their role in visual perceptual learning (VPL), improvements in visual discrimination due to practice, remains practically unexplored. Here we investigated whether and how microsaccades change in VPL. Human observers performed a Landolt acuity task for 5 consecutive days and were assigned to the Neutral or Attention group. On each trial, two peripheral Landolt squares were presented briefly along a diagonal. Observers reported the gap side of the target stimulus. Training improved acuity and modified the microsaccade rate; with training, the rate decreased during the fixation period but increased during the response cue. Furthermore, microsaccade direction during the response cue was biased toward the target location, and training enhanced and sped up this bias. Finally, the microsaccade rate during a task-free fixation period correlated with observers' initial acuity threshold, indicating that the fewer the microsaccades during fixation, the better the individual's visual acuity. All these results, which were similar for both the Neutral and Attention groups and at both trained and untrained locations, suggest that microsaccades could serve as a physiological marker reflecting functional dynamics in human perceptual learning.
Affiliation(s)
- Shao-Chin Hung
- Department of Psychology, New York University, New York, USA.
- Antoine Barbot
- Department of Psychology, New York University, New York, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, USA
- Center for Neural Science, New York University, New York, USA
7
Abstract
Human perceptual learning, experience-induced gains in sensory discrimination, typically yields long-term performance improvements. Recent research revealed long-lasting transfer at the untrained location enabled by feature-based attention (FBA), reminiscent of its global effect (Hung & Carrasco, Scientific Reports, 11(1), 13914, (2021)). Visual perceptual learning (VPL) is typically studied while observers maintain fixation, but the role of fixational eye movements is unknown. Microsaccades, the largest of fixational eye movements, provide a continuous, online, physiological measure from the oculomotor system that reveals dynamic processing, which is unavailable from behavioral measures alone. We investigated whether and how microsaccades change after training in an orientation discrimination task. For human observers trained with or without FBA, microsaccade rates were significantly reduced during the response window at both trained and untrained locations and orientations. Critically, consistent with long-term training benefits, this microsaccade-rate reduction persisted over a year. Furthermore, microsaccades were biased toward the target location prior to stimulus onset and were more suppressed for incorrect than correct trials after observers' responses. These findings reveal that fixational eye movements and VPL are tightly coupled and that learning-induced microsaccade changes are long lasting. Thus, microsaccades reflect functional dynamics of the oculomotor system during information encoding, maintenance, and readout, and may serve as a reliable long-term physiological correlate in VPL.
8
Lu ZL, Dosher BA. Current directions in visual perceptual learning. Nat Rev Psychol 2022; 1:654-668. PMID: 37274562; PMCID: PMC10237053; DOI: 10.1038/s44159-022-00107-2.
Abstract
The visual expertise of adult humans is jointly determined by evolution, visual development, and visual perceptual learning. Perceptual learning refers to performance improvements in perceptual tasks after practice or training in the task. It occurs in almost all visual tasks, ranging from simple feature detection to complex scene analysis. In this Review, we focus on key behavioral aspects of visual perceptual learning. We begin by describing visual perceptual learning tasks and manipulations that influence the magnitude of learning, and then discuss specificity of learning. Next, we present theories and computational models of learning and specificity. We then review applications of visual perceptual learning in visual rehabilitation. Finally, we summarize the general principles of visual perceptual learning, discuss the tension between plasticity and stability, and conclude with new research directions.
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Institute of Brain and Cognitive Science, New York University - East China Normal University, Shanghai, China
9
Cavanaugh MR, Tadin D, Carrasco M, Huxlin KR. Benefits of Endogenous Spatial Attention During Visual Double-Training in Cortically-Blinded Fields. Front Neurosci 2022; 16:771623. PMID: 35495043; PMCID: PMC9046589; DOI: 10.3389/fnins.2022.771623.
Abstract
Recovery of visual discrimination thresholds inside cortically-blinded (CB) fields is most commonly attained at a single trained location at a time, with iterative progress deeper into the blind field as performance improves over several months. As such, training is slow, inefficient, burdensome, and often frustrating for patients. Here, we investigated whether double-location training, coupled with a covert spatial-attention (SA) pre-cue, could improve the efficiency of training. Nine CB participants completed a randomized training assignment with either an SA or a Neutral pre-cue. All trained for a similar length of time on a fine direction discrimination task at two blind-field locations simultaneously. Training stimuli and tasks for both cohorts were identical, save for the central pre-cue, which either manipulated endogenous (voluntary) SA or was Neutral. Participants in the SA training cohort demonstrated marked improvements in direction discrimination thresholds, albeit not to normal/intact-field levels; participants in the Neutral training cohort remained impaired. Thus, double-training within cortically blind fields, when coupled with SA pre-cues, can significantly improve direction discrimination thresholds at two locations simultaneously, offering a new method to improve performance and reduce the training burden for CB patients. Double-training without SA pre-cues revealed a hitherto unrecognized limitation of the cortically-blind visual system's ability to improve while processing two stimuli simultaneously. These data could potentially explain why exposure to the typically complex visual environments encountered in everyday life is insufficient to induce visual recovery in CB patients. It is hoped that these new insights will direct both research and therapeutic developments toward methods that can attain better, faster recovery of vision in CB fields.
Affiliation(s)
- Matthew R. Cavanaugh
- Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY, United States
- Duje Tadin
- Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, United States
- Marisa Carrasco
- Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- Krystel R. Huxlin
- Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, United States
- Correspondence: Krystel R. Huxlin