1
Wu D, Zhang P, Liu N, Sun K, Xiao W. Effects of High-Definition Transcranial Direct Current Stimulation Over the Left Fusiform Face Area on Face View Discrimination Depend on the Individual Baseline Performance. Front Neurosci 2021; 15:704880. PMID: 34867146; PMCID: PMC8639859; DOI: 10.3389/fnins.2021.704880.
Abstract
A basic human visual function is to identify objects from different viewpoints. In particular, the ability to discriminate face views that differ in in-depth orientation is necessary in daily life. Early neuroimaging studies identified the involvement of the left fusiform face area (FFA) and the left superior temporal sulcus (STS) in face view discrimination. However, many studies have documented the important role of the right FFA in face processing. Thus, there remains controversy over whether one specific region or all of these regions are involved in discriminating face views. This research therefore examined the influence of high-definition transcranial direct current stimulation (HD-tDCS) over the left FFA, left STS, or right FFA on face view discrimination in three experiments. In experiment 1, eighteen subjects performed a face view discrimination task before and immediately, 10 min, and 20 min after anodal, cathodal, and sham HD-tDCS (20 min, 1.5 mA) over the left FFA in three sessions. Compared with sham stimulation, anodal and cathodal stimulation had no detectable effects at the group level. However, analyses at the individual level showed that baseline performance correlated negatively with the degree of change after anodal tDCS, suggesting that the amount of change depends on initial performance. Specifically, tDCS decreased performance in subjects with better baseline performance but increased performance in those with poorer baseline performance. Experiments 2 and 3 used the same protocol except that the stimulation site was the left STS or the right FFA, respectively. Neither anodal nor cathodal tDCS over the left STS or right FFA influenced face view discrimination in group- or individual-level analyses. These results not only indicate the importance of the left FFA in face view discrimination but also demonstrate that individual initial performance should be taken into consideration in future research and practical applications.
Affiliation(s)
- Di Wu: Department of Medical Psychology, Air Force Medical University, Xi'an, China
- Pan Zhang: Department of Psychology, Hebei Normal University, Shijiazhuang, China
- Na Liu: Department of Nursing, Air Force Medical University, Xi'an, China
- Kewei Sun: Department of Medical Psychology, Air Force Medical University, Xi'an, China
- Wei Xiao: Department of Medical Psychology, Air Force Medical University, Xi'an, China
2
Xu R, Church RM, Sasaki Y, Watanabe T. Effects of stimulus and task structure on temporal perceptual learning. Sci Rep 2021; 11:668. PMID: 33436842; PMCID: PMC7804100; DOI: 10.1038/s41598-020-80192-6.
Abstract
Our ability to discriminate temporal intervals can be improved with practice. This learning is generally thought to reflect an enhancement in the representation of a trained interval, which leads to interval-specific improvements in temporal discrimination. In the present study, we asked whether temporal perceptual learning (TPL) is further constrained by context-specific factors dictated by the trained stimulus and task structure. Two groups of participants were trained on a single-interval auditory discrimination task over 5 days. Training intervals were either drawn from eight predetermined values (FI group) or varied randomly from trial to trial (RI group). Before and after the training period, we measured discrimination performance using an untrained two-interval temporal comparison task. Our results revealed a selective improvement in the FI group but not the RI group; however, this learning did not generalize between the trained and untrained tasks. These results highlight the sensitivity of TPL to stimulus and task structure, suggesting that mechanisms of temporal learning rely on processes beyond changes in interval representation.
Affiliation(s)
- Rannie Xu: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Russell M Church: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Yuka Sasaki: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Takeo Watanabe: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
3
Donovan I, Shen A, Tortarolo C, Barbot A, Carrasco M. Exogenous attention facilitates perceptual learning in visual acuity to untrained stimulus locations and features. J Vis 2020; 20(4):18. PMID: 32340029; PMCID: PMC7405812; DOI: 10.1167/jov.20.4.18.
Abstract
Visual perceptual learning (VPL) refers to the improvement in performance on a visual task due to practice. A hallmark of VPL is specificity, as improvements are often confined to the trained retinal locations or stimulus features. We have previously found that exogenous (involuntary, stimulus-driven) and endogenous (voluntary, goal-driven) spatial attention can facilitate the transfer of VPL across locations in orientation discrimination tasks mediated by contrast sensitivity. Here, we investigated whether exogenous spatial attention can facilitate such transfer in acuity tasks that have been associated with higher specificity. We trained observers for 3 days (days 2-4) in a Landolt acuity task (Experiment 1) or a Vernier hyperacuity task (Experiment 2), with either exogenous precues (attention group) or neutral precues (neutral group). Importantly, during pre-tests (day 1) and post-tests (day 5), all observers were tested with neutral precues; thus, groups differed only in their attentional allocation during training. For the Landolt acuity task, we found evidence of location transfer in both the neutral and attention groups, suggesting weak location specificity of VPL. For the Vernier hyperacuity task, we found evidence of location and feature specificity in the neutral group, and learning transfer in the attention group-similar improvement at trained and untrained locations and features. Our results reveal that, when there is specificity in a perceptual acuity task, exogenous spatial attention can overcome that specificity and facilitate learning transfer to both untrained locations and features simultaneously with the same training. Thus, in addition to improving performance, exogenous attention generalizes perceptual learning across locations and features.
Affiliation(s)
- Ian Donovan: Department of Psychology and Neural Science, New York University, New York, NY, USA
- Angela Shen: Department of Psychology, New York University, New York, NY, USA
- Corey Tortarolo
- Antoine Barbot: Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco: Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
4
Zhang P, Zhao Y, Dosher BA, Lu ZL. Evaluating the performance of the staircase and quick Change Detection methods in measuring perceptual learning. J Vis 2019; 19(7):14. PMID: 31323664; PMCID: PMC6645707; DOI: 10.1167/19.7.14.
Abstract
The staircase method has been widely used in measuring perceptual learning. Recently, Zhao, Lesmes, and Lu (2017, 2019) developed the quick Change Detection (qCD) method and applied it to measure the trial-by-trial time course of dark adaptation. In the current study, we conducted two simulations to evaluate the performance of the 3-down/1-up staircase and qCD methods in measuring perceptual learning in a two-alternative forced-choice task. In Study 1, three observers with different time constants (40, 80, and 160 trials) of an exponential learning curve were simulated. Each simulated observer completed staircases with six step sizes (1%, 5%, 10%, 20%, 30%, and 60%) and a qCD procedure, each starting at five levels (+50%, +25%, 0, −25%, and −50% relative to the true threshold on the first trial). Staircases with 1% and 5% step sizes failed to generate more than five reversals half of the time, and the bias and standard deviations of thresholds estimated from the post hoc segment-by-segment qCD analysis were much smaller than those from the staircase method with the other four step sizes. In Study 2, we simulated thresholds in the transfer phases with the same time constants and 50% transfer for each observer in Study 1. The estimated transfer indexes from qCD showed smaller biases and standard deviations than those from the staircase method. In addition, rescoring the simulated data from the staircase method with the Bayesian estimation component of the qCD method greatly improved the estimates. We conclude that the qCD method characterizes the time course of perceptual learning and transfer more accurately, precisely, and efficiently than the staircase method, even with the optimal 10% step size.
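The 3-down/1-up staircase evaluated above is a standard adaptive procedure, and a simulation of the kind described can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the Weibull observer, the exponential learning-curve parameters, and the multiplicative 10% step size are assumptions for demonstration, and the Bayesian qCD procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_threshold(trial, t0=2.0, t_inf=1.0, tau=80):
    # Exponential learning curve: threshold decays from t0 to t_inf
    # with time constant tau (in trials). Values are illustrative.
    return t_inf + (t0 - t_inf) * np.exp(-trial / tau)

def p_correct(level, threshold, slope=3.0, guess=0.5):
    # Weibull psychometric function for a two-alternative forced-choice task.
    return guess + (1 - guess) * (1 - np.exp(-(level / threshold) ** slope))

def run_staircase(n_trials=600, step=0.10, start=2.0, tau=80):
    # 3-down/1-up staircase with multiplicative steps of size `step` (e.g., 10%).
    level, run, levels = start, 0, []
    for t in range(n_trials):
        levels.append(level)
        if rng.random() < p_correct(level, true_threshold(t, tau=tau)):
            run += 1
            if run == 3:              # three consecutive correct -> harder
                level *= 1 - step
                run = 0
        else:                         # one incorrect -> easier
            level *= 1 + step
            run = 0
    return np.array(levels)

track = run_staircase()
print("mean level over the last 100 trials:", round(track[-100:].mean(), 3))
```

A 3-down/1-up rule converges near the 79.4%-correct point of the psychometric function, which is why the tracked level settles somewhat above the nominal threshold parameter.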
Affiliation(s)
- Pan Zhang: Laboratory of Brain Processes (LOBES), Department of Psychology, The Ohio State University, Columbus, OH, USA
- Yukai Zhao: Laboratory of Brain Processes (LOBES), Department of Psychology, The Ohio State University, Columbus, OH, USA
- Barbara Anne Dosher: Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu: Laboratory of Brain Processes (LOBES), Department of Psychology, The Ohio State University, Columbus, OH, USA
5
Waite S, Grigorian A, Alexander RG, Macknik SL, Carrasco M, Heeger DJ, Martinez-Conde S. Analysis of Perceptual Expertise in Radiology - Current Knowledge and a New Perspective. Front Hum Neurosci 2019; 13:213. PMID: 31293407; PMCID: PMC6603246; DOI: 10.3389/fnhum.2019.00213.
Abstract
Radiologists rely principally on visual inspection to detect, describe, and classify findings in medical images. Because most interpretive errors in radiology are perceptual in nature, understanding the path to radiologic expertise in image analysis is essential for educating future generations of radiologists. We review the perceptual tasks and challenges in radiologic diagnosis, discuss models of radiologic image perception, consider the application of perceptual learning methods in medical training, and suggest a new approach to understanding perceptual expertise. Specific, principled enhancements to educational practices in radiology promise to deepen perceptual expertise among radiologists, with the goal of improving training and reducing medical error.
Affiliation(s)
- Stephen Waite: Department of Radiology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Arkadij Grigorian: Department of Radiology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Robert G. Alexander: Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Stephen L. Macknik: Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Marisa Carrasco: Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- David J. Heeger: Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- Susana Martinez-Conde: Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
6
Zhang F, de Ridder H, Pont SC. Asymmetric perceptual confounds between canonical lightings and materials. J Vis 2018; 18(11):11. PMID: 30347097; DOI: 10.1167/18.11.11.
Abstract
To better understand the interactions between material perception and light perception, we further developed our material probe MatMix 1.0 into MixIM 1.0, which allows optical mixing of canonical lighting modes. We selected three canonical lighting modes (ambient, focus, and brilliance) and created scenes to represent the three illuminations. Together with four canonical material modes (matte, velvety, specular, glittery), this resulted in 12 basis images (the "bird set"). These images were optically mixed in our probing method. Three experiments were conducted with different groups of observers. In Experiment 1, observers were instructed to manipulate MixIM 1.0 and match optically mixed lighting modes while discounting the materials. In Experiment 2, observers were shown a pair of stimuli and instructed to simultaneously judge whether the materials and lightings were the same or different in a four-category discrimination task. In Experiment 3, observers performed both the matching and discrimination tasks in which only the ambient and focus light were implemented. Overall, the matching and discrimination results were comparable as (a) robust asymmetric perceptual confounds were found and confirmed in both types of tasks, (b) performances were consistent and all above chance levels, and (c) observers had higher sensitivities to our canonical materials than to our canonical lightings. The latter result may be explained in terms of a generic insensitivity for naturally occurring variations in light conditions. Our findings suggest that midlevel image features are more robust across different materials than across different lightings and, thus, more diagnostic for materials than for lightings, causing the asymmetric perceptual confounds.
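Optical mixing of basis images, as used in this kind of probe, amounts to a weighted superposition of the 12 basis images. The sketch below illustrates only that idea and is not the MatMix 1.0/MixIM 1.0 implementation; the random stand-in images and the weight normalization are assumptions.

```python
import numpy as np

def optical_mix(basis_images, weights):
    # Weighted superposition of same-sized basis images; weights are
    # normalized so the mix stays within the original intensity range.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stack = np.stack(basis_images, axis=0)   # shape (n_basis, H, W)
    return np.tensordot(w, stack, axes=1)    # shape (H, W)

# Random stand-ins for three canonical lighting renderings of one material.
rng = np.random.default_rng(1)
ambient, focus, brilliance = (rng.random((64, 64)) for _ in range(3))

# A hypothetical probe setting: mostly focus light with some ambient light.
mixed = optical_mix([ambient, focus, brilliance], weights=[0.3, 0.6, 0.1])
print(mixed.shape, float(mixed.min()), float(mixed.max()))
```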
Affiliation(s)
- Fan Zhang: Perceptual Intelligence Laboratory, Industrial Design Engineering, Delft University of Technology, The Netherlands
- Huib de Ridder: Perceptual Intelligence Laboratory, Industrial Design Engineering, Delft University of Technology, The Netherlands
- Sylvia C Pont: Perceptual Intelligence Laboratory, Industrial Design Engineering, Delft University of Technology, The Netherlands
7
Donovan I, Carrasco M. Endogenous spatial attention during perceptual learning facilitates location transfer. J Vis 2018; 18(11):7. PMID: 30347094; PMCID: PMC6181190; DOI: 10.1167/18.11.7.
Abstract
Covert attention and perceptual learning enhance perceptual performance. The relation between these two mechanisms is largely unknown. Previously, we showed that manipulating involuntary, exogenous spatial attention during training improved performance at trained and untrained locations, thus overcoming the typical location specificity. Notably, attention-induced transfer only occurred for high stimulus contrasts, at the upper asymptote of the psychometric function (i.e., via response gain). Here, we investigated whether and how voluntary, endogenous attention, the top-down and goal-based type of covert visual attention, influences perceptual learning. Twenty-six participants trained in an orientation discrimination task at two locations: half of participants received valid endogenous spatial precues (attention group), while the other half received neutral precues (neutral group). Before and after training, all participants were tested with neutral precues at two trained and two untrained locations. Within each session, stimulus contrast varied on a trial basis from very low (2%) to very high (64%). Performance was fit by a Weibull psychometric function separately for each day and location. Performance improved for both groups at the trained location, and unlike training with exogenous attention, at the threshold level (i.e., via contrast gain). The neutral group exhibited location specificity: Thresholds decreased at the trained locations, but not at the untrained locations. In contrast, participants in the attention group showed significant location transfer: Thresholds decreased to the same extent at both trained and untrained locations. These results indicate that, similar to exogenous spatial attention, endogenous spatial attention induces location transfer, but influences contrast gain instead of response gain.
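Fitting a Weibull psychometric function to contrast-performance data, as described above, distinguishes contrast gain (a shift of the threshold) from response gain (a change of the upper asymptote). A minimal fitting sketch follows; the contrast levels, proportions correct, and parameter bounds are illustrative assumptions, not the study's data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(c, threshold, slope, lapse=0.01, guess=0.5):
    # Weibull psychometric function for proportion correct in a 2AFC task.
    # Contrast gain shifts `threshold`; response gain scales the asymptote.
    asymptote = 1 - lapse
    return guess + (asymptote - guess) * (1 - np.exp(-(c / threshold) ** slope))

# Illustrative data: contrast levels and proportion correct for one location/day.
contrasts = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
p_corr = np.array([0.52, 0.58, 0.70, 0.85, 0.93, 0.97])

params, _ = curve_fit(weibull, contrasts, p_corr, p0=[0.1, 2.0],
                      bounds=([0.01, 0.5], [0.64, 6.0]))
print(f"threshold = {params[0]:.3f}, slope = {params[1]:.2f}")
```

Comparing thresholds fitted before and after training, at trained and untrained locations, then quantifies contrast-gain changes and location transfer.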
Affiliation(s)
- Ian Donovan: Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco: Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
8
Zhang P, Hou F, Yan FF, Xi J, Lin BR, Zhao J, Yang J, Chen G, Zhang MY, He Q, Dosher BA, Lu ZL, Huang CB. High reward enhances perceptual learning. J Vis 2018; 18(8):11. PMID: 30372760; PMCID: PMC6108453; DOI: 10.1167/18.8.11.
Abstract
Studies of perceptual learning have revealed a great deal of plasticity in adult humans. In this study, we systematically investigated the effects and mechanisms of several forms (trial-by-trial, block, and session rewards) and levels (no, low, high, and subliminal) of monetary reward on the rate, magnitude, and generalizability of perceptual learning. We found that high monetary reward can greatly promote the rate and boost the magnitude of learning and can enhance performance at untrained spatial frequencies and in the untrained eye without changing interocular, interlocation, and interdirection transfer indices. High reward per se made unique contributions to the enhanced learning through improved internal noise reduction. Furthermore, the effects of high reward on perceptual learning occurred in a range of perceptual tasks. The results may have major implications for understanding the nature of the learning rule in perceptual learning and for the use of reward to enhance perceptual learning in practical applications.
Affiliation(s)
- Pan Zhang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Laboratory of Brain Processes (LOBES), Center for Cognitive and Brain Sciences, Center for Cognitive and Behavioral Brain Imaging, and Department of Psychology, The Ohio State University, Columbus, OH, USA
- Fang Hou: School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Fang-Fang Yan: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jie Xi: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Bo-Rong Lin: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jin Zhao: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jia Yang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Ge Chen: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; School of Arts and Design, Zhengzhou University of Light Industry, Zhengzhou, Henan, China
- Meng-Yuan Zhang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Qing He: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Barbara Anne Dosher: Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu: Laboratory of Brain Processes (LOBES), Center for Cognitive and Brain Sciences, Center for Cognitive and Behavioral Brain Imaging, and Department of Psychology, The Ohio State University, Columbus, OH, USA
- Chang-Bing Huang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
9
Hicheur H, Chauvin A, Chassot S, Chenevière X, Taube W. Effects of age on the soccer-specific cognitive-motor performance of elite young soccer players: Comparison between objective measurements and coaches' evaluation. PLoS One 2017; 12:e0185460. PMID: 28953958; PMCID: PMC5617197; DOI: 10.1371/journal.pone.0185460.
Abstract
Cognitive-motor performance (CMP), defined here as the capacity to rapidly use sensory information and transform it into efficient motor output, is a major contributor to performance in almost all sports, including soccer. Here, we used a high-technology system (COGNIFOOT) that combines a visual environment simulator fully synchronized with a motion capture system. This system allowed us to measure objective real-time CMP parameters (passing accuracy/speed and response times) in a large artificial-grass turf playfield. Forty-six (46) young elite soccer players (including 2 female players) aged between 11 and 16 years, all belonging to the same youth soccer academy, were tested. Each player had to pass the ball as fast and as accurately as possible towards visual targets projected onto a large screen located 5.32 meters in front of them (a short-pass situation). We observed a linear age-related increase in CMP: passing accuracy, speed, and reactiveness improved by 4 centimeters, 2.3 km/h, and 30 milliseconds per year of age, respectively. These data were converted into 5-point scales and compared to the judgments of expert coaches, who also used a 5-point scale to evaluate the same CMP parameters, but based on their experience with the players during games and training. The objectively measured age-related CMP changes were also reflected in the expert coaches' judgments, although these were more variable across coaches and age categories. This demonstrates that high-technology systems like COGNIFOOT can complement traditional approaches to talent identification and can objectively monitor the progress of soccer players throughout a cognitive-motor training cycle.
Affiliation(s)
- Halim Hicheur: Sport and Movement Sciences, Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Alan Chauvin: Laboratoire de Psychologie et NeuroCognition, CNRS–UMR 5105, Univ. Grenoble Alpes, Grenoble, France
- Steve Chassot: Sport and Movement Sciences, Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Xavier Chenevière: Sport and Movement Sciences, Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Wolfgang Taube: Sport and Movement Sciences, Department of Medicine, University of Fribourg, Fribourg, Switzerland
10
Bays BC, Visscher KM, Le Dantec CC, Seitz AR. Alpha-band EEG activity in perceptual learning. J Vis 2015; 15(10):7. PMID: 26370167; DOI: 10.1167/15.10.7.
Abstract
In studies of perceptual learning (PL), subjects are typically highly trained across many sessions to achieve perceptual benefits on the stimuli in those tasks. There is currently significant debate regarding what sources of brain plasticity underlie these PL-based learning improvements. Here we investigate the hypothesis that PL, among other mechanisms, leads to task automaticity, especially in the presence of the trained stimuli. To investigate this hypothesis, we trained participants for eight sessions to find an oriented target in a field of near-oriented distractors and examined alpha-band activity, which modulates with attention to visual stimuli, as a possible measure of automaticity. Alpha-band activity was acquired via electroencephalogram (EEG), before and after training, as participants performed the task with trained and untrained stimuli. Results show that participants underwent significant learning in this task (as assessed by threshold, accuracy, and reaction time improvements) and that alpha power increased during the pre-stimulus period and then underwent greater desynchronization at the time of stimulus presentation following training. However, these changes in alpha-band activity were not specific to the trained stimuli, with similar patterns of posttraining alpha power for trained and untrained stimuli. These data are consistent with the view that participants were more efficient at focusing resources at the time of stimulus presentation and are consistent with a greater automaticity of task performance. These findings have implications for PL, as transfer effects from trained to untrained stimuli may partially depend on differential effort of the individual at the time of stimulus processing.
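Alpha-band (roughly 8–12 Hz) power of the kind analyzed above can be estimated per epoch with a standard spectral estimate. The sketch below is a generic illustration using Welch's method on a synthetic signal; the sampling rate, band limits, and the attenuated "post-stimulus" epoch are assumptions, not the study's EEG pipeline.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(epoch, fs=250.0, band=(8.0, 12.0)):
    # Mean power spectral density in the alpha band for one single-channel epoch.
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic 1-s epoch: 10 Hz activity plus noise.
fs = 250.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

pre = alpha_power(epoch, fs)           # e.g., a pre-stimulus window
post = alpha_power(0.3 * epoch, fs)    # attenuated alpha after stimulus onset
print("alpha desynchronization (%):", round(100 * (pre - post) / pre, 1))
```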
11
Donovan I, Szpiro S, Carrasco M. Exogenous attention facilitates location transfer of perceptual learning. J Vis 2015; 15(10):11. PMID: 26426818; PMCID: PMC4594468; DOI: 10.1167/15.10.11.
Abstract
Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be mostly specific to the trained retinal location, which is considered evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that transfer of learning to untrained locations can occur under some specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that can overcome the major limitation of location specificity.
12
Matthews WJ, Meck WH. Time perception: the bad news and the good. Wiley Interdiscip Rev Cogn Sci 2014; 5:429-446. PMID: 25210578; PMCID: PMC4142010; DOI: 10.1002/wcs.1298.
Abstract
Time perception is fundamental and heavily researched, but the field faces a number of obstacles to theoretical progress. In this advanced review, we focus on three pieces of 'bad news' for time perception research: temporal perception is highly labile across changes in experimental context and task; there are pronounced individual differences not just in overall performance but in the use of different timing strategies and the effect of key variables; and laboratory studies typically bear little relation to timing in the 'real world'. We describe recent examples of these issues and in each case offer some 'good news' by showing how new research is addressing these challenges to provide rich insights into the neural and information-processing bases of timing and time perception.
Affiliation(s)
- Warren H Meck: Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
13
Abstract
Training or exposure to a visual feature leads to a long-term improvement in performance on visual tasks that employ this feature. Such performance improvements and the processes that govern them are called visual perceptual learning (VPL). As an ever greater volume of research accumulates in the field, we have reached a point where a unifying model of VPL should be sought. A new wave of research findings has exposed diverging results along three major directions in VPL: specificity versus generalization of VPL, lower versus higher brain locus of VPL, and task-relevant versus task-irrelevant VPL. In this review, we propose a new theoretical model that suggests the involvement of two different stages in VPL: a low-level, stimulus-driven stage, and a higher-level stage dominated by task demands. If experimentally verified, this model would not only constructively unify the current divergent results in the VPL field, but would also lead to a significantly better understanding of visual plasticity, which may, in turn, lead to interventions to ameliorate diseases affecting vision and other pathological or age-related visual and nonvisual declines.
Affiliation(s)
- Kazuhisa Shibata: Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, USA
14
Lapierre M, Howe PDL, Cropper SJ. Transfer of learning between hemifields in multiple object tracking: memory reduces constraints of attention. PLoS One 2013; 8:e83872. PMID: 24349555; PMCID: PMC3859665; DOI: 10.1371/journal.pone.0083872.
Abstract
Many tasks involve tracking multiple moving objects, or stimuli. Some require that individuals adapt to changing or unfamiliar conditions to be able to track well. This study explores processes involved in such adaptation through an investigation of the interaction of attention and memory during tracking. Previous research has shown that during tracking, attention operates independently to some degree in the left and right visual hemifields, due to putative anatomical constraints. It has been suggested that the degree of independence is related to the relative dominance of processes of attention versus processes of memory. Here we show that when individuals are trained to track a unique pattern of movement in one hemifield, that learning can be transferred to the opposite hemifield, without any evidence of hemifield independence. However, learning is not influenced by an explicit strategy of memorisation of brief periods of recognisable movement. The findings lend support to a role for implicit memory in overcoming putative anatomical constraints on the dynamic, distributed spatial allocation of attention involved in tracking multiple objects.
Affiliation(s)
- Mark Lapierre: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Piers D. L. Howe: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Simon J. Cropper: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Victoria, Australia