1
Blaževski L, Stein T, Scholte HS. Feature binding is slow: Temporal integration explains apparent ultrafast binding. J Vis 2024; 24:3. [PMID: 39102229 PMCID: PMC11309034 DOI: 10.1167/jov.24.8.3]
Abstract
Visual perception involves binding of distinct features into a unified percept. Although traditional theories link feature binding to time-consuming recurrent processes, Holcombe and Cavanagh (2001) demonstrated ultrafast, early binding of features that belong to the same object. The task required binding of orientation and luminance within an exceptionally short presentation time. However, because visual stimuli were presented over multiple presentation cycles, their findings can alternatively be explained by temporal integration over the extended stimulus sequence. Here, we conducted three experiments manipulating the number of presentation cycles. If early binding occurs, one extremely short cycle should be sufficient for feature integration. Conversely, late binding theories predict that successful binding requires substantial time and improves with additional presentation cycles. Our findings indicate that task-relevant binding of features from the same object occurs slowly, supporting late binding theories.
Affiliation(s)
- Lucija Blaževski
  - Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Timo Stein
  - Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- H Steven Scholte
  - Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
2
Du Y, Zhang G, Li W, Zhang E. Many Roads Lead to Rome: Differential Learning Processes for the Same Perceptual Improvement. Psychol Sci 2023; 34:313-325. [PMID: 36473146 DOI: 10.1177/09567976221134481]
Abstract
Repeatedly exercising a perceptual ability usually leads to improvement, yet it is unclear whether the mechanisms supporting the same perceptual learning could be flexibly adjusted according to the training settings. Here, we trained adult observers in an orientation-discrimination task at either a single (focused) retinal location or multiple (distributed) retinal locations. We examined the observers' discriminability (N = 52) and bias (N = 20) in orientation perception at the trained and untrained locations. The focused and distributed training enhanced orientation discriminability by the same amount and induced a bias in perceived orientation at the trained locations. Nevertheless, the distributed training promoted location generalization of both practice effects, whereas the focused training resulted in specificity. The two training tactics also differed in long-term retention of the training effects. Our results suggest that, depending on the training settings of the same task, the same discrimination learning could differentially engage location-specific and location-invariant representations of the learned stimulus feature.
Affiliation(s)
- Yangyang Du
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
- Gongliang Zhang
  - Department of Psychology, School of Education, Soochow University
- Wu Li
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
- En Zhang
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
3
Abstract
Human perceptual learning, experience-induced gains in sensory discrimination, typically yields long-term performance improvements. Recent research revealed long-lasting transfer to the untrained location enabled by feature-based attention (FBA), reminiscent of its global effect (Hung & Carrasco, Scientific Reports, 11(1), 13914, 2021). Visual perceptual learning (VPL) is typically studied while observers maintain fixation, but the role of fixational eye movements is unknown. Microsaccades, the largest of fixational eye movements, provide a continuous, online, physiological measure from the oculomotor system that reveals dynamic processing, which is unavailable from behavioral measures alone. We investigated whether and how microsaccades change after training in an orientation discrimination task. For human observers trained with or without FBA, microsaccade rates were significantly reduced during the response window at both trained and untrained locations and orientations. Critically, consistent with long-term training benefits, this microsaccade-rate reduction persisted over a year. Furthermore, microsaccades were biased toward the target location prior to stimulus onset and were more suppressed for incorrect than correct trials after observers' responses. These findings reveal that fixational eye movements and VPL are tightly coupled and that learning-induced microsaccade changes are long-lasting. Thus, microsaccades reflect functional dynamics of the oculomotor system during information encoding, maintenance, and readout, and may serve as a reliable long-term physiological correlate of VPL.
4
Stimulus variability and task relevance modulate binding-learning. Atten Percept Psychophys 2021; 84:1151-1166. [PMID: 34282562 DOI: 10.3758/s13414-021-02338-6]
Abstract
Classical theories of attention posit that the integration of features into an object representation (or feature binding) requires the engagement of focused attention. Studies challenging this idea have demonstrated that feature binding can happen outside of the focus of attention for familiar objects, as well as for arbitrary color-orientation conjunctions. Detection performance for arbitrary feature conjunctions improves with training, suggesting a potential role of perceptual learning mechanisms in the integration of features, a process called "binding-learning". In the present study, we investigated whether stimulus variability and task relevance, two critical determinants of visual perceptual learning, also modulate binding-learning. Transfer of learning in a visual search task to a pre-exposed color-orientation conjunction was assessed under conditions of varying stimulus variability and task relevance. We found transfer of learning for the pre-exposed feature conjunctions that were trained with high variability (Experiment 1). Transfer of learning was not observed when the conjunction was rendered task-irrelevant during training due to pop-out targets (Experiment 2). Our findings show that feature binding is determined by principles of perceptual learning, and they support the idea that functions traditionally attributed to goal-driven attention can be grounded in the learning of the statistical structure of the environment.
5
Hung SC, Carrasco M. Feature-based attention enables robust, long-lasting location transfer in human perceptual learning. Sci Rep 2021; 11:13914. [PMID: 34230522 PMCID: PMC8260789 DOI: 10.1038/s41598-021-93016-y]
Abstract
Visual perceptual learning (VPL) is typically specific to the trained location and feature. However, the degree of specificity depends upon particular training protocols. Manipulating covert spatial attention during training facilitates learning transfer to other locations. Here we investigated whether feature-based attention (FBA), which enhances the representation of particular features throughout the visual field, facilitates VPL transfer, and how long such an effect would last. To do so, we implemented a novel task in which observers discriminated a stimulus orientation relative to two reference angles presented simultaneously before each block. We found that training with FBA enabled remarkable location transfer, reminiscent of its global effect across the visual field, but preserved orientation specificity in VPL. Critically, both the perceptual improvement and location transfer persisted after 1 year. Our results reveal robust, long-lasting benefits induced by FBA in VPL, and have translational implications for improving generalization of training protocols in visual rehabilitation.
Affiliation(s)
- Shao-Chin Hung
  - Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
  - Department of Psychology, New York University, New York, NY, USA
  - Center for Neural Science, New York University, New York, NY, USA
6
Wagner BT, Shaffer LA, Ivanson OA, Jones JA. Assessing working memory capacity through picture span and feature binding with visual-graphic symbols during a visual search task with typical children and adults. Augment Altern Commun 2021; 37:39-51. [PMID: 33559490 DOI: 10.1080/07434618.2021.1879932]
Abstract
This study investigated developmental memory capacity through picture span and feature binding. Participants included third-grade students and college-age adults with typical development. Picture span was used to assess working memory capacity when participants were asked to identify, locate, and sequence common visual-graphic symbols from experimental grid displays. Feature binding was assessed to evaluate how symbols, locations, and sequences are bound together in working memory. The features assessed included symbol recall, location recall, symbol-location binding, symbol-sequence binding, and location-sequence binding. All participants were shown a sequence of visual-graphic symbols on 4 by 4 stimulus grid displays. Participants were then asked to remember symbols amidst distractor symbols and place them in the correct location on a response grid, using the correct sequence. Results revealed expected developmental differences between third graders and adults on picture span. Significant differences between third graders and adults were also obtained for symbol-sequence and location-sequence binding. Performance for both groups on the sequence-binding features was marginal (30% of third graders and 60% of adults bound symbol sequence; 27% of third graders and 52% of adults bound location sequence). These results convey the influence of picture span and feature binding on working memory capacity. Implications are discussed in relation to theoretical models of working memory and compensatory strategies to increase feature binding with target and contextual memory.
Affiliation(s)
- Barry T Wagner
  - Department of Speech Pathology and Audiology, Ball State University, Muncie, IN, USA
- Lauren A Shaffer
  - Department of Speech Pathology and Audiology, Ball State University, Muncie, IN, USA
- Olivia A Ivanson
  - Department of Speech Pathology and Audiology, Ball State University, Muncie, IN, USA
- James A Jones
  - Office of the Vice Provost for Academic Affairs, Ball State University, Muncie, IN, USA
7
Donovan I, Shen A, Tortarolo C, Barbot A, Carrasco M. Exogenous attention facilitates perceptual learning in visual acuity to untrained stimulus locations and features. J Vis 2020; 20:18. [PMID: 32340029 PMCID: PMC7405812 DOI: 10.1167/jov.20.4.18]
Abstract
Visual perceptual learning (VPL) refers to the improvement in performance on a visual task due to practice. A hallmark of VPL is specificity, as improvements are often confined to the trained retinal locations or stimulus features. We have previously found that exogenous (involuntary, stimulus-driven) and endogenous (voluntary, goal-driven) spatial attention can facilitate the transfer of VPL across locations in orientation discrimination tasks mediated by contrast sensitivity. Here, we investigated whether exogenous spatial attention can facilitate such transfer in acuity tasks that have been associated with higher specificity. We trained observers for 3 days (days 2-4) in a Landolt acuity task (Experiment 1) or a Vernier hyperacuity task (Experiment 2), with either exogenous precues (attention group) or neutral precues (neutral group). Importantly, during pre-tests (day 1) and post-tests (day 5), all observers were tested with neutral precues; thus, groups differed only in their attentional allocation during training. For the Landolt acuity task, we found evidence of location transfer in both the neutral and attention groups, suggesting weak location specificity of VPL. For the Vernier hyperacuity task, we found evidence of location and feature specificity in the neutral group, and learning transfer in the attention group, with similar improvement at trained and untrained locations and features. Our results reveal that, when there is specificity in a perceptual acuity task, exogenous spatial attention can overcome that specificity and facilitate learning transfer to both untrained locations and features simultaneously with the same training. Thus, in addition to improving performance, exogenous attention generalizes perceptual learning across locations and features.
Affiliation(s)
- Ian Donovan
  - Department of Psychology and Neural Science, New York University, New York, NY, USA
- Angela Shen
  - Department of Psychology, New York University, New York, NY, USA
- Antoine Barbot
  - Department of Psychology, New York University, New York, NY, USA
  - Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco
  - Department of Psychology, New York University, New York, NY, USA
  - Center for Neural Science, New York University, New York, NY, USA
8
Chesham A, Gerber SM, Schütz N, Saner H, Gutbrod K, Müri RM, Nef T, Urwyler P. Search and Match Task: Development of a Taskified Match-3 Puzzle Game to Assess and Practice Visual Search. JMIR Serious Games 2019; 7:e13620. [PMID: 31094325 PMCID: PMC6532342 DOI: 10.2196/13620]
Abstract
Background: Visual search declines with aging, dementia, and brain injury and is linked to limitations in everyday activities. Recent studies suggest that visual search can be improved with practice using computerized visual search tasks and puzzle video games. For practical use, it is important that visual search ability can be assessed and practiced in a controlled and adaptive way. However, commercial puzzle video games make it hard to control task difficulty, and there are few means to collect performance data.
Objective: The aim of this study was to develop and initially validate the search and match task (SMT), which combines an enjoyable tile-matching match-3 puzzle video game with features of the visual search paradigm (taskified game). The SMT was designed as a single-target visual search task that allows control over task difficulty variables and collection of performance data.
Methods: The SMT is played on a grid-based (width × height) puzzle board filled with different types of colored polygons. A wide range of difficulty levels was generated by combining three task variables, each ranging from 4 to 8: the height and width of the puzzle board (set size) and the number of tile types (distractor heterogeneity). For each difficulty level, large numbers of playable trials were pregenerated using Python. Each trial consists of 4 consecutive puzzle boards, where the goal of the task is to find a target tile configuration (search) on the puzzle board and swap 2 adjacent tiles to create a line of 3 identical tiles (match). For each puzzle board, there is exactly 1 possible match (single-target search). In a user study, 28 young adults (aged 18 to 31 years), 13 older adults (aged 64 to 79 years), and 11 oldest adults (aged 86 to 98 years) played the long (young and older adults) or short (oldest adults) version of the difficulty levels of the SMT. Participants rated their perception and the usability of the task and completed neuropsychological tests that measure cognitive domains engaged by the puzzle game.
Results: Results from the user study indicate that the target search time is associated with set size, distractor heterogeneity, and age. Results further indicate that search performance is associated with general cognitive ability, selective and divided attention, visual search, and visuospatial and pattern recognition ability.
Conclusions: Overall, this study shows that an everyday puzzle game–based task can be experimentally controlled, is enjoyable and user-friendly, and permits data collection to assess visual search and cognitive abilities. Further research is needed to evaluate the potential of the SMT game to assess and practice visual search ability in an enjoyable and adaptive way. A PsychoPy version of the SMT is freely available for researchers.
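The Methods above pin down the trial-generation constraint: a 4-8 × 4-8 board of 4-8 tile types with exactly one adjacent-tile swap that creates a line of three identical tiles, pregenerated in Python. The sketch below is a hypothetical illustration of that rejection-sampling step, not the authors' code; all function names and defaults are assumptions.

```python
# Minimal sketch (assumed, not the published SMT code): generate and verify a
# single-target match-3 board, i.e. a board with no pre-existing line of three
# and exactly one adjacent swap that produces one.
import random
from itertools import product

def has_three_in_a_row(board):
    """Return True if any row or column contains three identical adjacent tiles."""
    h, w = len(board), len(board[0])
    for r, c in product(range(h), range(w)):
        if c + 2 < w and board[r][c] == board[r][c + 1] == board[r][c + 2]:
            return True
        if r + 2 < h and board[r][c] == board[r + 1][c] == board[r + 2][c]:
            return True
    return False

def count_matching_swaps(board):
    """Count adjacent swaps (with right or lower neighbour) that yield a three-in-a-row."""
    h, w = len(board), len(board[0])
    n = 0
    for r, c in product(range(h), range(w)):
        for dr, dc in ((0, 1), (1, 0)):
            r2, c2 = r + dr, c + dc
            if r2 < h and c2 < w:
                board[r][c], board[r2][c2] = board[r2][c2], board[r][c]   # try the swap
                if has_three_in_a_row(board):
                    n += 1
                board[r][c], board[r2][c2] = board[r2][c2], board[r][c]   # undo it
    return n

def random_board(height=4, width=4, n_types=4, seed=None):
    """Random board with n_types distinct tile identities (4-8 in the SMT)."""
    rng = random.Random(seed)
    return [[rng.randrange(n_types) for _ in range(width)] for _ in range(height)]

# Rejection sampling for a single-target trial board.
board = random_board(seed=0)
while has_three_in_a_row(board) or count_matching_swaps(board) != 1:
    board = random_board()
```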
Affiliation(s)
- Alvin Chesham
  - Gerontechnology & Rehabilitation, University of Bern, Bern, Switzerland
- Narayan Schütz
  - Gerontechnology & Rehabilitation, University of Bern, Bern, Switzerland
- Hugo Saner
  - Gerontechnology & Rehabilitation, University of Bern, Bern, Switzerland
  - Department of Cardiology, University Hospital (Inselspital), Bern, Switzerland
- Klemens Gutbrod
  - Department of Neurology, University Neurorehabilitation, University Hospital Bern (Inselspital), University of Bern, Bern, Switzerland
- René Martin Müri
  - Gerontechnology & Rehabilitation, University of Bern, Bern, Switzerland
  - Department of Neurology, University Neurorehabilitation, University Hospital Bern (Inselspital), University of Bern, Bern, Switzerland
- Tobias Nef
  - Gerontechnology & Rehabilitation, University of Bern, Bern, Switzerland
  - Artificial Organ Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Prabitha Urwyler
  - Gerontechnology & Rehabilitation, University of Bern, Bern, Switzerland
  - Artificial Organ Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
9
Donovan I, Carrasco M. Endogenous spatial attention during perceptual learning facilitates location transfer. J Vis 2018; 18:7. [PMID: 30347094 PMCID: PMC6181190 DOI: 10.1167/18.11.7]
Abstract
Covert attention and perceptual learning enhance perceptual performance. The relation between these two mechanisms is largely unknown. Previously, we showed that manipulating involuntary, exogenous spatial attention during training improved performance at trained and untrained locations, thus overcoming the typical location specificity. Notably, attention-induced transfer only occurred for high stimulus contrasts, at the upper asymptote of the psychometric function (i.e., via response gain). Here, we investigated whether and how voluntary, endogenous attention, the top-down and goal-based type of covert visual attention, influences perceptual learning. Twenty-six participants trained in an orientation discrimination task at two locations: half of participants received valid endogenous spatial precues (attention group), while the other half received neutral precues (neutral group). Before and after training, all participants were tested with neutral precues at two trained and two untrained locations. Within each session, stimulus contrast varied on a trial basis from very low (2%) to very high (64%). Performance was fit by a Weibull psychometric function separately for each day and location. Performance improved for both groups at the trained location, and unlike training with exogenous attention, at the threshold level (i.e., via contrast gain). The neutral group exhibited location specificity: Thresholds decreased at the trained locations, but not at the untrained locations. In contrast, participants in the attention group showed significant location transfer: Thresholds decreased to the same extent at both trained and untrained locations. These results indicate that, similar to exogenous spatial attention, endogenous spatial attention induces location transfer, but influences contrast gain instead of response gain.
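To make the analysis step described above concrete, here is a minimal sketch of fitting a Weibull psychometric function to proportion correct over the 2%-64% contrast range, with the threshold read off as the alpha parameter. It is not the authors' code; the 0.5 guess rate (two-choice discrimination), starting values, bounds, and data points are illustrative assumptions.

```python
# Minimal sketch: Weibull psychometric fit to proportion correct vs. contrast.
import numpy as np
from scipy.optimize import curve_fit

def weibull(contrast, alpha, beta, lapse, guess=0.5):
    """guess + (1 - guess - lapse) * (1 - exp(-(c/alpha)^beta))."""
    return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(contrast / alpha) ** beta))

# Hypothetical data: contrasts from 2% to 64% and observed proportion correct.
contrasts = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
p_correct = np.array([0.52, 0.58, 0.71, 0.86, 0.94, 0.96])

# Fit threshold (alpha), slope (beta), and lapse rate with simple bounds.
(alpha, beta, lapse), _ = curve_fit(
    weibull, contrasts, p_correct,
    p0=[0.1, 2.0, 0.02],
    bounds=([0.01, 0.5, 0.0], [1.0, 10.0, 0.1]),
)
print(f"threshold alpha = {alpha:.3f}, slope beta = {beta:.2f}, lapse = {lapse:.3f}")
```

Contrast gain, as reported for the trained locations, would then show up as a decrease in the fitted alpha between pre- and post-test sessions.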
Affiliation(s)
- Ian Donovan
  - Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
  - Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
10
Reavis EA, Frank SM, Tse PU. Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions. Atten Percept Psychophys 2018; 80:1110-1126. [PMID: 29651754 PMCID: PMC6035115 DOI: 10.3758/s13414-018-1516-9]
Abstract
Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest that conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli; they also learn stimulus identity mappings that contribute to performance improvements.
Affiliation(s)
- Eric A Reavis
  - Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
  - Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, Los Angeles, CA, 90024, USA
  - Desert Pacific Mental Illness Research, Education, and Clinical Center, Greater Los Angeles Veterans Affairs Healthcare System, Los Angeles, CA, 90073, USA
- Sebastian M Frank
  - Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
  - Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, 02912, USA
- Peter U Tse
  - Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
11
Yashar A, Denison RN. Feature reliability determines specificity and transfer of perceptual learning in orientation search. PLoS Comput Biol 2017; 13:e1005882. [PMID: 29240813 PMCID: PMC5746251 DOI: 10.1371/journal.pcbi.1005882]
Abstract
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL’s effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
Author summary: Training can modify the visual system to produce improvements on perceptual tasks (visual perceptual learning), which is associated with adult brain plasticity. Visual perceptual learning has important clinical applications: it improves the vision of adults with visual deficits, e.g., amblyopia and cortical blindness, and even presbyopia (aging eye). A critical issue in visual perceptual learning is its specificity to the trained stimulus. Specificity gives insight into the processes underlying experience-dependent plasticity but can be an obstacle in the development of efficient rehabilitation protocols. Under what circumstances visual perceptual learning transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: specificity in visual search depends on intrinsic variations in the reliability of feature representations; e.g., vertically oriented lines are represented in V1 with greater reliability than tilted lines. Our data and computational model suggest that training on sensory features with intrinsically low reliability can maximize the generalizability of learning, particularly in complex natural environments in which task performance is limited by low-reliability features. Our study has possible implications for the development of efficient clinical applications of perceptual learning.
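As a toy illustration of the abstract's central idea, the hypothetical simulation below treats search as picking the item whose noisy orientation measurement is closest to the target, with smaller measurement noise for near-cardinal than oblique orientations and learning modeled as a uniform increase in reliability. It is deliberately simpler than the published model (for example, it omits separate learning rates for targets and distractors), and every number is an illustrative assumption.

```python
# Toy sketch (not the published model): search accuracy limited by
# orientation-dependent measurement noise, with learning as a gain in reliability.
import numpy as np

rng = np.random.default_rng(0)

SIGMA = {"cardinal": 4.0, "oblique": 8.0}   # assumed noise (deg): cardinal more reliable

def search_accuracy(target_kind, distractor_kind, set_size=6,
                    learning_gain=1.0, n_trials=20_000):
    """Localization accuracy for an observer who picks the item whose noisy
    orientation measurement is closest to the target orientation."""
    target_ori = 0.0 if target_kind == "cardinal" else 45.0
    distractor_ori = 45.0 if target_kind == "cardinal" else 0.0
    sig_t = SIGMA[target_kind] / learning_gain
    sig_d = SIGMA[distractor_kind] / learning_gain
    correct = 0
    for _ in range(n_trials):
        meas = np.full(set_size, distractor_ori) + rng.normal(0, sig_d, set_size)
        meas[0] = target_ori + rng.normal(0, sig_t)    # item 0 is the target
        correct += np.argmin(np.abs(meas - target_ori)) == 0
    return correct / n_trials

# Before vs. after learning (learning modeled here as doubled reliability).
for tgt, dst in [("cardinal", "oblique"), ("oblique", "cardinal")]:
    before = search_accuracy(tgt, dst)
    after = search_accuracy(tgt, dst, learning_gain=2.0)
    print(f"{tgt} target among {dst}: {before:.2f} -> {after:.2f}")
```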
Affiliation(s)
- Amit Yashar
  - Department of Psychology and Center for Neural Science, New York University, New York, New York, United States of America
  - The School of Psychological Sciences, Tel Aviv University, Tel-Aviv, Israel
- Rachel N. Denison
  - Department of Psychology and Center for Neural Science, New York University, New York, New York, United States of America