1
Zivony A, Eimer M. Attention and feature binding in the temporal domain. Psychon Bull Rev 2024. PMID: 38587754. DOI: 10.3758/s13423-024-02493-5.
Abstract
Previous studies have shown that illusory conjunctions can emerge for both spatially and temporally proximal objects. However, the mechanisms involved in binding in the temporal domain are not yet fully understood. In the current study, we investigated the role of attentional processes in correct and incorrect temporal binding, and specifically how feature binding is affected by the speed of attentional engagement. In two experiments, participants searched for a target in a rapid serial visual presentation stream and reported its colour and alphanumeric identity. Temporal binding errors were frequent. Critically, when participants reported the identity of a distractor instead of a target, they were also more likely to report the colour of this distractor. This association was observed both within and between individuals. These findings suggest that attentional engagement facilitates the binding of temporally co-occurring features. We discuss these results within a 'diachronic' framework of selective attention, and also consider other factors that contribute to temporal binding errors.
Affiliation(s)
- Alon Zivony: Department of Psychology, University of Sheffield, Sheffield, UK
- Martin Eimer: Department of Psychological Sciences, Birkbeck College, University of London, London, UK
2
Chen J, Golomb JD. Dynamic neural reconstructions of attended object location and features using EEG. J Neurophysiol 2023;130:139-154. PMID: 37283457. PMCID: PMC10393364. DOI: 10.1152/jn.00180.2022.
Abstract
Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.

NEW & NOTEWORTHY We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
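The train-on-Hold, reconstruct-on-Shift inverted encoding model (IEM) step described above can be sketched compactly. The following numpy version is a minimal illustration; the basis set, channel count, and variable names are my own assumptions, not the authors' code.

```python
import numpy as np

def make_basis(n_chan=8, n_ori=180):
    """Raised-cosine channel basis tiling the 180-degree orientation space."""
    centers = np.arange(0, 180, 180 / n_chan)
    oris = np.arange(n_ori)
    d = np.deg2rad(2 * (oris[:, None] - centers[None, :]))  # 180 deg -> full circle
    return np.maximum(np.cos(d), 0) ** (n_chan - 1)          # (n_ori, n_chan)

def iem_reconstruct(train_eeg, train_ori, test_eeg, basis):
    """Train encoding weights on stable (Hold) trials at one time point,
    then invert them to reconstruct channel responses on Shift trials."""
    C_train = basis[train_ori]                    # (n_train, n_chan)
    W = np.linalg.pinv(C_train) @ train_eeg       # (n_chan, n_electrodes)
    return test_eeg @ np.linalg.pinv(W)           # (n_test, n_chan)

# Applied independently at every EEG time point, the reconstructed channel
# profiles form the time course of the attended-orientation representation.
```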
Affiliation(s)
- Jiageng Chen: Department of Psychology, The Ohio State University, Columbus, Ohio, United States
- Julie D Golomb: Department of Psychology, The Ohio State University, Columbus, Ohio, United States
3
Difficulty limits of visual mental imagery. Cognition 2023;236:105436. PMID: 36907115. DOI: 10.1016/j.cognition.2023.105436.
Abstract
While past work has focused on the representational format of mental imagery, and on the similarities of its operation and neural substrate to online perception, surprisingly little has tested the boundaries of the level of detail that mental imagery can generate. To answer this question, we take inspiration from the visual short-term memory literature, a related field which has found that memory capacity is affected by the number of items, whether they are unique, and whether and how they move. We test these factors of set size, color heterogeneity, and transformation in mental imagery through both subjective (Experiments 1 and 2) and objective (Experiment 2) measures - difficulty ratings and a change detection task, respectively - to determine the capacity limits of mental imagery, and we find that its limits are similar to those for visual short-term memory. In Experiment 1, participants rated the difficulty of imagining 1-4 colored items as subjectively higher when there were more items, when the items had unique colors instead of an identical color, and when they scaled or rotated instead of merely linearly translating. Experiment 2 isolated these subjective difficulty ratings of rotation for uniquely colored items and added a rotation-distance manipulation (10° to 110°), again finding higher subjective difficulty for more items and for items that rotated farther; the objective measure showed a decrease in performance for more items, but not for rotation distance. Congruities between the subjective and objective results suggest similar costs, but some incongruities suggest that subjective reports can be overly optimistic, likely because they are biased by an illusion of detail.
4
Abstract
The brain's ability to create a unified conscious representation of an object by integrating information from multiple perception pathways is called perceptual binding. Binding is crucial for normal cognitive function. Some perceptual binding errors and disorders have been linked to certain neurological conditions, brain lesions, and conditions that give rise to illusory conjunctions. However, the mechanism of perceptual binding remains elusive. Here, I present a computational model of binding using two sets of coupled oscillatory processes that are assumed to occur in response to two different percepts. I use the model to study the dynamic behavior of the coupled processes and to characterize how they can modulate each other and reach temporal synchrony. I identify different oscillatory dynamic regimes that depend on coupling mechanisms and parameter values. The model can also discriminate different combinations of initial inputs that are set by the initial states of the coupled processes. Decoding brain signals that are formed through perceptual binding is a challenging task, but my modeling results demonstrate how crosstalk between two systems of processes can modulate their outputs. Therefore, this mechanistic model can help one gain a better understanding of how crosstalk between perception pathways affects the dynamic behavior of systems involved in perceptual binding.
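Since the abstract describes two coupled oscillatory processes reaching temporal synchrony, a minimal Kuramoto-style sketch can make the idea concrete. This is my own illustrative reduction with arbitrary parameter values, not the author's biophysical model.

```python
import numpy as np

def coupled_oscillators(k12=0.8, k21=0.8, w1=6.0, w2=6.5,
                        theta0=(0.0, 2.0), dt=1e-3, T=5.0):
    """Two Kuramoto-style phase oscillators standing in for two percept
    processes; coupling pulls their phases toward synchrony."""
    n = int(T / dt)
    th = np.zeros((n, 2))
    th[0] = theta0
    for t in range(n - 1):
        d = th[t, 1] - th[t, 0]
        th[t + 1, 0] = th[t, 0] + dt * (2 * np.pi * w1 + k12 * np.sin(d))
        th[t + 1, 1] = th[t, 1] + dt * (2 * np.pi * w2 - k21 * np.sin(d))
    # phase-locking value: near 1 = synchronized ("bound"), near 0 = independent
    plv = np.abs(np.mean(np.exp(1j * (th[:, 0] - th[:, 1]))))
    return th, plv

_, plv_strong = coupled_oscillators(k12=5.0, k21=5.0)
_, plv_weak = coupled_oscillators(k12=0.1, k21=0.1)
print(f"PLV strong coupling: {plv_strong:.2f}, weak: {plv_weak:.2f}")
```

With strong coupling the two processes phase-lock despite their different natural frequencies; with weak coupling they drift, which is the qualitative regime distinction the abstract refers to.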
5
Kiefer CM, Ito J, Weidner R, Boers F, Shah NJ, Grün S, Dammers J. Revealing Whole-Brain Causality Networks During Guided Visual Searching. Front Neurosci 2022;16:826083. PMID: 35250461. PMCID: PMC8894880. DOI: 10.3389/fnins.2022.826083.
Abstract
In our daily lives, we use eye movements to actively sample visual information from our environment ("active vision"). However, little is known about how the underlying mechanisms are affected by goal-directed behavior. In a study of 31 participants, magnetoencephalography was combined with eye-tracking technology to investigate how interregional interactions in the brain change when engaged in two distinct forms of active vision: freely viewing natural images or performing a guided visual search. Regions of interest with significant fixation-related evoked activity (FRA) were identified with spatiotemporal cluster permutation testing. Using generalized partial directed coherence, we show that, in response to fixation onset, a bilateral cluster consisting of four regions (posterior insula, transverse temporal gyri, superior temporal gyrus, and supramarginal gyrus) formed a highly connected network during free viewing. A comparable network also emerged in the right hemisphere during the search task, with the right supramarginal gyrus acting as a central node for information exchange. The results suggest that all four regions are vital to visual processing and guiding attention. Furthermore, the right supramarginal gyrus was the only region where activity during fixations on the search target was significantly negatively correlated with search response times. Based on our findings, we hypothesize that, following a fixation, the right supramarginal gyrus supplies the right supplementary eye field (SEF) with new information to update the priority map guiding the eye movements during the search task.
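Generalized partial directed coherence (gPDC) is computed from a fitted multivariate autoregressive (MVAR) model. The sketch below shows the core math in numpy under simplifying assumptions (least-squares MVAR fit, diagonal noise normalization, toy data); it illustrates the measure itself, not the authors' MEG pipeline.

```python
import numpy as np

def fit_mvar(X, p=2):
    """Least-squares fit of X_t = sum_k A_k X_{t-k} + e for (n_ch, n_time) data."""
    n_ch, n_t = X.shape
    Y = X[:, p:]
    Z = np.vstack([X[:, p - k:n_t - k] for k in range(1, p + 1)])
    A = Y @ np.linalg.pinv(Z)                       # (n_ch, n_ch * p)
    sigma = np.var(Y - A @ Z, axis=1)               # residual variances
    A_ks = [A[:, k * n_ch:(k + 1) * n_ch] for k in range(p)]
    return A_ks, sigma

def gpdc(A_ks, sigma, freqs, fs):
    """gPDC[f, i, j]: directed influence of channel j on channel i at freq f,
    normalized per source column and scaled by residual variances."""
    n_ch = A_ks[0].shape[0]
    out = np.zeros((len(freqs), n_ch, n_ch))
    for fi, f in enumerate(freqs):
        Af = np.eye(n_ch, dtype=complex)
        for k, Ak in enumerate(A_ks, start=1):
            Af -= Ak * np.exp(-2j * np.pi * f * k / fs)
        num = np.abs(Af) / np.sqrt(sigma)[:, None]  # rows scaled by 1/sigma_i
        out[fi] = num / np.sqrt((num ** 2).sum(axis=0, keepdims=True))
    return out  # values in [0, 1]

rng = np.random.default_rng(0)
data = rng.standard_normal((4, 5000))               # 4 channels of toy data
A_ks, sigma = fit_mvar(data, p=2)
conn = gpdc(A_ks, sigma, freqs=np.linspace(1, 45, 45), fs=1000)
```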
Affiliation(s)
- Christian M. Kiefer: Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany; Faculty of Mathematics, Computer Science and Natural Sciences, RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance (JARA)-Brain – Institute Brain Structure and Function, Institute of Neuroscience and Medicine (INM-10), Forschungszentrum Jülich GmbH, Jülich, Germany
- Junji Ito: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany; Jülich Aachen Research Alliance (JARA)-Brain – Institute Brain Structure and Function, Institute of Neuroscience and Medicine (INM-10), Forschungszentrum Jülich GmbH, Jülich, Germany
- Ralph Weidner: Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich GmbH, Jülich, Germany
- Frank Boers: Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany
- N. Jon Shah: Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of Neuroscience and Medicine (INM-11), Jülich Aachen Research Alliance (JARA), Forschungszentrum Jülich GmbH, Jülich, Germany; Jülich Aachen Research Alliance (JARA)-Brain – Translational Medicine, Aachen, Germany; Department of Neurology, University Hospital RWTH Aachen, Aachen, Germany
- Sonja Grün: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany; Jülich Aachen Research Alliance (JARA)-Brain – Institute Brain Structure and Function, Institute of Neuroscience and Medicine (INM-10), Forschungszentrum Jülich GmbH, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Jürgen Dammers: Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany
6
Gronau N. To Grasp the World at a Glance: The Role of Attention in Visual and Semantic Associative Processing. J Imaging 2021;7(9):191. PMID: 34564117. PMCID: PMC8470651. DOI: 10.3390/jimaging7090191.
Abstract
Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli—either objects, scenes or a scene and an object—were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention.
Affiliation(s)
- Nurit Gronau: Department of Psychology and Department of Cognitive Science Studies, The Open University of Israel, Raanana 4353701, Israel
7
Peters B, Kriegeskorte N. Capturing the objects of vision with neural networks. Nat Hum Behav 2021;5:1127-1144. PMID: 34545237. DOI: 10.1038/s41562-021-01194-6.
Abstract
Human visual perception carves a scene at its physical joints, decomposing the world into objects, which are selectively attended, tracked and predicted as we engage our surroundings. Object representations emancipate perception from the sensory input, enabling us to keep in mind that which is out of sight and to use perceptual content as a basis for action and symbolic cognition. Human behavioural studies have documented how object representations emerge through grouping, amodal completion, proto-objects and object files. By contrast, deep neural network models of visual object recognition remain largely tethered to sensory input, despite achieving human-level performance at labelling objects. Here, we review related work in both fields and examine how these fields can help each other. The cognitive literature provides a starting point for the development of new experimental tasks that reveal mechanisms of human object perception and serve as benchmarks driving the development of deep neural network models that will put the object into object recognition.
Affiliation(s)
- Benjamin Peters: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Nikolaus Kriegeskorte: Mortimer B. Zuckerman Mind Brain Behavior Institute; Department of Psychology; Department of Neuroscience; Department of Electrical Engineering, Columbia University, New York, NY, USA
8
Wolfe JM. Guided Search 6.0: An updated model of visual search. Psychon Bull Rev 2021.
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual fields (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. The setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
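The interplay of guided selection (about 20 items/s), asynchronous diffusion (> 150 ms/item), and an accumulating quitting signal can be illustrated with a toy simulation. The sketch below is a loose paraphrase of that architecture with invented parameter values, not the published GS6 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gs6_toy_trial(n_items=12, target_present=True, dt=0.001,
                  select_interval=0.05, drift_target=8.0, drift_distractor=-4.0,
                  noise=3.0, item_thresh=1.0, quit_rate=0.8, quit_thresh=1.0):
    """One toy search trial: attention selects the next-highest priority item
    ~20x/s; each selected item runs its own diffusion toward 'target' (+thresh)
    or 'reject' (-thresh); an accumulating quitting signal ends the search."""
    priority = rng.random(n_items)                 # stand-in priority map
    target = 0 if target_present else -1
    if target_present:
        priority[target] += 0.5                    # guidance favors the target
    order = np.argsort(-priority)                  # selection order
    evidence, n_selected = {}, 0
    t, next_select, quit_signal = 0.0, 0.0, 0.0
    while True:
        t += dt
        if t >= next_select and n_selected < n_items:
            evidence[order[n_selected]] = 0.0      # new diffuser starts
            n_selected += 1
            next_select += select_interval
        for item in list(evidence):                # asynchronous parallel diffusers
            drift = drift_target if item == target else drift_distractor
            evidence[item] += drift * dt + noise * np.sqrt(dt) * rng.normal()
            if evidence[item] > item_thresh:
                return "present", t                # target identified
            if evidence[item] < -item_thresh:
                del evidence[item]                 # distractor rejected
        quit_signal += quit_rate * dt
        if quit_signal > quit_thresh:
            return "absent", t                     # quitting signal reached

print(gs6_toy_trial(target_present=True))
print(gs6_toy_trial(target_present=False))
```

Because new items are selected every 50 ms while each diffusion takes longer, several items are in the recognition pipeline at once, reproducing the serial/parallel hybrid character described above.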
Affiliation(s)
- Jeremy M Wolfe: Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School; Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA 02139, USA
9
Golomb JD, Mazer JA. Visual remapping. Annu Rev Vis Sci 2021;7.
Abstract
Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance.
Affiliation(s)
- Julie D Golomb: Department of Psychology, The Ohio State University, Columbus, Ohio 43210, USA
- James A Mazer: Department of Microbiology and Cell Biology, Montana State University, Bozeman, Montana 59717, USA
10
Frady EP, Kent SJ, Olshausen BA, Sommer FT. Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures. Neural Comput 2020;32:2311-2331. PMID: 33080162. DOI: 10.1162/neco_a_01331.
Abstract
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples-parsing of a tree-like data structure and parsing of a visual scene-how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing it outperforms alternative approaches.
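The resonator dynamics are easy to state concretely for bipolar hypervectors. The following numpy sketch factors a three-way Hadamard binding; the dimension, codebook sizes, and initialization are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)
D, M = 2000, 20                         # vector dimension, codebook size

# Random bipolar codebooks for three factors (e.g., shape, color, position)
X, Y, Z = (rng.choice([-1, 1], size=(M, D)) for _ in range(3))
ix, iy, iz = 3, 7, 12
s = X[ix] * Y[iy] * Z[iz]               # composite: elementwise (Hadamard) binding

def cleanup(v, codebook):
    """Pattern completion: project onto the codebook span and re-binarize."""
    out = np.sign(codebook.T @ (codebook @ v))
    out[out == 0] = 1
    return out

# Initialize each estimate as the superposition of its whole codebook
x_hat, y_hat, z_hat = (cleanup(C.sum(axis=0), C) for C in (X, Y, Z))

# Resonator iterations: unbind the other two estimates (bipolar vectors are
# their own multiplicative inverses), then clean up against the codebook
for _ in range(50):
    x_hat = cleanup(s * y_hat * z_hat, X)
    y_hat = cleanup(s * x_hat * z_hat, Y)
    z_hat = cleanup(s * x_hat * y_hat, Z)

print([int(np.argmax(C @ v)) for C, v in ((X, x_hat), (Y, y_hat), (Z, z_hat))])
# -> [3, 7, 12] once the search has converged
```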
Affiliation(s)
- E Paxon Frady: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, USA; Intel Laboratories, Neuromorphic Computing Lab, San Francisco, CA 94111, USA
- Spencer J Kent: Redwood Center for Theoretical Neuroscience and Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Bruno A Olshausen: Redwood Center for Theoretical Neuroscience, Helen Wills Neuroscience Institute, and School of Optometry, University of California, Berkeley, Berkeley, CA 94720, USA
- Friedrich T Sommer: Redwood Center for Theoretical Neuroscience and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; Intel Laboratories, Neuromorphic Computing Lab, San Francisco, CA 94111, USA
11
The structure of illusory conjunctions reveals hierarchical binding of multipart objects. Atten Percept Psychophys 2020;82:550-563. PMID: 31646439. DOI: 10.3758/s13414-019-01867-5.
Abstract
The world around us is filled with complex objects, full of color, motion, shape, and texture, and these features seem to be represented separately in the early visual system. Anne Treisman pointed out that binding these separate features together into coherent conscious percepts is a serious challenge, and she argued that selective attention plays a critical role in this process. Treisman also showed that, consistent with this view, outside the focus of attention we suffer from illusory conjunctions: misperceived pairings of features into objects. Here we used Treisman's logic to study the structure of pre-attentive representations of multipart, multicolor objects, by exploring the patterns of illusory conjunctions that arise outside the focus of attention. We found consistent evidence of some pre-attentive binding of colors to their parts, and weaker evidence of binding multiple colors of the same object. The extent to which such hierarchical binding occurs seems to depend on the geometric structure of multipart objects: objects whose parts are easier to separate seem to exhibit greater pre-attentive binding. Together, these results suggest that representations outside the focus of attention are not entirely "shapeless bundles of features," but preserve some meaningful object structure.
12
Abstract
Spatial attention is thought to be the "glue" that binds features together (e.g., Treisman & Gelade, 1980, Cognitive Psychology, 12(1), 97-136) - but attention is dynamic, constantly moving across multiple goals and locations. For example, when a person moves her eyes, visual inputs that are coded relative to the eyes (retinotopic) must be rapidly updated to maintain stable world-centered (spatiotopic) representations. Here, we examined how dynamic updating of spatial attention after a saccadic eye movement affects object-feature binding. Immediately after a saccade, participants were simultaneously presented with four colored and oriented bars (one at a precued spatiotopic target location) and instructed to reproduce both the color and orientation of the target item. Object-feature binding was assessed by applying probabilistic mixture models to the joint distribution of feature errors: feature reports for the target item could be correlated (and thus bound together) or independent. We found that compared with holding attention without an eye movement, attentional updating after an eye movement produced more independent errors, including illusory conjunctions, in which one feature of the item at the spatiotopic target location was misbound with the other feature of the item at the initial retinotopic location. These findings suggest that even when only one spatiotopic location is task relevant, spatial attention - and thus object-feature binding - is malleable across and after eye movements, heightening the challenge that eye movements pose for the binding problem and for visual stability.
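A minimal version of the joint mixture-model logic can be sketched as follows: compare a "bound" model, in which the two features of a report succeed or fail together, against an "independent" model with separate guess rates. Everything here (parameterization, simulated data, fitting by L-BFGS-B) is an illustrative assumption, not the authors' analysis code.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def nll_independent(params, e1, e2):
    """Each feature independently: von Mises around 0 (report) or uniform (guess)."""
    g1, g2, k = params
    p1 = (1 - g1) * vonmises.pdf(e1, k) + g1 / (2 * np.pi)
    p2 = (1 - g2) * vonmises.pdf(e2, k) + g2 / (2 * np.pi)
    return -np.sum(np.log(p1) + np.log(p2))

def nll_bound(params, e1, e2):
    """Whole-object states: both features reported correctly, or both guessed."""
    g, k = params
    joint = (1 - g) * vonmises.pdf(e1, k) * vonmises.pdf(e2, k) + g / (2 * np.pi) ** 2
    return -np.sum(np.log(joint))

# Simulate 'bound' data: on 20% of trials both features are random guesses
rng = np.random.default_rng(0)
n = 500
guess = rng.random(n) < 0.2
e1 = np.where(guess, rng.uniform(-np.pi, np.pi, n), rng.vonmises(0, 8, n))
e2 = np.where(guess, rng.uniform(-np.pi, np.pi, n), rng.vonmises(0, 8, n))

fit_i = minimize(nll_independent, [0.3, 0.3, 4], args=(e1, e2),
                 bounds=[(.001, .999), (.001, .999), (.5, 50)], method="L-BFGS-B")
fit_b = minimize(nll_bound, [0.3, 4], args=(e1, e2),
                 bounds=[(.001, .999), (.5, 50)], method="L-BFGS-B")
print("AIC independent:", 2 * 3 + 2 * fit_i.fun)
print("AIC bound:      ", 2 * 2 + 2 * fit_b.fun)   # lower when errors are correlated
```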
13
Spatial congruency bias in identifying objects is triggered by retinal position congruence: Examination using the Ternus-Pikler illusion. Sci Rep 2020;10:4630. PMID: 32170153. PMCID: PMC7070042. DOI: 10.1038/s41598-020-61698-5.
Abstract
When two different objects are sequentially presented at the same location, the viewer tends to misjudge them as identical (spatial congruency bias). The present study examined whether the spatial congruency bias involves not only retinotopic but also non-retinotopic processing, using the Ternus-Pikler illusion. In the experiments, two objects (central and peripheral) appeared in an initial frame. The target object was presented in the central area of the display, while the peripheral object was either on the left or right side of the target object. In the second frame, the target object was again presented in the central area, and the peripheral object was on the opposite side. Two kinds of inter-stimulus intervals (ISIs) were used. In the no-blank condition, the target object was perceived as stationary, and the peripheral object appeared to move to the opposite side. In the long-blank condition, however, the two objects were perceived to move together. Participants judged whether the target objects in the two frames were identical. The spatial congruency bias occurred irrespective of the ISI condition. Our findings suggest that the spatial congruency bias is mainly based on retinotopic processing.
14
Valdois S, Roulin JL, Bosse ML. Visual attention modulates reading acquisition. Vision Res 2019;165:152-161. DOI: 10.1016/j.visres.2019.10.011.
15
Multisensory feature integration in (and out) of the focus of spatial attention. Atten Percept Psychophys 2019;82:363-376. DOI: 10.3758/s13414-019-01813-5.
16
Evans KK, Culpan AM, Wolfe JM. Detecting the "gist" of breast cancer in mammograms three years before localized signs of cancer are visible. Br J Radiol 2019;92:20190136. PMID: 31166769. DOI: 10.1259/bjr.20190136.
Abstract
Objectives: After a 500 ms presentation, experts can distinguish abnormal mammograms at above-chance levels even when only the breast contralateral to the lesion is shown. Here, we show that this signal of abnormality is detectable 3 years before localized signs of cancer become visible.

Methods: In 4 prospective studies, 59 expert observers from 3 groups viewed 116-200 bilateral mammograms for 500 ms each. Half of the images were prior exams acquired 3 years before the onset of visible, actionable cancer and half were normal. Exp. 1D included cases having visible abnormalities. Observers rated the likelihood of abnormality on a 0-100 scale and categorized breast density. Performance was measured using receiver operating characteristic (ROC) analysis.

Results: In all three groups, observers could detect abnormal images at above-chance levels 3 years prior to visible signs of breast cancer (p < 0.001). The results were not due to specific salient cases nor to breast density. Performance was correlated with expertise, quantified by the number of mammographic cases read within a year. In Exp. 1D, with cases having visible actionable pathology included, the full group of readers failed to reliably detect abnormal priors, with the exception of a subgroup of the six most experienced observers.

Conclusions: Imaging specialists can detect signals of abnormality in mammograms acquired years before lesions become visible. Detection may depend on expertise acquired by reading large numbers of cases.

Advances in knowledge: The global gist signal can serve as an imaging risk factor with the potential to identify patients with elevated risk for developing cancer, resulting in improved early cancer diagnosis rates and improved prognosis for females with breast cancer.
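The ROC analysis applied to 0-100 abnormality ratings reduces to a rank comparison between the two case groups. A small sketch, with made-up rating distributions standing in for the real data:

```python
import numpy as np

def auc_from_ratings(ratings_abnormal, ratings_normal):
    """Area under the ROC curve from 0-100 abnormality ratings.
    Equals the probability that a randomly chosen abnormal case is rated
    higher than a randomly chosen normal case (ties count half)."""
    a = np.asarray(ratings_abnormal, dtype=float)[:, None]
    n = np.asarray(ratings_normal, dtype=float)[None, :]
    return (a > n).mean() + 0.5 * (a == n).mean()

rng = np.random.default_rng(0)
prior_cases = rng.normal(55, 20, 100).clip(0, 100)    # hypothetical ratings
normal_cases = rng.normal(45, 20, 100).clip(0, 100)
print(f"AUC = {auc_from_ratings(prior_cases, normal_cases):.2f}")  # ~0.64
```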
Affiliation(s)
- Karla K Evans: Psychology Department, University of York, York, United Kingdom
- Jeremy M Wolfe: Harvard Medical School and Brigham and Women's Hospital, Boston, MA, USA
17
Furutate M, Fujii Y, Morita H, Morita M. Visual Feature Integration of Three Attributes in Stimulus-Response Mapping Is Distinct From That of Two. Front Neurosci 2019;13:35. PMID: 30814924. PMCID: PMC6381064. DOI: 10.3389/fnins.2019.00035.
Abstract
In the human visual system, different attributes of an object are processed separately and are thought to be then temporarily bound by attention into an integrated representation to produce a specific response. However, if such representations existed in the brain for arbitrary multi-attribute objects, a combinatorial explosion problem would be unavoidable. Here, we show that attention may bind features of different attributes only in pairs and that bound feature pairs, rather than integrated object representations, are associated with responses for unfamiliar objects. We found that in a mapping task from three-attribute stimuli to responses, presenting three attributes in pairs (two attributes in each window) did not significantly complicate feature integration and response selection when the stimuli were not very familiar. We also found that repeated presentation of the same triple conjunctions significantly improved performance on the stimulus-response task when the correct responses were determined by the combination of three attributes, but this familiarity effect was not observed when the response could be determined by two attributes. These findings indicate that integration of three or more attributes is a distinct process from that of two, requiring long-term learning or some serial process. This suggests that integrated object representations are not formed or are formed only for a limited number of very familiar objects, which resolves the computational difficulty of the binding problem.
Affiliation(s)
- Mizuki Furutate: Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba, Japan
- Yumiko Fujii: Graduate School of Library, Information and Media Studies, University of Tsukuba, Tsukuba, Japan
- Hiromi Morita: Faculty of Library, Information and Media Science, University of Tsukuba, Tsukuba, Japan
- Masahiko Morita: Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
18
Wick FA, Alaoui Soce A, Garg S, Grace RC, Wolfe JM. Perception in dynamic scenes: What is your Heider capacity? J Exp Psychol Gen 2019;148:252-271. PMID: 30667269. PMCID: PMC6396302. DOI: 10.1037/xge0000557.
Abstract
The classic animation experiment by Heider and Simmel (1944) revealed that humans have a strong tendency to impose narrative even on displays showing interactions between simple geometric shapes. In their most famous animation with three simple shapes, observers almost inevitably interpreted them as rational agents with intentions, desires, and beliefs ("That nasty big triangle!"). Much work on dynamic scenes has identified basic visual properties that can make shapes seem animate. Here, we investigate the limits on the ability to use narrative to share information about animated scenes. We created 30-second Heider-style cartoons with 3-9 items. Item trajectories were generated automatically by a simple set of rules, but without a script. In Experiments 1 and 2, 10 observers wrote short narratives for each cartoon. Next, new observers were shown a cartoon and then presented with a narrative generated for that specific cartoon or one generated for a different cartoon having the same items. Observers rated the fit of the narrative to the cartoon on a scale from 1 (clearly does not fit) to 5 (clearly fits). Performance declined markedly when the number of items was larger than 3. Experiment 3 had observers determine if a short clip of a cartoon came from a longer clip. Experiment 4 had observers determine which of two narratives fit a cartoon. Finally, in Experiment 5, narratives always mentioned every item in a display. In all cases of matching narrative to cartoon, performance drops most dramatically between 3 and 4 items.
Affiliation(s)
- Farahnaz A Wick: Visual Attention Lab, Harvard Medical School/Brigham & Women's Hospital
- Sahaj Garg: Department of Computer Science, Stanford University
- River C Grace: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Jeremy M Wolfe: Visual Attention Lab, Harvard Medical School/Brigham & Women's Hospital
19
Dowd EW, Golomb JD. Object-Feature Binding Survives Dynamic Shifts of Spatial Attention. Psychol Sci 2019;30:343-361. PMID: 30694718. DOI: 10.1177/0956797618818481.
Abstract
Visual object perception requires integration of multiple features; spatial attention is thought to be critical to this binding. But attention is rarely static - how does dynamic attention impact object integrity? Here, we manipulated covert spatial attention and had participants (total N = 48) reproduce multiple properties (color, orientation, location) of a target item. Object-feature binding was assessed by applying probabilistic models to the joint distribution of feature errors: feature reports for the same object could be correlated (and thus bound together) or independent. We found that splitting attention across multiple locations degrades object integrity, whereas rapid shifts of spatial attention maintain bound objects. Moreover, we document a novel attentional phenomenon, wherein participants exhibit unintentional fluctuations - lapses of spatial attention - yet nevertheless preserve object integrity at the wrong location. These findings emphasize the importance of a single focus of spatial attention for object-feature binding, even when that focus is dynamically moving across the visual field.
Affiliation(s)
- Emma Wu Dowd: Department of Psychology, The Ohio State University
- Julie D Golomb: Department of Psychology, The Ohio State University
20
Kamkar S, Moghaddam HA, Lashgari R. Early Visual Processing of Feature Saliency Tasks: A Review of Psychophysical Experiments. Front Syst Neurosci 2018;12:54. PMID: 30416433. PMCID: PMC6212481. DOI: 10.3389/fnsys.2018.00054.
Abstract
The visual system is constantly bombarded with information originating from the outside world, but it is unable to process all of the received information at any given time. Instead, the most salient parts of the visual scene are selected involuntarily, immediately after the first glance, and processed along with endogenous signals in the brain. Vision scientists have shown that the early visual system, from the retina to the lateral geniculate nucleus (LGN) and then the primary visual cortex, selectively processes the low-level features of the visual scene. Everything we perceive from the visual scene is based on these feature properties and their subsequent combination in higher visual areas. Different experiments have been designed to investigate the impact of these features on saliency and to understand the relevant visual mechanisms. In this paper, we review the psychophysical experiments published in recent decades that indicate how low-level salient features are processed in the early visual cortex to extract the most important and basic information of the visual scene. We also discuss important open questions, which might be pursued to investigate the impact of higher-level features on saliency in complex scenes or natural images.
Affiliation(s)
- Shiva Kamkar: Machine Vision and Medical Image Processing Laboratory, Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran; Brain Engineering Research Center, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Hamid Abrishami Moghaddam: Machine Vision and Medical Image Processing Laboratory, Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Reza Lashgari: Brain Engineering Research Center, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
21
Deroy O, Faivre N, Lunghi C, Spence C, Aller M, Noppeney U. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisens Res 2018;29:585-606. PMID: 27795942. DOI: 10.1163/22134808-00002529.
Abstract
The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.
Affiliation(s)
- O Deroy: Centre for the Study of the Senses, Institute of Philosophy, School of Advanced Study, University of London, London, UK
- N Faivre: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- C Lunghi: Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- C Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
- M Aller: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- U Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
22
Utochkin IS, Khvostov VA, Stakina YM. Continuous to discrete: Ensemble-based segmentation in the perception of multiple feature conjunctions. Cognition 2018;179:178-191. PMID: 29960219. DOI: 10.1016/j.cognition.2018.06.016.
Abstract
Although objects around us vary in a number of continuous dimensions (color, size, orientation, etc.), we tend to perceive the objects using more discrete, categorical descriptions (e.g., berries and leaves). Previously, we described how continuous ensemble statistics of simple features are transformed into categorical classes: the visual system tests whether the feature distribution has one or several peaks, each representing a likely "category". Here, we tested the mechanism of segmentation for more complex conjunctions of features. Observers discriminated between two textures filled with lines of various lengths and orientations, which had the same feature distributions in both textures but opposite directions of correlation. Critically, feature distributions could be "segmentable" (only extreme feature values, with a large gap between them) or "non-segmentable" (both extreme and middle values present, with a smooth transition). Segmentable displays yielded steeper psychometric functions, indicating better discrimination (Experiment 1). The effect of segmentability arises early in visual processing (Experiment 2) and is likely to be provided by global sampling of the entire field (Experiment 3). Rapid segmentation also requires that both feature dimensions have a "segmentable" distribution supporting division of the textures into categorical classes of conjunctions. We propose that observers select items from one side (peak) of one dimension and sample mean differences along a second dimension within the selected subset. In this scenario, subset selection is a limiting factor (Experiment 4) in texture discrimination. Yet, segmentability provided by sharp feature distributions seems to facilitate both subset selection and mean comparison.
Affiliation(s)
- Igor S Utochkin: National Research University Higher School of Economics, Russian Federation
- Yulia M Stakina: National Research University Higher School of Economics, Russian Federation
23
Shibai A, Arimoto T, Yoshinaga T, Tsuchizawa Y, Khureltulga D, Brown ZP, Kakizuka T, Hosoda K. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish. Sci Rep 2018;8:8589. PMID: 29872061. PMCID: PMC5988670. DOI: 10.1038/s41598-018-26186-x.
Abstract
Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture and motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed the biological motion of medaka into the two elements and synthesized visual stimuli that contained both, either, or neither of the two elements. We found that medaka were attracted by visual stimuli that contained at least one of the two elements. Together with previously known static visual cues, these findings add to the accumulating evidence that multiple kinds of information support conspecific recognition in medaka. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.
Affiliation(s)
- Atsushi Shibai: Graduate School of Information Science and Technology, Osaka University, Yamadaoka 1-5, Suita, Osaka, 565-0871, Japan
- Tsunehiro Arimoto: Graduate School of Engineering Science, Osaka University, Machikaneyama-cho 1-3, Toyonaka, Osaka, 560-8531, Japan
- Tsukasa Yoshinaga: Graduate School of Engineering Science, Osaka University, Machikaneyama-cho 1-3, Toyonaka, Osaka, 560-8531, Japan
- Yuta Tsuchizawa: Graduate School of Frontier Bioscience, Osaka University, Yamadaoka 1-3, Suita, Osaka, 565-0871, Japan
- Dashdavaa Khureltulga: Graduate School of Information Science and Technology, Osaka University, Yamadaoka 1-5, Suita, Osaka, 565-0871, Japan
- Zuben P Brown: Graduate School of Frontier Bioscience, Osaka University, Yamadaoka 1-3, Suita, Osaka, 565-0871, Japan
- Taishi Kakizuka: Graduate School of Frontier Bioscience, Osaka University, Yamadaoka 1-3, Suita, Osaka, 565-0871, Japan
- Kazufumi Hosoda: Graduate School of Information Science and Technology, Osaka University, Yamadaoka 1-5, Suita, Osaka, 565-0871, Japan; Institute for Academic Initiatives, Osaka University, Yamadaoka 1-5, Suita, Osaka, 565-0871, Japan
24
Abstract
Can observers determine the gist of a natural scene in a purely feedforward manner, or does this process require deliberation and feedback? Observers can recognise images that are presented for very brief periods of time before being masked. It is unclear whether this recognition process occurs in a purely feedforward manner or whether feedback from higher cortical areas to lower cortical areas is necessary. The current study revealed that the minimum presentation time required to identify or to determine the gist of a natural scene was no different from that required to determine the orientation or colour of an isolated line. Conversely, a visual task that would be expected to necessitate feedback (determining whether an image contained exactly six lines) required a significantly greater minimum presentation time. Assuming that the orientation or colour of an isolated line can be determined in a purely feedforward manner, these results indicate that the identification and the determination of the gist of a natural scene can also be performed in a purely feedforward manner. These results challenge a number of theories of visual recognition that require feedback.
26
Chen N, Tanaka K, Namatame M, Watanabe K. Color-Shape Associations in Deaf and Hearing People. Front Psychol 2016;7:355. PMID: 27014161. PMCID: PMC4791540. DOI: 10.3389/fpsyg.2016.00355.
Abstract
Studies have contended that neurotypical Japanese individuals exhibit consistent color-shape associations (red-circle, yellow-triangle, and blue-square) and that these associations may be constructed from common semantic information between colors and shapes through learning and/or language experience. Here, we conducted two experiments, using a direct questionnaire survey and an indirect behavioral test (the Implicit Association Test, IAT), to examine whether the construction of color-shape associations entails phonological information, by comparing color-shape associations in deaf and hearing participants. The results of the direct questionnaire showed that deaf and hearing participants had similar patterns of color-shape associations (red-circle, yellow-triangle, and blue-square). However, unlike hearing participants, deaf participants failed to show facilitated processing of congruent pairs in the IAT tasks. The present results suggest that color-shape associations in deaf participants may not be strong enough to be revealed by indirect behavioral tasks and are relatively weak in comparison to those of hearing participants. Thus, phonological information likely plays a role in the construction of color-shape associations.
Affiliation(s)
- Na Chen: Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Kanji Tanaka: Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Miki Namatame: Department of Synthetic Design, Tsukuba University of Technology, Tsukuba, Japan
- Katsumi Watanabe: Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan
27
Shevell SK, Wang W. Color-motion feature-binding errors are mediated by a higher-order chromatic representation. J Opt Soc Am A Opt Image Sci Vis 2016;33:A85-A92. PMID: 26974945. PMCID: PMC5588901. DOI: 10.1364/josaa.33.000a85.
Abstract
Peripheral and central moving objects of the same color may be perceived to move in the same direction even though the peripheral objects have a different true direction of motion [Nature 429, 262 (2004); doi:10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects and, moreover, that the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014); doi:10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.
Affiliation(s)
- Steven K. Shevell: Institute for Mind and Biology; Department of Psychology; Department of Ophthalmology & Visual Science, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
- Wei Wang: Institute for Mind and Biology; Department of Psychology, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
28
A biophysical neural network model for visual working memory that accounts for memory binding errors. BMC Neurosci 2015;16(Suppl 1):P8. PMCID: PMC4697729. DOI: 10.1186/1471-2202-16-s1-p8.
29
Abstract
Color-motion feature-binding errors occur in the periphery when half of the objects are red and move downward, and the other half are green and move upward. When red and green objects in the central visual field are similar but move in the opposite directions (red upward, green downward), peripheral objects often take on the perceived motion direction of the like-colored central objects (Wu, Kanai, & Shimojo, 2004). The present study determined whether color is essential to elicit these motion-binding errors, and tested two hypotheses that attempt to explain them. One hypothesis holds that binding errors occur because peripheral and central objects become linked if they have combinations of features in common. A peripheral object's link to central objects overwhelms its posited weak peripheral representation for motion feature binding, so the peripheral object appears to move in the direction of the linked central objects. Eliminating color by making all stimuli achromatic, therefore, should not increase peripheral binding errors. An alternative hypothesis is that binding errors depend on the overall feature correspondence among central and peripheral features represented at a preconjunctive level. In this case, binding errors may increase when all objects are changed to achromatic because chromatic central/peripheral correspondence is maximal (100%). Experiments showed more motion-binding errors with all-achromatic objects than with half red and half green objects. This and additional findings imply that peripheral motion-binding errors (a) can be elicited without color and (b) depend at least in part on the similarity of central and peripheral features represented preconjunctively.
Collapse
|
30
|
Visual features for perception, attention, and working memory: Toward a three-factor framework. Cognition 2015; 145:43-52. [PMID: 26299507 DOI: 10.1016/j.cognition.2015.08.007] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2015] [Revised: 08/14/2015] [Accepted: 08/17/2015] [Indexed: 11/23/2022]
Abstract
Visual features are the general building blocks for attention, perception, and working memory. Here, I explore the factors that quantitatively predict the differences these features make in various paradigms. I combined the strengths of experimental and correlational approaches in a novel way by developing an individual-item differences analysis to extract the factors from 16 stimulus types on the basis of their roles in eight tasks. A large sample size (N = 410) ensured that all eight tasks had a reliability (Cronbach's α) of no less than 0.975, allowing the factors to be precisely determined. Three orthogonal factors were identified, corresponding respectively to featural strength (i.e., how close a stimulus is to a basic feature), visual strength (i.e., visual quality of the stimulus), and spatial strength (i.e., how well a stimulus can be represented as a spatial structure). Featural strength helped substantially in all the tasks but moderately less so in perceptual discrimination; visual strength helped substantially in low-level tasks but not in high-level tasks; and spatial strength helped change detection but hindered ensemble matching and visual search. Jointly, these three factors explained 96.4% of the variance across the eight tasks, making it clear that they account for almost everything about the roles of these 16 stimulus types in these eight tasks.
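For readers who want the computational ingredients, a minimal sketch of the reliability and factor-extraction steps follows; the data are random placeholders, and PCA stands in for the paper's factor analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_participants, n_items); alpha from item vs. total-score variance
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
task_scores = rng.normal(size=(410, 16))   # placeholder participants x items
print("alpha:", cronbach_alpha(task_scores))

# Factor step, sketched: one score per stimulus type per task (16 x 8);
# three principal components stand in for the three extracted factors.
scores = rng.normal(size=(16, 8))          # placeholder data
Z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
_, s, _ = np.linalg.svd(Z, full_matrices=False)
print("variance explained by 3 factors:", (s[:3] ** 2).sum() / (s ** 2).sum())
```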
Collapse
|
31
|
Feature integration in the mapping of multi-attribute visual stimuli to responses. Sci Rep 2015; 5:9056. [PMID: 25762010 PMCID: PMC4356980 DOI: 10.1038/srep09056] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2014] [Accepted: 02/17/2015] [Indexed: 11/09/2022] Open
Abstract
In the human visual system, different attributes of an object, such as shape and color, are separately processed in different modules and then integrated to elicit a specific response. In this process, different attributes are thought to be temporarily “bound” together by focusing attention on the object; however, how such binding contributes to stimulus-response mapping remains unclear. Here we report that learning and performance of stimulus-response tasks was more difficult when three attributes of the stimulus determined the correct response than when two attributes did. We also found that spatially separated presentations of attributes considerably complicated the task, although they did not markedly affect target detection. These results are consistent with a paired-attribute model in which bound feature pairs, rather than object representations, are associated with responses by learning. This suggests that attention does not bind three or more attributes into a unitary object representation, and long-term learning is required for their integration.
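A toy implementation of the paired-attribute idea (stimuli and attribute names are hypothetical) shows how responses can be learned from bound feature pairs rather than whole-object representations.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Toy paired-attribute model: responses are associated with bound feature
# PAIRS rather than unitary object representations.
pair_map = defaultdict(Counter)

def learn(stimulus, response):
    for pair in combinations(sorted(stimulus.items()), 2):
        pair_map[pair][response] += 1

def respond(stimulus):
    votes = Counter()
    for pair in combinations(sorted(stimulus.items()), 2):
        votes.update(pair_map[pair])
    return votes.most_common(1)[0][0] if votes else None

learn({"shape": "circle", "color": "red", "texture": "dotted"}, "left")
learn({"shape": "square", "color": "red", "texture": "plain"}, "right")
print(respond({"shape": "circle", "color": "red", "texture": "dotted"}))  # left
```

When three attributes jointly determine the response, individual pairs recur with different responses, so pair votes conflict and more learning is needed; that is the model's account of the three-attribute cost.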
Collapse
|
32
|
Rosa Salva O, Sovrano VA, Vallortigara G. What can fish brains tell us about visual perception? Front Neural Circuits 2014; 8:119. [PMID: 25324728 PMCID: PMC4179623 DOI: 10.3389/fncir.2014.00119] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2014] [Accepted: 09/09/2014] [Indexed: 12/26/2022] Open
Abstract
Fish are a complex taxonomic group, whose diversity and distance from other vertebrates well suit the comparative investigation of brain and behavior: in fish species we observe substantial differences from the telencephalic organization of other vertebrates and an astonishing variety in the development and complexity of pallial structures. We will concentrate on the contribution of research on fish behavioral biology to the understanding of the evolution of the visual system. We shall review evidence concerning perceptual effects that reflect fundamental principles of visual system functioning, highlighting the similarities and differences between distant fish groups and with other vertebrates. We will focus on perceptual effects reflecting some of the main tasks that the visual system must attain. In particular, we will deal with subjective contours and optical illusions, invariance effects, second-order motion, biological motion and, finally, the perceptual binding of object properties into a unified higher-level representation.
Collapse
Affiliation(s)
- Orsola Rosa Salva
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
| | - Valeria Anna Sovrano
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Dipartimento di Psicologia e Scienze Cognitive, University of Trento, Rovereto, Italy
| | - Giorgio Vallortigara
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Dipartimento di Psicologia e Scienze Cognitive, University of Trento, Rovereto, Italy
| |
Collapse
|
33
|
|
34
|
|
35
|
Misbinding of color and motion in human visual cortex. Curr Biol 2014; 24:1354-1360. [PMID: 24856212 DOI: 10.1016/j.cub.2014.04.045] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2013] [Revised: 04/09/2014] [Accepted: 04/22/2014] [Indexed: 11/21/2022]
Abstract
A fundamental challenge for the visual system is to integrate visual features into a coherent scene, known as the binding problem. The neural mechanisms of feature binding are hard to identify because of difficulties in separating active feature binding from feature co-occurrence. In previous studies on feature binding, visual features were superimposed and presented simultaneously. Neurons throughout the visual cortex are known to code multiple features. Therefore, the observed binding effects could be due to the physical co-occurrence of features and the sensory representation of feature pairings. It is uncertain whether the mechanisms responsible for perceptual binding were actually recruited. To address this issue, we performed psychophysical and fMRI experiments to investigate the neural mechanisms of a steady-state misbinding of color and motion, because feature misbinding is probably the most striking evidence for the active existence of the binding mechanisms. We found that adapting to the color-motion misbinding generated the color-contingent motion aftereffect, as well as the color-contingent motion adaptation effect in visual cortex. Notably, V2 exhibited the strongest adaptation effect, which significantly correlated with the aftereffect across subjects. Furthermore, effective connectivity analysis using dynamic causal modeling showed that the misbinding was closely associated with enhanced feedback from V4 and V5 to V2. These findings provide strong evidence for active feature binding in early visual cortex and suggest a critical role of reentrant connections from specialized intermediate areas to early visual cortex in this process.
Collapse
|
36
|
Burwick T. The binding problem. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2014; 5:305-15. [PMID: 26308565 DOI: 10.1002/wcs.1279] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2013] [Revised: 11/25/2013] [Accepted: 01/12/2014] [Indexed: 11/07/2022]
Abstract
The brain processes information in a distributed manner so that features of the sensory input are detected at different sites and subsets of these features are integrated into objects. The notion of 'binding' refers to the corresponding integration process, leading to perception of these objects as entities, and 'the binding problem' refers either to the scientific challenge of identifying mechanisms that may achieve binding or to the difficulty that mind and brain may have with binding in certain situations. This review concentrates on binding of properties in visual perception, but other varieties of the binding problem are also mentioned. The binding problem is reviewed from psychological, neurobiological, and computational perspectives. For further resources related to this article, please visit the WIREs website. Conflict of interest: the author has declared no conflicts of interest for this article.
Collapse
Affiliation(s)
- Thomas Burwick
- Frankfurt Institute for Advanced Studies (FIAS), Goethe University Frankfurt, Frankfurt am Main, Germany
| |
Collapse
|
37
|
|
38
|
Otsuka S, Kawaguchi J. Salient natural scene attracts both younger and older adults' attention. Psychologia 2014. [DOI: 10.2117/psysoc.2014.153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Affiliation(s)
- Sachio OTSUKA
- Kyoto University
- Japan Society for the Promotion of Science
| | | |
Collapse
|
39
|
Seeing and hearing a word: combining eye and ear is more efficient than combining the parts of a word. PLoS One 2013; 8:e64803. [PMID: 23734220 PMCID: PMC3667182 DOI: 10.1371/journal.pone.0064803] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2012] [Accepted: 04/18/2013] [Indexed: 11/21/2022] Open
Abstract
To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60∶1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word’s image and sound, and combine them efficiently. The brain’s machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation.
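The threshold bookkeeping is simple enough to state in code; the two helper functions below just restate the abstract's definitions, and the numeric threshold energies are hypothetical.

```python
def sensitivity(human_threshold):
    # one over the energy of the faintest identifiable signal
    return 1.0 / human_threshold

def efficiency(ideal_threshold, human_threshold):
    # ratio of the best possible (ideal-observer) threshold to the human one
    return ideal_threshold / human_threshold

# Illustrative values echoing the abstract's 20% vs. 1% contrast
# (the threshold energies themselves are hypothetical):
print(efficiency(1.0, 5.0))     # 0.20 -- detecting a short sinusoid
print(efficiency(1.0, 100.0))   # 0.01 -- identifying a whole word
```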
Collapse
|
40
|
Stepwise connectivity of the modal cortex reveals the multimodal organization of the human brain. J Neurosci 2012; 32:10649-61. [PMID: 22855814 DOI: 10.1523/jneurosci.0759-12.2012] [Citation(s) in RCA: 213] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological, and neuroimaging research on multimodal integration has stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipitotemporal, prefrontal, and posterior parietal cortices have been associated with sensory multimodal processing. Even though this rather patchy organization of brain regions gives us a glimpse of the perceptual convergence, the articulation of the flow of information from modality-related systems to more parallel cognitive processing systems remains elusive. Using a method called stepwise functional connectivity analysis, the present study analyzes the functional connectome and the transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain.
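A minimal sketch of stepwise connectivity as commonly described, i.e., counting paths of increasing length through a binarized association matrix via matrix powers; the data, seed choice, and lack of normalization here are assumptions, not the study's pipeline.

```python
import numpy as np

def stepwise_connectivity(adj, seed, max_steps):
    # adj: binarized, symmetric connectivity matrix (n x n); returns, for each
    # step length 1..max_steps, the number of paths from `seed` to every node.
    maps, power = [], np.eye(len(adj))
    for _ in range(max_steps):
        power = power @ adj            # paths one step longer
        maps.append(power[seed])
    return np.array(maps)

rng = np.random.default_rng(0)
adj = (rng.random((50, 50)) > 0.8).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                      # symmetric, no self-connections
print(stepwise_connectivity(adj, seed=0, max_steps=4).shape)  # (4, 50)
```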
Collapse
|
41
|
Mayberry MR, Crocker MW, Knoeferle P. Learning to attend: a connectionist model of situated language comprehension. Cogn Sci 2012; 33:449-96. [PMID: 21585477 DOI: 10.1111/j.1551-6709.2009.01019.x] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
Evidence from numerous studies using the visual world paradigm has revealed both that spoken language can rapidly guide attention in a related visual scene and that scene information can immediately influence comprehension processes. These findings motivated the coordinated interplay account (Knoeferle & Crocker, 2006) of situated comprehension, which claims that utterance-mediated attention crucially underlies this closely coordinated interaction of language and scene processing. We present a recurrent sigma-pi neural network that models the rapid use of scene information, exploiting an utterance-mediated attentional mechanism that directly instantiates the CIA. The model is shown to achieve high levels of performance (both with and without scene contexts), while also exhibiting hallmark behaviors of situated comprehension, such as incremental processing, anticipation of appropriate role fillers, as well as the immediate use, and priority, of depicted event information through the coordinated use of utterance-mediated attention to the scene.
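As a flavor of the architecture, here is a generic sigma-pi unit (not the paper's exact network): multiplicative "pi" pairing of two input vectors lets an utterance-derived attention vector gate scene input before the weighted "sigma" stage.

```python
import numpy as np

def sigma_pi_unit(x, y, W, b=0.0):
    # "pi" stage: pairwise products x_i * y_j; "sigma" stage: weighted sum;
    # logistic output. W[i, j] weights the product of x[i] and y[j].
    z = np.einsum("i,ij,j->", x, W, y) + b
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
utterance = rng.random(8)   # hypothetical utterance-derived attention vector
scene = rng.random(8)       # hypothetical depicted-event features
print(sigma_pi_unit(utterance, scene, rng.normal(size=(8, 8))))
```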
Collapse
Affiliation(s)
- Marshall R Mayberry
- Computational Linguistics, Saarland University, Germany; Center for Research in Language, University of California, San Diego
| | | | | |
Collapse
|
42
|
|
43
|
Velik R. From simple receptors to complex multimodal percepts: a first global picture on the mechanisms involved in perceptual binding. Front Psychol 2012; 3:259. [PMID: 22837751 PMCID: PMC3402139 DOI: 10.3389/fpsyg.2012.00259] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2012] [Accepted: 07/06/2012] [Indexed: 11/13/2022] Open
Abstract
The binding problem in perception is concerned with answering the question how information from millions of sensory receptors, processed by millions of neurons working in parallel, can be merged into a unified percept. Binding in perception reaches from the lowest levels of feature binding up to the levels of multimodal binding of information coming from the different sensor modalities and also from other functional systems. The last 40 years of research have shown that the binding problem cannot be solved easily. Today, it is considered as one of the key questions to brain understanding. To date, various solutions have been suggested to the binding problem including: (1) combination coding, (2) binding by synchrony, (3) population coding, (4) binding by attention, (5) binding by knowledge, expectation, and memory, (6) hardwired vs. on-demand binding, (7) bundling and binding of features, (8) the feature-integration theory of attention, and (9) synchronization through top-down processes. Each of those hypotheses addresses important aspects of binding. However, each of them also suffers from certain weak points and can never give a complete explanation. This article gives a brief overview of the so far suggested solutions of perceptual binding and then shows that those are actually not mutually exclusive but can complement each other. A computationally verified model is presented which shows that, most likely, the different described mechanisms of binding act (1) at different hierarchical levels and (2) in different stages of "perceptual knowledge acquisition." The model furthermore considers and explains a number of inhibitory "filter mechanisms" that suppress the activation of inappropriate or currently irrelevant information.
Collapse
|
44
|
Aging and performance on an everyday-based visual search task. Acta Psychol (Amst) 2012; 140:208-17. [PMID: 22664318 DOI: 10.1016/j.actpsy.2012.05.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2011] [Revised: 04/10/2012] [Accepted: 05/01/2012] [Indexed: 11/24/2022] Open
Abstract
Research on aging and visual search often requires older people to search computer screens for target letters or numbers. The aim of this experiment was to investigate age-related differences using an everyday-based visual search task in a large participant sample (n=261) aged 20-88 years. Our results show that: (1) old-old adults have more difficulty with triple conjunction searches with one highly distinctive feature compared to young-old and younger adults; (2) age-related declines in conjunction searches emerge in middle age then progress throughout older age; (3) age-related declines are evident in feature searches on target absent trials, as older people seem to exhaustively and serially search the whole display to determine a target's absence. Together, these findings suggest that declines emerge in middle age then progress throughout older age in feature integration, guided search, perceptual grouping and/or spreading suppression processes. Discussed are implications for enhancing everyday functioning throughout adulthood.
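The serial, exhaustive-search interpretation is usually read off search slopes; a minimal sketch with hypothetical mean RTs follows (serial self-terminating search predicts a target-absent slope roughly twice the target-present slope).

```python
import numpy as np

def search_slope(set_sizes, mean_rts_ms):
    # least-squares slope of mean RT against display size (ms per item)
    slope, _ = np.polyfit(set_sizes, mean_rts_ms, 1)
    return slope

set_sizes = np.array([4, 8, 16, 32])
rt_present = np.array([620, 700, 860, 1180])   # hypothetical means (ms)
rt_absent = np.array([640, 800, 1120, 1760])

sp = search_slope(set_sizes, rt_present)
sa = search_slope(set_sizes, rt_absent)
print(sp, sa, sa / sp)   # ratio near 2 under serial self-terminating search
```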
Collapse
|
45
|
Wolfe JM. The binding problem lives on: comment on Di Lollo. Trends Cogn Sci 2012; 16:307-8; author reply 308-9. [PMID: 22579974 PMCID: PMC5570537 DOI: 10.1016/j.tics.2012.04.013] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2012] [Accepted: 04/27/2012] [Indexed: 11/25/2022]
|
46
|
Koivisto M, Silvanto J. Visual feature binding: The critical time windows of V1/V2 and parietal activity. Neuroimage 2012; 59:1608-14. [PMID: 21925610 DOI: 10.1016/j.neuroimage.2011.08.089] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2011] [Accepted: 08/28/2011] [Indexed: 10/17/2022] Open
|
47
|
Bays PM, Gorgoraptis N, Wee N, Marshall L, Husain M. Temporal dynamics of encoding, storage, and reallocation of visual working memory. J Vis 2011; 11(10):6. [PMID: 21911739 DOI: 10.1167/11.10.6] [Citation(s) in RCA: 115] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here, we examine the temporal evolution of memory resolution, based on observers' ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cuing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event but was maintained if it indicated an object of particular relevance to the task. These cuing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information.
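A toy version of the two-constraint account follows; the functional form and parameter values are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def recall_precision(t, n_items, encode_rate, storage_limit):
    # Per-item precision after t seconds of exposure (toy form): information
    # accumulates in parallel at a shared maximum rate (encoding limit) and
    # saturates at a shared maximum fidelity (storage limit).
    encoded = encode_rate * t / n_items
    return np.minimum(encoded, storage_limit / n_items)

t = np.linspace(0.02, 0.5, 5)   # exposure durations (s)
print(recall_precision(t, n_items=4, encode_rate=60.0, storage_limit=12.0))
```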
Collapse
Affiliation(s)
- Paul M Bays
- Sobell Department, UCL Institute of Neurology, London, UK.
| | | | | | | | | |
Collapse
|
48
|
Schmidt T, Haberkamp A, Veltkamp GM, Weber A, Seydell-Greenwald A, Schmidt F. Visual processing in rapid-chase systems: image processing, attention, and awareness. Front Psychol 2011; 2:169. [PMID: 21811484 PMCID: PMC3139957 DOI: 10.3389/fpsyg.2011.00169] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2011] [Accepted: 07/06/2011] [Indexed: 11/13/2022] Open
Abstract
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it was darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that "fast" visuomotor measures predominantly driven by feedforward processing should supplement "slow" psychophysical measures predominantly based on visual awareness.
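One rapid-chase criterion — that the earliest response kinematics depend only on the prime, never the target — can be checked with a simple split; all data, the time window, and the velocity measure below are simulated assumptions.

```python
import numpy as np

# Simulated early-movement data: under feedforward (rapid-chase) processing,
# the earliest velocity is set by the prime alone.
rng = np.random.default_rng(0)
n = 400
prime = rng.integers(0, 2, n)                 # 0/1 = prime points left/right
target = rng.integers(0, 2, n)
early_velocity = np.where(prime == 0, -1.0, 1.0) + rng.normal(0, 0.3, n)

for t in (0, 1):
    sel = target == t
    print(t, early_velocity[sel & (prime == 0)].mean(),
          early_velocity[sel & (prime == 1)].mean())
# Means split by prime but not by target: the signature of strictly
# sequential prime-then-target (feedforward) processing.
```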
Collapse
Affiliation(s)
- Thomas Schmidt
- Faculty of Social Sciences, Psychology I, University of Kaiserslautern Kaiserslautern, Germany
| | | | | | | | | | | |
Collapse
|
49
|
Palmer EM, Horowitz TS, Torralba A, Wolfe JM. What are the shapes of response time distributions in visual search? J Exp Psychol Hum Percept Perform 2011; 37:58-71. [PMID: 21090905 DOI: 10.1037/a0020747] [Citation(s) in RCA: 66] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search.
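As a pointer for readers who want to reproduce this kind of fit, a minimal sketch using SciPy's exponentially modified normal (the ex-Gaussian); the RTs are simulated, not the paper's data.

```python
import numpy as np
from scipy.stats import exponnorm

# Simulate RTs with a Gaussian component plus an exponential tail (seconds),
# then fit SciPy's exponentially modified normal, i.e., the ex-Gaussian.
rng = np.random.default_rng(0)
rts = rng.normal(0.45, 0.05, 500) + rng.exponential(0.15, 500)

K, loc, scale = exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale   # conventional ex-Gaussian parameters
print(f"mu={mu:.3f}  sigma={sigma:.3f}  tau={tau:.3f}")
```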
Collapse
Affiliation(s)
- Evan M Palmer
- Visual Attention Laboratory, Brigham & Women’s Hospital and Harvard Medical School, 64 Sidney Street, Suite 170, Cambridge, MA 02139, USA
| | | | | | | |
Collapse
|
50
|
Wu CT, Libertus ME, Meyerhoff KL, Woldorff MG. The temporal dynamics of object processing in visual cortex during the transition from distributed to focused spatial attention. J Cogn Neurosci 2011; 23:4094-105. [PMID: 21563884 DOI: 10.1162/jocn_a_00045] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Several major cognitive neuroscience models have posited that focal spatial attention is required to integrate different features of an object to form a coherent perception of it within a complex visual scene. Although many behavioral studies have supported this view, some have suggested that complex perceptual discrimination can be performed even with substantially reduced focal spatial attention, calling into question the complexity of object representation that can be achieved without focused spatial attention. In the present study, we took a cognitive neuroscience approach to this problem by recording cognition-related brain activity both to help resolve the questions about the role of focal spatial attention in object categorization processes and to investigate the underlying neural mechanisms, focusing particularly on the temporal cascade of these attentional and perceptual processes in visual cortex. More specifically, we recorded electrical brain activity in humans engaged in a specially designed cued visual search paradigm to probe object-related visual processing before and during the transition from distributed to focal spatial attention. The onset time of the color popout cueing information, which indicated where within an object array the subject was to shift attention, was parametrically varied relative to the presentation of the array (i.e., it either occurred simultaneously or was delayed by 50 or 100 msec). The electrophysiological results demonstrate that some level of object-specific representation can be formed in parallel for multiple items across the visual field under spatially distributed attention, before focal spatial attention is allocated to any of them. The object discrimination process appears to be subsequently amplified as soon as focal spatial attention is directed to a specific location and object. This set of novel neurophysiological findings thus provides important new insights into fundamental issues that have been long debated in cognitive neuroscience concerning both object-related processing and the role of attention.
Collapse
Affiliation(s)
- Chien-Te Wu
- Center for Cognitive Neuroscience, Duke University, Box 90999, Durham, NC 27708, USA
| | | | | | | |
Collapse
|