1. Domijan D, Ivančić I. Accentuation, Boolean maps and perception of (dis)similarity in a neural model of visual segmentation. Vision Res 2024; 225:108506. PMID: 39486210. DOI: 10.1016/j.visres.2024.108506.
Abstract
We developed an interactive cortical circuit for visual segmentation that integrates bottom-up and top-down processing to segregate or group visual elements. A bottom-up pathway incorporates stimulus-driven saliency computation, top-down feature-based weighting by relevance, and winner-take-all selection. A top-down pathway encompasses multiscale feedback projections, an object-based attention network, and a visual segmentation network. Computer simulations show that a salient element in the stimulus guides spatial attention and further influences the decomposition of the nearby object into its parts, as postulated by the principle of accentuation. By contrast, when no single salient element is present, top-down feature-based attention highlights all locations occupied by the attended feature and the model forms a Boolean map, i.e., a spatial representation that makes the feature-based grouping explicit. The same distinction between bottom-up and top-down influences in perceptual organization can also be applied to texture perception. The model suggests that the principle of accentuation and feature-based similarity grouping are two manifestations of the same cortical circuit, designed to detect similarities and dissimilarities among the visual elements of a stimulus.
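The contrast the abstract draws between bottom-up winner-take-all selection and top-down Boolean-map formation can be illustrated compactly. Below is a minimal Python sketch, assuming a scalar feature map as the stimulus; the function names and the tolerance parameter are illustrative, not taken from the model.

```python
import numpy as np

def winner_take_all(saliency):
    # Bottom-up route: select the single most salient location.
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def boolean_map(feature_map, attended_value, tol=0.1):
    # Top-down route: mark every location carrying the attended feature,
    # making the feature-based grouping spatially explicit.
    return np.abs(feature_map - attended_value) < tol

features = np.zeros((5, 5))
features[2, 3] = 1.0                      # one accentuated (odd) element
print(winner_take_all(features))          # -> (2, 3): the salient element wins
print(boolean_map(features, 1.0))         # -> True only where the attended feature is
```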
2. Mollard S, Wacongne C, Bohte SM, Roelfsema PR. Recurrent neural networks that learn multi-step visual routines with reinforcement learning. PLoS Comput Biol 2024; 20:e1012030. PMID: 38683837. PMCID: PMC11081502. DOI: 10.1371/journal.pcbi.1012030.
Abstract
Many cognitive problems can be decomposed into a series of subproblems that are solved sequentially by the brain. When a subproblem is solved, relevant intermediate results need to be stored by neurons and propagated to the next subproblem until the overarching goal has been completed. Here, we consider visual tasks, which can be decomposed into sequences of elemental visual operations. Experimental evidence suggests that intermediate results of these elemental operations are stored in working memory as an enhancement of neural activity in the visual cortex. The focus of enhanced activity is then available for subsequent operations to act upon. The main question at stake is how the elemental operations and their sequencing can emerge in neural networks that are trained with only rewards, in a reinforcement learning setting. We propose a new recurrent neural network architecture that can learn composite visual tasks requiring the application of successive elemental operations. Specifically, we selected three tasks for which electrophysiological recordings from monkey visual cortex are available. To train the networks, we used RELEARNN, a biologically plausible four-factor Hebbian learning rule that is local in both time and space. We report that networks learn elemental operations, such as contour grouping and visual search, and execute sequences of operations based solely on the characteristics of the visual stimuli and the reward structure of the task. After training was completed, the activity of the network units elicited by behaviorally relevant image items was stronger than that elicited by irrelevant ones, just as has been observed in the visual cortex of monkeys solving the same tasks. Relevant information that needed to be exchanged between subroutines was maintained as a focus of enhanced activity and passed on to subsequent subroutines. Our results demonstrate how a biologically plausible learning rule can train a recurrent neural network on multistep visual tasks.
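As a rough illustration of what a four-factor Hebbian update looks like, here is a hedged Python sketch: the factors are presynaptic activity, postsynaptic activity, a locally available feedback (credit) signal, and a globally broadcast reward-prediction error. This is a generic sketch of the rule family, not the actual RELEARNN implementation; all names and values are illustrative.

```python
import numpy as np

def four_factor_update(w, pre, post, feedback, rpe, lr=0.01):
    # dw ~ learning rate x reward-prediction error (global factor)
    #      x feedback credit (local factor, one per postsynaptic unit)
    #      x postsynaptic activity x presynaptic activity
    return w + lr * rpe * (feedback * post)[:, None] * pre[None, :]

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 8))   # postsynaptic x presynaptic weights
pre, post = rng.random(8), rng.random(4)
feedback = rng.random(4)                 # credit reaching each postsynaptic unit
rpe = 1.0 - 0.4                          # obtained minus expected reward
w = four_factor_update(w, pre, post, feedback, rpe)
```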
Affiliation(s)
- Sami Mollard
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Catherine Wacongne
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- AnotherBrain, Paris, France
- Sander M. Bohte
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Pieter R. Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris, France
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University, Amsterdam, The Netherlands
- Department of Neurosurgery, Academic Medical Center, Amsterdam, The Netherlands
3. Ullman S, Assif L, Strugatski A, Vatashsky BZ, Levi H, Netanyahu A, Yaari A. Human-like scene interpretation by a guided counterstream processing. Proc Natl Acad Sci U S A 2023; 120:e2211179120. PMID: 37769256. PMCID: PMC10556630. DOI: 10.1073/pnas.2211179120.
Abstract
In modeling vision, there has been remarkable progress in recognizing a range of scene components, but the problem of analyzing full scenes, an ultimate goal of visual perception, is still largely open. To deal with complete scenes, recent work has focused on training models to extract the full graph-like structure of a scene. In contrast with scene graphs, human scene perception focuses on selected structures in the scene, starting with a limited interpretation and evolving sequentially in a goal-directed manner [G. L. Malcolm, I. I. A. Groen, C. I. Baker, Trends Cogn. Sci. 20, 843-856 (2016)]. Guidance is crucial throughout scene interpretation, since extracting a full scene representation is often infeasible. Here, we present a model that performs human-like guided scene interpretation, using iterative bottom-up, top-down processing in a "counterstream" structure motivated by cortical circuitry. The process proceeds by the sequential application of top-down instructions that guide the interpretation. The results show how scene structures of interest to the viewer are extracted by an automatically selected sequence of top-down instructions. The model shows two further benefits. One is an inherent capability to deal well with the problem of combinatorial generalization (generalizing broadly to unseen scene configurations), which is limited in current network models [B. Lake, M. Baroni, 35th International Conference on Machine Learning, ICML 2018 (2018)]. The second is the ability to combine visual with nonvisual information at each cycle of the interpretation process, a key aspect for modeling human perception as well as for advancing AI vision systems.
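To make the guided, sequential flavor of the interpretation process concrete, here is a toy Python sketch under strong simplifying assumptions: the "scene" is a list of labeled objects, and each top-down instruction is just a category query that a bottom-up pass answers. The data structures and function names are illustrative placeholders, not the paper's model.

```python
# Toy "scene": labeled objects with a coarse position.
SCENE = [{"cat": "table", "x": 3}, {"cat": "cup", "x": 3}, {"cat": "chair", "x": 7}]

def bottom_up(scene, query_cat):
    # Bottom-up stream: return candidates matching the current query.
    return [obj for obj in scene if obj["cat"] == query_cat]

def interpret(scene, instructions):
    # Top-down stream: apply one instruction per cycle, growing a partial
    # interpretation instead of extracting a full scene graph.
    interpretation = []
    for query_cat in instructions:
        interpretation.extend(bottom_up(scene, query_cat))
    return interpretation

print(interpret(SCENE, instructions=["table", "cup"]))
```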
Affiliation(s)
- Shimon Ullman
- Department of Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel
- Liav Assif
- Department of Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel
- Alona Strugatski
- Department of Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel
- Ben-Zion Vatashsky
- Department of Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel
- Hila Levi
- Department of Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel
- Aviv Netanyahu
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
- Adam Yaari
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
4. Schmid D, Jarvers C, Neumann H. Canonical circuit computations for computer vision. Biol Cybern 2023; 117:299-329. PMID: 37306782. PMCID: PMC10600314. DOI: 10.1007/s00422-023-00966-9.
Abstract
Advanced computer vision mechanisms have been inspired by neuroscientific findings. However, with the focus on improving benchmark achievements, technical solutions have been shaped by application and engineering constraints. This includes the training of neural networks, which has led to feature detectors optimally suited to the application domain. The limitations of such approaches motivate the need to identify computational principles, or motifs, in biological vision that can enable further foundational advances in machine vision. We propose to utilize structural and functional principles of neural systems that have been largely overlooked, as they potentially provide new inspiration for computer vision mechanisms and models. Recurrent feedforward, lateral, and feedback interactions characterize general principles underlying processing in mammals. We derive a formal specification of core computational motifs that utilize these principles. These are combined to define model mechanisms for visual shape and motion processing. We demonstrate how such a framework can be adopted to run on neuromorphic, brain-inspired hardware platforms and can be extended to adapt automatically to environment statistics. We argue that the identified principles and their formalization inspire sophisticated computational mechanisms with improved explanatory scope. These and other elaborated, biologically inspired models can be employed to design computer vision solutions for different tasks, and they can be used to advance neural network architectures for learning.
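One widely cited motif of this kind combines feedforward driving input, modulatory top-down feedback, and lateral divisive normalization. The Python sketch below shows one assumed form of such a stage; the multiplicative-enhancement-plus-normalization structure follows the general description in the abstract, while the parameter names and values are illustrative.

```python
import numpy as np

def canonical_stage(ff, fb, lam=2.0, eps=0.01):
    # Feedback is modulatory: it can only amplify existing feedforward
    # activity (no feedforward drive -> no output).
    enhanced = ff * (1.0 + lam * fb)
    # A lateral pool normalizes divisively, implementing competition.
    return enhanced / (eps + enhanced.sum())

ff = np.array([0.2, 0.8, 0.4])   # bottom-up evidence for three features
fb = np.array([0.0, 0.0, 1.0])   # top-down expectation favors the third
print(canonical_stage(ff, fb))   # the expected feature gains relative weight
```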
Affiliation(s)
- Daniel Schmid
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Christian Jarvers
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Heiko Neumann
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
5. Domijan D, Marić M. A multi-scale neurodynamic implementation of incremental grouping. Vision Res 2022; 197:108057. PMID: 35487147. DOI: 10.1016/j.visres.2022.108057.
Abstract
Incremental grouping is a process entailing serial binding of distal image elements into a unified object representation. At the neural level, incremental grouping involves propagation of the enhanced firing rate among feature-tuned neurons in the early visual cortex. Here, we developed a multi-resolution neural model of incremental grouping. In the model, propagation of the enhanced firing rate is achieved by computing the activity difference between two sets of units: attentional or A-units, whose firing rate is modulated by their horizontal collaterals, and non-attentional or N-units that receive only feedforward input. The activity difference is computed on dendrites that act as independent computational subunits. The proposed model employs multiple spatial scales to account for a variable speed of incremental grouping. In addition, the model incorporates the L-junction detection network that enables incremental grouping over L-junctions. Computer simulations show that the timing of attentional modulations in the model is comparable with neurophysiological measurements in monkey primary visual cortex.
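The core propagation step can be pictured as label spreading gated by the stimulus. Below is a deliberately reduced Python sketch: the A-unit/N-unit activity difference is collapsed into a binary label that grows one pixel per iteration and is confined to contour elements; the multi-scale and dendritic aspects of the model are omitted.

```python
import numpy as np

contour = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [0, 0, 1, 1]])        # 1 = contour element

def spread_label(contour, seed, steps=10):
    label = np.zeros_like(contour)
    label[seed] = 1
    for _ in range(steps):
        grown = label.copy()
        # Propagate enhancement to the four cardinal neighbors ...
        grown[1:, :] |= label[:-1, :]
        grown[:-1, :] |= label[1:, :]
        grown[:, 1:] |= label[:, :-1]
        grown[:, :-1] |= label[:, 1:]
        label = grown & contour           # ... but only along the contour
    return label

print(spread_label(contour, seed=(0, 1)))  # the whole contour ends up labeled
```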
6. Ramezanpour H, Fallah M. The role of temporal cortex in the control of attention. Curr Res Neurobiol 2022; 3:100038. PMID: 36685758. PMCID: PMC9846471. DOI: 10.1016/j.crneur.2022.100038.
Abstract
Attention is an indispensable component of active vision. Contrary to the widely accepted notion that temporal cortex processing primarily focuses on passive object recognition, a series of very recent studies emphasizes the role of temporal cortex structures, specifically the superior temporal sulcus (STS) and inferotemporal (IT) cortex, in guiding attention and implementing cognitive programs relevant for behavioral tasks. The goal of this theoretical paper is to advance the hypothesis that the temporal cortex attention network (TAN) entails the necessary components to participate actively in attentional control in a flexible, task-dependent manner. First, we briefly discuss the general architecture of the temporal cortex, with a focus on the STS and IT cortex of monkeys and their modulation by attention. We then review evidence from behavioral and neurophysiological studies that supports their guidance of attention in the presence of cognitive control signals. Next, we propose a mechanistic framework for executive control of attention in the temporal cortex. Finally, we summarize the role of the temporal cortex in implementing cognitive programs and discuss how they contribute to the dynamic nature of visual attention to ensure flexible behavior.
Affiliation(s)
- Hamidreza Ramezanpour
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada
- VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada
- Mazyar Fallah
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada
- VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada
- Department of Psychology, Faculty of Health, York University, Toronto, Ontario, Canada
- Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, Ontario, Canada
7. Tsotsos JK, Abid O, Kotseruba I, Solbach MD. On the control of attentional processes in vision. Cortex 2021; 137:305-329. PMID: 33677138. DOI: 10.1016/j.cortex.2021.01.001.
Abstract
The study of attentional processing in vision has a long and deep history. Recently, several papers have presented insightful perspectives into how the coordination of multiple attentional functions in the brain might occur. These begin with experimental observations, and the authors propose structures, processes, and computations that might explain those observations. Here, we consider a perspective that past works have not, as a complement to the experimentally grounded approaches. We approach the same problem from the other end of the computational spectrum, from the nature of the problem itself, as Marr's Computational Level would prescribe. What problem must the brain solve when orchestrating attentional processes in order to successfully complete one of the myriad possible visuospatial tasks at which we as humans excel? The hope, of course, is for the two approaches to eventually meet and thus form a complete theory, but this is likely not soon. We make first steps towards this by addressing the necessity of attentional control, examining the breadth and computational difficulty of the visuospatial and attentional tasks seen in human behavior, and suggesting a sketch of how attentional control might arise in the brain. The key conclusions of this paper are that an executive controller is necessary for human attentional function in vision, and that there is a 'first principles' computational approach to its understanding that is complementary to previous approaches that model or learn from experimental observations directly.
8. Neural dynamics of spreading attentional labels in mental contour tracing. Neural Netw 2019; 119:113-138. PMID: 31404805. DOI: 10.1016/j.neunet.2019.07.016.
Abstract
Behavioral and neural data suggest that visual attention spreads along contour segments to bind them into a unified object representation. Such attentional labeling segregates the target contour from distractors in a process known as mental contour tracing. A recurrent competitive map is developed to simulate the dynamics of mental contour tracing. In the model, local excitation opposes global inhibition and enables enhanced activity to propagate along the path offered by the contour. The extent of local excitatory interactions is modulated by the output of a multi-scale contour detection network, which constrains the speed of activity spreading in a scale-dependent manner. Furthermore, an L-junction detection network enables tracing to switch direction at L-junctions, but not at X- or T-junctions, thereby preventing spillover to a distractor contour. Computer simulations reveal that the model exhibits a monotonic increase in tracing time as a function of the distance to be traced. The speed of tracing also increases with decreasing proximity to the distractor contour and with reduced curvature of the contours. The proposed model demonstrates how an elaborated version of the winner-take-all network can implement a complex cognitive operation such as contour tracing.
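The interplay of local excitation and global inhibition can be sketched in a few lines. The following Python toy runs tracing dynamics on a one-dimensional ring: enhancement spreads to contour neighbors while summed activity feeds back as inhibition. The 1-D reduction and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def trace_step(x, contour, seed, exc=1.2, inh=0.05):
    # Local excitation from the strongest neighbor propagates the label;
    # global inhibition (inh * total activity) opposes runaway spreading;
    # the contour mask gates where enhancement may go.
    local = exc * np.maximum(np.roll(x, 1), np.roll(x, -1))
    drive = contour * np.maximum(seed + local - inh * x.sum(), 0.0)
    return np.minimum(drive, 1.0)

contour = np.zeros(20); contour[3:15] = 1.0   # the target contour segment
seed = np.zeros(20); seed[4] = 1.0            # attention enters the trace here
x = np.zeros(20)
for _ in range(12):
    x = trace_step(x, contour, seed)
print(np.round(x, 2))   # enhancement spreads over the contour segment only
```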
9. Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. DOI: 10.1016/j.neuroimage.2019.04.074.
10. Tsotsos JK. Attention: The Messy Reality. Yale J Biol Med 2019; 92:127-137. PMID: 30923480. PMCID: PMC6430176.
Abstract
The human capability to attend has been considered both easy and impossible to understand by philosophers and scientists through the centuries. Much has been written by brain, cognitive, and philosophical scientists trying to explain attention as it applies to sensory and reasoning processes, let alone consciousness. Only in the last few decades have computational scientists entered the picture, adding a new language with which to express attentional behavior and function. This new perspective has produced some progress toward the centuries-old goal, but there is still far to go. Although a central belief in many scientific disciplines has been to seek a unifying explanatory principle for natural observations, it may be that we need to put this aside as it applies to attention and accept that attention is really an integrated set of mechanisms, too messy to express cleanly and parsimoniously with a single principle. These mechanisms are claimed to be critical for enabling functional generalization of brain processes, and thus an integrative perspective is important. Here we present first steps towards a theoretical and algorithmic view of how the many different attentional mechanisms may be deployed, coordinated, synchronized, and effectively utilized. A hierarchy of dynamically defined closed-loop control processes is proposed, each with its own optimization objective and extensible to multiple layers. Although mostly speculative, simulation and experimental work support important components.
Affiliation(s)
- John K. Tsotsos
- Department of Electrical Engineering and Computer Science, York University, 4700 Keele St., Toronto, ON, Canada M3J 1P3
11. Lázaro-Gredilla M, Lin D, Guntupalli JS, George D. Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs. Sci Robot 2019; 4(26):eaav3150. DOI: 10.1126/scirobotics.aav3150.
Abstract
Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams. If robots could represent and infer high-level concepts, it would notably improve their ability to understand our intent and to transfer tasks between different environments. To that end, we introduce a computational framework that replicates aspects of human concept learning. Concepts are represented as programs on a computer architecture consisting of a visual perception system, working memory, and action controller. The instruction set of this cognitive computer has commands for parsing a visual scene, directing gaze and attention, imagining new objects, manipulating the contents of a visual working memory, and controlling arm movement. Inferring a concept corresponds to inducing a program that can transform the input to the output. Some concepts require the use of imagination and recursion. Previously learned concepts simplify the learning of subsequent, more elaborate concepts and create a hierarchy of abstractions. We demonstrate how a robot can use these abstractions to interpret novel concepts presented to it as schematic images and then apply those concepts in very different situations. By importing cognitive science ideas on mental imagery, perceptual symbols, embodied cognition, and deictic mechanisms into the realm of machine learning, our work brings us closer to the goal of building robots that have interpretable representations and common sense.
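To convey the flavor of "concepts as programs", here is a toy Python interpreter for a two-instruction cognitive program; the instruction set, working-memory layout, and the induced program are illustrative inventions, far simpler than the visual cognitive computer described in the paper.

```python
def run_program(program, scene):
    # Working memory: the parsed scene, an attention focus, and an output buffer.
    wm = {"scene": list(scene), "focus": [], "out": []}
    ops = {
        "attend_color": lambda wm, color: wm.update(
            focus=[o for o in wm["scene"] if o["color"] == color]),
        "move_to_output": lambda wm, _: wm["out"].extend(wm["focus"]),
    }
    for op, arg in program:
        ops[op](wm, arg)
    return wm["out"]

scene = [{"color": "red", "id": 1}, {"color": "blue", "id": 2}]
# The concept "collect the red objects", expressed as a two-step program:
print(run_program([("attend_color", "red"), ("move_to_output", None)], scene))
```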
12.
Abstract
It is almost universal to regard attention as the facility that permits an agent, human or machine, to give processing priority to relevant stimuli while ignoring the irrelevant. How this might manifest itself throughout all the forms of perceptual and cognitive processing possessed by humans, however, is not as clear. Here, we examine this reality from a broad perspective in order to highlight the myriad ways that attentional processes impact both perception and cognition. The paper concludes by presenting two real-world problems that exhibit sufficient complexity to illustrate the ways in which attention and cognition connect. These then point to new avenues of research that might illuminate the overall cognitive architecture of spatial cognition.
Affiliation(s)
- John K Tsotsos
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
- Iuliia Kotseruba
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
- Amir Rasouli
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
- Markus D Solbach
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
13.
14. Tsotsos JK. Complexity Level Analysis Revisited: What Can 30 Years of Hindsight Tell Us about How the Brain Might Represent Visual Information? Front Psychol 2017; 8:1216. PMID: 28848458. PMCID: PMC5552749. DOI: 10.3389/fpsyg.2017.01216.
Abstract
Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches the fixed resources of biological seeing systems; and to abstract from this matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide.
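The scaling question can be made tangible with one line of arithmetic: if a visual task had to entertain every subset of image locations, the worst case for unbounded visual search, the count grows as 2^P and outruns any fixed neural or silicon resource almost immediately. A small Python illustration (the location counts are arbitrary):

```python
from math import log10

for p in [16, 64, 256, 1024]:
    # 2**p candidate subsets of p locations, printed as a power of ten.
    print(f"{p:5d} locations -> ~10^{p * log10(2):.0f} candidate subsets")
```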
Affiliation(s)
- John K Tsotsos
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
15. van der Togt C, Stănişor L, Pooresmaeili A, Albantakis L, Deco G, Roelfsema PR. Learning a New Selection Rule in Visual and Frontal Cortex. Cereb Cortex 2016; 26:3611-3626. PMID: 27269960. PMCID: PMC4961027. DOI: 10.1093/cercor/bhw155.
Abstract
How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw two new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target that was connected to the relevant icon by a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals, and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that learning a new rule causes a transition from fast, random decisions to a more considered strategy that takes additional time, and they reveal the contribution of visual and frontal cortex to the learning process.
Affiliation(s)
- Chris van der Togt
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, The Netherlands
- Liviu Stănişor
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, The Netherlands
- Arezoo Pooresmaeili
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, The Netherlands
- Larissa Albantakis
- Department of Psychiatry, University of Wisconsin School of Medicine, 6001 Research Park Boulevard, Madison, WI 53719, USA
- Gustavo Deco
- Dept. de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, C/ Tànger 122-140, 08018 Barcelona, Spain
- Pieter R Roelfsema
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, The Netherlands
- Department of Integrative Neurophysiology, Centre for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
- Psychiatry Department, Academic Medical Center, 1105 AZ Amsterdam, The Netherlands
16. Bruce ND, Wloka C, Frosst N, Rahman S, Tsotsos JK. On computational modeling of visual saliency: Examining what’s right, and what’s left. Vision Res 2015; 116:95-112. DOI: 10.1016/j.visres.2015.01.010.
17. Kyllingsbæk S, Vangkilde S, Bundesen C. Editorial: Theories of visual attention-linking cognition, neuropsychology, and neurophysiology. Front Psychol 2015; 6:767. PMID: 26124730. PMCID: PMC4464144. DOI: 10.3389/fpsyg.2015.00767.
Affiliation(s)
- Søren Kyllingsbæk
- Department of Psychology, Center for Visual Cognition, University of Copenhagen, Copenhagen, Denmark
- Signe Vangkilde
- Department of Psychology, Center for Visual Cognition, University of Copenhagen, Copenhagen, Denmark
- Claus Bundesen
- Department of Psychology, Center for Visual Cognition, University of Copenhagen, Copenhagen, Denmark