1. Domijan D, Ivančić I. Accentuation, Boolean maps and perception of (dis)similarity in a neural model of visual segmentation. Vision Res 2024; 225:108506. [PMID: 39486210] [DOI: 10.1016/j.visres.2024.108506]
Abstract
We developed an interactive cortical circuit for visual segmentation that integrates bottom-up and top-down processing to segregate or group visual elements. A bottom-up pathway incorporates stimulus-driven saliency computation, top-down feature-based weighting by relevance and winner-take-all selection. A top-down pathway encompasses multiscale feedback projections, an object-based attention network and a visual segmentation network. Computer simulations have shown that a salient element in the stimulus guides spatial attention and further influences the decomposition of the nearby object into its parts, as postulated by the principle of accentuation. By contrast, when no single salient element is present, top-down feature-based attention highlights all locations occupied by the attended feature and the model forms a Boolean map, i.e., a spatial representation that makes the feature-based grouping explicit. The same distinction between bottom-up and top-down influences in perceptual organization can also be applied to texture perception. The model suggests that the principle of accentuation and feature-based similarity grouping are two manifestations of the same cortical circuit designed to detect similarities and dissimilarities of visual elements in a stimulus.
2. Grabenhorst F, Ponce-Alvarez A, Battaglia-Mayer A, Deco G, Schultz W. A view-based decision mechanism for rewards in the primate amygdala. Neuron 2023; 111:3871-3884.e14. [PMID: 37725980] [PMCID: PMC10914681] [DOI: 10.1016/j.neuron.2023.08.024]
Abstract
Primates make decisions visually by shifting their view from one object to the next, comparing values between objects, and choosing the best reward, even before acting. Here, we show that when monkeys make value-guided choices, amygdala neurons encode their decisions in an abstract, purely internal representation defined by the monkey's current view but not by specific object or reward properties. Across amygdala subdivisions, recorded activity patterns evolved gradually from an object-specific value code to a transient, object-independent code in which currently viewed and last-viewed objects competed to reflect the emerging view-based choice. Using neural-network modeling, we identified a sequence of computations by which amygdala neurons implemented view-based decision making and eventually recovered the chosen object's identity when the monkeys acted on their choice. These findings reveal a neural mechanism in the amygdala that derives object choices from abstract, view-based computations, suggesting an efficient solution for decision problems with many objects.
Affiliation(s)
- Fabian Grabenhorst
- Department of Experimental Psychology, University of Oxford, Mansfield Road, Oxford OX1 3TA, UK; Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3DY, UK.
- Adrián Ponce-Alvarez
- Center for Brain and Cognition, Department of Technology and Information, Universitat Pompeu Fabra, Carrer Ramón Trias Fargas, 25-27, 08005 Barcelona, Spain; Departament de Matemàtiques, EPSEB, Universitat Politècnica de Catalunya, Barcelona, 08028 Barcelona, Spain
- Gustavo Deco
- Center for Brain and Cognition, Department of Technology and Information, Universitat Pompeu Fabra, Carrer Ramón Trias Fargas, 25-27, 08005 Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avançats, Universitat Barcelona, Passeig Lluís Companys 23, 08010 Barcelona, Spain
- Wolfram Schultz
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3DY, UK
3. Fu Z, Sajad A, Errington SP, Schall JD, Rutishauser U. Neurophysiological mechanisms of error monitoring in human and non-human primates. Nat Rev Neurosci 2023; 24:153-172. [PMID: 36707544] [PMCID: PMC10231843] [DOI: 10.1038/s41583-022-00670-w]
Abstract
Performance monitoring is an important executive function that allows us to gain insight into our own behaviour. This remarkable ability relies on the frontal cortex, and its impairment is an aspect of many psychiatric diseases. In recent years, recordings from the macaque and human medial frontal cortex have offered a detailed understanding of the neurophysiological substrate that underlies performance monitoring. Here we review the discovery of single-neuron correlates of error monitoring, a key aspect of performance monitoring, in both species. These neurons are the generators of the error-related negativity, which is a non-invasive biomarker that indexes error detection. We evaluate a set of tasks that allows the synergistic elucidation of the mechanisms of cognitive control across the two species, consider differences in brain anatomy and testing conditions across species, and describe the clinical relevance of these findings for understanding psychopathology. Last, we integrate the body of experimental facts into a theoretical framework that offers a new perspective on how error signals are computed in both species and makes novel, testable predictions.
Affiliation(s)
- Zhongzheng Fu
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
- Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA.
- Amirsaman Sajad
- Center for Integrative & Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Steven P Errington
- Center for Integrative & Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Jeffrey D Schall
- Center for Integrative & Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA.
- Department of Psychology, Vanderbilt University, Nashville, TN, USA.
- Centre for Vision Research, York University, Toronto, Ontario, Canada.
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada.
- Department of Biology, Faculty of Science, York University, Toronto, Ontario, Canada.
- Ueli Rutishauser
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Center for Neural Science and Medicine, Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
4. Barkdoll K, Lu Y, Barranca VJ. New insights into binocular rivalry from the reconstruction of evolving percepts using model network dynamics. Front Comput Neurosci 2023; 17:1137015. [PMID: 37034441] [PMCID: PMC10079880] [DOI: 10.3389/fncom.2023.1137015]
Abstract
When the two eyes are presented with highly distinct stimuli, the resulting visual percept generally switches every few seconds between the two monocular images in an irregular fashion, giving rise to a phenomenon known as binocular rivalry. While a host of theoretical studies have explored potential mechanisms for binocular rivalry in the context of evoked model dynamics in response to simple stimuli, here we investigate binocular rivalry directly through complex stimulus reconstructions based on the activity of a two-layer neuronal network model with competing downstream pools driven by disparate monocular stimuli composed of image pixels. To estimate the dynamic percept, we derive a linear input-output mapping rooted in the non-linear network dynamics and iteratively apply compressive sensing techniques for signal recovery. Utilizing a dominance metric, we are able to identify when percept alternations occur and use data collected during each dominance period to generate a sequence of percept reconstructions. We show that despite the approximate nature of the input-output mapping and the significant reduction in neurons downstream relative to stimulus pixels, the dominant monocular image is well-encoded in the network dynamics and improvements are garnered when realistic spatial receptive field structure is incorporated into the feedforward connectivity. Our model demonstrates gamma-distributed dominance durations and well obeys Levelt's four laws for how dominance durations change with stimulus strength, agreeing with key recurring experimental observations often used to benchmark rivalry models. In light of evidence that individuals with autism exhibit relatively slow percept switching in binocular rivalry, we corroborate the ubiquitous hypothesis that autism manifests from reduced inhibition in the brain by systematically probing our model alternation rate across choices of inhibition strength. We exhibit sufficient conditions for producing binocular rivalry in the context of natural scene stimuli, opening a clearer window into the dynamic brain computations that vary with the generated percept and a potential path toward further understanding neurological disorders.
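The reconstruction step summarized above, inverting an approximately linear neuron-to-pixel mapping under a sparsity assumption, can be made concrete with a generic compressive-sensing sketch in Python. The measurement matrix A, the problem sizes, and the iterative soft-thresholding (ISTA) solver below are illustrative assumptions for a toy problem; they are not the authors' actual network model or recovery pipeline.

```python
import numpy as np

def ista(A, y, lam=0.01, step=None, n_iter=2000):
    """Recover a sparse signal x from y ~= A @ x by iterative soft-thresholding.

    Generic stand-in for the compressive-sensing recovery step mentioned in the
    abstract; the paper's actual input-output mapping and solver may differ.
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                        # gradient of 0.5 * ||Ax - y||^2
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# Toy demo: a 400-pixel "stimulus" encoded by only 100 downstream model neurons.
rng = np.random.default_rng(0)
n_pixels, n_neurons = 400, 100
x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, 15, replace=False)] = rng.normal(size=15)   # sparse image
A = rng.normal(size=(n_neurons, n_pixels)) / np.sqrt(n_neurons)         # assumed linear mapping
y = A @ x_true                                                          # neuronal "readout"
x_hat = ista(A, y)
print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```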
5.
Abstract
The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies - from perception to motor control - represents a promising approach for the creation of robots which can seamlessly integrate in society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.
Affiliation(s)
- Chiara Bartolozzi
- Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, via San Quirico 19D, 16163, Genova, Italy.
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
- Elisa Donati
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
6. Golosio B, De Luca C, Capone C, Pastorelli E, Stegel G, Tiddia G, De Bonis G, Paolucci PS. Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep. PLoS Comput Biol 2021; 17:e1009045. [PMID: 34181642] [PMCID: PMC8270441] [DOI: 10.1371/journal.pcbi.1009045]
Abstract
The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously-created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate the model's capability for fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and association of similar memories. We created a thalamo-cortical spiking model (ThaCo) with the purpose of demonstrating a link between two phenomena that we believe to be essential for the brain's capability for efficient incremental learning from few examples in noisy environments. Grounded in two experimental observations (the first about the effects of deep sleep on pre- and post-sleep firing rate distributions, the second about the combination of perceptual and contextual information in pyramidal neurons), our model joins these two ingredients. ThaCo alternates phases of incremental learning, classification and deep-sleep. Memories of handwritten digit examples are learned through thalamo-cortical and cortico-cortical plastic synapses. In the absence of noise, the combination of contextual information with perception enables fast incremental learning. Deep-sleep becomes crucial when noisy inputs are considered. We observed in ThaCo both homeostatic and associative processes: deep-sleep fights noise in perceptual and internal knowledge and supports the categorical association of examples belonging to the same digit class, through reinforcement of class-specific cortico-cortical synapses. The distributions of pre-sleep and post-sleep firing rates during classification change in a manner similar to that seen in experimental observations. These changes promote energetic efficiency during recall of memories, better representation of individual memories and categories, and higher classification performance.
Affiliation(s)
- Bruno Golosio
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Chiara De Luca
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Elena Pastorelli
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Giovanni Stegel
- Dipartimento di Chimica e Farmacia, Università di Sassari, Sassari, Italy
- Gianmarco Tiddia
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Giulia De Bonis
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
7.
Abstract
Animals frequently need to choose the best alternative from a set of possibilities, whether it is which direction to swim in or which food source to favor. How long should a network of neurons take to choose the best of N options? Theoretical results suggest that the optimal time grows as log(N), if the values of each option are imperfectly perceived. However, standard self-terminating neural network models of decision-making cannot achieve this optimal behavior. We show how using certain additional nonlinear response properties in neurons, which are ignored in standard models, results in a decision-making architecture that both achieves the optimal scaling of decision time and accounts for multiple experimentally observed features of neural decision-making. An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes ∼Nlog(N) time for N noisy candidate options) by a factor of N, the benchmark for parallel computation. Biologically plausible architectures for this task are winner-take-all (WTA) networks, where individual neurons inhibit each other so only those with the largest input remain active. We show that conventional WTA networks fail the parallelism benchmark and, worse, in the presence of noise, altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as N varies, the nWTA network achieves the parallelism benchmark. The network reproduces experimentally observed phenomena like Hick’s law without needing an additional readout stage or adaptive N-dependent thresholds. Our work bridges scales by linking cellular nonlinearities to circuit-level decision-making, establishes that distributed computation saturating the parallelism benchmark is possible in networks of noisy, finite-memory neurons, and shows that Hick’s law may be a symptom of near-optimal parallel decision-making with noisy input.
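A minimal rate-model sketch of the core nWTA idea, in which only sufficiently active units contribute to the shared inhibition, is given below. The rectified-linear units, Euler integration, and all parameter values are illustrative choices rather than the published equations.

```python
import numpy as np

def nwta(inputs, beta=2.0, theta=0.5, dt=0.01, tau=1.0, noise=0.02, t_max=50.0, seed=0):
    """Toy WTA with a second nonlinearity on inhibition (nWTA-style sketch).

    Each unit excites itself and receives global inhibition, but only units whose
    rate exceeds `theta` feed the inhibitory pool. Parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    r = np.zeros_like(inputs, dtype=float)
    for _ in range(int(t_max / dt)):
        inh = np.sum(np.maximum(r - theta, 0.0))          # weak units contribute no inhibition
        drive = inputs + 1.2 * r - beta * inh
        drive += noise * rng.normal(size=r.shape) / np.sqrt(dt)
        r += dt / tau * (-r + np.maximum(drive, 0.0))     # rectified (linear-threshold) units
    return r

b = np.array([1.0, 0.95, 0.9, 0.7, 0.5])   # noisy evidence for five options
rates = nwta(b)
print("winner:", int(np.argmax(rates)), "final rates:", np.round(rates, 2))
```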
8. Fleischer P, Hélie S. A unified model of rule-set learning and selection. Neural Netw 2020; 124:343-356. [PMID: 32044561] [DOI: 10.1016/j.neunet.2020.01.028]
Abstract
The ability to focus on relevant information and ignore irrelevant information is a fundamental part of intelligent behavior. It not only allows faster acquisition of new tasks by reducing the size of the problem space but also allows for generalizations to novel stimuli. Task-switching, task-sets, and rule-set learning are all intertwined with this ability. There are many models that attempt to individually describe these cognitive abilities. However, there are few models that try to capture the breadth of these topics in a unified model and fewer still that do it while adhering to the biological constraints imposed by the findings from the field of neuroscience. Presented here is a comprehensive model of rule-set learning and selection that can capture the learning curve results, error-type data, and transfer effects found in rule-learning studies while also replicating the reaction time data and various related effects of task-set and task-switching experiments. The model also factors in many disparate neurological findings, several of which are often disregarded by similar models.
9. Neural dynamics of spreading attentional labels in mental contour tracing. Neural Netw 2019; 119:113-138. [PMID: 31404805] [DOI: 10.1016/j.neunet.2019.07.016]
Abstract
Behavioral and neural data suggest that visual attention spreads along contour segments to bind them into a unified object representation. Such attentional labeling segregates the target contour from distractors in a process known as mental contour tracing. A recurrent competitive map is developed to simulate the dynamics of mental contour tracing. In the model, local excitation opposes global inhibition and enables enhanced activity to propagate on the path offered by the contour. The extent of local excitatory interactions is modulated by the output of the multi-scale contour detection network, which constrains the speed of activity spreading in a scale-dependent manner. Furthermore, an L-junction detection network enables tracing to switch direction at the L-junctions, but not at the X- or T-junctions, thereby preventing spillover to a distractor contour. Computer simulations reveal that the model exhibits a monotonic increase in tracing time as a function of the distance to be traced. Also, the speed of tracing increases with increasing distance from the distractor contour and with reduced curvature of the contours. The proposed model demonstrates how an elaborated version of the winner-takes-all network can implement a complex cognitive operation such as contour tracing.
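The spreading mechanism described above, local excitation confined to contour pixels while the background stays silent, can be illustrated with a small grid simulation. The toy image, the 4-neighbour rule, and the parameters are assumptions for demonstration; the paper's multi-scale and junction-detection networks are omitted.

```python
import numpy as np

def trace_contour(contour, start, n_steps=200, dt=0.2):
    """Toy sketch of an attentional label spreading along a contour.

    `contour` is a binary image; activity injected at `start` propagates through
    local excitation, but only contour pixels can become active. This illustrates
    the spreading idea only, not the full model in the paper.
    """
    a = np.zeros_like(contour, dtype=float)
    a[start] = 1.0
    for _ in range(n_steps):
        exc = (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
               np.roll(a, 1, 1) + np.roll(a, -1, 1))       # 4-neighbour excitation
        a += dt * (-a + np.clip(exc, 0.0, 1.0)) * contour  # spreading confined to the contour
        a = np.clip(a, 0.0, 1.0)
    return a

# Two horizontal contours; the label starts on the top one and should stay there.
img = np.zeros((7, 12))
img[1, 1:11] = 1.0   # target contour
img[5, 1:11] = 1.0   # distractor contour
labels = trace_contour(img, start=(1, 1))
print((labels > 0.5).astype(int))
```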
10. Miyawaki H, Watson BO, Diba K. Neuronal firing rates diverge during REM and homogenize during non-REM. Sci Rep 2019; 9:689. [PMID: 30679509] [PMCID: PMC6345798] [DOI: 10.1038/s41598-018-36710-8]
Abstract
Neurons fire at highly variable intrinsic rates and recent evidence suggests that low- and high-firing rate neurons display different plasticity and dynamics. Furthermore, recent publications imply possibly differing rate-dependent effects in hippocampus versus neocortex, but those analyses were carried out separately and with potentially important differences. To more effectively synthesize these questions, we analyzed the firing rate dynamics of populations of neurons in both hippocampal CA1 and frontal cortex under one framework that avoids the pitfalls of previous analyses and accounts for regression to the mean (RTM). We observed several consistent effects across these regions. While rapid eye movement (REM) sleep was marked by decreased hippocampal firing and increased neocortical firing, in both regions firing rate distributions widened during REM due to differential changes in high- versus low-firing rate cells in parallel with increased interneuron activity. In contrast, upon non-REM (NREM) sleep, firing rate distributions narrowed while interneuron firing decreased. Interestingly, hippocampal interneuron activity closely followed the patterns observed in neocortical principal cells rather than the hippocampal principal cells, suggestive of long-range interactions. Following these undulations in variance, the net effect of sleep was a decrease in firing rates. These decreases were greater in lower-firing hippocampal neurons but also higher-firing frontal cortical neurons, suggestive of greater plasticity in these cell groups. Our results across two different regions, and with statistical corrections, indicate that the hippocampus and neocortex show a mixture of differences and similarities as they cycle between sleep states with a unifying characteristic of homogenization of firing during NREM and diversification during REM.
Affiliation(s)
- Hiroyuki Miyawaki
- Department of Psychology, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI, 53211, USA
- Department of Physiology, Graduate School of Medicine, Osaka City University, Asahimachi 1-4-3, Abeno-ku, Osaka, 545-8585, Japan
- Brendon O Watson
- Department of Psychiatry, University of Michigan Medical School, 109 Zina Pitcher Pl, Ann Arbor, MI, 48109, USA
- Kamran Diba
- Department of Psychology, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI, 53211, USA.
- Department of Anesthesiology, University of Michigan Medical School, 1500 E Medical Center Drive, Ann Arbor, MI, 48109, USA.
11. Barranca VJ, Huang H, Kawakita G. Network structure and input integration in competing firing rate models for decision-making. J Comput Neurosci 2019; 46:145-168. [PMID: 30661144] [DOI: 10.1007/s10827-018-0708-6]
Abstract
Making a decision among numerous alternatives is a pervasive and central undertaking encountered by mammals in natural settings. While decision making for two-option tasks has been studied extensively both experimentally and theoretically, characterizing decision making in the face of a large set of alternatives remains challenging. We explore this issue by formulating a scalable mechanistic network model for decision making and analyzing the dynamics evoked given various potential network structures. In the case of a fully-connected network, we provide an analytical characterization of the model fixed points and their stability with respect to winner-take-all behavior for fair tasks. We compare several means of input integration, demonstrating a more gradual sigmoidal transfer function is likely evolutionarily advantageous relative to binary gain commonly utilized in engineered systems. We show via asymptotic analysis and numerical simulation that sigmoidal transfer functions with smaller steepness yield faster response times but depreciation in accuracy. However, in the presence of noise or degradation of connections, a sigmoidal transfer function garners significantly more robust and accurate decision-making dynamics. For fair tasks and sigmoidal gain, our model network also exhibits a stable parameter regime that produces high accuracy and persists across tasks with diverse numbers of alternatives and difficulties, satisfying physiological energetic constraints. In the case of more sparse and structured network topologies, including random, regular, and small-world connectivity, we show the high-accuracy parameter regime persists for biologically realistic connection densities. Our work shows how neural system architecture is potentially optimal in making economic, reliable, and advantageous decisions across tasks.
Affiliation(s)
- Han Huang
- Swarthmore College, 500 College Avenue, Swarthmore, PA, 19081, USA
- Genji Kawakita
- Swarthmore College, 500 College Avenue, Swarthmore, PA, 19081, USA
12. de Andres-Bragado L, Mazza C, Senn W, Sprecher SG. Statistical modelling of navigational decisions based on intensity versus directionality in Drosophila larval phototaxis. Sci Rep 2018; 8:11272. [PMID: 30050066] [PMCID: PMC6062584] [DOI: 10.1038/s41598-018-29533-0]
Abstract
Organisms use environmental cues for directed navigation. Understanding the basic logic behind navigational decisions critically depends on the complexity of the nervous system. Due to the comparably simple organization of the nervous system of the fruit fly larva, it stands as a powerful model to study decision-making processes that underlie directed navigation. We have quantitatively measured phototaxis in response to well-defined sensory inputs. Subsequently, we have formulated a statistical stochastic model based on biased Markov chains to characterize the behavioural basis of negative phototaxis. Our experiments show that larvae make navigational decisions depending on two independent physical variables: light intensity and its spatial gradient. Furthermore, our statistical model quantifies how larvae balance two potentially-contradictory factors: minimizing exposure to light intensity and at the same time maximizing their distance to the light source. We find that the response to the light field is manifestly non-linear, and saturates above an intensity threshold. The model has been validated against our experimental biological data yielding insight into the strategy that larvae use to achieve their goal with respect to the navigational cue of light, an important piece of information for future work to study the role of the different neuronal components in larval phototaxis.
Affiliation(s)
- Christian Mazza
- Department of Mathematics, University of Fribourg, Fribourg, Switzerland.
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland.
- Simon G Sprecher
- Department of Biology, University of Fribourg, Fribourg, Switzerland.
13. Rutishauser U, Slotine JJ, Douglas RJ. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks. Neural Comput 2018; 30:1359-1393. [PMID: 29566357] [PMCID: PMC5930080] [DOI: 10.1162/neco_a_01074]
Abstract
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the planar four-color graph coloring, maximum independent set, and Sudoku CSPs on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
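A toy version of the embedding idea, one small WTA per graph node with inhibitory constraint connections between same-color units of adjacent nodes, is sketched below. The graph, weights, and update rule are illustrative assumptions, and convergence to a proper coloring is not guaranteed for every random seed.

```python
import numpy as np

def wta_coloring(edges, n_nodes, n_colors=4, steps=4000, dt=0.05, seed=1):
    """Sketch of graph coloring with coupled WTA modules of rectified-linear units.

    One small WTA per node competes over color units; inhibitory "constraint"
    connections penalise the same color on adjacent nodes. Far simpler than the
    circuits analysed in the paper; parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    adj = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    x = 0.1 * rng.random((n_nodes, n_colors))
    b = 1.0                                         # constant drive to every color unit
    for _ in range(steps):
        within = x.sum(axis=1, keepdims=True) - x   # inhibition from other colors of the same node
        conflict = adj @ x                          # same-color activity on neighbouring nodes
        drive = b + 0.5 * x - 2.0 * within - 2.0 * conflict
        drive += 0.05 * rng.normal(size=x.shape)    # noise helps escape deadlocks
        x += dt * (-x + np.maximum(drive, 0.0))
    return x.argmax(axis=1)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]  # small wheel graph
colors = wta_coloring(edges, n_nodes=5)
print("coloring:", colors, "proper:", all(colors[i] != colors[j] for i, j in edges))
```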
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, U.S.A., and Cedars-Sinai Medical Center, Departments of Neurosurgery, Neurology and Biomedical Sciences, Los Angeles, CA 90048, U.S.A.
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A.
- Rodney J Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich 8057, Switzerland
14. Marić M, Domijan D. A Neurodynamic Model of Feature-Based Spatial Selection. Front Psychol 2018; 9:417. [PMID: 29643826] [PMCID: PMC5883145] [DOI: 10.3389/fpsyg.2018.00417]
Abstract
Huang and Pashler (2007) suggested that feature-based attention creates a special form of spatial representation, which is termed a Boolean map. It partitions the visual scene into two distinct and complementary regions: selected and not selected. Here, we developed a model of a recurrent competitive network that is capable of state-dependent computation. It selects multiple winning locations based on a joint top-down cue. We augmented a model of the WTA circuit that is based on linear-threshold units with two computational elements: dendritic non-linearity that acts on the excitatory units and activity-dependent modulation of synaptic transmission between excitatory and inhibitory units. Computer simulations showed that the proposed model could create a Boolean map in response to a featured cue and elaborate it using the logical operations of intersection and union. In addition, it was shown that in the absence of top-down guidance, the model is sensitive to bottom-up cues such as saliency and abrupt visual onset.
Affiliation(s)
- Mateja Marić
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
- Dražen Domijan
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
15. Burylko O, Kazanovich Y, Borisyuk R. Winner-take-all in a phase oscillator system with adaptation. Sci Rep 2018; 8:416. [PMID: 29323149] [PMCID: PMC5765106] [DOI: 10.1038/s41598-017-18666-3]
Abstract
We consider a system of generalized phase oscillators with a central element and radial connections. In contrast to conventional phase oscillators of the Kuramoto type, the dynamic variables in our system include not only the phase of each oscillator but also the natural frequency of the central oscillator, and the connection strengths from the peripheral oscillators to the central oscillator. With appropriate parameter values the system demonstrates winner-take-all behavior in terms of the competition between peripheral oscillators for the synchronization with the central oscillator. Conditions for the winner-take-all regime are derived for stationary and non-stationary types of system dynamics. Bifurcation analysis of the transition from stationary to non-stationary winner-take-all dynamics is presented. A new bifurcation type called a Saddle Node on Invariant Torus (SNIT) bifurcation was observed and is described in detail. Computer simulations of the system allow an optimal choice of parameters for winner-take-all implementation.
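The sketch below illustrates the general class of model: a central oscillator with an adaptive natural frequency and adaptive incoming weights, coupled to peripheral oscillators that compete to synchronize with it. The specific equations and parameter values are guesses for illustration and are not taken from the paper.

```python
import numpy as np

def central_wta(omega_p, k=1.0, eps=0.5, gamma=0.2, dt=0.01, t_max=200.0, seed=0):
    """Toy phase-oscillator sketch of winner-take-all through adaptation.

    A central oscillator (phase th0, adaptive frequency w0) is driven by peripheral
    oscillators through adaptive weights a; peripherals feel the centre with fixed
    strength k. Illustrative equations only, not the system analysed in the paper.
    """
    rng = np.random.default_rng(seed)
    th = rng.uniform(0.0, 2.0 * np.pi, len(omega_p))   # peripheral phases
    th0, w0 = 0.0, 0.0                                  # central phase and adaptive frequency
    a = np.full(len(omega_p), 0.5)                      # adaptive peripheral-to-central weights
    for _ in range(int(t_max / dt)):
        coupling = a * np.sin(th - th0)
        th0 += dt * (w0 + coupling.mean())
        th += dt * (omega_p + k * np.sin(th0 - th))
        w0 += dt * eps * coupling.mean()                # frequency adaptation of the centre
        a += dt * gamma * (np.cos(th - th0) - a)        # strengthen in-phase inputs, weaken others
        a = np.clip(a, 0.0, 1.0)
    return a, w0

omega = np.array([0.8, 1.0, 1.3, 2.0])                  # peripheral natural frequencies
weights, w0 = central_wta(omega)
print("final weights:", np.round(weights, 2), "central frequency:", round(w0, 2))
```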
Affiliation(s)
- Oleksandr Burylko
- Institute of Mathematics, National Academy of Sciences of Ukraine, Tereshchenkivska 3, 01601, Kyiv, Ukraine.
- Yakov Kazanovich
- Institute of Mathematical Problems of Biology, The Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, 142290, Pushchino, Russia
- Roman Borisyuk
- Institute of Mathematical Problems of Biology, The Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, 142290, Pushchino, Russia
- School of Computing and Mathematics, Plymouth University, PL4 8AA, Plymouth, United Kingdom
16. Chen Y. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks. Front Comput Neurosci 2017; 11:20. [PMID: 28484384] [PMCID: PMC5399521] [DOI: 10.3389/fncom.2017.00020]
Abstract
A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fire for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.
17. Namiki S, Kanzaki R. The neurobiological basis of orientation in insects: insights from the silkmoth mating dance. Curr Opin Insect Sci 2016; 15:16-26. [PMID: 27436728] [DOI: 10.1016/j.cois.2016.02.009]
Abstract
Counterturning is a common movement pattern during orientation behavior in insects. Once male moths sense sex pheromones and then lose the input, they demonstrate zigzag movements, alternating between left and right turns, to increase the probability of contacting the pheromone plume. We summarize the anatomy and function of the neural circuit involved in pheromone orientation in the silkmoth. A neural circuit, the lateral accessory lobe (LAL), serves as the circuit module for zigzag movements and controls this operation using a flip-flop neural switch. The circuit design of the LAL is well conserved across species. We hypothesize that this zigzag module is utilized in a wide range of insect behavior. We introduce two examples of its potential use: orientation flight and the waggle dance in bees.
Affiliation(s)
- Shigehiro Namiki
- Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8904, Japan.
- Ryohei Kanzaki
- Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8904, Japan.
18. Schaub MT, Billeh YN, Anastassiou CA, Koch C, Barahona M. Emergence of Slow-Switching Assemblies in Structured Neuronal Networks. PLoS Comput Biol 2015; 11:e1004196. [PMID: 26176664] [PMCID: PMC4503787] [DOI: 10.1371/journal.pcbi.1004196]
Abstract
Unraveling the interplay between connectivity and spatio-temporal dynamics in neuronal networks is a key step to advance our understanding of neuronal information processing. Here we investigate how particular features of network connectivity underpin the propensity of neural networks to generate slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased firing within assemblies of neurons which transition slowly between different assemblies throughout the network. We show that the emergence of SSA activity is linked to spectral properties of the asymmetric synaptic weight matrix. In particular, the leading eigenvalues that dictate the slow dynamics exhibit a gap with respect to the bulk of the spectrum, and the associated Schur vectors exhibit a measure of block-localization on groups of neurons, thus resulting in coherent dynamical activity on those groups. Through simple rate models, we gain analytical understanding of the origin and importance of the spectral gap, and use these insights to develop new network topologies with alternative connectivity paradigms which also display SSA activity. Specifically, SSA dynamics involving excitatory and inhibitory neurons can be achieved by modifying the connectivity patterns between both types of neurons. We also show that SSA activity can occur at multiple timescales reflecting a hierarchy in the connectivity, and demonstrate the emergence of SSA in small-world like networks. Our work provides a step towards understanding how network structure (uncovered through advancements in neuroanatomy and connectomics) can impact on spatio-temporal neural activity and constrain the resulting dynamics.
Affiliation(s)
- Michael T. Schaub
- Department of Mathematics, Imperial College London, London, United Kingdom
- Yazan N. Billeh
- Computation and Neural Systems Program, California Institute of Technology, Pasadena, California, United States of America
- Christof Koch
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
19. Marx S, Gruenhage G, Walper D, Rutishauser U, Einhäuser W. Competition with and without priority control: linking rivalry to attention through winner-take-all networks with memory. Ann N Y Acad Sci 2015; 1339:138-53. [PMID: 25581077] [PMCID: PMC4376592] [DOI: 10.1111/nyas.12575]
Abstract
Competition is ubiquitous in perception. For example, items in the visual field compete for processing resources, and attention controls their priority (biased competition). The inevitable ambiguity in the interpretation of sensory signals yields another form of competition: distinct perceptual interpretations compete for access to awareness. Rivalry, where two equally likely percepts compete for dominance, explicates the latter form of competition. Building upon the similarity between attention and rivalry, we propose to model rivalry by a generic competitive circuit that is widely used in the attention literature: a winner-take-all (WTA) network. Specifically, we show that a network of two coupled WTA circuits replicates three common hallmarks of rivalry: the distribution of dominance durations, their dependence on input strength ("Levelt's propositions"), and the effects of stimulus removal (blanking). This model introduces a form of memory by forming discrete states and explains experimental data better than competitive models of rivalry without memory. This result supports the crucial role of memory in rivalry specifically and in competitive processes in general. Our approach unifies the seemingly distinct phenomena of rivalry, memory, and attention in a single model with competition as the common underlying principle.
Affiliation(s)
- Svenja Marx
- Neurophysics, Philipp-University of Marburg, Marburg, Germany
- Gina Gruenhage
- Bernstein Center for Computational Neurosciences, Berlin, Germany
- Daniel Walper
- Neurophysics, Philipp-University of Marburg, Marburg, Germany
- Ueli Rutishauser
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, California
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California
20. Rutishauser U, Slotine JJ, Douglas R. Computation in dynamically bounded asymmetric systems. PLoS Comput Biol 2015; 11:e1004039. [PMID: 25617645] [PMCID: PMC4305289] [DOI: 10.1371/journal.pcbi.1004039]
Abstract
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, California Institute of Technology, Pasadena, California, United States of America
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, United States of America
- Departments of Neurosurgery, Neurology and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, United States of America
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Rodney Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich, Switzerland
21. Binas J, Rutishauser U, Indiveri G, Pfeiffer M. Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity. Front Comput Neurosci 2014; 8:68. [PMID: 25071538] [PMCID: PMC4086298] [DOI: 10.3389/fncom.2014.00068]
Abstract
Winner-Take-All (WTA) networks are recurrently connected populations of excitatory and inhibitory neurons that represent promising candidate microcircuits for implementing cortical computation. WTAs can perform powerful computations, ranging from signal-restoration to state-dependent processing. However, such networks require fine-tuned connectivity parameters to keep the network dynamics within stable operating regimes. In this article, we show how such stability can emerge autonomously through an interaction of biologically plausible plasticity mechanisms that operate simultaneously on all excitatory and inhibitory synapses of the network. A weight-dependent plasticity rule is derived from the triplet spike-timing dependent plasticity model, and its stabilization properties in the mean-field case are analyzed using contraction theory. Our main result provides simple constraints on the plasticity rule parameters, rather than on the weights themselves, which guarantee stable WTA behavior. The plastic network we present is able to adapt to changing input conditions, and to dynamically adjust its gain, therefore exhibiting self-stabilization mechanisms that are crucial for maintaining stable operation in large networks of interconnected subunits. We show how distributed neural assemblies can adjust their parameters for stable WTA function autonomously while respecting anatomical constraints on neural wiring.
Affiliation(s)
- Jonathan Binas
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Ueli Rutishauser
- Department of Neurosurgery and Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Computation and Neural Systems Program, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
22. Mostafa H, Indiveri G. Sequential activity in asymmetrically coupled winner-take-all circuits. Neural Comput 2014; 26:1973-2004. [PMID: 24877737] [DOI: 10.1162/neco_a_00619]
Abstract
Understanding the sequence generation and learning mechanisms used by recurrent neural networks in the nervous system is an important problem that has been studied extensively. However, most of the models proposed in the literature are either not compatible with neuroanatomy and neurophysiology experimental findings, or are not robust to noise and rely on fine tuning of the parameters. In this work, we propose a novel model of sequence learning and generation that is based on the interactions among multiple asymmetrically coupled winner-take-all (WTA) circuits. The network architecture is consistent with mammalian cortical connectivity data and uses realistic neuronal and synaptic dynamics that give rise to noise-robust patterns of sequential activity. The novel aspect of the network we propose lies in its ability to produce robust patterns of sequential activity that can be halted, resumed, and readily modulated by external input, and in its ability to make use of realistic plastic synapses to learn and reproduce the arbitrary input-imposed sequential patterns. Sequential activity takes the form of a single activity bump that stably propagates through multiple WTA circuits along one of a number of possible paths. Because the network can be configured to either generate spontaneous sequences or wait for external inputs to trigger a transition in the sequence, it provides the basis for creating state-dependent perception-action loops. We first analyze a rate-based approximation of the proposed spiking network to highlight the relevant features of the network dynamics and then show numerical simulation results with spiking neurons, realistic conductance-based synapses, and spike-timing dependent plasticity (STDP) rules to validate the rate-based model.
Affiliation(s)
- Hesham Mostafa
- Institute for Neuroinformatics, University of Zurich and ETH Zurich, Zurich 8057, Switzerland
23. Maoz U, Rutishauser U, Kim S, Cai X, Lee D, Koch C. Predeliberation activity in prefrontal cortex and striatum and the prediction of subsequent value judgment. Front Neurosci 2013; 7:225. [PMID: 24324396] [PMCID: PMC3840801] [DOI: 10.3389/fnins.2013.00225]
Abstract
Rational, value-based decision-making mandates selecting the option with highest subjective expected value after appropriate deliberation. We examined activity in the dorsolateral prefrontal cortex (DLPFC) and striatum of monkeys deciding between smaller, immediate rewards and larger, delayed ones. We previously found neurons that modulated their activity in this task according to the animal's choice, while it deliberated (choice neurons). Here we found neurons whose spiking activities were predictive of the spatial location of the selected target (spatial-bias neurons) or the size of the chosen reward (reward-bias neurons) before the onset of the cue presenting the decision-alternatives, and thus before rational deliberation could begin. Their predictive power increased as the values the animals associated with the two decision alternatives became more similar. The ventral striatum (VS) preferentially contained spatial-bias neurons; the caudate nucleus (CD) preferentially contained choice neurons. In contrast, the DLPFC contained significant numbers of all three neuron types, but choice neurons were not preferentially also bias neurons of either kind there, nor were spatial-bias neurons preferentially also choice neurons, and vice versa. We suggest a simple winner-take-all (WTA) circuit model to account for the dissociation of choice and bias neurons. The model reproduced our results and made additional predictions that were borne out empirically. Our data are compatible with the hypothesis that the DLPFC and striatum harbor dissociated neural populations that represent choices and predeliberation biases that are combined after cue onset; the bias neurons have a weaker effect on the ultimate decision than the choice neurons, so their influence is progressively apparent for trials where the values associated with the decision alternatives are increasingly similar.
Affiliation(s)
- Uri Maoz
- Division of Biology, California Institute of Technology, Pasadena, CA, USA
24. da Costa NM, Martin KA. Sparse reconstruction of brain circuits: Or, how to survive without a microscopic connectome. Neuroimage 2013; 80:27-36. [DOI: 10.1016/j.neuroimage.2013.04.054]
25. Krüger N, Janssen P, Kalkan S, Lappe M, Leonardis A, Piater J, Rodríguez-Sánchez AJ, Wiskott L. Deep hierarchies in the primate visual cortex: what can we learn for computer vision? IEEE Trans Pattern Anal Mach Intell 2013; 35:1847-1871. [PMID: 23787340] [DOI: 10.1109/tpami.2012.272]
Abstract
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Affiliation(s)
- Norbert Krüger
- Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, Odense M 5230, Denmark.
26.
Abstract
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a "soft state machine" running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
Collapse
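The abstract describes composing reliable soft winner-take-all (WTA) subnetworks into a state machine whose states and transitions are embedded through sparse connections between subnets. Below is a minimal rate-based sketch of that idea; the dynamics, gains, and the tiny two-state machine are illustrative assumptions, not parameters of the neuromorphic hardware system the abstract reports on.

```python
import numpy as np

def soft_wta(x, inputs, self_exc=1.2, inhib=1.0, dt=0.05, steps=200):
    """Rate-based soft winner-take-all: each unit excites itself and is
    suppressed by one global inhibitory signal (the pooled activity).
    Returns the settled activity vector."""
    for _ in range(steps):
        global_inhib = inhib * x.sum()
        drive = inputs + self_exc * x - global_inhib
        x = x + dt * (-x + np.maximum(drive, 0.0))
    return x

# "State" pool: unit 0 = state A, unit 1 = state B.
state = np.array([1.0, 0.0])                 # start in state A

# A sparse "transition" projection: an external symbol drives state B
# only while state A is active (a state-dependent transition).
def step_machine(state, symbol_drive):
    gate = state[0]                          # transition allowed out of A
    inputs = np.array([0.2, gate * symbol_drive])
    new = soft_wta(state.copy(), inputs)
    return new / (new.sum() + 1e-9)          # normalise for readability

print("before symbol:", step_machine(state, 0.0))   # stays in state A
print("after  symbol:", step_machine(state, 2.0))   # switches to state B
```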
|
27
|
Chen Y, McKinstry JL, Edelman GM. Versatile networks of simulated spiking neurons displaying winner-take-all behavior. Front Comput Neurosci 2013; 7:16. [PMID: 23515493 PMCID: PMC3601301 DOI: 10.3389/fncom.2013.00016] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2012] [Accepted: 03/01/2013] [Indexed: 12/02/2022] Open
Abstract
We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular surround region. The neural units of these networks simulate conductance-based spiking neurons whose synaptic interactions are subject to both short-term plasticity and spike-timing-dependent plasticity (STDP). We show that such CAS networks display robust WTA behavior, unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
Collapse
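The center-annular-surround (CAS) rule described in the abstract, excitation from close neighbours and inhibition from an annular ring of more distant neighbours, is easy to state as a weight rule. The ring radii and weight values below are illustrative assumptions, and the rate units used here stand in for the conductance-based spiking neurons of the paper.

```python
import numpy as np

N = 40
pos = np.arange(N)

# Circular distance between every pair of units on a ring of N units.
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, N - d)

# Center-annular-surround weights (illustrative values):
# excitation from close neighbours, inhibition from the annulus beyond them.
W = np.where((d > 0) & (d <= 3), 0.15, 0.0) + np.where(d > 3, -0.10, 0.0)

rng = np.random.default_rng(1)
inputs = 1.0 + 0.05 * rng.normal(size=N)
inputs[12] += 0.3                            # slightly favoured location

x = np.zeros(N)
dt = 0.05
for _ in range(600):
    x = x + dt * (-x + np.maximum(inputs + W @ x, 0.0))

print("winning unit:", int(np.argmax(x)))    # expected near unit 12
```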
|
28
|
Li S, Liu B, Li Y. Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2013; 24:301-309. [PMID: 24808283 DOI: 10.1109/tnnls.2012.2230451] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
The winner-take-all (WTA) competition is widely observed in both inanimate and biological media and in society. Many mathematical models have been proposed to describe the phenomena discovered in different fields, and these models are capable of demonstrating the WTA competition. However, they are often very complicated owing to compromises with experimental realities in their particular fields, and it is often difficult to explain the underlying mechanism of the competition from the perspective of feedback on the basis of such sophisticated models. In this paper, we make steps in that direction and present a simple model, which produces the WTA competition by taking advantage of selective positive-negative feedback through the interaction of neurons via a p-norm. Compared to existing models, this model gives an explicit explanation of the competition mechanism. The ultimate convergence behavior of this model is proven analytically, the convergence rate is discussed, and simulations are conducted in both static and dynamic competition scenarios. Both theoretical and numerical results validate the effectiveness of the dynamic equation in describing the nonlinear phenomena of WTA competition.
Collapse
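The key ingredient in the abstract is a single shared signal, computed through a p-norm of the activity vector, that feeds back negatively on every unit while each unit keeps positive feedback scaled by its own input. The dynamics below are an illustrative reconstruction in that spirit, not a transcription of the equation analysed in the paper.

```python
import numpy as np

def pnorm_wta(u, p=4.0, dt=0.05, steps=5000, rng=None):
    """Illustrative WTA dynamics in the spirit of selective positive-negative
    feedback: each unit receives positive feedback scaled by its own input
    u_i and negative feedback through one shared p-norm of the whole state.
    (Reconstruction for illustration; not the paper's exact equation.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    x = 0.1 + 0.01 * rng.random(u.size)       # small positive initial state
    for _ in range(steps):
        shared = np.linalg.norm(x, ord=p)     # global negative-feedback signal
        x = x + dt * x * (u - shared)
        x = np.maximum(x, 0.0)
    return x

u = np.array([0.6, 1.0, 0.5, 0.8])            # inputs competing for the win
print(np.round(pnorm_wta(u), 3))              # only the unit with the largest
                                              # input is expected to stay active
```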
|
29
|
Genot AJ, Fujii T, Rondelez Y. Computing with competition in biochemical networks. PHYSICAL REVIEW LETTERS 2012; 109:208102. [PMID: 23215526 DOI: 10.1103/physrevlett.109.208102] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2012] [Revised: 08/28/2012] [Indexed: 06/01/2023]
Abstract
Cells rely on limited resources such as enzymes or transcription factors to process signals and make decisions. However, independent cellular pathways often compete for a common molecular resource. Competition is difficult to analyze because of its nonlinear global nature, and its role remains unclear. Here we show how decision pathways such as transcription networks may exploit competition to process information. Competition for one resource leads to the recognition of convex sets of patterns, whereas competition for several resources (overlapping or cascaded regulons) allows even more general pattern recognition. Competition also generates surprising couplings, such as correlating species that share no resource but a common competitor. The mechanism we propose relies on three primitives that are ubiquitous in cells: multi-input motifs, competition for a resource, and positive feedback loops.
Collapse
Affiliation(s)
- Anthony J Genot
- LIMMS/CNRS-IIS, Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Tokyo 153-8505, Japan
Collapse
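A minimal way to see how competition for a shared resource couples otherwise independent pathways is to let several substrates compete for one limiting enzyme and split it in proportion to their binding demand. The rate constants and the quasi-steady-state splitting rule below are illustrative assumptions, far simpler than the mass-action networks analysed in the paper.

```python
import numpy as np

def bound_enzyme(substrates, K, E_total=1.0):
    """Competitive binding of several substrates to one limiting enzyme.
    Returns the enzyme allocated to each pathway under a standard
    quasi-steady-state assumption (illustrative parameters)."""
    demand = K * substrates                    # relative binding demand
    return E_total * demand / (1.0 + demand.sum())

K = np.array([1.0, 1.0, 1.0])                  # binding affinities (assumed)

low  = bound_enzyme(np.array([1.0, 1.0, 0.1]), K)
high = bound_enzyme(np.array([1.0, 1.0, 5.0]), K)

# Raising substrate 2 pulls the shared enzyme away from pathway 0 even
# though the two pathways are otherwise independent: competition for the
# common resource couples them globally.
print("pathway 0 enzyme, low  competitor:", round(low[0], 3))
print("pathway 0 enzyme, high competitor:", round(high[0], 3))
```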
|
30
|
Rutishauser U, Slotine JJ, Douglas RJ. Competition through selective inhibitory synchrony. Neural Comput 2012; 24:2033-52. [PMID: 22509969 DOI: 10.1162/neco_a_00304] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily because their axons and dendrites are colocalized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this colocalization assumption is not valid. In this letter, we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron and even across different cortical areas. We prove by nonlinear contraction analysis and demonstrate by simulation that distributed WTA subsystems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
Collapse
Affiliation(s)
- Ueli Rutishauser
- Department of Neural Systems, Max Planck Institute for Brain Research, Frankfurt am Main, Hessen 60528, Germany.
Collapse
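The circuit idea in the abstract, several local WTA subnetworks whose inhibitory neurons are coupled so that their inhibition acts as a single shared signal, can be caricatured with rate units. Pooling the two local inhibitory signals below is an illustrative stand-in for the spike-timing synchrony mechanism analysed in the letter, and all gains are assumed values.

```python
import numpy as np

def coupled_wta(inputs_a, inputs_b, couple=True, dt=0.05, steps=1000):
    """Two local excitatory pools, each controlled by its own inhibitory
    signal. When `couple` is True the two inhibitory signals are pooled,
    a crude rate-level stand-in for inhibitory synchrony, so the two maps
    behave as one distributed winner-take-all."""
    xa = np.zeros_like(inputs_a)
    xb = np.zeros_like(inputs_b)
    for _ in range(steps):
        ia, ib = xa.sum(), xb.sum()            # local inhibition in each map
        if couple:
            ia = ib = xa.sum() + xb.sum()      # shared, synchronised inhibition
        xa = xa + dt * (-xa + np.maximum(inputs_a + 0.8 * xa - ia, 0.0))
        xb = xb + dt * (-xb + np.maximum(inputs_b + 0.8 * xb - ib, 0.0))
    return xa, xb

a = np.array([1.0, 0.4])
b = np.array([1.3, 0.5])                       # the strongest candidate is in map B

for flag in (False, True):
    xa, xb = coupled_wta(a, b, couple=flag)
    label = "coupled    " if flag else "independent"
    print(label, "map A:", np.round(xa, 2), " map B:", np.round(xb, 2))
# independent -> one local winner per map; coupled -> a single global winner.
```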
|
31
|
|
32
|
Urban A, Ermentrout B. Sequentially firing neurons confer flexible timing in neural pattern generators. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2011; 83:051914. [PMID: 21728578 DOI: 10.1103/physreve.83.051914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2010] [Revised: 01/19/2011] [Indexed: 05/31/2023]
Abstract
Neuronal networks exhibit a variety of complex spatiotemporal patterns that include sequential activity, synchrony, and wavelike dynamics. Inhibition is the primary means through which such patterns are implemented. This behavior depends both on the intrinsic dynamics of the individual neurons and on the connectivity patterns. Many neural circuits consist of networks of smaller subcircuits (motifs) that are coupled together to form the larger system. In this paper, we consider a particularly simple motif, comprising purely inhibitory interactions, which generates sequential periodic dynamics. We first describe the dynamics of the single motif, both for general balanced coupling (all cells receive the same number and strength of inputs) and then for a specific class of balanced networks: circulant systems. We couple these motifs together to form larger networks. We use the theory of weak coupling to derive phase models which, themselves, have a certain structure and symmetry. We show that this structure endows the coupled system with the ability to produce arbitrary timing relationships between symmetrically coupled motifs and that the phase relationships are robust over a wide range of frequencies. The theory is applicable to many other systems in biology and physics.
Collapse
Affiliation(s)
- Alexander Urban
- Department of Physics, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
Collapse
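The weak-coupling reduction the abstract refers to replaces each cell (or motif) with a phase variable and the interactions with a phase-coupling function; for a small, symmetrically coupled inhibitory motif the sequential (splay) state is then easy to exhibit. The sinusoidal coupling function and the coupling strength below are illustrative choices, not the interaction functions derived in the paper.

```python
import numpy as np

# Phase reduction of a three-cell motif with mutually repulsive coupling
# (an illustrative stand-in for inhibition). The repulsive interaction
# drives the cells toward a splay state: sequential firing with phases
# separated by one third of a cycle.
N = 3
omega = 2 * np.pi            # intrinsic frequency: one cycle per time unit
K = 0.5                      # coupling strength (assumed)

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, N)            # random initial phases

dt = 0.01
for _ in range(20000):
    diffs = theta[None, :] - theta[:, None]         # theta_j - theta_i
    coupling = -(K / N) * np.sin(diffs).sum(axis=1)  # repulsive interaction
    theta = theta + dt * (omega + coupling)

gaps = np.sort(np.mod(theta - theta[0], 2 * np.pi)) / (2 * np.pi)
print("phase offsets (fraction of a cycle):", np.round(gaps, 2))
# expected roughly [0, 0.33, 0.67]: the three cells fire in sequence
```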
|
33
|
Evolving Probabilistic Spiking Neural Networks for Spatio-temporal Pattern Recognition: A Preliminary Study on Moving Object Recognition. NEURAL INFORMATION PROCESSING 2011. [DOI: 10.1007/978-3-642-24965-5_25] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|
34
|
Nuntalid N, Dhoble K, Kasabov N. EEG Classification with BSA Spike Encoding Algorithm and Evolving Probabilistic Spiking Neural Network. NEURAL INFORMATION PROCESSING 2011. [DOI: 10.1007/978-3-642-24955-6_54] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
|