101
Abstract
The visual recognition of actions is critical for motor learning and social communication. Action-selective neurons have been found in different cortical regions, including the superior temporal sulcus and parietal and premotor cortex. Among those are mirror neurons, which link visual and motor representations of body movements. While numerous theoretical models for the mirror neuron system have been proposed, the computational basis of the visual processing of goal-directed actions remains largely unclear. Whereas most existing models focus on the possible role of motor representations in action recognition, we propose a model showing that many critical properties of action-selective visual neurons can be accounted for by well-established visual mechanisms. Our model accomplishes the recognition of hand actions from real video stimuli, exploiting exclusively mechanisms that can be implemented in a biologically plausible way by cortical neurons. We show that the model provides a unifying, quantitatively consistent account of a variety of electrophysiological results from action-selective visual neurons. In addition, it makes a number of predictions, some of which could be confirmed in recent electrophysiological experiments.
102
|
Kornysheva K, Sierk A, Diedrichsen J. Interaction of temporal and ordinal representations in movement sequences. J Neurophysiol 2012; 109:1416-24. [PMID: 23221413] [DOI: 10.1152/jn.00509.2012]
Abstract
The production of movement sequences requires an accurate control of muscle activation in time. How does the nervous system encode the precise timing of these movements? One possibility is that the timing of movements (temporal sequence) is an emergent property of the dynamic state of the nervous system and therefore intimately linked to a representation of the sequence of muscle commands (ordinal sequence). Alternatively, timing may be represented independently of the motor effectors and would be transferable to a new ordinal sequence. Some studies have found that a learned temporal sequence cannot be transferred to a new ordinal sequence, thus arguing for an integrated representation. Others have observed temporal transfer across movement sequences and have advocated an independent representation of temporal information. Using a modified serial reaction time task, we tested alternative models of the representation of temporal structure and the interaction between the output of separate ordinal and temporal sequence representations. Temporal transfer depended on whether a novel ordinal sequence was fixed within each test block. Our results confirm the presence of an independent representation of temporal structure and advocate a nonlinear multiplicative neural interaction of temporal and ordinal signals in the production of movements.
Affiliation(s)
- Katja Kornysheva
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom.
103
|
Wulff S, Bosco A, Havermann K, Placenti G, Fattori P, Lappe M. Eye position effects in saccadic adaptation in macaque monkeys. J Neurophysiol 2012; 108:2819-26. [DOI: 10.1152/jn.00212.2012]
Abstract
The saccadic amplitude of humans and monkeys can be adapted using intrasaccadic target steps in the McLaughlin paradigm. It is generally believed that, as a result of a purely retinal reference frame, after adaptation of a saccade of a certain amplitude and direction, saccades of the same amplitude and direction are all adapted to the same extent, independently from the initial eye position. However, recent studies in humans have put the pure retinal coding in doubt by revealing that the initial eye position has an effect on the transfer of adaptation to saccades of different starting points. Since humans and monkeys show some species differences in adaptation, we tested the eye position dependence in monkeys. Two trained Macaca fascicularis performed reactive rightward saccades from five equally horizontally distributed starting positions. All saccades were made to targets with the same retinotopic motor vector. In each session, the saccades that started at one particular initial eye position, the adaptation position, were adapted to shorter amplitude, and the adaptation of the saccades starting at the other four positions was measured. The results show that saccades that started at the other positions were less adapted than saccades that started at the adaptation position. With increasing distance between the starting position of the test saccade and the adaptation position, the amplitude change of the test saccades decreased with a Gaussian profile. We conclude that gain-decreasing saccadic adaptation in macaques is specific to the initial eye position at which the adaptation has been induced.
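The Gaussian falloff of adaptation transfer described above can be sketched as a simple function of the distance between the test saccade's start position and the adaptation position. The peak and width values below are illustrative assumptions, not the study's fitted parameters.

```python
import math

def adaptation_transfer(test_pos_deg, adapt_pos_deg, peak_gain_change, sigma_deg=10.0):
    """Amplitude change of a test saccade as a Gaussian function of the
    horizontal distance between its starting position and the adaptation
    position (sigma_deg is an illustrative width, not a fitted value)."""
    d = test_pos_deg - adapt_pos_deg
    return peak_gain_change * math.exp(-d**2 / (2.0 * sigma_deg**2))
```

Transfer is maximal at the adaptation position itself and falls off symmetrically with distance, matching the eye-position specificity reported in the abstract.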
Affiliation(s)
- Svenja Wulff
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeld Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Annalisa Bosco
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Katharina Havermann
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeld Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Giacomo Placenti
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Markus Lappe
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeld Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
104
|
Lipinski J, Schneegans S, Sandamirskaya Y, Spencer JP, Schöner G. A neurobehavioral model of flexible spatial language behaviors. J Exp Psychol Learn Mem Cogn 2012; 38:1490-511. [PMID: 21517224] [PMCID: PMC3665425] [DOI: 10.1037/a0022643]
Abstract
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes.
Affiliation(s)
- John Lipinski
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany.
105
|
Tanaka H, Sejnowski TJ. Computing reaching dynamics in motor cortex with Cartesian spatial coordinates. J Neurophysiol 2012; 109:1182-201. [PMID: 23114209] [DOI: 10.1152/jn.00279.2012]
Abstract
How neurons in the primary motor cortex control arm movements is not yet understood. Here we show that the equations of motion governing reaching simplify when expressed in spatial coordinates. In this fixed reference frame, joint torques are the sums of vector cross products between the spatial positions of limb segments and their spatial accelerations and velocities. The consequences that follow from this model explain many properties of neurons in the motor cortex, including broad, cosine-like directional tuning, nonuniformly distributed preferred directions that depend on the workspace, and the rotation of the population vector during arm movements. Remarkably, the torques can be directly computed as a linearly weighted sum of responses from cortical motoneurons, and the muscle tensions can be obtained as rectified linear sums of the joint torques. This allows the required muscle tensions to be computed rapidly from a trajectory in space with a feedforward network model.
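A schematic reading of the cross-product claim above, assuming point-mass limb segments and omitting the velocity-dependent terms the paper also includes; the function and its arguments are illustrative, not the paper's implementation.

```python
import numpy as np

def joint_torque(joint_pos, seg_positions, seg_accels, seg_masses):
    """Schematic torque about one joint: the sum over limb segments of the
    cross product between each segment's spatial position relative to the
    joint and its mass-weighted spatial acceleration."""
    tau = np.zeros(3)
    for p, a, m in zip(seg_positions, seg_accels, seg_masses):
        tau += np.cross(p - joint_pos, m * a)
    return tau
```

Because each term is a cross product of measurable spatial quantities, the torque is linear in the accelerations, which is what allows it to be read out as a weighted sum of neural responses.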
Affiliation(s)
- Hirokazu Tanaka
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, USA.
106
Abstract
In this article, I survey the integrated connectionist/symbolic (ICS) cognitive architecture in which higher cognition must be formally characterized on two levels of description. At the microlevel, parallel distributed processing (PDP) characterizes mental processing; this PDP system has special organization in virtue of which it can be characterized at the macrolevel as a kind of symbolic computational system. The symbolic system inherits certain properties from its PDP substrate; the symbolic functions computed constitute optimization of a well-formedness measure called Harmony. The most important outgrowth of the ICS research program is optimality theory (Prince & Smolensky, 1993/2004), an optimization-based grammatical theory that provides a formal theory of cross-linguistic typology. Linguistically, Harmony maximization corresponds to minimization of markedness or structural ill-formedness. Cognitive explanation in ICS requires the collaboration of symbolic and connectionist principles. ICS is developed in detail in Smolensky and Legendre (2006a); this article is a précis of and guide to those volumes.
107
Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012; 7:e41241. [PMID: 22815979] [PMCID: PMC3397995] [DOI: 10.1371/journal.pone.0041241]
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
108
McCollum G, Klam F, Graf W. Face-infringement space: the frame of reference of the ventral intraparietal area. Biol Cybern 2012; 106:219-239. [PMID: 22653480] [DOI: 10.1007/s00422-012-0491-9]
Abstract
Experimental studies have shown that responses of ventral intraparietal area (VIP) neurons specialize in head movements and the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, smooth pursuit eye movements, and passive and active movements of the head. This study demonstrates mathematical structure on a higher organizational level created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as those of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies or is responsive to a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys. 
Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.
Affiliation(s)
- Gin McCollum
- Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, PO Box 751, Portland, OR, 97207-751, USA.
109
Casarotti M, Lisi M, Umiltà C, Zorzi M. Paying Attention through Eye Movements: A Computational Investigation of the Premotor Theory of Spatial Attention. J Cogn Neurosci 2012; 24:1519-31. [DOI: 10.1162/jocn_a_00231]
Abstract
Growing evidence indicates that planning eye movements and orienting visuospatial attention share overlapping brain mechanisms. A tight link between endogenous attention and eye movements is maintained by the premotor theory, in contrast to other accounts that postulate the existence of specific attention mechanisms that modulate the activity of information processing systems. The strong assumption of equivalence between attention and eye movements, however, is challenged by demonstrations that human observers are able to keep attention on a specific location while moving the eyes elsewhere. Here we investigate whether a recurrent model of saccadic planning can account for attentional effects without requiring additional or specific mechanisms separate from the circuits that perform sensorimotor transformations for eye movements. The model builds on the basis function approach and includes a circuit that performs spatial remapping using an “internal forward model” of how visual inputs are modified as a result of saccadic movements. Simulations show that the latter circuit is crucial to account for dissociations between attention and eye movements that may be invoked to disprove the premotor theory. The model provides new insights into how spatial remapping may be implemented in parietal cortex and offers a computational framework for recent proposals that link visual stability with remapping of attention pointers.
110
Diedrichsen J, Wiestler T, Krakauer JW. Two distinct ipsilateral cortical representations for individuated finger movements. Cereb Cortex 2012; 23:1362-77. [PMID: 22610393] [PMCID: PMC3643717] [DOI: 10.1093/cercor/bhs120]
Abstract
Movements of the upper limb are controlled mostly through the contralateral hemisphere. Although overall activity changes in the ipsilateral motor cortex have been reported, their functional significance remains unclear. Using human functional imaging, we analyzed neural finger representations by studying differences in fine-grained activation patterns for single isometric finger presses. We demonstrate that cortical motor areas encode ipsilateral movements in 2 fundamentally different ways. During unimanual ipsilateral finger presses, primary sensory and motor cortices show, underneath global suppression, finger-specific activity patterns that are nearly identical to those elicited by contralateral mirror-symmetric action. This component vanishes when both motor cortices are functionally engaged during bimanual actions. We suggest that the ipsilateral representation present during unimanual presses arises because otherwise functionally idle circuits are driven by input from the opposite hemisphere. A second type of representation becomes evident in caudal premotor and anterior parietal cortices during bimanual actions. In these regions, ipsilateral actions are represented as nonlinear modulation of activity patterns related to contralateral actions, an encoding scheme that may provide the neural substrate for coordinating bimanual movements. We conclude that ipsilateral cortical representations change their informational content and functional role, depending on the behavioral context.
Affiliation(s)
- Jörn Diedrichsen
- Institute of Cognitive Neuroscience, University College London, London, UK.
111
Pauwels K, Van Hulle MM. Head-centric disparity and epipolar geometry estimation from a population of binocular energy neurons. Int J Neural Syst 2012; 22:1250007. [DOI: 10.1142/s0129065712500074]
Abstract
We present a hybrid neural network architecture that supports the estimation of binocular disparity in a cyclopean, head-centric coordinate system without explicitly establishing retinal correspondences. Instead the responses of binocular energy neurons are gain-modulated by oculomotor signals. The network can handle the full six degrees of freedom of binocular gaze and operates directly on image pairs of possibly varying contrast. Furthermore, we show that in the absence of an oculomotor signal the same architecture is capable of estimating the epipolar geometry directly from the population response. The increased complexity of the scenarios considered in this work provides an important step towards the application of computational models centered on gain modulation mechanisms in real-world robotic applications. The proposed network is shown to outperform a standard computer vision technique on a disparity estimation task involving real-world stereo images.
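For context, the binocular energy neurons that feed the network above can be sketched with the standard phase-shift energy model: a quadrature pair of binocular simple cells whose squared responses are summed, with the preferred disparity set by the interocular phase shift. The Gabor parameters below are illustrative, and the gain modulation by oculomotor signals is omitted.

```python
import numpy as np

def gabor(x, sigma=1.0, freq=1.0, phase=0.0):
    """1D Gabor filter: Gaussian envelope times a cosine carrier."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def binocular_energy(left_img, right_img, x, phase_shift):
    """Classic binocular energy response: sum over a quadrature pair of
    squared binocular simple-cell responses; phase_shift sets the
    neuron's preferred disparity (phase-shift variant of the model)."""
    energy = 0.0
    for ph in (0.0, np.pi / 2):          # quadrature pair
        left = np.dot(left_img, gabor(x, phase=ph))
        right = np.dot(right_img, gabor(x, phase=ph + phase_shift))
        energy += (left + right) ** 2
    return energy
```

A zero-disparity stimulus drives the zero-phase-shift neuron strongly and the opposite-phase neuron weakly, which is the population contrast the network's readout exploits.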
Affiliation(s)
- Karl Pauwels
- Computer Architecture and Technology Department, University of Granada, Calle Periodista Daniel Saucedo, s/n, 18071 Granada, Spain
- Marc M. Van Hulle
- Laboratorium voor Neuro-en Psychofysiologie, K.U. Leuven, Campus Gasthuisberg, O&N II Herestraat 49–Bus 1021, 3000 Leuven, Belgium
112
De Meyer K, Spratling MW. A Model of Partial Reference Frame Transforms Through Pooling of Gain-Modulated Responses. Cereb Cortex 2012; 23:1230-9. [DOI: 10.1093/cercor/bhs117]
113
Neural theory for the perception of causal actions. Psychol Res 2012; 76:476-93. [DOI: 10.1007/s00426-012-0437-9]
114
Schneegans S, Schöner G. A neural mechanism for coordinate transformation predicts pre-saccadic remapping. Biol Cybern 2012; 106:89-109. [PMID: 22481644] [DOI: 10.1007/s00422-012-0484-8]
Abstract
Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that is predominant in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: Retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in LIP. The explanations were considered incompatible with each other, because retinocentric update is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on bidirectional mapping between a retinocentric and a body-centered representation and that enables transforming multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation for every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations.
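The core gain-field transformation (retinal position combined with gaze direction to yield a body-centered position) can be sketched as an outer product of two population codes followed by summation along diagonals, since body position = retinal position + gaze direction in this discrete toy. This one-shot feedforward sketch omits the model's bidirectional, dynamic coupling and predictive gaze updating.

```python
import numpy as np

def transform_population(retinal_act, gaze_act):
    """Gain-field layer: each unit multiplies one retinal and one gaze
    input (outer product). Summing along the diagonals r + g reads out a
    body-centered population code (body index = retinal index + gaze index)."""
    gain_field = np.outer(retinal_act, gaze_act)       # (n_ret, n_gaze)
    n_ret, n_gaze = gain_field.shape
    body = np.zeros(n_ret + n_gaze - 1)
    for r in range(n_ret):
        for g in range(n_gaze):
            body[r + g] += gain_field[r, g]
    return body
```

Running the same diagonal sum in the opposite direction under a new gaze activation is the reverse mapping; a gaze change therefore shifts the retinocentric representation of a fixed body-centered location, which is how the model connects gain fields to remapping.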
115
Gain field encoding of the kinematics of both arms in the internal model enables flexible bimanual action. J Neurosci 2011; 31:17058-68. [PMID: 22114275] [DOI: 10.1523/jneurosci.2982-11.2011]
Abstract
Bimanual action requires the neural controller (internal model) for each arm to predictively compensate for mechanical interactions resulting from movement of both that arm and its counterpart on the opposite side of the body. Here, we demonstrate that the brain may accomplish this by constructing the internal model with primitives multiplicatively encoding information from the kinematics of both arms. We had human participants adapt to a novel force field imposed on one arm while both arms were moving in particular directions and examined the generalization pattern of motor learning when changing the movement directions of both arms. The generalization pattern was consistent with the pattern predicted from the multiplicative encoding scheme. As proposed by previous theoretical studies, the strength of multiplicative encoding was manifested in the observation that participants could adapt reaching movements to complicated force fields depending nonlinearly on the movement directions of both arms. These results indicate that multiplicative neuronal influence of the kinematics of the opposing arm on the internal models enables the brain to control bimanual movement by providing great flexible ability to handle arbitrary dynamical environments resulting from the interactions of both arms.
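The multiplicative encoding scheme predicts that transfer of learned compensation falls off as the product of two directional tuning curves, one per arm. The Gaussian tuning shape and its width below are illustrative assumptions, not the study's estimates.

```python
import math

def directional_tuning(angle_deg, preferred_deg, sigma_deg=45.0):
    """Gaussian tuning over movement direction, with angles wrapped
    to [-180, 180) degrees (sigma_deg is illustrative)."""
    d = (angle_deg - preferred_deg + 180.0) % 360.0 - 180.0
    return math.exp(-d**2 / (2 * sigma_deg**2))

def generalization(d_left_deg, d_right_deg, sigma_deg=45.0):
    """Predicted transfer of learned force compensation when the left and
    right movement directions are rotated away from the trained directions:
    multiplicative encoding predicts the product of the two tunings."""
    return (directional_tuning(d_left_deg, 0.0, sigma_deg)
            * directional_tuning(d_right_deg, 0.0, sigma_deg))
```

The signature of multiplicative (rather than additive) encoding is that the joint generalization factorizes into the two single-arm generalizations.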
116
Dynamics of eye-position signals in the dorsal visual system. Curr Biol 2012; 22:173-9. [PMID: 22225775] [DOI: 10.1016/j.cub.2011.12.032]
Abstract
Background: Many visual areas of the primate brain contain signals related to the current position of the eyes in the orbit. These cortical eye-position signals are thought to underlie the transformation of retinal input (which changes with every eye movement) into a stable representation of visual space. For this coding scheme to work, such signals would need to be updated fast enough to keep up with the eye during normal exploratory behavior. We examined the dynamics of cortical eye-position signals in four dorsal visual areas of the macaque brain: the lateral and ventral intraparietal areas (LIP, VIP), the middle temporal area (MT), and the medial superior temporal area (MST). We recorded extracellular activity of single neurons while the animal performed sequences of fixations and saccades in darkness.
Results: The data show that eye-position signals are updated predictively, such that the representation shifts in the direction of a saccade prior to (<100 ms) the actual eye movement. Despite this early start, eye-position signals remain inaccurate until shortly after (10-150 ms) the eye movement. By using simulated behavioral experiments, we show that this brief misrepresentation of eye position provides a neural explanation for the psychophysical phenomenon of perisaccadic mislocalization, in which observers misperceive the positions of visual targets flashed around the time of saccadic eye movements.
Conclusions: Together, these results suggest that eye-position signals in the dorsal visual system are updated rapidly across eye movements and play a direct role in perceptual localization, even when they are erroneous.
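The proposed link between eye-position dynamics and perisaccadic mislocalization can be illustrated with a toy internal eye-position signal that starts shifting before the saccade and settles after it. The linear ramp and the exact timing constants are placeholders within the ranges reported, not fitted values.

```python
def eye_position_signal(t_ms, saccade_onset=0.0, amplitude=10.0,
                        lead_ms=80.0, settle_ms=150.0):
    """Toy internal eye-position estimate (deg): begins shifting ~80 ms
    before a 10-deg saccade and ramps linearly until ~150 ms after it."""
    frac = (t_ms - (saccade_onset - lead_ms)) / (lead_ms + settle_ms)
    frac = min(1.0, max(0.0, frac))
    return amplitude * frac

def perceived_location(flash_retinal_pos, t_flash_ms):
    """Perceived position of a flash = retinal position of the flash
    plus the internal eye-position estimate at flash time."""
    return flash_retinal_pos + eye_position_signal(t_flash_ms)
```

Because the internal signal already deviates from the true (still stationary) eye position shortly before saccade onset, a flash presented then is mislocalized in the direction of the upcoming saccade, as in the psychophysical data the abstract cites.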
118
Martin JP, Beyerlein A, Dacks AM, Reisenman CE, Riffell JA, Lei H, Hildebrand JG. The neurobiology of insect olfaction: sensory processing in a comparative context. Prog Neurobiol 2011; 95:427-47. [PMID: 21963552] [DOI: 10.1016/j.pneurobio.2011.09.007]
Abstract
The simplicity and accessibility of the olfactory systems of insects underlie a body of research essential to understanding not only olfactory function but also general principles of sensory processing. As insect olfactory neurobiology takes advantage of a variety of species separated by millions of years of evolution, the field naturally has yielded some conflicting results. Far from impeding progress, the varieties of insect olfactory systems reflect the various natural histories, adaptations to specific environments, and the roles olfaction plays in the life of the species studied. We review current findings in insect olfactory neurobiology, with special attention to differences among species. We begin by describing the olfactory environments and olfactory-based behaviors of insects, as these form the context in which neurobiological findings are interpreted. Next, we review recent work describing changes in olfactory systems as adaptations to new environments or behaviors promoting speciation. We proceed to discuss variations on the basic anatomy of the antennal (olfactory) lobe of the brain and higher-order olfactory centers. Finally, we describe features of olfactory information processing including gain control, transformation between input and output by operations such as broadening and sharpening of tuning curves, the role of spiking synchrony in the antennal lobe, and the encoding of temporal features of encounters with an odor plume. In each section, we draw connections between particular features of the olfactory neurobiology of a species and the animal's life history. We propose that this perspective is beneficial for insect olfactory neurobiology in particular and sensory neurobiology in general.
Affiliation(s)
- Joshua P Martin
- Department of Neuroscience, College of Science, University of Arizona, 1040 East Fourth Street, Tucson, AZ 85721-0077, USA.
119
Bouvrie J, Slotine JJ. Synchronization and redundancy: implications for robustness of neural learning and decision making. Neural Comput 2011; 23:2915-41. [PMID: 21732858] [DOI: 10.1162/neco_a_00183]
Abstract
Learning and decision making in the brain are key processes critical to survival, yet they are implemented by nonideal biological building blocks that can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular, we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error, which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. We discuss a range of situations in which the mechanisms we model arise in brain science and draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight.
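The trade-off between coupling strength and noise can be illustrated with redundant noisy gradient systems coupled through their mean (an all-to-all topology, for which the Laplacian spectrum is simple). All parameter values are arbitrary; this is a sketch of the mechanism, not the paper's formal model.

```python
import random

def simulate(n_units=10, coupling=2.0, noise=0.5, steps=2000, dt=0.01, seed=0):
    """Redundant noisy gradient systems
        x_i' = -(x_i - target) + coupling * (mean(x) - x_i) + noise,
    coupled all-to-all through their mean. Returns the spread
    (max - min) across units after simulation; stronger coupling
    synchronizes the redundant copies and shrinks this spread."""
    rng = random.Random(seed)
    target = 1.0
    x = [rng.uniform(-1.0, 1.0) for _ in range(n_units)]
    for _ in range(steps):
        m = sum(x) / n_units
        x = [xi + dt * (-(xi - target) + coupling * (m - xi))
             + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
             for xi in x]
    return max(x) - min(x)
```

With the same noise realization, increasing the coupling damps the difference modes more strongly (their decay rate grows with the coupling), so the disagreement among redundant copies, and hence the uncertainty about the learned value, is reduced.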
Affiliation(s)
- Jake Bouvrie
- Department of Mathematics, Duke University, Durham, NC 27708, USA.
|
120
|
De Meyer K, Spratling MW. Multiplicative Gain Modulation Arises Through Unsupervised Learning in a Predictive Coding Model of Cortical Function. Neural Comput 2011; 23:1536-67. [DOI: 10.1162/neco_a_00130] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The combination of two or more population-coded signals in a neural model of predictive coding can give rise to multiplicative gain modulation in the response properties of individual neurons. Synaptic weights generating these multiplicative response properties can be learned using an unsupervised, Hebbian learning rule. The behavior of the model is compared to empirical data on gaze-dependent gain modulation of cortical cells and found to be in good agreement with a range of physiological observations. Furthermore, it is demonstrated that the model can learn to represent a set of basis functions. This letter thus connects an often-observed neurophysiological phenomenon and important neurocomputational principle (gain modulation) with an influential theory of brain operation (predictive coding).
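The gain-modulation phenomenon this model reproduces can be stated in a few lines (a hypothetical tuning-curve sketch, not the predictive coding model itself; all tuning parameters are made up): multiplying a retinal tuning curve by an eye-position signal scales response amplitude without shifting the preferred stimulus location.

```python
import math

def gauss(x, mu, sigma=1.0):
    """Gaussian tuning curve."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def response(x, e, x_pref=0.0, e_pref=10.0):
    """Hypothetical neuron tuned to retinal position x, with response gain
    modulated multiplicatively by eye position e (a gain field)."""
    return gauss(x, x_pref) * gauss(e, e_pref, 20.0)

# sample the retinal tuning curve at two different eye positions
r_low = [response(x, -20.0) for x in range(-5, 6)]
r_high = [response(x, 10.0) for x in range(-5, 6)]

# the preferred retinal location is unchanged; only the amplitude scales
peak_low = r_low.index(max(r_low))
peak_high = r_high.index(max(r_high))
```

This amplitude-only scaling, with an unchanged response field, is the signature of multiplicative gain modulation referred to throughout these entries.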
Affiliation(s)
- Kris De Meyer
- Department of Informatics and Division of Engineering, King's College London, WC2R 2LS, U.K
- Michael W. Spratling
- Department of Informatics and Division of Engineering, King's College London, WC2R 2LS, U.K
|
121
|
Cuppini C, Magosso E, Ursino M. Organization, maturation, and plasticity of multisensory integration: insights from computational modeling studies. Front Psychol 2011; 2:77. [PMID: 21687448 PMCID: PMC3110383 DOI: 10.3389/fpsyg.2011.00077] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2010] [Accepted: 04/12/2011] [Indexed: 11/15/2022] Open
Abstract
In this paper, we present two neural network models – devoted to two specific and widely investigated aspects of multisensory integration – to demonstrate the potential of computational models to provide insight into the neural mechanisms underlying organization, development, and plasticity of multisensory integration in the brain. The first model considers visual–auditory interaction in a midbrain structure named superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how SC integrative capability – not present at birth – develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space – where multimodal integration occurs – may be modified by experience such as the use of a tool to interact with the far space. The utility of the modeling approach rests on several aspects: (i) The two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of senses in the brain, and the learning and plasticity of multisensory integration. (ii) The models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections. (iii) The models can make testable predictions that can help guide future experiments to validate, reject, or modify the main assumptions.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
|
122
|
Chinellato E, Grzyb BJ, Marzocchi N, Bosco A, Fattori P, del Pobil AP. The Dorso-medial visual stream: From neural activation to sensorimotor interaction. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2010.07.029] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
123
|
|
124
|
Zibner SKU, Faubel C, Iossifidis I, Schoner G. Dynamic Neural Fields as Building Blocks of a Cortex-Inspired Architecture for Robotic Scene Representation. IEEE Trans Auton Ment Dev 2011. [DOI: 10.1109/tamd.2011.2109714] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
125
|
Favorov OV, Kursun O. Neocortical layer 4 as a pluripotent function linearizer. J Neurophysiol 2011; 105:1342-60. [PMID: 21248059 DOI: 10.1152/jn.00708.2010] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
A highly effective kernel-based strategy used in machine learning is to transform the input space into a new "feature" space where nonlinear problems become linear and more readily solvable with efficient linear techniques. We propose that a similar "problem-linearization" strategy is used by the neocortical input layer 4 to reduce the difficulty of learning nonlinear relations between the afferent inputs to a cortical column and its to-be-learned upper layer outputs. The key to this strategy is the presence of broadly tuned feed-forward inhibition in layer 4: it turns local layer 4 domains into functional analogs of radial basis function networks, which are known for their universal function approximation capabilities. With the use of a computational model of layer 4 with feed-forward inhibition and Hebbian afferent connections, self-organized on natural images to closely match structural and functional properties of layer 4 of the cat primary visual cortex, we show that such layer-4-like networks have a strong intrinsic tendency to perform input transforms that automatically linearize a broad repertoire of potential nonlinear functions over the afferent inputs. This capacity for pluripotent function linearization, which is highly robust to variations in network parameters, suggests that layer 4 might contribute importantly to sensory information processing as a pluripotent function linearizer, performing such a transform of afferent inputs to a cortical column that makes it possible for neurons in the upper layers of the column to learn and perform their complex functions using primarily linear operations.
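The linearization strategy this abstract describes can be demonstrated with a standard radial-basis-function toy example (an illustrative sketch in the spirit of the argument, not the paper's layer-4 model; centers, widths, and learning rate are assumptions): XOR is not linearly separable in its raw inputs, but after projecting through Gaussian basis functions a simple linear (perceptron) readout learns it.

```python
import math

# XOR: not linearly separable in the raw input space
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

def rbf_features(x, centers, sigma=0.5):
    """Gaussian radial basis function expansion of input x."""
    return [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centers]

# analogue of the broadly tuned layer-4 transform: RBF units centered on the inputs
Phi = [rbf_features(x, X) for x in X]

# train a simple perceptron (linear readout) on the RBF features
w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(100):
    for phi, t in zip(Phi, y):
        out = 1 if sum(wi * pi for wi, pi in zip(w, phi)) + b > 0 else 0
        err = t - out
        w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
        b += lr * err

preds = [1 if sum(wi * pi for wi, pi in zip(w, phi)) + b > 0 else 0 for phi in Phi]
```

In the feature space the problem is linearly separable, so the perceptron converges and classifies all four patterns correctly; this is the "problem-linearization" step the abstract attributes to layer 4.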
Affiliation(s)
- Oleg V Favorov
- Department of Biomedical Engineering, University of North Carolina School of Medicine, Chapel Hill, NC 27599-7545, USA.
|
126
|
Sabes PN. Sensory integration for reaching: models of optimality in the context of behavior and the underlying neural circuits. PROGRESS IN BRAIN RESEARCH 2011; 191:195-209. [PMID: 21741553 DOI: 10.1016/b978-0-444-53752-2.00004-7] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Although multisensory integration has been well modeled at the behavioral level, the link between these behavioral models and the underlying neural circuits is still not clear. This gap is even greater for the problem of sensory integration during movement planning and execution. The difficulty lies in applying simple models of sensory integration to the complex computations that are required for movement control and to the large networks of brain areas that perform these computations. Here I review psychophysical, computational, and physiological work on multisensory integration during movement planning, with an emphasis on goal-directed reaching. I argue that sensory transformations must play a central role in any modeling effort. In particular, the statistical properties of these transformations factor heavily into the way in which downstream signals are combined. As a result, our models of optimal integration are only expected to apply "locally," that is, independently for each brain area. I suggest that local optimality can be reconciled with globally optimal behavior if one views the collection of parietal sensorimotor areas not as a set of task-specific domains, but rather as a palette of complex, sensorimotor representations that are flexibly combined to drive downstream activity and behavior.
Affiliation(s)
- Philip N Sabes
- Department of Physiology, Keck Center for Integrative Neuroscience, University of California, San Francisco, CA, USA.
|
127
|
Rao RPN. Decision making under uncertainty: a neural model based on partially observable markov decision processes. Front Comput Neurosci 2010; 4:146. [PMID: 21152255 PMCID: PMC2998859 DOI: 10.3389/fncom.2010.00146] [Citation(s) in RCA: 101] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2010] [Accepted: 10/24/2010] [Indexed: 11/13/2022] Open
Abstract
A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single "optimal" estimate of state but on the posterior distribution over states (the "belief" state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize rewards.
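The belief-state idea at the core of this model can be stripped down to a few lines (a sketch with a binary hidden state, a fixed observation reliability, and a fixed decision threshold as illustrative assumptions; the full model adds TD learning and a basal ganglia circuit): accumulate Bayesian evidence from noisy observations until the posterior crosses a threshold, then commit to an overt action.

```python
import random

random.seed(1)

# two hidden states: motion left (0) or right (1); noisy binary observations
p_correct = 0.6    # an observation matches the true state with this probability
threshold = 0.95   # belief level at which an overt action is selected

def trial(true_state):
    """Run one trial: update the belief (posterior over states) observation by
    observation until it crosses the threshold; return (choice, n_steps)."""
    belief = 0.5   # posterior P(state == 1)
    steps = 0
    while max(belief, 1.0 - belief) < threshold:
        obs = true_state if random.random() < p_correct else 1 - true_state
        like1 = p_correct if obs == 1 else 1.0 - p_correct
        like0 = p_correct if obs == 0 else 1.0 - p_correct
        belief = like1 * belief / (like1 * belief + like0 * (1.0 - belief))
        steps += 1
    return (1 if belief > 0.5 else 0), steps

results = [trial(1) for _ in range(200)]
accuracy = sum(choice == 1 for choice, _ in results) / 200
```

Acting on the full posterior rather than a point estimate yields the speed-accuracy behavior described in the abstract: with a 0.95 threshold, error rates stay low while decision time varies with the evidence stream.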
Affiliation(s)
- Rajesh P. N. Rao
- Department of Computer Science and Engineering and Neurobiology and Behavior Program, University of Washington, Seattle, WA, USA
|
128
|
Fix J, Rougier N, Alexandre F. A Dynamic Neural Field Approach to the Covert and Overt Deployment of Spatial Attention. Cognit Comput 2010. [DOI: 10.1007/s12559-010-9083-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
129
|
Rigotti M, Ben Dayan Rubin D, Wang XJ, Fusi S. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses. Front Comput Neurosci 2010; 4:24. [PMID: 21048899 PMCID: PMC2967380 DOI: 10.3389/fncom.2010.00024] [Citation(s) in RCA: 107] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2010] [Accepted: 06/29/2010] [Indexed: 11/17/2022] Open
Abstract
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
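The role of mixed selectivity can be sketched with a toy context-dependent task (layer size, random seed, and the threshold-unit form are illustrative assumptions, not the paper's network): a linear readout cannot solve an XOR-like stimulus-context mapping from the raw inputs, but can after the inputs pass through randomly connected nonlinear units.

```python
import random

random.seed(2)

# context-dependent task: the correct response is an XOR-like function of
# (stimulus, context), which no linear readout of the raw inputs can solve
inputs = [(s, c) for s in (0, 1) for c in (0, 1)]
target = [s ^ c for s, c in inputs]

# mixed-selectivity layer: threshold units with random weights, analogous to
# neurons randomly connected to both sensory inputs and inner mental states
n_mixed = 50
W = [[random.gauss(0.0, 1.0) for _ in range(2)] for _ in range(n_mixed)]
th = [random.gauss(0.0, 1.0) for _ in range(n_mixed)]

def mixed(x):
    """Binary responses of the randomly connected mixed-selective units."""
    return [1.0 if sum(w * xi for w, xi in zip(row, x)) > t else 0.0
            for row, t in zip(W, th)]

Phi = [mixed(x) for x in inputs]

# linear (perceptron) readout trained on the mixed-selective responses
w_out, b, lr = [0.0] * n_mixed, 0.0, 0.2
for _ in range(300):
    for phi, t in zip(Phi, target):
        o = 1 if sum(wi * pi for wi, pi in zip(w_out, phi)) + b > 0 else 0
        w_out = [wi + lr * (t - o) * pi for wi, pi in zip(w_out, phi)]
        b += lr * (t - o)

preds = [1 if sum(wi * pi for wi, pi in zip(w_out, phi)) + b > 0 else 0
         for phi in Phi]
```

Random nonlinear mixing of stimulus and context makes the context-dependent rule linearly decodable, which is the computational benefit of response diversity argued for in the abstract.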
Affiliation(s)
- Mattia Rigotti
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
|
130
|
Maier JX, Groh JM. Comparison of gain-like properties of eye position signals in inferior colliculus versus auditory cortex of primates. Front Integr Neurosci 2010; 4. [PMID: 20838470 PMCID: PMC2936885 DOI: 10.3389/fnint.2010.00121] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2010] [Accepted: 08/02/2010] [Indexed: 11/17/2022] Open
Abstract
We evaluated to what extent the influence of eye position in the auditory pathway of primates can be described as a gain field. We compared single unit activity in the inferior colliculus (IC), core auditory cortex (A1) and the caudomedial belt (CM) region of auditory cortex (AC) in primates, and found stronger evidence for gain field-like interactions in the IC than in AC. In the IC, eye position signals showed both multiplicative and additive interactions with auditory responses, whereas in AC the effects were not as well predicted by a gain field model.
Affiliation(s)
- Joost X Maier
- Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
|
131
|
Abstract
The vast computational power of the brain has traditionally been viewed as arising from the complex connectivity of neural networks, in which an individual neuron acts as a simple linear summation and thresholding device. However, recent studies show that individual neurons utilize a wealth of nonlinear mechanisms to transform synaptic input into output firing. These mechanisms can arise from synaptic plasticity, synaptic noise, and somatic and dendritic conductances. This tool kit of nonlinear mechanisms confers considerable computational power on both morphologically simple and more complex neurons, enabling them to perform a range of arithmetic operations on signals encoded in a variety of different ways.
Affiliation(s)
- R Angus Silver
- Department of Neuroscience, University College London, WC1E 6BT, UK.
|
132
|
Bruner E. Morphological Differences in the Parietal Lobes within the Human Genus. CURRENT ANTHROPOLOGY 2010. [DOI: 10.1086/650729] [Citation(s) in RCA: 87] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
|
133
|
Magosso E. Integrating Information From Vision and Touch: A Neural Network Modeling Study. IEEE Trans Inf Technol Biomed 2010; 14:598-612. [DOI: 10.1109/titb.2010.2040750] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
134
|
Magosso E, Zavaglia M, Serino A, di Pellegrino G, Ursino M. Visuotactile representation of peripersonal space: a neural network study. Neural Comput 2010; 22:190-243. [PMID: 19764874 DOI: 10.1162/neco.2009.01-08-694] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Neurophysiological and behavioral studies suggest that the peripersonal space is represented in a multisensory fashion by integrating stimuli of different modalities. We developed a neural network to simulate the visual-tactile representation of the peripersonal space around the right and left hands. The model is composed of two networks (one per hemisphere), each with three areas of neurons: two are unimodal (visual and tactile) and communicate by synaptic connections with a third downstream multimodal (visual-tactile) area. The hemispheres are interconnected by inhibitory synapses. We applied a combination of analytic and computer simulation techniques. The analytic approach requires some simplifying assumptions and approximations (linearization and a reduced number of neurons) and is used to investigate network stability as a function of parameter values, providing some emergent properties. These are then tested and extended by computer simulations of a more complex nonlinear network that does not rely on the previous simplifications. With basal parameter values, the extended network reproduces several in vivo phenomena: multisensory coding of peripersonal space, reinforcement of unisensory perception by multimodal stimulation, and coexistence of simultaneous right- and left-hand representations in bilateral stimulation. By reducing the strength of the synapses from the right tactile neurons, the network is able to mimic the responses characteristic of right-brain-damaged patients with left tactile extinction: perception of unilateral left tactile stimulation, cross-modal extinction and cross-modal facilitation in bilateral stimulation. Finally, a variety of sensitivity analyses on some key parameters was performed to shed light on the contribution of single-model components in network behaviour. The model may help us understand the neural circuitry underlying peripersonal space representation and identify its alterations explaining neurological deficits. 
Looking ahead, it could also help interpret the results of psychophysical and behavioral trials and clarify the neural correlates of multisensory-based rehabilitation procedures.
Affiliation(s)
- Elisa Magosso
- Department of Electronics, Computer Science and Systems, University of Bologna, Cesena, Italy.
|
135
|
Nagy B, Corneil BD. Representation of Horizontal head-on-body position in the primate superior colliculus. J Neurophysiol 2009; 103:858-74. [PMID: 20007503 DOI: 10.1152/jn.00099.2009] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Movement-related activity within the superior colliculus (SC) represents the desired displacement of an impending gaze shift. This representation must ultimately be transformed into position-based reference frames appropriate for coordinated eye-head gaze shifts. Parietal areas that project to the SC are modulated by the initial position of both the eye-re-head and the head-re-body, and SC activity is modulated by eye-re-head position. These considerations led us to investigate whether SC activity is modulated by head-re-body position. We recorded activity from movement-related SC neurons while head-restrained monkeys performed a delayed-saccade task. Across blocks of trials, the horizontal position of the body was rotated under a space-fixed head to three to five different positions spanning ±25°. We observed a significant influence of body-under-head position on SC activity in 50/60 neurons. This influence was expressed predominantly as a linear gain field, scaling task-related SC activity without changing the location of the response field (linear gain fields explained ≥20% of the variance in neural activity in approximately 50% of our sample). Smaller nonlinear modulations were also observed in roughly 30% of our sample. SC activity was equally likely to increase or decrease as the body was rotated to the side of neuronal recording, and we found no systematic relationship between the directionality or magnitude of the linear gain field and recording location in the SC. We conclude that a signal conveying head-re-body position is present in the SC. Although its functional significance remains open, our findings are consistent with the SC contributing to a displacement-to-position transformation for oculomotor control.
Affiliation(s)
- Benjamin Nagy
- Canadian Institutes of Health Research Group in Action and Perception, University of Western Ontario, London, Ontario, Canada
|
136
|
Spratling M. Learning Posture Invariant Spatial Representations Through Temporal Correlations. IEEE Trans Auton Ment Dev 2009. [DOI: 10.1109/tamd.2009.2038494] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
137
|
Fischer BJ, Anderson CH, Peña JL. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes. PLoS One 2009; 4:e8015. [PMID: 19956693 PMCID: PMC2776990 DOI: 10.1371/journal.pone.0008015] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2009] [Accepted: 10/06/2009] [Indexed: 12/03/2022] Open
Abstract
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
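The proposed combination rule can be written compactly (a schematic sketch; the tuning-curve shapes, frequency channels, and threshold below are illustrative assumptions, not the fitted model): multiply ITD and ILD tuning within each frequency channel, then combine channels with a linear-threshold sum.

```python
import math

def itd_tuning(itd, best_itd, freq):
    """Cosine ITD tuning whose period follows the channel frequency
    (capturing the phase ambiguity of narrowband ITD cues)."""
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * freq * (itd - best_itd)))

def ild_tuning(ild, best_ild, sigma=10.0):
    """Gaussian ILD tuning (in dB)."""
    return math.exp(-0.5 * ((ild - best_ild) / sigma) ** 2)

def response(itd, ild, best_itd=0.0001, best_ild=5.0,
             freqs=(4000, 5000, 6000, 7000), thresh=0.5):
    # multiplication of ITD and ILD signals WITHIN each frequency channel...
    per_channel = [itd_tuning(itd, best_itd, f) * ild_tuning(ild, best_ild)
                   for f in freqs]
    # ...then linear-threshold integration ACROSS channels
    return max(sum(per_channel) - thresh, 0.0)

r_best = response(0.0001, 5.0)        # both cues at the preferred values
r_wrong_ild = response(0.0001, -20.0) # correct ITD, badly mismatched ILD
```

Because the multiplication happens per channel, a mismatched ILD suppresses every channel's contribution and the thresholded sum drops to zero, reproducing the joint ITD-ILD selectivity described in the abstract.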
Affiliation(s)
- Brian J. Fischer
- Department of Mathematics, Occidental College, Los Angeles, California, United States of America
- Division of Biology, California Institute of Technology, Pasadena, California, United States of America
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri, United States of America
- Charles H. Anderson
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri, United States of America
- José Luis Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
|
138
|
Philipp ST, Michler F, Wachtler T. Unsupervised learning of head-centered representations in a network of spiking neurons. BMC Neurosci 2009. [DOI: 10.1186/1471-2202-10-s1-p148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
|
139
|
Maier JX, Groh JM. Multisensory guidance of orienting behavior. Hear Res 2009; 258:106-12. [PMID: 19520151 DOI: 10.1016/j.heares.2009.05.008] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/11/2008] [Revised: 05/15/2009] [Accepted: 05/20/2009] [Indexed: 11/18/2022]
Abstract
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We review these differences at the neural level and discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
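The core transformation under discussion reduces to a simple relation (a schematic with made-up numbers, ignoring the rate-code versus place-code distinction the review emphasizes): an eye-centered target location is the head-centered sound direction minus the current eye-in-head position.

```python
def head_to_eye_centered(sound_azimuth_deg, eye_position_deg):
    """Convert a head-centered sound azimuth into eye-centered coordinates
    by subtracting the current horizontal eye-in-head position."""
    return sound_azimuth_deg - eye_position_deg

# sound 20 deg right of the head while the eyes are 10 deg left of center:
# the sound lies 30 deg right of the current gaze direction
target = head_to_eye_centered(20.0, -10.0)
```

The neural question reviewed here is how (and where along the auditory pathway) this subtraction-like interaction between auditory cues and eye-position signals is implemented.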
Affiliation(s)
- Joost X Maier
- Center for Cognitive Neuroscience, Department of Neurobiology, Department of Psychology and Neuroscience, Duke University, LSRC B203, Durham, NC 27708, USA.
|
140
|
Rank-order-selective neurons form a temporal basis set for the generation of motor sequences. J Neurosci 2009; 29:4369-80. [PMID: 19357265 DOI: 10.1523/jneurosci.0164-09.2009] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain.
|
141
|
Volcic R, Wijntjes MWA, Kappers AML. Haptic mental rotation revisited: multiple reference frame dependence. Acta Psychol (Amst) 2009; 130:251-9. [PMID: 19243731 DOI: 10.1016/j.actpsy.2009.01.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2008] [Revised: 12/11/2008] [Accepted: 01/20/2009] [Indexed: 12/01/2022] Open
Abstract
The nature of reference frames involved in haptic spatial processing was addressed by means of a haptic mental rotation task. Participants assessed the parity of two objects located in various spatial locations by exploring them with different hand orientations. The resulting response times were fitted with a triangle wave function. Phase shifts were found to depend on the relation between the hands and the objects, and between the objects and the body. We rejected the possibility that a single reference frame drives spatial processing. Instead, we found evidence of multiple interacting reference frames with the hand-centered reference frame playing the dominant role. We propose that a weighted average of the allocentric, the hand-centered and the body-centered reference frames influences the haptic encoding of spatial information. In addition, we showed that previous results can be reinterpreted within the framework of multiple reference frames. This mechanism has proved to be ubiquitously present in haptic spatial processing.
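The triangle-wave analysis of response times used in this study can be sketched as follows (the amplitude, baseline, and grid-search fit are illustrative assumptions): response time rises linearly with the angular disparity between the two objects, and the phase shift of the fitted wave indexes the dominant reference frame.

```python
import math

def triangle(angle_deg, phase, amp=1.0, baseline=1.0):
    """Period-360 triangle wave: response time grows linearly with the
    angular distance from the phase (the zero-disparity orientation)."""
    d = abs((angle_deg - phase + 180) % 360 - 180)
    return baseline + amp * d / 180.0

# synthetic response times generated with a 30-degree phase shift
angles = list(range(0, 360, 30))
rts = [triangle(a, 30.0) for a in angles]

def sse(phase):
    """Sum of squared errors of the triangle-wave fit at a candidate phase."""
    return sum((triangle(a, phase) - rt) ** 2 for a, rt in zip(angles, rts))

# recover the phase by least-squares grid search
best_phase = min(range(0, 360, 5), key=sse)
```

In the actual study, phase shifts estimated this way were found to depend on hand and body posture, which is the evidence for multiple weighted reference frames.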
Affiliation(s)
- Robert Volcic
- Helmholtz Institute, Utrecht University, Padualaan 8, 3584 CH Utrecht, The Netherlands.
|
142
|
Multimodal activity in the parietal cortex. Hear Res 2009; 258:100-5. [PMID: 19450431 DOI: 10.1016/j.heares.2009.01.011] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/11/2008] [Revised: 01/14/2009] [Accepted: 01/14/2009] [Indexed: 11/23/2022]
Abstract
Goal-directed behavior can be thought of as dynamic links between sensory stimuli and motor acts. Neural correlates of many of the intermediate events of both auditory and visual goal-directed behaviors are found in the posterior parietal cortex. Here, we review studies that have focused on how neurons in the lateral intraparietal area (area LIP) differentially process auditory and visual stimuli. Together, these studies suggest that area LIP contains a modality-dependent representation that is highly dependent on behavioral context.
|
143
|
Abstract
To act as computational devices, neurons must perform mathematical operations as they transform synaptic and modulatory input into output firing rate [1]. Experiments and theory suggest that neuronal firing typically represents the sum of synaptic inputs [1-3], an additive operation, but multiplication of inputs is essential for many computations [1]. Multiplication by a constant produces a change in the slope, or gain, of the input-output relation, amplifying or scaling down the neuron's sensitivity to changes in its input. Such gain modulation occurs in vivo, during contrast invariance of orientation tuning [4], attentional scaling [5], translation-invariant object recognition [6], auditory processing [7] and coordinate transformations [8,9]. Moreover, theoretical studies highlight the necessity of gain modulation in several of these tasks [9-11]. While potential cellular mechanisms for gain modulation have been identified, they often rely on membrane noise and require restrictive conditions to work [3,12-18]. Because nonlinear components are used to scale signals in electronics, we examined whether synaptic nonlinearities are involved in neuronal gain modulation. We used synaptic stimulation and dynamic clamp to investigate gain modulation in granule cells (GCs) in acute cerebellar slices. Here we show that when excitation is mediated by synapses with short-term depression (STD), neuronal gain is controlled by an inhibitory conductance in a noise-independent manner, allowing driving and modulatory inputs to be multiplied together. The nonlinearity introduced by STD transforms inhibition-mediated additive shifts in the input-output relation into multiplicative gain changes. When GCs were driven with bursts of high-frequency mossy fibre (MF) input, as observed in vivo [19,20], larger inhibition-mediated gain changes were observed, as expected with greater STD. Simulations of synaptic integration in more complex neocortical neurons confirm that STD-based gain modulation can also operate in neurons with large dendritic trees. Our results establish that neurons receiving depressing excitatory inputs can act as powerful multiplicative devices even when integration of postsynaptic conductances is linear.
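The core idea can be illustrated with a toy rate model (an assumption of this sketch, not the paper's conductance-based biophysics): steady-state STD makes the excitatory drive saturate with input rate, so subtracting a tonic inhibitory conductance rescales the asymptote of the input-output curve, a divisive gain change, whereas a non-depressing synapse gives a pure subtractive shift that vanishes at high rates.

```python
import numpy as np

# Toy rate model with illustrative parameters (not the paper's values):
# steady-state STD saturates excitatory drive at G_MAX.
G_MAX, R0, G_INH = 100.0, 50.0, 20.0

def out_std(rate, g_inh=0.0):
    """Output with depressing excitation: drive saturates at G_MAX."""
    return np.maximum(0.0, G_MAX * rate / (rate + R0) - g_inh)

def out_linear(rate, g_inh=0.0):
    """Output with non-depressing, linear excitation."""
    return np.maximum(0.0, 0.5 * rate - g_inh)

r = 4000.0  # input rate deep in the saturating regime
ratio_std = float(out_std(r, G_INH) / out_std(r))        # -> (G_MAX - G_INH) / G_MAX
ratio_lin = float(out_linear(r, G_INH) / out_linear(r))  # -> 1 (pure shift)
```

With depression, the inhibited curve settles at a fixed fraction of the control curve (a multiplicative gain change); without depression, the subtractive shift becomes negligible at high rates, so no gain change is seen.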
|
144
|
Abstract
Complex cognitive tasks present a range of computational and algorithmic challenges for neural accounts of both learning and inference. In particular, it is extremely hard to solve them using the sort of simple policies that have been extensively studied as solutions to elementary Markov decision problems. There has thus been recent interest in architectures for the instantiation and even learning of policies that are formally more complicated than these, involving operations such as gated working memory. However, the focus of these ideas and methods has largely been on what might best be considered as automatized, routine or, in the sense of animal conditioning, habitual, performance. Thus, they have yet to provide a route towards understanding the workings of rule-based control, which is critical for cognitively sophisticated competence. Here, we review a recent suggestion for a uniform architecture for habitual and rule-based execution, discuss some of the habitual mechanisms that underpin the use of rules, and consider a statistical relationship between rules and habits.
Affiliation(s)
- Peter Dayan
- Gatsby Computational Neuroscience Unit, UCL, London, UK.
|
145
|
Lehky SR, Peng X, McAdams CJ, Sereno AB. Spatial modulation of primate inferotemporal responses by eye position. PLoS One 2008; 3:e3492. [PMID: 18946508 PMCID: PMC2567040 DOI: 10.1371/journal.pone.0003492] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2008] [Accepted: 09/15/2008] [Indexed: 01/19/2023] Open
Abstract
Background: A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.
Methodology/Principal Findings: We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye-position effect in over 40% of recorded neurons with small gaze-angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.
Conclusions/Significance: These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object-shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.
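The reported combination of eye-position modulation with preserved shape selectivity is the classic signature of a gain field, and can be sketched in a few lines. The tuning values and gain factors below are invented for illustration, not recorded data.

```python
import numpy as np

# Hypothetical shape-tuning curve of one AIT neuron (spikes/s per shape);
# eye position is modeled as a multiplicative gain on the whole curve.
shape_tuning = np.array([40.0, 25.0, 10.0, 5.0])

def response(eye_gain):
    """Responses to the shape set at a given eye-position gain."""
    return eye_gain * shape_tuning

central = response(1.0)    # fixation straight ahead
eccentric = response(0.7)  # gaze shifted: all responses scaled down
# Multiplicative scaling leaves the rank order of shape preferences,
# i.e. the neuron's shape selectivity, unchanged.
same_ranking = bool(np.array_equal(np.argsort(central), np.argsort(eccentric)))
```

This is exactly the dissociation the study reports: eye position changes response magnitude, not which shapes the neuron prefers.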
Affiliation(s)
- Sidney R. Lehky
- Computational Neuroscience Laboratory, The Salk Institute, La Jolla, California, United States of America
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Xinmiao Peng
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Carrie J. McAdams
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
- Anne B. Sereno
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
|
146
|
Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2008; 19:1372-93. [PMID: 18842662 DOI: 10.1093/cercor/bhn177] [Citation(s) in RCA: 61] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
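As a purely illustrative reduction of the transformation the network learns (2D, a single gaze angle, and no head-roll or translation terms, all simplifying assumptions of this sketch), the reach vector can be obtained by rotating the retinal positions of target and hand into a body-fixed frame and subtracting:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def reach_vector(target_ret, hand_ret, gaze_angle):
    """Body-frame reach vector from retinal positions and gaze angle."""
    R = rot(gaze_angle)
    return R @ target_ret - R @ hand_ret

# Target 1 unit to the right on the retina, hand at the fovea,
# gaze rotated 90 degrees: the required reach points along the body's y-axis.
v = reach_vector(np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.pi / 2)
```

The network in the paper learns this mapping implicitly; the gain-field-like modulations in its hidden layers are how the rotation by eye and head orientation is realized at the population level.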
Affiliation(s)
- Gunnar Blohm
- Centre for Vision Research, York University, Toronto, Ontario, Canada
|
147
|
Reviews. PHILOSOPHICAL PSYCHOLOGY 2008. [DOI: 10.1080/09515080802426057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
148
|
Cassanello CR, Nihalani AT, Ferrera VP. Neuronal responses to moving targets in monkey frontal eye fields. J Neurophysiol 2008; 100:1544-56. [PMID: 18632886 DOI: 10.1152/jn.01401.2007] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Due to delays in visuomotor processing, eye movements directed toward moving targets must integrate both target position and velocity to be accurate. It is unknown where and how target velocity information is incorporated into the planning of rapid (saccadic) eye movements. We recorded the activity of neurons in the frontal eye fields (FEFs) while monkeys made saccades to stationary and moving targets. A substantial fraction of FEF neurons was found to encode not only the initial position of a moving target, but also the metrics (amplitude and direction) of the saccade needed to intercept the target. Many neurons also encoded target velocity in a nearly linear manner. The quasi-linear dependence of firing rate on target velocity means that the neuronal response can be directly read out to compute the future position of a target moving with constant velocity. We demonstrate this with a quantitative model in which saccade amplitude is encoded in the population response of neurons tuned to retinal target position and modulated by target velocity.
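The readout implied by the quasi-linear velocity dependence reduces to simple arithmetic: the interception point is the initial position plus velocity times the visuomotor delay. The delay value below is an assumption of this sketch, not a figure from the paper.

```python
DELAY = 0.2  # s; assumed visuomotor processing delay (illustrative)

def intercept_position(p0_deg, velocity_deg_s, delay=DELAY):
    """Predicted position of a constant-velocity target at saccade landing."""
    return p0_deg + velocity_deg_s * delay

# A target at 10 deg eccentricity moving at 20 deg/s calls for a
# 14-deg saccade rather than a 10-deg one.
amp = intercept_position(10.0, 20.0)
```

Because firing rate scales roughly linearly with velocity, a downstream population can compute this sum directly from FEF activity without an explicit extrapolation stage.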
Affiliation(s)
- Carlos R Cassanello
- Department of Neuroscience, Columbia University, and Keck-Mahoney Center for Mind and Brain, New York, New York, USA.
|
149
|
Ma WJ, Pouget A. Linking neurons to behavior in multisensory perception: a computational review. Brain Res 2008; 1242:4-12. [PMID: 18602905 DOI: 10.1016/j.brainres.2008.04.082] [Citation(s) in RCA: 57] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2008] [Revised: 04/25/2008] [Accepted: 04/27/2008] [Indexed: 11/18/2022]
Abstract
A large body of psychophysical and physiological findings has characterized how information is integrated across multiple senses. This work has focused on two major issues: how we integrate information, and when we integrate it, that is, how we decide whether two signals come from the same source or from different sources. Recent studies suggest that humans and animals use Bayesian strategies to solve both problems. With regard to how to integrate, computational studies have also started to shed light on the neural basis of this Bayes-optimal computation, suggesting that, if neuronal variability is Poisson-like, a simple linear combination of population activity is all that is required for optimality. We review both sets of developments, which together lay out a path towards a complete neural theory of multisensory perception.
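The "how" problem has a standard closed form under Gaussian likelihoods, which this sketch implements (the cue values are invented): each cue is weighted by its inverse variance, and the combined estimate is more reliable than either cue alone. Under Poisson-like neural variability, the same computation reduces to a linear sum of the two population activity patterns, as the review discusses.

```python
def integrate(mu1, var1, mu2, var2):
    """Bayes-optimal combination of two independent Gaussian cues."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)  # inverse-variance weight
    mu = w1 * mu1 + (1.0 - w1) * mu2               # reliability-weighted mean
    var = 1.0 / (1.0 / var1 + 1.0 / var2)          # combined variance
    return mu, var

# Example: a reliable visual cue at 0 deg and a noisier auditory cue at
# 4 deg; the fused estimate sits close to the visual cue.
mu, var = integrate(0.0, 1.0, 4.0, 4.0)
```

Note that the combined variance is always smaller than the smaller of the two input variances, which is the behavioral signature of optimal integration seen in psychophysics.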
Affiliation(s)
- Wei Ji Ma
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA.
|
150
|
Hernandez TD, Levitan CA, Banks MS, Schor CM. How does saccade adaptation affect visual perception? J Vis 2008; 8:3.1-16. [PMID: 18831626 DOI: 10.1167/8.8.3] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2007] [Accepted: 03/20/2008] [Indexed: 11/24/2022] Open
Abstract
Three signals are used to visually localize targets and stimulate saccades: (1) retinal location signals for intended saccade amplitude, (2) the sensory-motor transform (SMT) of retinal signals to extra-ocular muscle innervation, and (3) estimates of eye position from extra-retinal signals. We investigated the effects of adapting saccade amplitude to a double-step change in target location on perceived direction. In a flashed-pointing task, subjects pointed an unseen hand at a briefly displayed eccentric target without making a saccade. In a sustained-pointing task, subjects made a horizontal saccade to a double-step target. One second after the second step, they pointed an unseen hand at the final target position. After saccade-shortening adaptation, there was little change in hand-pointing azimuth toward the flashed target, suggesting that most saccade adaptation was caused by changes in the SMT. After saccade-lengthening adaptation, there were small changes in hand-pointing azimuth to flashed targets, indicating that 1/3 of saccade adaptation was caused by changes in estimated retinal location signals and 2/3 by changes in the SMT. The sustained-pointing task indicated that estimates of eye position adapted inversely with changes of the SMT. Changes in perceived direction resulting from saccade adaptation are mainly influenced by extra-retinal factors, with a small retinal component in the lengthening condition.
Affiliation(s)
- Teresa D Hernandez
- Vision Science Group, School of Optometry, University of California, Berkeley, CA, USA.
|