1
Rolls ET, Yan X, Deco G, Zhang Y, Jousmaki V, Feng J. A ventromedial visual cortical 'Where' stream to the human hippocampus for spatial scenes revealed with magnetoencephalography. Commun Biol 2024; 7:1047. PMID: 39183244; PMCID: PMC11345434; DOI: 10.1038/s42003-024-06719-z.
Abstract
The primate hippocampus, including the human hippocampus, is implicated in episodic memory and navigation and represents spatial views, very different from the place representations found in rodents. To understand this system in humans, and the computations performed, the pathway by which this spatial view information reaches the hippocampus was analysed. Whole-brain effective connectivity was measured with magnetoencephalography between 30 visual cortical regions and 150 other cortical regions using the HCP-MMP1 atlas in 21 participants while they performed a 0-back scene memory task. In a ventromedial visual stream, V1-V4 connect to the ProStriate region, where the retrosplenial scene area is located. The ProStriate region has connectivity to ventromedial visual regions VMV1-3 and VVC. These ventromedial regions connect to the medial parahippocampal regions PHA1-3, which, with the VMV regions, include the parahippocampal scene area. The medial parahippocampal regions have effective connectivity to the entorhinal cortex, perirhinal cortex, and hippocampus. In contrast, when faces were viewed, the effective connectivity was more through a ventrolateral visual cortical stream via the fusiform face cortex to the inferior temporal visual cortex regions TE2p and TE2a. A ventromedial visual cortical 'Where' stream to the hippocampus for spatial scenes was supported by diffusion tractography in 171 HCP participants at 7 T.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK.
- Department of Computer Science, University of Warwick, Coventry, UK.
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China.
- Xiaoqian Yan
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Gustavo Deco
- Department of Information and Communication Technologies, Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona, Spain
- Yi Zhang
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Veikko Jousmaki
- Aalto NeuroImaging, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
2
Rolls ET, Treves A. A theory of hippocampal function: New developments. Prog Neurobiol 2024; 238:102636. PMID: 38834132; DOI: 10.1016/j.pneurobio.2024.102636.
Abstract
We develop further here the only quantitative theory of the storage of information in the hippocampal episodic memory system and its recall back to the neocortex. The theory is upgraded to account for a revolution in understanding of spatial representations in the primate, including human, hippocampus, that go beyond the place where the individual is located, to the location being viewed in a scene. This is fundamental to much primate episodic memory and navigation: functions supported in humans by pathways that build 'where' spatial view representations by feature combinations in a ventromedial visual cortical stream, separate from those for 'what' object and face information to the inferior temporal visual cortex, and for reward information from the orbitofrontal cortex. Key new computational developments include the capacity of the CA3 attractor network for storing whole charts of space; how the correlations inherent in self-organizing continuous spatial representations impact the storage capacity; how the CA3 network can combine continuous spatial and discrete object and reward representations; the roles of the rewards that reach the hippocampus in the later consolidation into long-term memory in part via cholinergic pathways from the orbitofrontal cortex; and new ways of analysing neocortical information storage using Potts networks.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK.
3
Ugolini G, Graf W. Pathways from the superior colliculus and the nucleus of the optic tract to the posterior parietal cortex in macaque monkeys: Functional frameworks for representation updating and online movement guidance. Eur J Neurosci 2024; 59:2792-2825. PMID: 38544445; DOI: 10.1111/ejn.16314.
Abstract
The posterior parietal cortex (PPC) integrates multisensory and motor-related information for generating and updating body representations and movement plans. We used retrograde transneuronal transfer of rabies virus combined with a conventional tracer in macaque monkeys to identify direct and disynaptic pathways to the arm-related rostral medial intraparietal area (MIP), the ventral lateral intraparietal area (LIPv), belonging to the parietal eye field, and the pursuit-related lateral subdivision of the medial superior temporal area (MSTl). We found that these areas receive major disynaptic pathways via the thalamus from the nucleus of the optic tract (NOT) and the superior colliculus (SC), mainly ipsilaterally. NOT pathways, targeting MSTl most prominently, serve to process the sensory consequences of slow eye movements for which the NOT is the key sensorimotor interface. They potentially contribute to the directional asymmetry of the pursuit and optokinetic systems. MSTl and LIPv receive feedforward inputs from SC visual layers, which are potential correlates for fast detection of motion, perceptual saccadic suppression and visual spatial attention. MSTl is the target of efference copy pathways from saccade- and head-related compartments of SC motor layers and head-related reticulospinal neurons. They are potential sources of extraretinal signals related to eye and head movement in MSTl visual-tracking neurons. LIPv and rostral MIP receive efference copy pathways from all SC motor layers, providing online estimates of eye, head and arm movements. Our findings have important implications for understanding the role of the PPC in representation updating, internal models for online movement guidance, eye-hand coordination and optic ataxia.
Affiliation(s)
- Gabriella Ugolini
- Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR9197 CNRS - Université Paris-Saclay, Campus CEA Saclay, Saclay, France
- Werner Graf
- Department of Physiology and Biophysics, Howard University, Washington, DC, USA
4
Rolls ET. Two what, two where, visual cortical streams in humans. Neurosci Biobehav Rev 2024; 160:105650. PMID: 38574782; DOI: 10.1016/j.neubiorev.2024.105650.
Abstract
Recent cortical connectivity investigations lead to new concepts about 'What' and 'Where' visual cortical streams in humans, and how they connect to other cortical systems. A ventrolateral 'What' visual stream leads to the inferior temporal visual cortex for object and face identity, and provides 'What' information to the hippocampal episodic memory system, the anterior temporal lobe semantic system, and the orbitofrontal cortex emotion system. A superior temporal sulcus (STS) 'What' visual stream, utilising connectivity from the temporal and parietal visual cortex, responds to moving objects and faces and to face expression, and connects to the orbitofrontal cortex for emotion and social behaviour. A ventromedial 'Where' visual stream builds feature combinations for scenes, and provides 'Where' inputs via the parahippocampal scene area to the hippocampal episodic memory system that are also useful for landmark-based navigation. The dorsal 'Where' visual pathway to the parietal cortex provides for actions in space, but also provides coordinate transforms that supply inputs to the parahippocampal scene area for self-motion update of locations in scenes in the dark or when the view is obscured.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China.
5
Ruggiero G, Ruotolo F, Nunziata S, Abagnale S, Iachini T, Bartolo A. Spatial representations of objects used away and towards the body: The effect of near and far space. Q J Exp Psychol (Hove) 2024:17470218241235161. PMID: 38356182; DOI: 10.1177/17470218241235161.
Abstract
An action with an object can be accomplished only if we encode the position of the object with respect to our body (i.e., egocentrically) and/or to another element in the environment (i.e., allocentrically). However, some actions with objects are directed towards our body, such as brushing our teeth, and others away from it, such as writing. Objects can be near the body, that is within arm's reach, or far from the body, that is beyond arm's reach. The aim of this study was to verify whether the direction of use of objects influences the way we represent their position in both near and far space. Objects typically used towards the body (TB) or away from the body (AB) were presented in near or far space, and participants had to judge whether an object was closer to them (i.e., an egocentric judgement) or closer to another object (i.e., an allocentric judgement). Results showed that egocentric judgements on TB objects were more accurate in near than in far space. Moreover, allocentric judgements on AB objects were less accurate than egocentric judgements in near space but not in far space. These results are discussed with respect to the different roles that visuo-motor and visuo-spatial mechanisms play in near space and far space, respectively.
Affiliation(s)
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Francesco Ruotolo
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Scila Nunziata
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Univ. Lille, CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, Lille, France
- Simona Abagnale
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Univ. Lille, CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, Lille, France
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Angela Bartolo
- Univ. Lille, CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, Lille, France
- Institut Universitaire de France (IUF), Paris, France
6
Cross KP, Cook DJ, Scott SH. Rapid Online Corrections for Proprioceptive and Visual Perturbations Recruit Similar Circuits in Primary Motor Cortex. eNeuro 2024; 11:ENEURO.0083-23.2024. PMID: 38238081; PMCID: PMC10867723; DOI: 10.1523/eneuro.0083-23.2024.
Abstract
An important aspect of motor function is our ability to rapidly generate goal-directed corrections for disturbances to the limb or behavioral goal. The primary motor cortex (M1) is a key region involved in processing feedback for rapid motor corrections, yet we know little about how M1 circuits are recruited by different sources of sensory feedback to make rapid corrections. We trained two male monkeys (Macaca mulatta) to make goal-directed reaches and on random trials introduced different sensory errors by either jumping the visual location of the goal (goal jump), jumping the visual location of the hand (cursor jump), or applying a mechanical load to displace the hand (proprioceptive feedback). Sensory perturbations evoked a broad response in M1, with ∼73% of neurons (n = 257) responding to at least one of the sensory perturbations. Feedback responses were also similar, as the response ranges between the goal and cursor jumps were highly correlated (range of r = [0.91, 0.97]), as were the response ranges between the mechanical loads and the visual perturbations (range of r = [0.68, 0.86]). Lastly, we identified the neural subspace in which each perturbation response resided and found a strong overlap between the two visual perturbations (range of overlap index, 0.73-0.89) and between the mechanical loads and visual perturbations (range of overlap index, 0.36-0.47), indicating that each perturbation evoked a similar structure of activity at the population level. Collectively, our results indicate that rapid responses to errors from different sensory sources target similar overlapping circuits in M1.
Affiliation(s)
- Kevin P Cross
- Neuroscience Center, University of North Carolina, Chapel Hill, North Carolina 27599
- Douglas J Cook
- Department of Surgery, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Stephen H Scott
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Departments of Biomedical and Molecular Sciences and of Medicine, Queen's University, Kingston, Ontario K7L 3N6, Canada
7
Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. PMID: 37704829; PMCID: PMC10499799; DOI: 10.1038/s42003-023-05291-2.
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations, which are then tested against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada
8
Hidaka S, Chen N, Ishii N, Iketani R, Suzuki K, Longo MR, Wada M. No differences in implicit hand maps among different degrees of autistic traits. Autism Res 2023; 16:1750-1764. PMID: 37409496; DOI: 10.1002/aur.2979.
Abstract
People with autism spectrum disorder (ASD) or higher levels of autistic traits have atypical characteristics in sensory processing. Atypicalities have been reported for proprioceptive judgments, which are tightly related to the internal bodily representations underlying position sense. However, no research has directly investigated whether self-bodily representations are different in individuals with ASD. Implicit hand maps, estimated from participants' proprioceptive sensations without sight of the hand, are known to be distorted such that the shape is stretched along the medio-lateral hand axis, even in neurotypical participants. Here, with the view of ASD as falling on a continuous distribution in the general population, we explored differences in implicit body representations along with autistic traits by focusing on relationships between autistic traits and the magnitudes of the distortions in implicit hand maps (N ~ 100). We estimated the magnitudes of distortions in implicit hand maps both for fingers and hand surfaces on the dorsal and palmar sides of the hand. Autistic traits were measured by questionnaires (the Autism Spectrum [AQ] and Empathy/Systemizing [EQ-SQ] Quotients). The distortions in implicit hand maps were replicated in our experimental situations. However, there were no significant relationships between autistic traits and the magnitudes of the distortions, or with the within-individual variability in the maps and localization performance. Consistent results were observed in comparisons between IQ-matched samples of people with and without a diagnosis of ASD. Our findings suggest that the perceptual and neural processes for the implicit body representations underlying position sense are consistent across levels of autistic traits.
Affiliation(s)
- Souta Hidaka
- Department of Psychology, Rikkyo University, Tokyo, Japan
- Department of Psychology, Faculty of Human Sciences, Sophia University, Tokyo, Japan
- Na Chen
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa City, Japan
- The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Naomi Ishii
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa City, Japan
- Risa Iketani
- Department of Psychology, Rikkyo University, Tokyo, Japan
- Kirino Suzuki
- Department of Psychology, Rikkyo University, Tokyo, Japan
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Makoto Wada
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa City, Japan
9
Klautke J, Foster C, Medendorp WP, Heed T. Dynamic spatial coding in parietal cortex mediates tactile-motor transformation. Nat Commun 2023; 14:4532. PMID: 37500625; PMCID: PMC10374589; DOI: 10.1038/s41467-023-39959-4.
Abstract
Movements towards touch on the body require integrating tactile location and body posture information. Tactile processing and movement planning both rely on posterior parietal cortex (PPC), but their interplay is not understood. Here, human participants received tactile stimuli on their crossed and uncrossed feet, dissociating stimulus location relative to anatomy versus external space. Participants pointed to the touch or to the equivalent location on the other foot, which dissociates sensory and motor locations. Multi-voxel pattern analysis of concurrently recorded fMRI signals revealed that tactile location was coded anatomically in anterior PPC but spatially in posterior PPC during sensory processing. After movement instructions were specified, PPC exclusively represented the movement goal in space, in regions associated with visuo-motor planning and with regional overlap for sensory, rule-related, and movement coding. Thus, PPC flexibly updates its spatial codes to accommodate rule-based transformation of sensory input, generating movements to the environment and to one's own body alike.
Affiliation(s)
- Janina Klautke
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Celia Foster
- Biopsychology & Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Center of Excellence in Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- W Pieter Medendorp
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Tobias Heed
- Biopsychology & Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Center of Excellence in Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Cognitive Psychology, Department of Psychology, University of Salzburg, Salzburg, Austria
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
10
Rolls ET. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus 2023; 33:533-572. PMID: 36070199; PMCID: PMC10946493; DOI: 10.1002/hipo.23467.
Abstract
Hippocampal and parahippocampal gyrus spatial view neurons in primates respond to the spatial location being looked at. The representation is allocentric, in that the responses are to locations "out there" in the world, and are relatively invariant with respect to retinal position, eye position, head direction, and the place where the individual is located. The underlying connectivity in humans is from ventromedial visual cortical regions to the parahippocampal scene area, leading to the theory that spatial view cells are formed by combinations of overlapping feature inputs self-organized based on their closeness in space. Thus, although spatial view cells represent "where" for episodic memory and navigation, they are formed by ventral visual stream feature inputs in the parahippocampal gyrus, in the parahippocampal scene area. A second "where" driver of spatial view cells is parietal input, which it is proposed provides the idiothetic update for spatial view cells, used for memory recall and navigation when the spatial view details are obscured. Inferior temporal object "what" inputs and orbitofrontal cortex reward inputs connect to the human hippocampal system, and in macaques can be associated in the hippocampus with spatial view cell "where" representations to implement episodic memory. Hippocampal spatial view cells also provide a basis for navigation to a series of viewed landmarks, with the orbitofrontal cortex reward inputs to the hippocampus providing the goals for navigation, which can then be implemented by hippocampal connectivity in humans to parietal cortex regions involved in visuomotor actions in space. The presence of foveate vision and the highly developed temporal lobe for object and scene processing in primates, including humans, provides a basis for hippocampal spatial view cells being key to understanding episodic memory in the primate and human hippocampus, and the roles of this system in primate, including human, navigation.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
11
Rolls ET, Deco G, Huang CC, Feng J. The human posterior parietal cortex: effective connectome, and its relation to function. Cereb Cortex 2023; 33:3142-3170. PMID: 35834902; PMCID: PMC10401905; DOI: 10.1093/cercor/bhac266.
Abstract
The effective connectivity between 21 regions in the human posterior parietal cortex and 360 cortical regions was measured in 171 Human Connectome Project (HCP) participants using the HCP atlas, and complemented with functional connectivity and diffusion tractography. Intraparietal areas LIP, VIP, MIP, and AIP receive connectivity from early cortical visual regions, and connect to visuomotor regions such as the frontal eye fields, consistent with functions in eye saccades and tracking. Five superior parietal area 7 regions receive inputs from similar areas and from the intraparietal areas, but also receive somatosensory inputs and connect with premotor areas including area 6, consistent with functions in performing actions to reach for, grasp, and manipulate objects. In the anterior inferior parietal cortex, PFop, PFt, and PFcm are mainly somatosensory, and PF in addition receives visuo-motor and visual object information and is implicated in multimodal shape and body image representations. In the posterior inferior parietal cortex, PFm and PGs combine visuo-motor, visual object, and reward input and connect with the hippocampal system. PGi in addition provides a route to motion-related superior temporal sulcus regions involved in social interactions. PGp has connectivity with intraparietal regions involved in coordinate transforms and may be involved in idiothetic update of hippocampal visual scene representations.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco
- Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, Institute of Brain and Education Innovation, East China Normal University, Shanghai 200602, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
12
Yang C, Chen H, Naya Y. Allocentric information represented by self-referenced spatial coding in the primate medial temporal lobe. Hippocampus 2023; 33:522-532. PMID: 36728411; DOI: 10.1002/hipo.23501.
Abstract
For living organisms, the ability to acquire information regarding the external space around them is critical for future actions. While the information must be stored in an allocentric frame to facilitate its use in various spatial contexts, each case of use requires the information to be represented in a particular self-referenced frame. Previous studies have explored the neural substrates responsible for the linkage between self-referenced and allocentric spatial representations based on findings in rodents. However, the behaviors of rodents differ from those of primates in several respects; for example, rodents mainly explore their environments through locomotion, while primates use eye movements. In this review, we discuss the brain mechanisms responsible for this linkage in nonhuman primates. Based on recent physiological studies, we propose that two types of neural substrates link the first-person perspective with allocentric coding. The first is the view-center background signal, which represents an image of the background surrounding the current position of fixation on the retina. This perceptual signal is transmitted from the ventral visual pathway to the hippocampus (HPC) via the perirhinal cortex and parahippocampal cortex. Because images that share the same objective position in the environment tend to appear similar when seen from different self-positions, the view-center background signals are easily associated with one another in the formation of allocentric position coding and storage. The second type of neural substrate is the dynamic activity of HPC neurons that translates the stored location memory to the first-person perspective depending on the current spatial context.
Affiliation(s)
- Cen Yang: School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- He Chen: School of Psychological and Cognitive Sciences, Peking University, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yuji Naya: School of Psychological and Cognitive Sciences, Peking University, Beijing, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China

13
Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023:7024719. [PMID: 36734278 DOI: 10.1093/cercor/bhac541]
Abstract
Gaze change can misalign the spatial reference frames encoding visual and vestibular signals in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, heading discrimination was tested with visual, vestibular, and combined stimuli in a reaction-time task in which reaction time was under the subjects' control. We found that gaze changes induced substantial biases in perceived heading and increased subjects' discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and the perceived heading was biased in the direction opposite to the gaze. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and the perceived heading was biased in the same direction as the gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that the transformation of spatial reference frames may underlie these effects.
Affiliation(s)
- Wei Gao, Yipeng Lin, Yukun Lu, Huijia Zhan, Qianbing Li, Haoting Ge, Xiaodong Chen: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen, Jianing Han, Huajin Tang: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song: Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Zheng Lin: Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi: Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States

14
Rolls ET, Wirth S, Deco G, Huang C, Feng J. The human posterior cingulate, retrosplenial, and medial parietal cortex effective connectome, and implications for memory and navigation. Hum Brain Mapp 2023; 44:629-655. [PMID: 36178249 PMCID: PMC9842927 DOI: 10.1002/hbm.26089]
Abstract
The human posterior cingulate, retrosplenial, and medial parietal cortex are involved in memory and navigation. The functional anatomy underlying these cognitive functions was investigated by measuring the effective connectivity of these Posterior Cingulate Division (PCD) regions in the Human Connectome Project-MMP1 atlas in 171 HCP participants, and complemented with functional connectivity and diffusion tractography. First, the postero-ventral parts of the PCD (31pd, 31pv, 7m, d23ab, and v23ab) have effective connectivity with the temporal pole, inferior temporal visual cortex, cortex in the superior temporal sulcus implicated in auditory and semantic processing, with the reward-related vmPFC and pregenual anterior cingulate cortex, with the inferior parietal cortex, and with the hippocampal system. This connectivity implicates them in hippocampal episodic memory, providing routes for "what," reward and semantic schema-related information to access the hippocampus. Second, the antero-dorsal parts of the PCD (especially 31a and 23d, PCV, and also RSC) have connectivity with early visual cortical areas including those that represent spatial scenes, with the superior parietal cortex, with the pregenual anterior cingulate cortex, and with the hippocampal system. This connectivity implicates them in the "where" component for hippocampal episodic memory and for spatial navigation. The dorsal-transitional-visual (DVT) and ProStriate regions where the retrosplenial scene area is located have connectivity from early visual cortical areas to the parahippocampal scene area, providing a ventromedial route for spatial scene information to reach the hippocampus. These connectivities provide important routes for "what," reward, and "where" scene-related information for human hippocampal episodic memory and navigation. The midcingulate cortex provides a route from the anterior dorsal parts of the PCD and the supracallosal part of the anterior cingulate cortex to premotor regions.
Affiliation(s)
- Edmund T. Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Fudan University, Ministry of Education, Shanghai, China; Fudan ISTBI-ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
- Sylvia Wirth: Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, CNRS and University of Lyon, Bron, France
- Gustavo Deco: Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Brain and Cognition, Pompeu Fabra University, Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Barcelona, Spain
- Chu-Chung Huang: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Jianfeng Feng: Department of Computer Science, University of Warwick, Coventry, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Fudan University, Ministry of Education, Shanghai, China; Fudan ISTBI-ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China

15
Ruehl RM, Flanagin VL, Ophey L, Raiser TM, Seiderer K, Ertl M, Conrad J, Zu Eulenburg P. The human egomotion network. Neuroimage 2022; 264:119715. [PMID: 36334557 DOI: 10.1016/j.neuroimage.2022.119715]
Abstract
All volitional movement in a three-dimensional space requires multisensory integration, in particular of visual and vestibular signals. Where and how the human brain processes and integrates self-motion signals remains enigmatic. Here, we applied visual and vestibular self-motion stimulation using fast and precise whole-brain neuroimaging to delineate and characterize the entire cortical and subcortical egomotion network in a substantial cohort (n=131). Our results identify a core egomotion network consisting of areas in the cingulate sulcus (CSv, PcM/pCi), the cerebellum (uvula), and the temporo-parietal cortex, including area VPS and an unnamed region in the supramarginal gyrus. Based on its cerebral connectivity pattern and anatomical localization, we propose that this region represents the human homologue of macaque area 7a. Whole-brain connectivity and gradient analyses imply an essential role of the connections between the cingulate sulcus and the cerebellar uvula in egomotion perception, possibly via feedback loops involved in updating visuo-spatial and vestibular information. The unique functional connectivity patterns of PcM/pCi hint at a central role in the multisensory integration essential for the perception of self-referential spatial awareness. All cortical egomotion hubs showed modular functional connectivity with other visual, vestibular, somatosensory, and higher-order motor areas, underlining their mutual function in general sensorimotor integration.
Affiliation(s)
- Ria Maxine Ruehl: Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Virginia L Flanagin: Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; Graduate School of Systemic Neurosciences, Department of Biology II and Neurobiology, Großhaderner Str. 2, 82151 Planegg-Martinsried, Ludwig-Maximilians-University Munich, Germany
- Leoni Ophey: German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Theresa Marie Raiser: Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Katharina Seiderer: German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Matthias Ertl: Institute of Psychology and Inselspital, Fabrikstrasse 8, 3012 Bern, University of Bern, Switzerland
- Julian Conrad: Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; Department of Neurology, Theodor-Kutze Ufer 1-3, 68167 Mannheim, Medical Faculty Mannheim, University of Heidelberg, Germany
- Peter Zu Eulenburg: German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; Graduate School of Systemic Neurosciences, Department of Biology II and Neurobiology, Großhaderner Str. 2, 82151 Planegg-Martinsried, Ludwig-Maximilians-University Munich, Germany; Institute for Neuroradiology, University Hospital Munich, Marchionini Str. 15, 81377 Munich, Ludwig-Maximilians-University Munich, Germany

16
Rolls ET, Deco G, Huang CC, Feng J. Prefrontal and somatosensory-motor cortex effective connectivity in humans. Cereb Cortex 2022; 33:4939-4963. [PMID: 36227217 DOI: 10.1093/cercor/bhac391]
Abstract
Effective connectivity, functional connectivity, and tractography were measured between 57 cortical frontal and somatosensory regions and the 360 cortical regions in the Human Connectome Project (HCP) multimodal parcellation atlas for 171 HCP participants. A ventral somatosensory stream connects from 3b and 3a via 1 and 2 and then via opercular and frontal opercular regions to the insula, which then connects to inferior parietal PF regions. This stream is implicated in "what"-related somatosensory processing of objects and of the body and in combining with visual inputs in PF. A dorsal "action" somatosensory stream connects from 3b and 3a via 1 and 2 to parietal area 5 and then 7. Inferior prefrontal regions have connectivity with the inferior temporal visual cortex and orbitofrontal cortex, are implicated in working memory for "what" processing streams, and provide connectivity to language systems, including 44, 45, 47l, TPOJ1, and superior temporal visual area. The dorsolateral prefrontal cortex regions that include area 46 have connectivity with parietal area 7 and somatosensory inferior parietal regions and are implicated in working memory for actions and planning. The dorsal prefrontal regions, including 8Ad and 8Av, have connectivity with visual regions of the inferior parietal cortex, including PGs and PGi, and are implicated in visual and auditory top-down attention.
Affiliation(s)
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco: Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
- Jianfeng Feng: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China

17
Rolls ET. The hippocampus, ventromedial prefrontal cortex, and episodic and semantic memory. Prog Neurobiol 2022; 217:102334. [PMID: 35870682 DOI: 10.1016/j.pneurobio.2022.102334]
Abstract
The human ventromedial prefrontal cortex (vmPFC)/anterior cingulate cortex is implicated in reward and emotion, but also in memory. It is shown how the human orbitofrontal cortex, connecting with the vmPFC and anterior cingulate cortex, provides a route to the hippocampus by which reward and emotional value can be incorporated into episodic memory, enabling memory of where a reward was seen. It is proposed that this value component causes episodic memories with some value content to be repeatedly recalled from the hippocampus, so that they are more likely to become incorporated into neocortical semantic and autobiographical memories. The same orbitofrontal and anterior cingulate regions also connect in humans to the septal and basal forebrain cholinergic nuclei, thereby helping to consolidate memory and helping to account for why damage to the vmPFC impairs memory. The human hippocampus and vmPFC thus contribute in complementary ways to forming episodic and semantic memories.
Affiliation(s)
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK

18
McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust Coding of Eye Position in Posterior Parietal Cortex despite Context-Dependent Tuning. J Neurosci 2022; 42:4116-4130. [PMID: 35410881 PMCID: PMC9121829 DOI: 10.1523/jneurosci.0674-21.2022]
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task, including (1) a visually sparse epoch, (2) a visually rich epoch, (3) a "go" epoch in which the reach was cued, and (4) during the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike-counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was nevertheless able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders that were optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts.
SIGNIFICANCE STATEMENT: Neurons in posterior parietal cortex (PPC) which are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation), and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
Affiliation(s)
- Jamie R McFadyen: Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Barbara Heider: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Anushree N Karkhanis: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Shaun L Cloherty: School of Engineering, RMIT University, Melbourne, VIC 3001, Australia
- Fabian Muñoz: Department of Neuroscience, Columbia University, New York, NY 10027; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Ralph M Siegel: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Adam P Morris: Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC 3800, Australia; Monash Data Futures Institute, Monash University, Clayton, VIC 3800, Australia

19
Goodroe SC, Spiers HJ. Extending neural systems for navigation to hunting behavior. Curr Opin Neurobiol 2022; 73:102545. [PMID: 35483308 DOI: 10.1016/j.conb.2022.102545]
Abstract
For decades, a central question in neuroscience has been: How does the brain support navigation? Recent research on navigation has explored how brain regions support the capacity to adapt to changes in the environment and track the distance and direction to goal locations. Here, we provide a brief review of this literature and speculate how these neural systems may be involved in another, parallel behavior: hunting. Hunting shares many of the same challenges as navigation. Like navigation, hunting requires the hunter to orient towards a goal and minimize the distance to it while traveling. Likewise, hunting may require the accommodation of detours to locate prey or the exploitation of shortcuts for a quicker capture. Recent research suggests that neurons in the periaqueductal gray, hypothalamus, and dorsal anterior cingulate play key roles in such hunting behavior. In this review, we speculate on how these regions may operate functionally with other key brain regions involved in navigation, such as the hippocampus, to support hunting. Additionally, we posit that hunting in a group presents an additional set of challenges, where success relies on multicentric tracking and prediction of prey position as well as the positions of co-hunters.
Affiliation(s)
- Sarah C Goodroe: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Hugo J Spiers: Institute of Behavioural Neuroscience, Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, United Kingdom

20
Fujimoto K, Ashida H. Postural adjustment as a function of scene orientation. J Vis 2022; 22:1. [PMID: 35234839 PMCID: PMC8899856 DOI: 10.1167/jov.22.4.1]
Abstract
Visual orientation plays an important role in postural control, but the specific characteristics of the postural response to orientation remain unknown. In this study, we investigated the relationship between postural response and the subjective visual vertical (SVV) as a function of scene orientation. We presented a virtual room including everyday objects through a head-mounted display and measured head tilt around the naso-occipital axis. The room orientation varied from 165° counterclockwise to 180° clockwise around the center of the display in 15° increments. In a separate session, we also conducted a rod adjustment task to record the participant's SVV in the tilted room. We applied a weighted vector sum model to head tilt and SVV error and obtained the weights of three visual cues to orientation: frame, horizon, and polarity. We found significant contributions of all visual cues to head tilt and SVV error. For SVV error, frame cues made the largest contribution, whereas polarity cues made the smallest. For head tilt, there was no clear difference across visual cue types, although the order of contribution was similar to that for the SVV. These findings suggest that multiple visual cues to orientation are involved in postural control and imply different representations of vertical orientation across postural control and perception.
Affiliation(s)
- Kanon Fujimoto: Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Hiroshi Ashida: Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan

21
Cisek P. Evolution of behavioural control from chordates to primates. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200522. [PMID: 34957850 PMCID: PMC8710891 DOI: 10.1098/rstb.2020.0522]
Abstract
This article outlines a hypothetical sequence of evolutionary innovations, along the lineage that produced humans, which extended behavioural control from simple feedback loops to sophisticated control of diverse species-typical actions. I begin with basic feedback mechanisms of ancient mobile animals and follow the major niche transitions from aquatic to terrestrial life, the retreat into nocturnality in early mammals, the transition to arboreal life and the return to diurnality. Along the way, I propose a sequence of elaboration and diversification of the behavioural repertoire and associated neuroanatomical substrates. This includes midbrain control of approach versus escape actions, telencephalic control of local versus long-range foraging, detection of affordances by the dorsal pallium, diversified control of nocturnal foraging in the mammalian neocortex and expansion of primate frontal, temporal and parietal cortex to support a wide variety of primate-specific behavioural strategies. The result is a proposed functional architecture consisting of parallel control systems, each dedicated to specifying the affordances for guiding particular species-typical actions, which compete against each other through a hierarchy of selection mechanisms. This article is part of the theme issue 'Systems neuroscience through the lens of evolutionary theory'.
Affiliation(s)
- Paul Cisek: Department of Neuroscience, University of Montreal, CP 6123, Succursale Centre-ville, Montréal, Québec, Canada H3C 3J7

22
Ma Q, Rolls ET, Huang CC, Cheng W, Feng J. Extensive cortical functional connectivity of the human hippocampal memory system. Cortex 2021; 147:83-101. [PMID: 35026557 DOI: 10.1016/j.cortex.2021.11.014]
Abstract
The cortical connections of the human hippocampal memory system are fundamental to understanding its operation in health and disease, especially in the context of the great development of the human cortex. The functional connectivity of the human hippocampal system was analyzed in 172 participants imaged at 7T in the Human Connectome Project. The human hippocampus has high functional connectivity not only with the entorhinal cortex, but also with areas that are more distant in the ventral 'what' stream, including the perirhinal cortex and temporal cortical visual areas. Parahippocampal gyrus TF in humans has connectivity with this ventral 'what' subsystem. Correspondingly for the dorsal stream, the hippocampus has high functional connectivity not only with the presubiculum, but also with areas more distant: the medial parahippocampal cortex TH, which includes the parahippocampal place or scene area; the posterior cingulate, including retrosplenial cortex; and the parietal cortex. Further, there is considerable cross-connectivity between the ventral and dorsal streams with the hippocampus. The findings are supported by anatomical connections and together provide an unprecedented, quantitative overview of the extensive cortical connectivity of the human hippocampal system, which goes beyond hierarchically organised and segregated pathways connecting the hippocampus and neocortex and leads to new concepts about the operation of the hippocampal memory system in humans.
Collapse
Affiliation(s)
- Qing Ma
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
| | - Edmund T Rolls
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Department of Computer Science, University of Warwick, Coventry, UK; Oxford Centre for Computational Neuroscience, Oxford, UK.
| | - Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Wei Cheng
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China; Fudan ISTBI-ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China.
- Jianfeng Feng
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Department of Computer Science, University of Warwick, Coventry, UK; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China; Fudan ISTBI-ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China.
23
Foster C, Sheng WA, Heed T, Ben Hamed S. The macaque ventral intraparietal area has expanded into three homologue human parietal areas. Prog Neurobiol 2021; 209:102185. [PMID: 34775040] [DOI: 10.1016/j.pneurobio.2021.102185]
Abstract
The macaque ventral intraparietal area (VIP) in the fundus of the intraparietal sulcus has been implicated in a diverse range of sensorimotor and cognitive functions such as motion processing, multisensory integration, processing of head peripersonal space, defensive behavior, and numerosity coding. Here, we exhaustively review macaque VIP function, cytoarchitectonics, and anatomical connectivity and integrate it with human studies that have attempted to identify a potential human VIP homologue. We show that human VIP research has consistently identified three, rather than one, bilateral parietal areas that each appear to subsume some, but not all, of the macaque area's functionality. Available evidence suggests that this human "VIP complex" has evolved as an expansion of the macaque area, but that some precursory specialization within macaque VIP has been previously overlooked. The three human areas are dominated, roughly, by coding the head or self in the environment, visual heading direction, and the peripersonal environment around the head, respectively. A unifying functional principle may be best described as prediction in space and time, linking VIP to state estimation as a key parietal sensorimotor function. VIP's expansive differentiation of head and self-related processing may have been key in the emergence of human bodily self-consciousness.
Affiliation(s)
- Celia Foster
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Wei-An Sheng
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
- Tobias Heed
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany; Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria.
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France.
24
Rolls ET. Learning Invariant Object and Spatial View Representations in the Brain Using Slow Unsupervised Learning. Front Comput Neurosci 2021; 15:686239. [PMID: 34366818] [PMCID: PMC8335547] [DOI: 10.3389/fncom.2021.686239]
Abstract
First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. 
This enables hippocampal spatial view cells to use idiothetic, self-motion, signals for navigation when the view details are obscured for short periods.
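The associative rule with a short-term memory trace summarized above can be sketched in a few lines. This is a toy illustration of a trace-style update under assumed parameter values (learning rate, trace decay), not the VisNet implementation itself:

```python
import numpy as np

def trace_learning_pass(w, transforms, alpha=0.05, eta=0.8):
    """One presentation sweep of a trace-style associative update.

    `transforms` is a sequence of input vectors assumed to be successive
    views (transforms) of the same object; the short-term memory trace
    links them so the output neuron can learn a transform-invariant
    response. Parameter values are illustrative assumptions, not
    VisNet's published settings.
    """
    trace = 0.0
    for x in transforms:
        y = float(w @ x)                       # postsynaptic firing
        trace = (1.0 - eta) * y + eta * trace  # short-term memory trace
        w = w + alpha * trace * x              # associative (Hebb-like) step
    return w / np.linalg.norm(w)               # weight normalization, as in competitive nets

rng = np.random.default_rng(0)
w = trace_learning_pass(rng.random(16), [rng.random(16) for _ in range(5)])
```

Because the trace carries activity from one transform to the next, inputs that occur close together in time (statistically, views of the same object) strengthen onto the same output neuron.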
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom; Department of Computer Science, University of Warwick, Coventry, United Kingdom
25
Neural Representations of Covert Attention across Saccades: Comparing Pattern Similarity to Shifting and Holding Attention during Fixation. eNeuro 2021; 8:ENEURO.0186-20.2021. [PMID: 33558269] [PMCID: PMC8026251] [DOI: 10.1523/eneuro.0186-20.2021]
Abstract
We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.
26
Rolls ET. Neurons including hippocampal spatial view cells, and navigation in primates including humans. Hippocampus 2021; 31:593-611. [PMID: 33760309] [DOI: 10.1002/hipo.23324]
Abstract
A new theory is proposed of mechanisms of navigation in primates including humans in which spatial view cells found in the primate hippocampus and parahippocampal gyrus are used to guide the individual from landmark to landmark. The navigation involves approach to each landmark in turn (taxis), using spatial view cells to identify the next landmark in the sequence, and does not require a topological map. Two other cell types found in primates, whole body motion cells, and head direction cells, can be utilized in the spatial view cell navigational mechanism, but are not essential. If the landmarks become obscured, then the spatial view representations can be updated by self-motion (idiothetic) path integration using spatial coordinate transform mechanisms in the primate dorsal visual system to transform from egocentric to allocentric spatial view coordinates. A continuous attractor network or time cells or working memory is used in this approach to navigation to encode and recall the spatial view sequences involved. I also propose how navigation can be performed using a further type of neuron found in primates, allocentric-bearing-to-a-landmark neurons, in which changes of direction are made when a landmark reaches a particular allocentric bearing. This is useful if a landmark cannot be approached. The theories are made explicit in models of navigation, which are then illustrated by computer simulations. These types of navigation are contrasted with triangulation, which requires a topological map. It is proposed that the first strategy utilizing spatial view cells is used frequently in humans, and is relatively simple because primates have spatial view neurons that respond allocentrically to locations in spatial scenes. An advantage of this approach to navigation is that hippocampal spatial view neurons are also useful for episodic memory, and for imagery.
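The landmark-to-landmark (taxis) strategy described in this abstract can be illustrated with a toy simulation: the agent simply heads toward the currently identified landmark and switches to the next one on arrival, with no topological map. Step size and arrival radius are our illustrative assumptions, not parameters from the paper:

```python
import math

def navigate_by_landmarks(start, landmarks, step=0.5, arrive=0.6):
    """Toy landmark-to-landmark (taxis) navigation: approach the current
    landmark, then switch to the next in the remembered sequence.
    No topological map is built or consulted."""
    x, y = start
    path = [(x, y)]
    for lx, ly in landmarks:                       # spatial-view sequence to recall
        while math.hypot(lx - x, ly - y) > arrive:
            heading = math.atan2(ly - y, lx - x)   # turn toward the landmark (taxis)
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            path.append((x, y))
    return path

path = navigate_by_landmarks((0.0, 0.0), [(4.0, 0.0), (4.0, 4.0)])
```

In the theory, identifying each `(lx, ly)` target would be the job of spatial view cells; here the landmark coordinates are simply given to keep the geometry of the strategy visible.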
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
27
Numssen O, Bzdok D, Hartwigsen G. Functional specialization within the inferior parietal lobes across cognitive domains. eLife 2021; 10:63591. [PMID: 33650486] [PMCID: PMC7946436] [DOI: 10.7554/elife.63591]
Abstract
The inferior parietal lobe (IPL) is a key neural substrate underlying diverse mental processes, from basic attention to language and social cognition, that define human interactions. Its putative domain-global role appears to tie into poorly understood differences between cognitive domains in both hemispheres. Across attentional, semantic, and social cognitive tasks, our study explored functional specialization within the IPL. The task specificity of IPL subregion activity was substantiated by distinct predictive signatures identified by multivariate pattern-learning algorithms. Moreover, the left and right IPL exerted domain-specific modulation of effective connectivity among their subregions. Task-evoked functional interactions of the anterior and posterior IPL subregions involved recruitment of distributed cortical partners. While anterior IPL subregions were engaged in strongly lateralized coupling links, both posterior subregions showed more symmetric coupling patterns across hemispheres. Our collective results shed light on how under-appreciated hemispheric specialization in the IPL supports some of the most distinctive human mental capacities.
Affiliation(s)
- Ole Numssen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Leipzig, Germany
- Danilo Bzdok
- Department of Biomedical Engineering, McConnell Brain Imaging Centre, Montreal Neurological Institute, Faculty of Medicine, McGill University, Montreal, Canada; Mila - Quebec Artificial Intelligence Institute, Montreal, Canada
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Leipzig, Germany
28
D'Antonio E, Galofaro E, Zenzeri J, Patané F, Konczak J, Casadio M, Masia L. Robotic Assessment of Wrist Proprioception During Kinaesthetic Perturbations: A Neuroergonomic Approach. Front Neurorobot 2021; 15:640551. [PMID: 33732131] [PMCID: PMC7958920] [DOI: 10.3389/fnbot.2021.640551]
Abstract
Position sense refers to an aspect of proprioception crucial for motor control and learning. The onset of neurological diseases can damage such sensory afference, with consequent motor disorders dramatically reducing the associated recovery process. In regular clinical practice, assessment of proprioceptive deficits is run by means of clinical scales which do not provide quantitative measurements. However, existing robotic solutions usually do not involve multi-joint movements but are mostly applied to a single proximal or distal joint. The present work provides a testing paradigm for assessing proprioception during coordinated multi-joint distal movements and in presence of kinaesthetic perturbations: we evaluated healthy subjects' ability to match proprioceptive targets along two of the three wrist's degrees of freedom, flexion/extension and abduction/adduction. By introducing rotations along the pronation/supination axis not involved in the matching task, we tested two experimental conditions, which differed in terms of the temporal imposition of the external perturbation: in the first one, the disturbance was provided after the presentation of the proprioceptive target, while in the second one, the rotation of the pronation/supination axis was imposed during the proprioceptive target presentation. We investigated whether (i) the amplitude of the perturbation along the pronation/supination axis would lead to proprioceptive miscalibration; and (ii) the encoding of the proprioceptive target would be influenced by the presentation sequence between the target itself and the rotational disturbance. Eighteen participants were tested by means of a haptic neuroergonomic wrist device: our findings provided evidence that the order of disturbance presentation does not alter proprioceptive acuity. Yet, a further effect has been noticed: proprioception is highly anisotropic and dependent on perturbation amplitude.
Unexpectedly, the configuration of the forearm highly influences sensory feedbacks, and significantly alters subjects' performance in matching the proprioceptive targets, defining portions of the wrist workspace where kinaesthetic and proprioceptive acuity are more sensitive. This finding may suggest solutions and applications in multiple fields: from general haptics where, knowing how wrist configuration influences proprioception, might suggest new neuroergonomic solutions in device design, to clinical evaluation after neurological damage, where accurately assessing proprioceptive deficits can dramatically complement regular therapy for a better prediction of the recovery path.
Affiliation(s)
- Erika D'Antonio
- Assistive Robotics and Interactive Exosuits (ARIES) Laboratory, Institute of Computer Engineering (ZITI), University of Heidelberg, Heidelberg, Germany
- Elisa Galofaro
- Assistive Robotics and Interactive Exosuits (ARIES) Laboratory, Institute of Computer Engineering (ZITI), University of Heidelberg, Heidelberg, Germany; Department of Informatics, Bioengineering, Robotics, and System Engineering (DIBRIS), University of Genoa, Genoa, Italy
- Jacopo Zenzeri
- Robotics, Brain, and Cognitive Sciences Unit, Italian Institute of Technology, Genoa, Italy
- Fabrizio Patané
- Mechanical Measurements and Microelectronics (M3Lab) Lab, Engineering Department, University Niccolò Cusano, Rome, Italy
- Jürgen Konczak
- Human Sensorimotor Control Laboratory, University of Minnesota, Minneapolis, MN, United States
- Maura Casadio
- Department of Informatics, Bioengineering, Robotics, and System Engineering (DIBRIS), University of Genoa, Genoa, Italy
- Lorenzo Masia
- Assistive Robotics and Interactive Exosuits (ARIES) Laboratory, Institute of Computer Engineering (ZITI), University of Heidelberg, Heidelberg, Germany; Faculty of Engineering, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark (SDU), Odense, Denmark
29
Baumann O, Mattingley JB. Extrahippocampal contributions to spatial navigation in humans: A review of the neuroimaging evidence. Hippocampus 2021; 31:640-657. [DOI: 10.1002/hipo.23313]
Affiliation(s)
- Oliver Baumann
- School of Psychology, Bond University, Robina, Queensland, Australia
- Jason B. Mattingley
- Queensland Brain Institute, The University of Queensland, Brisbane, Queensland, Australia
- School of Psychology, The University of Queensland, Brisbane, Queensland, Australia
- Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario, Canada
30
Ertl M, Zu Eulenburg P, Woller M, Dieterich M. The role of delta and theta oscillations during ego-motion in healthy adult volunteers. Exp Brain Res 2021; 239:1073-1083. [PMID: 33534022] [PMCID: PMC8068649] [DOI: 10.1007/s00221-020-06030-3]
Abstract
The successful cortical processing of multisensory input typically requires the integration of data represented in different reference systems to perform many fundamental tasks, such as bipedal locomotion. Animal studies have provided insights into the integration processes performed by the neocortex and have identified region specific tuning curves for different reference frames during ego-motion. Yet, there remains almost no data on this topic in humans. In this study, an experiment originally performed in animal research with the aim to identify brain regions modulated by the position of the head and eyes relative to a translational ego-motion was adapted for humans. Subjects sitting on a motion platform were accelerated along a translational pathway with either eyes and head aligned or a 20° yaw-plane offset relative to the motion direction while EEG was recorded. Using a distributed source localization approach, it was found that activity in area PFm, a part of Brodmann area 40, was modulated by the congruency of translational motion direction, eye, and head position. In addition, an asymmetry between the hemispheres in the opercular-insular region was observed during the cortical processing of the vestibular input. A frequency specific analysis revealed that low-frequency oscillations in the delta- and theta-band are modulated by vestibular stimulation. Source-localization estimated that the observed low-frequency oscillations are generated by vestibular core-regions, such as the parieto-opercular region and frontal areas like the mid-orbital gyrus and the medial frontal gyrus.
Affiliation(s)
- M Ertl
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland.
- Department of Neurology, Ludwig-Maximilians-Universität München, München, Germany.
- P Zu Eulenburg
- German Center for Vertigo and Balance Disorders (IFBLMU), Ludwig-Maximilians-Universität München, München, Germany
- Institute for Neuroradiology, Ludwig-Maximilians-Universität München, München, Germany
- M Woller
- Department of Neurology, Ludwig-Maximilians-Universität München, München, Germany
- M Dieterich
- Department of Neurology, Ludwig-Maximilians-Universität München, München, Germany
- German Center for Vertigo and Balance Disorders (IFBLMU), Ludwig-Maximilians-Universität München, München, Germany
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians-Universität München, München, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
31
Bretas R, Taoka M, Hihara S, Cleeremans A, Iriki A. Neural Evidence of Mirror Self-Recognition in the Secondary Somatosensory Cortex of Macaque: Observations from a Single-Cell Recording Experiment and Implications for Consciousness. Brain Sci 2021; 11:157. [PMID: 33503993] [PMCID: PMC7911187] [DOI: 10.3390/brainsci11020157]
Abstract
Despite mirror self-recognition being regarded as a classical indication of self-awareness, little is known about its neural underpinnings. An increasing body of evidence pointing to a role of multimodal somatosensory neurons in self-recognition guided our investigation toward the secondary somatosensory cortex (SII), as we observed single-neuron activity from a macaque monkey sitting in front of a mirror. The monkey was previously habituated to the mirror, successfully acquiring the ability of mirror self-recognition. While the monkey underwent visual and somatosensory stimulation, multimodal visual and somatosensory activity was detected in the SII, with neurons found to respond to stimuli seen through the mirror. Responses were also modulated by self-related or non-self-related stimuli. These observations corroborate that vision is an important aspect of SII activity, with electrophysiological evidence of mirror self-recognition at the neuronal level, even when such an ability is not innate. We also show that the SII may be involved in distinguishing self and non-self. Together, these results point to the involvement of the SII in the establishment of bodily self-consciousness.
Affiliation(s)
- Rafael Bretas
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan; (R.B.); (M.T.)
- Miki Taoka
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan; (R.B.); (M.T.)
- Sayaka Hihara
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan; (R.B.); (M.T.)
- Axel Cleeremans
- Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, ON M5G 1M1, Canada;
- Consciousness, Cognition, and Computation Group (CO3), Centre for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), B-1050 Brussels, Belgium
- Atsushi Iriki
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan; (R.B.); (M.T.)
- Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, ON M5G 1M1, Canada;
32
Oh SW, Son SJ, Morris JA, Choi JH, Lee C, Rah JC. Comprehensive Analysis of Long-Range Connectivity from and to the Posterior Parietal Cortex of the Mouse. Cereb Cortex 2021; 31:356-378. [PMID: 32901251] [DOI: 10.1093/cercor/bhaa230]
Abstract
The posterior parietal cortex (PPC) is a major multimodal association cortex implicated in a variety of higher order cognitive functions, such as visuospatial perception, spatial attention, categorization, and decision-making. The PPC is known to receive inputs from a collection of sensory cortices as well as various subcortical areas and integrate those inputs to facilitate the execution of functions that require diverse information. Although many recent works have been performed with the mouse as a model system, a comprehensive understanding of long-range connectivity of the mouse PPC is scarce, preventing integrative interpretation of the rapidly accumulating functional data. In this study, we conducted a detailed neuroanatomic and bioinformatic analysis of the Allen Mouse Brain Connectivity Atlas data to summarize afferent and efferent connections to/from the PPC. Then, we analyzed variability between subregions of the PPC, functional/anatomical modalities, and species, and summarized the organizational principle of the mouse PPC. Finally, we confirmed key results by using additional neurotracers. A comprehensive survey of the connectivity will provide an important future reference to comprehend the function of the PPC and allow effective paths forward to various studies using mice as a model system.
Affiliation(s)
- Sook Jin Son
- Laboratory of Neurophysiology, Korea Brain Research Institute, Daegu 41062, Korea
- Joon Ho Choi
- Laboratory of Neurophysiology, Korea Brain Research Institute, Daegu 41062, Korea
- Changkyu Lee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Jong-Cheol Rah
- Laboratory of Neurophysiology, Korea Brain Research Institute, Daegu 41062, Korea; Department of Brain and Cognitive Sciences, DGIST, Daegu 42988, Korea
33
Avila E, Lakshminarasimhan KJ, DeAngelis GC, Angelaki DE. Visual and Vestibular Selectivity for Self-Motion in Macaque Posterior Parietal Area 7a. Cereb Cortex 2020; 29:3932-3947. [PMID: 30365011] [DOI: 10.1093/cercor/bhy272]
Abstract
We examined the responses of neurons in posterior parietal area 7a to passive rotational and translational self-motion stimuli, while systematically varying the speed of visually simulated (optic flow cues) or actual (vestibular cues) self-motion. Contrary to a general belief that responses in area 7a are predominantly visual, we found evidence for a vestibular dominance in self-motion processing. Only a small fraction of neurons showed multisensory convergence of visual/vestibular and linear/angular self-motion cues. These findings suggest possibly independent neuronal population codes for visual versus vestibular and linear versus angular self-motion. Neural responses scaled with self-motion magnitude (i.e., speed) but temporal dynamics were diverse across the population. Analyses of laminar recordings showed a strong distance-dependent decrease for correlations in stimulus-induced (signal correlation) and stimulus-independent (noise correlation) components of spike-count variability, supporting the notion that neurons are spatially clustered with respect to their sensory representation of motion. Single-unit and multiunit response patterns were also correlated, but no other systematic dependencies on cortical layers or columns were observed. These findings describe a likely independent multimodal neural code for linear and angular self-motion in a posterior parietal area of the macaque brain that is connected to the hippocampal formation.
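The split of spike-count variability into stimulus-induced (signal) and stimulus-independent (noise) correlation components can be sketched with the standard textbook decomposition. This is an illustration of the general method, not the authors' exact analysis pipeline, and all data below are simulated:

```python
import numpy as np

def signal_and_noise_correlation(counts_a, counts_b):
    """Signal and noise correlation between two neurons.

    `counts_a` and `counts_b` are (n_stimuli, n_trials) spike-count
    arrays. Signal correlation compares trial-averaged tuning curves;
    noise correlation compares per-trial residuals around each
    stimulus mean, pooled across stimuli."""
    sig = np.corrcoef(counts_a.mean(axis=1), counts_b.mean(axis=1))[0, 1]
    res_a = (counts_a - counts_a.mean(axis=1, keepdims=True)).ravel()
    res_b = (counts_b - counts_b.mean(axis=1, keepdims=True)).ravel()
    noise = np.corrcoef(res_a, res_b)[0, 1]
    return sig, noise

# Two simulated neurons sharing a tuning curve but with independent
# trial-to-trial Poisson variability: high signal, near-zero noise correlation.
rng = np.random.default_rng(1)
tuning = rng.poisson(10.0, size=8).astype(float)
a = rng.poisson(tuning[:, None], size=(8, 50)).astype(float)
b = rng.poisson(tuning[:, None], size=(8, 50)).astype(float)
sig, noise = signal_and_noise_correlation(a, b)
```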
Affiliation(s)
- Eric Avila
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
34
Abstract
Several types of neurons involved in spatial navigation and memory encode the distance and direction (that is, the vector) between an agent and items in its environment. Such vectorial information provides a powerful basis for spatial cognition by representing the geometric relationships between the self and the external world. Here, we review the explicit encoding of vectorial information by neurons in and around the hippocampal formation, far from the sensory periphery. The parahippocampal, retrosplenial and parietal cortices, as well as the hippocampal formation and striatum, provide a plethora of examples of vector coding at the single neuron level. We provide a functional taxonomy of cells with vectorial receptive fields as reported in experiments and proposed in theoretical work. The responses of these neurons may provide the fundamental neural basis for the (bottom-up) representation of environmental layout and (top-down) memory-guided generation of visuospatial imagery and navigational planning.
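The vectorial quantities discussed in this review reduce to simple geometry. The sketch below, with names of our own choosing, shows why an allocentric (world-referenced) object vector is invariant to the agent's heading while the egocentric one is not:

```python
import math

def object_vector(agent_xy, heading, landmark_xy):
    """Distance and direction from an agent to a landmark, in both
    allocentric (world-referenced) and egocentric (heading-referenced)
    frames. Geometry-only illustration; not drawn from any one study."""
    dx = landmark_xy[0] - agent_xy[0]
    dy = landmark_xy[1] - agent_xy[1]
    distance = math.hypot(dx, dy)
    allocentric = math.atan2(dy, dx)  # bearing in world coordinates
    # egocentric bearing: rotate into the agent's frame, wrap to (-pi, pi]
    egocentric = (allocentric - heading + math.pi) % (2 * math.pi) - math.pi
    return distance, allocentric, egocentric

# Turning the agent in place changes the egocentric bearing but leaves
# the allocentric vector untouched -- the invariance an object-vector
# cell's receptive field requires.
d1, allo1, ego1 = object_vector((0.0, 0.0), 0.0, (3.0, 4.0))
d2, allo2, ego2 = object_vector((0.0, 0.0), math.pi / 2, (3.0, 4.0))
```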
35
Ruggiero G, Ruotolo F, Orti R, Rauso B, Iachini T. Egocentric metric representations in peripersonal space: A bridge between motor resources and spatial memory. Br J Psychol 2020; 112:433-454. [PMID: 32710656] [DOI: 10.1111/bjop.12467]
Abstract
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited especially when processing spatial information in peripersonal (within arm reach) rather than extrapersonal (outside arm reach) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted in memorizing triads of objects and then verbally judging what was the object: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively damaged egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.
Affiliation(s)
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality (CS-IVR), Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
- Francesco Ruotolo
- Laboratory of Cognitive Science and Immersive Virtual Reality (CS-IVR), Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
- Renato Orti
- Laboratory of Cognitive Science and Immersive Virtual Reality (CS-IVR), Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
- Barbara Rauso
- Laboratory of Cognitive Science and Immersive Virtual Reality (CS-IVR), Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality (CS-IVR), Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
36
Ayhan I, Ozbagci D. Action-induced changes in the perceived temporal features of visual events. Vision Res 2020; 175:1-13. [PMID: 32623245] [DOI: 10.1016/j.visres.2020.05.008]
Abstract
Perceived duration can be subject to deviations around the time of a voluntary action. Whether the mechanisms underlying action-induced visual duration effects are effector-specific, or instead require a more generalized action-linked multimodal calibration with the transient visual system, is a question yet to be answered. Here, we investigate this using dynamic visual stimuli presented contingent upon the execution of an arbitrarily associated voluntary manual response. Our results demonstrate that the duration of intervals containing an arbitrarily associated keypress-visual event pair is perceived as shorter than the duration in a purely visual condition, in which the same stimuli are passively observed without the execution of a concurrent action. Control experiments show that motor memory and attention cannot explain these action-induced changes in perceived temporal features, that action-induced changes in perceived speed are dissociated from those in perceived duration, and that the duration compression disappears with isoluminant or static stimuli. Together, these results provide evidence that the two effects can be modulated in motion-processing units, although via separate neural mechanisms.
Affiliation(s)
- Inci Ayhan
- Department of Psychology, Bogazici University, Istanbul, Turkey; Cognitive Science Program, Bogazici University, Istanbul, Turkey
- Duygu Ozbagci
- Cognitive Science Program, Bogazici University, Istanbul, Turkey
37
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964] [PMCID: PMC7474851] [DOI: 10.1038/s41593-020-0656-0]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
38
Bharmauria V, Sajad A, Li J, Yan X, Wang H, Crawford JD. Integration of Eye-Centered and Landmark-Centered Codes in Frontal Eye Field Gaze Responses. Cereb Cortex 2020; 30:4995-5013. [PMID: 32390052] [DOI: 10.1093/cercor/bhaa090]
Abstract
The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best-fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
Affiliation(s)
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Amirsaman Sajad
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Jirui Li
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Hongying Wang
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- John Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3
39
Preservation of Partially Mixed Selectivity in Human Posterior Parietal Cortex across Changes in Task Context. eNeuro 2020; 7:ENEURO.0222-19.2019. [PMID: 31969321] [PMCID: PMC7070450] [DOI: 10.1523/eneuro.0222-19.2019]
Abstract
Recent studies in posterior parietal cortex (PPC) have found multiple effectors and cognitive strategies represented within a shared neural substrate in a structure termed "partially mixed selectivity" (Zhang et al., 2017). In this study, we examine whether the structure of these representations is preserved across changes in task context and is thus a robust and generalizable property of the neural population. Specifically, we test whether the structure is conserved from an open-loop motor imagery task (training) to a closed-loop cortical control task (online), a change that has led to substantial changes in neural behavior in prior studies in motor cortex. Recording from a 4 × 4 mm electrode array implanted in PPC of a human tetraplegic patient participating in a brain-machine interface (BMI) clinical trial, we studied the representations of imagined/attempted movements of the left/right hand and compare their individual BMI control performance using a one-dimensional cursor control task. We found that the structure of the representations is largely maintained between training and online control. Our results demonstrate for the first time that the structure observed in the context of an open-loop motor imagery task is maintained and accessible in the context of closed-loop BMI control. These results indicate that it is possible to decode the mixed variables found from a small patch of cortex in PPC and use them individually for BMI control. Furthermore, they show that the structure of the mixed representations is maintained and robust across changes in task context.
40
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020; 125:203-214. [PMID: 32006875] [DOI: 10.1016/j.cortex.2019.12.010]
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification a brief air-puff was applied to the participant's right eye inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, that functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
41
Jin W, Qin H, Zhang K, Chen X. Spatial Navigation. Adv Exp Med Biol 2020; 1284:63-90. [PMID: 32852741] [DOI: 10.1007/978-981-15-7086-5_7]
Abstract
The hippocampus is critical for spatial navigation. In this review, we focus on the role of the hippocampus in three basic strategies used for spatial navigation: path integration, stimulus-response association, and map-based navigation. First, the hippocampus is not required for path integration unless the path is long and complex; when it is involved, it provides mnemonic support for the process. Second, the hippocampus's involvement in stimulus-response association depends on how the strategy is carried out; it is not required for the habit form of stimulus-response association. Third, while the hippocampus is fully engaged in map-based navigation, the shared characteristics of place cells, grid cells, head-direction cells, and other spatially tuned cells detected in the hippocampus and associated areas raise the possibility that a stand-alone allocentric space perception (or mental representation) of the environment exists outside and independent of the hippocampus, and that the spatially specific firing patterns of these cells reflect intermediate stages in the processing of this allocentric spatial information as it is conveyed to the hippocampus for storage or retrieval. Furthermore, the presence of all these spatially specific firing patterns in the hippocampus and the related neural circuits during both path integration and map-based navigation supports the notion that path integration is, in essence, the same allocentric space perception supplied with only idiothetic inputs. Taken together, the hippocampus plays a general mnemonic role in spatial navigation.
Affiliation(s)
- Wenjun Jin
- Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Han Qin
- Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Kuan Zhang
- Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Xiaowei Chen
- Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
42
Liu S, Yu Q, Tse PU, Cavanagh P. Neural Correlates of the Conscious Perception of Visual Location Lie Outside Visual Cortex. Curr Biol 2019; 29:4036-4044.e4. [PMID: 31761706] [DOI: 10.1016/j.cub.2019.10.033]
Abstract
When perception differs from the physical stimulus, as it does for visual illusions and binocular rivalry, the opportunity arises to localize where perception emerges in the visual processing hierarchy. Representations prior to that stage differ from the eventual conscious percept even though they provide input to it. Here, we investigate where and how a remarkable misperception of position emerges in the brain. This "double-drift" illusion causes a dramatic mismatch between retinal and perceived location, producing a perceived motion path that can differ from its physical path by 45° or more. The deviations in the perceived trajectory can accumulate over at least a second, whereas other motion-induced position shifts accumulate over 80-100 ms before saturating. Using fMRI and multivariate pattern analysis, we find that the illusory path does not share activity patterns with a matched physical path in any early visual areas. In contrast, a whole-brain searchlight analysis reveals a shared representation in anterior regions of the brain. These higher-order areas would have the longer time constants required to accumulate the small moment-to-moment position offsets that presumably originate in early visual cortical areas and then transform these sensory inputs into a final conscious percept. The dissociation between perception and the activity in early sensory cortex suggests that consciously perceived position does not emerge in what is traditionally regarded as the visual system but instead emerges at a higher level.
Affiliation(s)
- Sirui Liu
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Qing Yu
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA; Department of Psychiatry, University of Wisconsin-Madison, Madison, WI 53719, USA
- Peter U Tse
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Patrick Cavanagh
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA; Department of Psychology, Glendon College, Toronto, ON M4N 3M6, Canada
43
Rolls ET. Spatial coordinate transforms linking the allocentric hippocampal and egocentric parietal primate brain systems for memory, action in space, and navigation. Hippocampus 2019; 30:332-353. [PMID: 31697002] [DOI: 10.1002/hipo.23171]
Abstract
A theory and model are presented of the spatial coordinate transforms in the dorsal visual system and parietal cortex that enable an interface, via the posterior cingulate and related retrosplenial cortex, to allocentric spatial representations in the primate hippocampus. First, a new approach to coordinate-transform learning in the brain is proposed, in which traditional gain modulation is complemented by temporal trace-rule competitive network learning. A computational model shows that the new approach works much more precisely than gain modulation alone, because it enables neurons to represent the different combinations of signal and gain modulator more accurately. This understanding may apply to many brain areas where coordinate transforms are learned. Second, a set of coordinate transforms is proposed for the dorsal visual system/parietal areas that enables a representation to be formed in allocentric spatial-view coordinates. The input is merely a stimulus at a given position in retinal space, and the gain-modulation signals needed are eye position, head direction, and place, all of which are present in the primate brain. Neurons that encode the bearing to a landmark are involved in the coordinate transforms. Part of the importance here is that the coordinates of the allocentric view produced in this model are the same as those of the spatial view cells that respond to allocentric view recorded in the primate hippocampus and parahippocampal cortex. The result is that information from the dorsal visual system can be used to update the spatial input to the hippocampus in the appropriate allocentric coordinate frame, including idiothetic update to allow for self-motion. It is further shown how hippocampal spatial view cells could support the transform from hippocampal allocentric coordinates to egocentric coordinates useful for actions in space and for navigation.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
44
Abstract
This article proposes that biologically plausible theories of behavior can be constructed by following a method of "phylogenetic refinement," whereby they are progressively elaborated from simple to complex according to phylogenetic data on the sequence of changes that occurred over the course of evolution. It is argued that sufficient data exist to make this approach possible, and that the result can more effectively delineate the true biological categories of neurophysiological mechanisms than do approaches based on definitions of putative functions inherited from psychological traditions. As an example, the approach is used to sketch a theoretical framework of how basic feedback control of interaction with the world was elaborated during vertebrate evolution, to give rise to the functional architecture of the mammalian brain. The results provide a conceptual taxonomy of mechanisms that naturally map to neurophysiological and neuroanatomical data and that offer a context for defining putative functions that, it is argued, are better grounded in biology than are some of the traditional concepts of cognitive science.
Affiliation(s)
- Paul Cisek
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada
45
Abstract
In this article, we challenge the usefulness of "attention" as a unitary construct and/or neural system. We point out that the concept has too many meanings to justify a single term, and that "attention" is used to refer to both the explanandum (the set of phenomena in need of explanation) and the explanans (the set of processes doing the explaining). To illustrate these points, we focus our discussion on visual selective attention. It is argued that selectivity in processing has emerged through evolution as a design feature of a complex multi-channel sensorimotor system, which generates selective phenomena of "attention" as one of many by-products. Instead of the traditional analytic approach to attention, we suggest a synthetic approach that starts with well-understood mechanisms that do not need to be dedicated to attention, and yet account for the selectivity phenomena under investigation. We conclude that what would serve scientific progress best would be to drop the term "attention" as a label for a specific functional or neural system and instead focus on behaviorally relevant selection processes and the many systems that implement them.
Affiliation(s)
- Bernhard Hommel
- Institute of Psychology, Cognitive Psychology Unit and Leiden Institute for Brain and Cognition, Leiden University, Leiden, the Netherlands
- Craig S Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Paul Cisek
- Department of Neuroscience, University of Montreal, Montreal, Quebec, Canada
- Heather F Neyedli
- School of Health and Human Performance, Dalhousie University, Halifax, Nova Scotia, Canada
- Joo-Hyun Song
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Timothy N Welsh
- Centre for Motor Control, Faculty of Kinesiology and Physical Education, University of Toronto, 55 Harbord Street, Toronto, ON, M5S 2W6, Canada
46
No single, stable 3D representation can explain pointing biases in a spatial updating task. Sci Rep 2019; 9:12578. [PMID: 31467296] [PMCID: PMC6715735] [DOI: 10.1038/s41598-019-48379-8]
Abstract
People are able to keep track of objects as they navigate through space, even when the objects are out of sight. This requires some kind of representation of the scene and of the observer's location, but the form this might take is debated. We tested the accuracy and reliability of observers' estimates of the visual direction of previously viewed targets. Participants viewed four objects from one location, with binocular vision and small head movements; then, without any further sight of the targets, they walked to another location and pointed towards them. All conditions were tested in an immersive virtual environment, and some were also carried out in a real scene. Participants made large, consistent pointing errors that are poorly explained by any stable 3D representation. Any explanation based on a 3D representation would have to posit a different layout of the remembered scene depending on the orientation of the obscuring wall at the moment the participant points. Our data show that the mechanisms for updating the visual direction of unseen targets are not based on a stable 3D model of the scene, even a distorted one.
47
Edvardsen V, Bicanski A, Burgess N. Navigating with grid and place cells in cluttered environments. Hippocampus 2019; 30:220-232. [PMID: 31408264] [PMCID: PMC8641373] [DOI: 10.1002/hipo.23147]
Abstract
The hippocampal formation contains several classes of neurons thought to be involved in navigational processes, in particular place cells and grid cells. Place cells have been associated with a topological strategy for navigation, while grid cells have been suggested to support metric vector navigation. Grid cell-based vector navigation can support novel shortcuts across unexplored territory by providing the direction toward the goal. However, this strategy is insufficient in natural environments cluttered with obstacles. Here, we show how navigation in complex environments can be supported by integrating a grid cell-based vector navigation mechanism with local obstacle avoidance mediated by border cells and place cells, whose interconnections form an experience-dependent topological graph of the environment. When vector navigation and obstacle avoidance fail (i.e., the agent gets stuck), place cell replay events set closer subgoals for vector navigation. We demonstrate that this combined navigation model can successfully traverse environments cluttered with obstacles and is particularly useful where the environment is underexplored. Finally, we show that the model enables the simulated agent to successfully navigate experimental maze environments from the animal literature on cognitive mapping. The proposed model is sufficiently flexible to support navigation in different environments, and may inform the design of experiments relating different navigational abilities to place, grid, and border cell firing.
Affiliation(s)
- Vegard Edvardsen
- Department of Computer Science, NTNU-Norwegian University of Science and Technology, Trondheim, Norway
- Andrej Bicanski
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, WC1N 3AZ London, UK
- Neil Burgess
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, WC1N 3AZ London, UK
48
Karnath HO, Kriechel I, Tesch J, Mohler BJ, Mölbert SC. Caloric vestibular stimulation has no effect on perceived body size. Sci Rep 2019; 9:11411. [PMID: 31388079] [PMCID: PMC6684593] [DOI: 10.1038/s41598-019-47897-9]
Abstract
It has been suggested that the vestibular system not only plays a role in our sense of balance and postural control but might also modulate higher-order body representations, such as the perceived shape and size of our body. Recent findings using virtual reality (VR) to realistically manipulate the length of whole extremities of first-person biometric avatars under vestibular stimulation did not support this assumption. It has been argued that these negative findings were due to the availability of visual feedback on the subjects' virtual arms and legs. The present study tested this hypothesis by excluding that information. A newly recruited group of healthy subjects had to adjust the position of blocks in the 3D space of a VR scenario such that they had the feeling they could just touch them with their left/right hand/heel. Caloric vestibular stimulation did not alter the perceived size of their own extremities. The findings suggest that vestibular signals do not serve to scale the internal representation of (large parts of) our body's metric properties. This is in obvious contrast to the egocentric representation of our body midline, which allows us to perceive and adjust the position of our body with respect to the surroundings. These two qualia appear to belong to different systems of body representation in humans.
Affiliation(s)
- Hans-Otto Karnath
- Centre of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Department of Psychology, University of South Carolina, Columbia, SC, 29208, USA
- Isabel Kriechel
- Centre of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Joachim Tesch
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Betty J Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Technical University Darmstadt, Institute of Sports Science, Darmstadt, Germany
- Simone Claire Mölbert
- Centre of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Medical University Hospital Tübingen, Dept. of Psychosomatic Medicine and Psychotherapy, University of Tübingen, Tübingen, Germany
49
Visceral Signals Shape Brain Dynamics and Cognition. Trends Cogn Sci 2019; 23:488-509. [DOI: 10.1016/j.tics.2019.03.007]
50
Ugolini G, Prevosto V, Graf W. Ascending vestibular pathways to parietal areas MIP and LIPv and efference copy inputs from the medial reticular formation: Functional frameworks for body representations updating and online movement guidance. Eur J Neurosci 2019; 50:2988-3013. [DOI: 10.1111/ejn.14426]
Affiliation(s)
- Gabriella Ugolini
- Paris-Saclay Institute of Neuroscience (UMR9197), CNRS, Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France
- Vincent Prevosto
- Paris-Saclay Institute of Neuroscience (UMR9197), CNRS, Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France
- Department of Biomedical Engineering, Pratt School of Engineering, Durham, North Carolina
- Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina
- Werner Graf
- Department of Physiology and Biophysics, Howard University, Washington, District of Columbia