1
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex of male macaques. Nat Commun 2024; 15:5738. PMID: 38982106; PMCID: PMC11233555; DOI: 10.1038/s41467-024-50203-5.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here, we have male macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grain statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlational analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between neurons in prefrontal cortex maintains a stable population code and context-invariant beliefs during naturalistic behavior.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, USA.
- Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA.
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, NY, USA
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Cristina Savin
- Center for Neural Science, New York University, New York City, NY, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, NY, USA
2
Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. PMID: 38270851; DOI: 10.1007/978-981-99-7611-9_2.
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. The concept is relatively new because previous research has largely been conducted in two parallel disciplines: sensory integration across modalities using activity summed over a duration of time, and decision-making with only one sensory modality that evolves over time. Recently, a few neurophysiological studies have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes of mammals. In this review, we summarize and comment on these studies, which combine the two long-separate fields of multisensory integration and decision-making, and show how the new findings provide a more complete understanding of the neural mechanisms mediating multisensory information processing.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China.
3
Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. PMID: 37545303; PMCID: PMC10404926; DOI: 10.1098/rstb.2022.0334.
Abstract
Integrating noisy signals across time as well as sensory modalities, a process named multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent work from different perspectives, including computational theory, psychophysical behaviour, and neurophysiology, has begun to shed new light on MSDM. In the current review, we focus on MSDM using the model system of visuo-vestibular heading. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations, and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory, and sensory-motor association areas. This challenges the brain to integrate cues across time and across sensory modalities such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in the posterior and frontal/prefrontal regions, helps revise conventional thinking about how signals from different sensory modalities are processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
4
Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. PMID: 37780704; PMCID: PMC10534010; DOI: 10.3389/fneur.2023.1266513.
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes, such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models based on single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, producing challenges in identifying their exact functions and how they are integrated with other modality signals. For example, vestibular and optic flow could provide congruent and incongruent signals regarding spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recording across sensory and sensory-motor association areas, and causal link manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
5
Gao W, Shen J, Lin Y, Wang K, Lin Z, Tang H, Chen X. Sequential sparse autoencoder for dynamic heading representation in ventral intraparietal area. Comput Biol Med 2023; 163:107114. PMID: 37329620; DOI: 10.1016/j.compbiomed.2023.107114.
Abstract
To navigate in space, it is important to predict headings in real time from neural responses to vestibular and visual signals, and the ventral intraparietal area (VIP) is one of the critical brain areas involved. However, how heading perception is represented in VIP at the population level remains unexplored, and no commonly used method is suitable for decoding headings from population responses in VIP, given the large spatiotemporal dynamics and heterogeneity of the neural responses. Here, responses were recorded from 210 VIP neurons in three rhesus monkeys performing a heading perception task. By modelling the temporal and spatial dynamics separately with sparse representations, we built a sequential sparse autoencoder (SSAE) to decode headings from the recorded population responses and sought to maximize the decoding performance. The SSAE relies on a three-layer sparse autoencoder to extract temporal and spatial heading features from the dataset via unsupervised learning, and on a softmax classifier to decode the headings. Compared with other population decoding methods, the SSAE achieves a leading accuracy of 96.8% ± 2.1%, and offers robustness and a low storage and computing burden for real-time prediction. Therefore, our SSAE model performs well in learning neurobiologically plausible features comprising dynamic navigational information.
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
- Kejun Wang
- School of Software Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou, 310009, China
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China.
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China.
6
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex. bioRxiv 2023:2023.07.30.551169. PMID: 37577498; PMCID: PMC10418097; DOI: 10.1101/2023.07.30.551169.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here we have macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grain statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlation analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less were population codes and behavior impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal cortex neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior with closed action-perception loops.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, USA
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, NY, USA
- Cristina Savin
- Center for Neural Science, New York University, New York City, NY, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, USA
7
Zhao B, Wang R, Zhu Z, Yang Q, Chen A. The computational rules of cross-modality suppression in the visual posterior sylvian area. iScience 2023; 26:106973. PMID: 37378331; PMCID: PMC10291470; DOI: 10.1016/j.isci.2023.106973.
Abstract
The macaque visual posterior sylvian area (VPS) contains neurons that respond selectively to heading direction in both visual and vestibular modalities, but how VPS neurons combine these two sensory signals is still unknown. In contrast to the subadditive characteristics of the medial superior temporal area (MSTd), responses in VPS were dominated by vestibular signals, approximating a winner-take-all competition. A conditional Fisher information analysis shows that the VPS neural population encodes information from distinct sensory modalities under large and small offset conditions, which differs from MSTd, whose neural population contains more information about visual stimuli in both conditions. However, the combined responses of single neurons in both areas can be well fit by weighted linear sums of unimodal responses. Furthermore, a normalization model captured most characteristics of the vestibular-visual interaction for both VPS and MSTd, indicating that the divisive normalization mechanism exists widely in the cortex.
Affiliation(s)
- Bin Zhao
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Rong Wang
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Zhihua Zhu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Qianli Yang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
8
Rolls ET. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus 2023; 33:533-572. PMID: 36070199; PMCID: PMC10946493; DOI: 10.1002/hipo.23467.
Abstract
Hippocampal and parahippocampal gyrus spatial view neurons in primates respond to the spatial location being looked at. The representation is allocentric, in that the responses are to locations "out there" in the world, and are relatively invariant with respect to retinal position, eye position, head direction, and the place where the individual is located. The underlying connectivity in humans is from ventromedial visual cortical regions to the parahippocampal scene area, leading to the theory that spatial view cells are formed by combinations of overlapping feature inputs self-organized based on their closeness in space. Thus, although spatial view cells represent "where" for episodic memory and navigation, they are formed by ventral visual stream feature inputs in the parahippocampal gyrus, in what is the parahippocampal scene area. A second "where" driver of spatial view cells is parietal input, which, it is proposed, provides the idiothetic update for spatial view cells, used for memory recall and navigation when the spatial view details are obscured. Inferior temporal object "what" inputs and orbitofrontal cortex reward inputs connect to the human hippocampal system, and in macaques can be associated in the hippocampus with spatial view cell "where" representations to implement episodic memory. Hippocampal spatial view cells also provide a basis for navigation to a series of viewed landmarks, with the orbitofrontal cortex reward inputs to the hippocampus providing the goals for navigation, which can then be implemented via hippocampal connectivity in humans to parietal cortex regions involved in visuomotor actions in space. The presence of foveate vision and the highly developed temporal lobe for object and scene processing in primates, including humans, makes hippocampal spatial view cells key to understanding episodic memory in the primate and human hippocampus, and the role of this system in primate and human navigation.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
9
Lakshminarasimhan KJ, Avila E, Pitkow X, Angelaki DE. Dynamical latent state computation in the male macaque posterior parietal cortex. Nat Commun 2023; 14:1832. PMID: 37005470; PMCID: PMC10067966; DOI: 10.1038/s41467-023-37400-4.
Abstract
Success in many real-world tasks depends on our ability to dynamically track hidden states of the world. We hypothesized that neural populations estimate these states by processing sensory history through recurrent interactions which reflect the internal model of the world. To test this, we recorded brain activity in posterior parietal cortex (PPC) of monkeys navigating by optic flow to a hidden target location within a virtual environment, without explicit position cues. In addition to sequential neural dynamics and strong interneuronal interactions, we found that the hidden state - monkey's displacement from the goal - was encoded in single neurons, and could be dynamically decoded from population activity. The decoded estimates predicted navigation performance on individual trials. Task manipulations that perturbed the world model induced substantial changes in neural interactions, and modified the neural representation of the hidden state, while representations of sensory and motor variables remained stable. The findings were recapitulated by a task-optimized recurrent neural network model, suggesting that task demands shape the neural interactions in PPC, leading them to embody a world model that consolidates information and tracks task-relevant hidden states.
Affiliation(s)
- Eric Avila
- Center for Neural Science, New York University, New York City, NY, USA
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Electrical & Computer Engineering, Rice University, Houston, TX, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, NY, USA
- Department of Mechanical and Aerospace Engineering, New York University, New York City, NY, USA
10
Mildren RL, Cullen KE. Vestibular Contributions to Primate Neck Postural Muscle Activity during Natural Motion. J Neurosci 2023; 43:2326-2337. PMID: 36801822; PMCID: PMC10072293; DOI: 10.1523/jneurosci.1831-22.2023.
Abstract
To maintain stable posture of the head and body during our everyday activities, the brain integrates information across multiple sensory systems. Here, we examined how the primate vestibular system, independently and in combination with visual sensory input, contributes to the sensorimotor control of head posture across the range of dynamic motion experienced during daily life. We recorded activity of single motor units in the splenius capitis and sternocleidomastoid muscles in rhesus monkeys during yaw rotations spanning the physiological range of self-motion (up to 20 Hz) in darkness. Splenius capitis motor unit responses continued to increase with frequency up to 16 Hz in normal animals, and were strikingly absent following bilateral peripheral vestibular loss. To determine whether visual information modulated these vestibular-driven neck muscle responses, we experimentally controlled the correspondence between visual and vestibular cues of self-motion. Surprisingly, visual information did not influence motor unit responses in normal animals, nor did it substitute for absent vestibular feedback following bilateral peripheral vestibular loss. A comparison of muscle activity evoked by broadband versus sinusoidal head motion further revealed that low-frequency responses were attenuated when low- and high-frequency self-motion were experienced concurrently. Finally, we found that vestibular-evoked responses were enhanced by increased autonomic arousal, quantified via pupil size. Together, our findings directly establish the vestibular system's contribution to the sensorimotor control of head posture across the dynamic motion range experienced during everyday activities, as well as how vestibular, visual, and autonomic inputs are integrated for postural control.
SIGNIFICANCE STATEMENT: Our sensory systems enable us to maintain control of our posture and balance as we move through the world. Notably, the vestibular system senses motion of the head and sends motor commands, via vestibulospinal pathways, to axial and limb muscles to stabilize posture. By recording the activity of single motor units, here we show, for the first time, that the vestibular system contributes to the sensorimotor control of head posture across the dynamic motion range experienced during everyday activities. Our results further establish how vestibular, autonomic, and visual inputs are integrated for postural control. This information is essential for understanding both the mechanisms underlying the control of posture and balance, and the impact of the loss of sensory function.
Affiliation(s)
- Robyn L Mildren
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
- Kathleen E Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
11
Rolls ET, Deco G, Huang CC, Feng J. The human posterior parietal cortex: effective connectome, and its relation to function. Cereb Cortex 2023; 33:3142-3170. PMID: 35834902; PMCID: PMC10401905; DOI: 10.1093/cercor/bhac266.
Abstract
The effective connectivity between 21 regions in the human posterior parietal cortex, and 360 cortical regions was measured in 171 Human Connectome Project (HCP) participants using the HCP atlas, and complemented with functional connectivity and diffusion tractography. Intraparietal areas LIP, VIP, MIP, and AIP have connectivity from early cortical visual regions, and to visuomotor regions such as the frontal eye fields, consistent with functions in eye saccades and tracking. Five superior parietal area 7 regions receive from similar areas and from the intraparietal areas, but also receive somatosensory inputs and connect with premotor areas including area 6, consistent with functions in performing actions to reach for, grasp, and manipulate objects. In the anterior inferior parietal cortex, PFop, PFt, and PFcm are mainly somatosensory, and PF in addition receives visuo-motor and visual object information, and is implicated in multimodal shape and body image representations. In the posterior inferior parietal cortex, PFm and PGs combine visuo-motor, visual object, and reward input and connect with the hippocampal system. PGi in addition provides a route to motion-related superior temporal sulcus regions involved in social interactions. PGp has connectivity with intraparietal regions involved in coordinate transforms and may be involved in idiothetic update of hippocampal visual scene representations.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco
- Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, Institute of Brain and Education Innovation, East China Normal University, Shanghai 200602, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
12
Rolls ET, Wirth S, Deco G, Huang C, Feng J. The human posterior cingulate, retrosplenial, and medial parietal cortex effective connectome, and implications for memory and navigation. Hum Brain Mapp 2023; 44:629-655. PMID: 36178249; PMCID: PMC9842927; DOI: 10.1002/hbm.26089.
Abstract
The human posterior cingulate, retrosplenial, and medial parietal cortex are involved in memory and navigation. The functional anatomy underlying these cognitive functions was investigated by measuring the effective connectivity of these Posterior Cingulate Division (PCD) regions in the Human Connectome Project-MMP1 atlas in 171 HCP participants, and complemented with functional connectivity and diffusion tractography. First, the postero-ventral parts of the PCD (31pd, 31pv, 7m, d23ab, and v23ab) have effective connectivity with the temporal pole, inferior temporal visual cortex, cortex in the superior temporal sulcus implicated in auditory and semantic processing, with the reward-related vmPFC and pregenual anterior cingulate cortex, with the inferior parietal cortex, and with the hippocampal system. This connectivity implicates it in hippocampal episodic memory, providing routes for "what," reward and semantic schema-related information to access the hippocampus. Second, the antero-dorsal parts of the PCD (especially 31a and 23d, PCV, and also RSC) have connectivity with early visual cortical areas including those that represent spatial scenes, with the superior parietal cortex, with the pregenual anterior cingulate cortex, and with the hippocampal system. This connectivity implicates it in the "where" component for hippocampal episodic memory and for spatial navigation. The dorsal-transitional-visual (DVT) and ProStriate regions where the retrosplenial scene area is located have connectivity from early visual cortical areas to the parahippocampal scene area, providing a ventromedial route for spatial scene information to reach the hippocampus. These connectivities provide important routes for "what," reward, and "where" scene-related information for human hippocampal episodic memory and navigation. The midcingulate cortex provides a route from the anterior dorsal parts of the PCD and the supracallosal part of the anterior cingulate cortex to premotor regions.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Fudan University, Ministry of Education, Shanghai, China
- Fudan ISTBI—ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
- Sylvia Wirth
- Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, CNRS and University of Lyon, Bron, France
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Barcelona, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Fudan University, Ministry of Education, Shanghai, China
- Fudan ISTBI—ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
13
Ruehl RM, Flanagin VL, Ophey L, Raiser TM, Seiderer K, Ertl M, Conrad J, Zu Eulenburg P. The human egomotion network. Neuroimage 2022; 264:119715. [PMID: 36334557 DOI: 10.1016/j.neuroimage.2022.119715] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 10/18/2022] [Accepted: 10/25/2022] [Indexed: 11/07/2022] Open
Abstract
All volitional movement in a three-dimensional space requires multisensory integration, in particular of visual and vestibular signals. Where and how the human brain processes and integrates self-motion signals remains enigmatic. Here, we applied visual and vestibular self-motion stimulation using fast and precise whole-brain neuroimaging to delineate and characterize the entire cortical and subcortical egomotion network in a substantial cohort (n=131). Our results identify a core egomotion network consisting of areas in the cingulate sulcus (CSv, PcM/pCi), the cerebellum (uvula), and the temporo-parietal cortex, including area VPS and an unnamed region in the supramarginal gyrus. Based on its cerebral connectivity pattern and anatomical localization, we propose that this region represents the human homologue of macaque area 7a. Whole-brain connectivity and gradient analyses imply an essential role of the connections between the cingulate sulcus and the cerebellar uvula in egomotion perception, possibly via feedback loops involved in updating visuo-spatial and vestibular information. The unique functional connectivity patterns of PcM/pCi hint at a central role in the multisensory integration essential for the perception of self-referential spatial awareness. All cortical egomotion hubs showed modular functional connectivity with other visual, vestibular, somatosensory, and higher-order motor areas, underlining their shared role in general sensorimotor integration.
Affiliation(s)
- Ria Maxine Ruehl
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany.
- Virginia L Flanagin
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; Graduate School of Systemic Neurosciences, Department of Biology II and Neurobiology, Großhaderner Str. 2, 82151 Planegg-Martinsried, Ludwig-Maximilians-University Munich, Germany
- Leoni Ophey
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Theresa Marie Raiser
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Katharina Seiderer
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Matthias Ertl
- Institute of Psychology and Inselspital, Fabrikstrasse 8, 3012 Bern, University of Bern, Switzerland
- Julian Conrad
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; Department of Neurology, Theodor-Kutze Ufer 1-3, 68167 Mannheim, Medical Faculty Mannheim, University of Heidelberg, Germany
- Peter Zu Eulenburg
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany; Graduate School of Systemic Neurosciences, Department of Biology II and Neurobiology, Großhaderner Str. 2, 82151 Planegg-Martinsried, Ludwig-Maximilians-University Munich, Germany; Institute for Neuroradiology, University Hospital Munich, Marchionini Str. 15, 81377 Munich, Ludwig-Maximilians-University Munich, Germany
14
Noel JP, Balzani E, Avila E, Lakshminarasimhan KJ, Bruni S, Alefantis P, Savin C, Angelaki DE. Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation. eLife 2022; 11:e80280. [PMID: 36282071 PMCID: PMC9668339 DOI: 10.7554/elife.80280] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2022] [Accepted: 10/24/2022] [Indexed: 11/13/2022] Open
Abstract
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the traditionally considered sensory area (i.e., MSTd) tracked latent variables, demonstrating path integration and vector coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than between either of these areas and 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy, wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grain functional subnetworks may be dynamically established to subserve (embodied) task strategies.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, United States
- Eric Avila
- Center for Neural Science, New York University, New York City, United States
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York City, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Stefania Bruni
- Center for Neural Science, New York University, New York City, United States
- Panos Alefantis
- Center for Neural Science, New York University, New York City, United States
- Cristina Savin
- Center for Neural Science, New York University, New York City, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
15
Rolls ET, Deco G, Huang CC, Feng J. Prefrontal and somatosensory-motor cortex effective connectivity in humans. Cereb Cortex 2022; 33:4939-4963. [PMID: 36227217 DOI: 10.1093/cercor/bhac391] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 09/07/2022] [Accepted: 09/08/2022] [Indexed: 11/12/2022] Open
Abstract
Effective connectivity, functional connectivity, and tractography were measured between 57 cortical frontal and somatosensory regions and the 360 cortical regions in the Human Connectome Project (HCP) multimodal parcellation atlas for 171 HCP participants. A ventral somatosensory stream connects from 3b and 3a via 1 and 2 and then via opercular and frontal opercular regions to the insula, which then connects to inferior parietal PF regions. This stream is implicated in "what"-related somatosensory processing of objects and of the body and in combining with visual inputs in PF. A dorsal "action" somatosensory stream connects from 3b and 3a via 1 and 2 to parietal area 5 and then 7. Inferior prefrontal regions have connectivity with the inferior temporal visual cortex and orbitofrontal cortex, are implicated in working memory for "what" processing streams, and provide connectivity to language systems, including 44, 45, 47l, TPOJ1, and superior temporal visual area. The dorsolateral prefrontal cortex regions that include area 46 have connectivity with parietal area 7 and somatosensory inferior parietal regions and are implicated in working memory for actions and planning. The dorsal prefrontal regions, including 8Ad and 8Av, have connectivity with visual regions of the inferior parietal cortex, including PGs and PGi, and are implicated in visual and auditory top-down attention.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco
- Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
16
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. [PMID: 36123363 PMCID: PMC9485245 DOI: 10.1038/s41467-022-33245-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Accepted: 09/08/2022] [Indexed: 11/08/2022] Open
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion, and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception is lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in a 3D spiral coordinate frame, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines coded by the stimulated neurons, in contexts with either spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravitational vertical.
17
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337 PMCID: PMC9849545 DOI: 10.1007/s12264-022-00916-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/29/2022] [Indexed: 01/22/2023] Open
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with a high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional thoughts about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
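The statistically Bayesian-optimal cue combination described in this abstract has a simple closed form for Gaussian cues: each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. A minimal sketch with illustrative numbers (hypothetical reliabilities, not data from the reviewed studies):

```python
# Reliability-weighted (Bayesian-optimal) fusion of two heading estimates.
# All numbers are illustrative, not taken from the reviewed experiments.

def fuse_cues(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Combine two Gaussian cue estimates by inverse-variance weighting."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
    w_vest = 1.0 - w_vis
    mu = w_vis * mu_vis + w_vest * mu_vest
    # The fused variance is lower than either single-cue variance.
    sigma = (1 / (1 / sigma_vis**2 + 1 / sigma_vest**2)) ** 0.5
    return mu, sigma

# A noisy visual estimate (10 deg, sigma 4) and a sharper vestibular
# estimate (14 deg, sigma 2): the fusion is pulled toward the latter.
mu, sigma = fuse_cues(mu_vis=10.0, sigma_vis=4.0, mu_vest=14.0, sigma_vest=2.0)
```

This inverse-variance weighting is the normative account against which the psychophysical studies cited above compare human and monkey behavior.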
18
Alefantis P, Lakshminarasimhan K, Avila E, Noel JP, Pitkow X, Angelaki DE. Sensory Evidence Accumulation Using Optic Flow in a Naturalistic Navigation Task. J Neurosci 2022; 42:5451-5462. [PMID: 35641186 PMCID: PMC9270913 DOI: 10.1523/jneurosci.2203-21.2022] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 04/01/2022] [Accepted: 04/22/2022] [Indexed: 11/21/2022] Open
Abstract
Sensory evidence accumulation is considered a hallmark of decision-making in noisy environments. Integration of sensory inputs has been traditionally studied using passive stimuli, segregating perception from action. Lessons learned from this approach, however, may not generalize to ethological behaviors like navigation, where there is an active interplay between perception and action. We designed a sensory-based sequential decision task in virtual reality in which humans and monkeys navigated to a memorized location by integrating optic flow generated by their own joystick movements. A major challenge in such closed-loop tasks is that subjects' actions will determine future sensory input, causing ambiguity about whether they rely on sensory input rather than on expectations based solely on a learned model of the dynamics. To test whether subjects integrated optic flow over time, we used three independent experimental manipulations: unpredictable optic flow perturbations, which pushed subjects off their trajectory; gain manipulation of the joystick controller, which changed the consequences of actions; and manipulation of the optic flow density, which changed the information borne by sensory evidence. Our results suggest that both macaques (male) and humans (female/male) relied heavily on optic flow, thereby demonstrating a critical role for sensory evidence accumulation during naturalistic action-perception closed-loop tasks. SIGNIFICANCE STATEMENT The temporal integration of evidence is a fundamental component of mammalian intelligence. Yet, it has traditionally been studied using experimental paradigms that fail to capture the closed-loop interaction between actions and sensations inherent in real-world continuous behaviors. These conventional paradigms use binary decision tasks and passive stimuli with statistics that remain stationary over time. Instead, we developed a naturalistic visuomotor navigation paradigm that mimics the causal structure of real-world sensorimotor interactions and probed the extent to which participants integrate sensory evidence by adding task manipulations that reveal complementary aspects of the computation.
Affiliation(s)
- Panos Alefantis
- Center for Neural Science, New York University, New York, New York 10003
- Eric Avila
- Center for Neural Science, New York University, New York, New York 10003
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, New York 10003
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005-1892
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas 77030
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York 10003
- Tandon School of Engineering, New York University, New York, New York 11201
19
Raffi M, Trofè A, Meoni A, Gallelli L, Piras A. Optic Flow Speed and Retinal Stimulation Influence Microsaccades. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19116765. [PMID: 35682346 PMCID: PMC9180672 DOI: 10.3390/ijerph19116765] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 05/28/2022] [Accepted: 05/29/2022] [Indexed: 02/04/2023]
Abstract
Microsaccades are linked with extraretinal mechanisms that significantly alter spatial perception before the onset of eye movements. We sought to investigate whether microsaccadic activity is modulated by the speed of radial optic flow stimuli. Experiments were performed in the dark on 19 subjects who stood in front of a screen covering 135 × 107° of the visual field. Subjects were instructed to fixate on a central fixation point while optic flow stimuli were presented in full field, in the foveal, and in the peripheral visual field at different dot speeds (8, 11, 14, 17, and 20°/s). Fixation in the dark was used as a control stimulus. For almost all tested speeds, the stimulation of the peripheral retina evoked the highest microsaccade rate. We also found combined effects of optic flow speed and the stimulated retinal region (foveal, peripheral, and full field) for microsaccade latency. These results show that optic flow speed modulates microsaccadic activity when presented in specific retinal portions, suggesting that eye movement generation is strictly dependent on the stimulated retinal regions.
Affiliation(s)
- Milena Raffi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (A.M.); (A.P.)
- Aurelio Trofè
- Department of Quality of Life, University of Bologna, 47921 Rimini, Italy;
- Andrea Meoni
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (A.M.); (A.P.)
- Luca Gallelli
- Department of Health Science, School of Medicine, University of Catanzaro, 88100 Catanzaro, Italy;
- Clinical Pharmacology and Pharmacovigilance Unit, Mater Domini Hospital Catanzaro, 88100 Catanzaro, Italy
- Alessandro Piras
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (A.M.); (A.P.)
20
McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust Coding of Eye Position in Posterior Parietal Cortex despite Context-Dependent Tuning. J Neurosci 2022; 42:4116-4130. [PMID: 35410881 PMCID: PMC9121829 DOI: 10.1523/jneurosci.0674-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 03/30/2022] [Accepted: 03/30/2022] [Indexed: 11/21/2022] Open
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task, including (1) a visually sparse epoch, (2) a visually rich epoch, (3) a "go" epoch in which the reach was cued, and (4) during the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike-counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was nevertheless able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders that were optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts. SIGNIFICANCE STATEMENT Neurons in posterior parietal cortex (PPC) that are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation), and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
Affiliation(s)
- Jamie R McFadyen
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Barbara Heider
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Anushree N Karkhanis
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne, VIC, 3001, Australia
- Fabian Muñoz
- Department of Neuroscience, Columbia University, New York, NY, 10027
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Ralph M Siegel
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Monash Data Futures Institute, Monash University, Clayton, VIC, 3800, Australia
21
Rolls ET, Deco G, Huang CC, Feng J. The Effective Connectivity of the Human Hippocampal Memory System. Cereb Cortex 2022; 32:3706-3725. [PMID: 35034120 DOI: 10.1093/cercor/bhab442] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/05/2021] [Accepted: 11/08/2021] [Indexed: 02/04/2023] Open
Abstract
Effective connectivity measurements in the human hippocampal memory system based on the resting-state blood oxygenation-level dependent signal were made in 172 participants in the Human Connectome Project to reveal the directionality and strength of the connectivity. A ventral "what" hippocampal stream involves the temporal lobe cortex, perirhinal and parahippocampal TF cortex, and entorhinal cortex. A dorsal "where" hippocampal stream connects parietal cortex with posterior and retrosplenial cingulate cortex, and with parahippocampal TH cortex, which, in turn, project to the presubiculum, which connects to the hippocampus. A third stream involves the orbitofrontal and ventromedial-prefrontal cortex with effective connectivity with the hippocampal, entorhinal, and perirhinal cortex. There is generally stronger forward connectivity to the hippocampus than backward. Thus separate "what," "where," and "reward" streams can converge in the hippocampus, from which back projections return to the sources. However, unlike the simple dual stream hippocampal model, there is a third stream related to reward value; there is some cross-connectivity between these systems before the hippocampus is reached; and the hippocampus has some effective connectivity with earlier stages of processing than the entorhinal cortex and presubiculum. These findings complement diffusion tractography and provide a foundation for new concepts on the operation of the human hippocampal memory system.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200433, China
- Gustavo Deco
- Department of Information and Communication Technologies, Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, Barcelona 08018, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200433, China
22
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
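Path integration, as framed in this review, is the accumulation of self-motion estimates into a position estimate. A minimal dead-reckoning sketch (hypothetical velocity samples; a realistic estimator would also weight cues by their time-varying reliabilities, as discussed above):

```python
import math

def integrate_path(speeds, ang_vels, dt=0.1):
    """Dead-reckon 2-D position and heading from linear speed (m/s)
    and angular velocity (rad/s) samples taken every dt seconds."""
    x = y = heading = 0.0
    for v, w in zip(speeds, ang_vels):
        heading += w * dt                  # accumulate rotation first
        x += v * math.cos(heading) * dt    # then advance along heading
        y += v * math.sin(heading) * dt
    return x, y, heading

# Straight-ahead travel at 1 m/s for 1 s: ends about 1 m forward.
x, y, heading = integrate_path([1.0] * 10, [0.0] * 10)
```

Because errors in each velocity sample are summed rather than corrected, position uncertainty grows with distance travelled, which is why the review emphasizes cue integration and internal models.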
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Tandon School of Engineering, New York University, New York, NY 11201, USA
23
Hennestad E, Witoelar A, Chambers AR, Vervaeke K. Mapping vestibular and visual contributions to angular head velocity tuning in the cortex. Cell Rep 2021; 37:110134. [PMID: 34936869 PMCID: PMC8721284 DOI: 10.1016/j.celrep.2021.110134] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Revised: 09/21/2021] [Accepted: 11/24/2021] [Indexed: 11/19/2022] Open
Abstract
Neurons that signal the angular velocity of head movements (AHV cells) are important for processing visual and spatial information. However, it has been challenging to isolate the sensory modality that drives them and to map their cortical distribution. To address this, we develop a method that enables rotating awake, head-fixed mice under a two-photon microscope in a visual environment. Starting in layer 2/3 of the retrosplenial cortex, a key area for vision and navigation, we find that 10% of neurons report angular head velocity (AHV). Their tuning properties depend on vestibular input with a smaller contribution of vision at lower speeds. Mapping the spatial extent, we find AHV cells in all cortical areas that we explored, including motor, somatosensory, visual, and posterior parietal cortex. Notably, the vestibular and visual contributions to AHV are area dependent. Thus, many cortical circuits have access to AHV, enabling a diverse integration with sensorimotor and cognitive information.
Affiliation(s)
- Eivind Hennestad
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Aree Witoelar
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Anna R Chambers
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Koen Vervaeke
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
|
24
|
Ma Q, Rolls ET, Huang CC, Cheng W, Feng J. Extensive cortical functional connectivity of the human hippocampal memory system. Cortex 2021; 147:83-101. [PMID: 35026557 DOI: 10.1016/j.cortex.2021.11.014]
Abstract
The cortical connections of the human hippocampal memory system are fundamental to understanding its operation in health and disease, especially given the great expansion of the human cortex. The functional connectivity of the human hippocampal system was analyzed in 172 participants imaged at 7T in the Human Connectome Project. The human hippocampus has high functional connectivity not only with the entorhinal cortex, but also with areas that are more distant in the ventral 'what' stream, including the perirhinal cortex and temporal cortical visual areas. Parahippocampal gyrus TF in humans has connectivity with this ventral 'what' subsystem. Correspondingly for the dorsal stream, the hippocampus has high functional connectivity not only with the presubiculum, but also with more distant areas: the medial parahippocampal cortex TH, which includes the parahippocampal place or scene area; the posterior cingulate, including retrosplenial cortex; and the parietal cortex. Further, there is considerable cross-connectivity between the ventral and dorsal streams with the hippocampus. The findings are supported by anatomical connections, and together they provide an unprecedented and quantitative overview of the extensive cortical connectivity of the human hippocampal system. This connectivity goes beyond hierarchically organised and segregated pathways connecting the hippocampus and neocortex, and leads to new concepts on the operation of the hippocampal memory system in humans.
Affiliation(s)
- Qing Ma
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Edmund T Rolls
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Department of Computer Science, University of Warwick, Coventry, UK; Oxford Centre for Computational Neuroscience, Oxford, UK
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Wei Cheng
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China; Fudan ISTBI-ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China
- Jianfeng Feng
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China; Department of Computer Science, University of Warwick, Coventry, UK; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China; Fudan ISTBI-ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China
|
25
|
Takamuku S, Gomi H. Vision-based speedometer regulates human walking. iScience 2021; 24:103390. [PMID: 34841229 PMCID: PMC8605357 DOI: 10.1016/j.isci.2021.103390]
Abstract
Can we recover self-motion from vision? This basic issue remains unsolved: while the human visual system is known to estimate the direction of self-motion from optic flow, it remains unclear whether it also estimates the speed. Importantly, the latter requires disentangling self-motion speed from the depths of objects in the scene, since retinal velocity depends on both. Here we show that our automatic, vision-based regulator of walking speed, which estimates the speed and maintains it within a preferred range by adjusting stride length, is robust to changes in those depths. The robustness was not explained by the temporal-frequency-based speed coding previously suggested to underlie depth-invariant object-motion perception. Meanwhile, it broke down not only when the interocular distance was virtually manipulated, but also when monocular depth cues were deceptive. These observations suggest that our visuomotor system embeds a speedometer that calculates self-motion speed from vision by integrating monocular/binocular depth and motion cues. Highlights: changes in optic flow speed trigger implicit adjustments of walking speed; the response is invariant with respect to the depths of objects in the scene; the invariance is not explained by temporal-frequency-based speed coding; both binocular and monocular depth cues contribute to the invariance.
Affiliation(s)
- Shinya Takamuku
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1 Morinosato-Wakamiya, Atsugi-shi 243-0198, Kanagawa, Japan
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1 Morinosato-Wakamiya, Atsugi-shi 243-0198, Kanagawa, Japan
|
26
|
Smith AT. Cortical visual area CSv as a cingulate motor area: a sensorimotor interface for the control of locomotion. Brain Struct Funct 2021; 226:2931-2950. [PMID: 34240236 PMCID: PMC8541968 DOI: 10.1007/s00429-021-02325-5]
Abstract
The response properties, connectivity and function of the cingulate sulcus visual area (CSv) are reviewed. Cortical area CSv has been identified in both human and macaque brains. It has similar response properties and connectivity in the two species. It is situated bilaterally in the cingulate sulcus close to an established group of medial motor/premotor areas. It has strong connectivity with these areas, particularly the cingulate motor areas and the supplementary motor area, suggesting that it is involved in motor control. CSv is active during visual stimulation, but only if that stimulation is indicative of self-motion. It is also active during vestibular stimulation, and connectivity data suggest that it receives proprioceptive input. Connectivity with topographically organized somatosensory and motor regions strongly emphasizes the legs over the arms. Together these properties suggest that CSv provides a key interface between the sensory and motor systems in the control of locomotion. Its role likely involves online control and adjustment of ongoing locomotory movements, including obstacle avoidance and maintaining the intended trajectory. It is proposed that CSv is best seen as part of the cingulate motor complex. In the human case, a modification of the influential scheme of Picard and Strick (Cereb Cortex 6:342-353, 1996) is proposed to reflect this.
Affiliation(s)
- Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
|
27
|
Foster C, Sheng WA, Heed T, Ben Hamed S. The macaque ventral intraparietal area has expanded into three homologue human parietal areas. Prog Neurobiol 2021; 209:102185. [PMID: 34775040 DOI: 10.1016/j.pneurobio.2021.102185]
Abstract
The macaque ventral intraparietal area (VIP) in the fundus of the intraparietal sulcus has been implicated in a diverse range of sensorimotor and cognitive functions such as motion processing, multisensory integration, processing of head peripersonal space, defensive behavior, and numerosity coding. Here, we exhaustively review macaque VIP function, cytoarchitectonics, and anatomical connectivity and integrate it with human studies that have attempted to identify a potential human VIP homologue. We show that human VIP research has consistently identified three, rather than one, bilateral parietal areas that each appear to subsume some, but not all, of the macaque area's functionality. Available evidence suggests that this human "VIP complex" has evolved as an expansion of the macaque area, but that some precursory specialization within macaque VIP has been previously overlooked. The three human areas are dominated, roughly, by coding the head or self in the environment, visual heading direction, and the peripersonal environment around the head, respectively. A unifying functional principle may be best described as prediction in space and time, linking VIP to state estimation as a key parietal sensorimotor function. VIP's expansive differentiation of head and self-related processing may have been key in the emergence of human bodily self-consciousness.
Affiliation(s)
- Celia Foster
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Wei-An Sheng
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
- Tobias Heed
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany; Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
|
28
|
Orban GA, Sepe A, Bonini L. Parietal maps of visual signals for bodily action planning. Brain Struct Funct 2021; 226:2967-2988. [PMID: 34508272 PMCID: PMC8541987 DOI: 10.1007/s00429-021-02378-6]
Abstract
The posterior parietal cortex (PPC) has long been understood as a high-level integrative station for computing motor commands for the body based on sensory (i.e., mostly tactile and visual) input from the outside world. In the last decade, accumulating evidence has shown that the parietal areas not only extract the pragmatic features of manipulable objects, but also subserve sensorimotor processing of others’ actions. A paradigmatic case is that of the anterior intraparietal area (AIP), which encodes the identity of observed manipulative actions that afford potential motor actions the observer could perform in response to them. On these bases, we propose an AIP manipulative action-based template of the general planning functions of the PPC and review existing evidence supporting the extension of this model to other PPC regions and to a wider set of actions: defensive and locomotor actions. In our model, a hallmark of PPC functioning is the processing of information about the physical and social world to encode potential bodily actions appropriate for the current context. We further extend the model to actions performed with man-made objects (e.g., tools) and artifacts, because they become integral parts of the subject’s body schema and motor repertoire. Finally, we conclude that existing evidence supports a generally conserved neural circuitry that transforms integrated sensory signals into the variety of bodily actions that primates are capable of preparing and performing to interact with their physical and social world.
Affiliation(s)
- Guy A Orban
- Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125 Parma, Italy
- Alessia Sepe
- Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125 Parma, Italy
- Luca Bonini
- Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125 Parma, Italy
|
29
|
Zhao B, Zhang Y, Chen A. Encoding of vestibular and optic flow cues to self-motion in the posterior superior temporal polysensory area. J Physiol 2021; 599:3937-3954. [PMID: 34192812 DOI: 10.1113/jp281913]
Abstract
KEY POINTS: Neurons in the posterior superior temporal polysensory area (STPp) showed significant directional selectivity in response to vestibular, optic flow and combined visual-vestibular stimuli. Compared with the dorsal medial superior temporal area (MSTd), visual latency was slower in STPp but vestibular latency was faster. Heading preferences under combined stimulation in STPp were usually dominated by visual signals. Cross-modal enhancement was observed in STPp when vestibular and visual cues were presented together at their preferred headings. ABSTRACT: Human neuroimaging data have suggested that the superior temporal polysensory area (STP) might be involved in vestibular-visual interaction during heading computations, but heading selectivity has not been examined in the macaque. Here, we investigated the convergence of optic flow and vestibular signals in macaque STP using a virtual-reality system and found that 6.3% of STP neurons showed multisensory responses, with visual and vestibular direction preferences either congruent or opposite in roughly equal proportion. The percentage of vestibular-tuned cells (18.3%) was much smaller than that of visual-tuned cells (30.4%) in STP, and vestibular tuning strength was usually weaker than visual. Visual latency was significantly slower in STPp than in MSTd, but vestibular latency was significantly faster. During the bimodal condition, STP cells' responses were dominated by visual signals: the visual heading preference was not affected by vestibular signals, but response amplitudes were modulated by vestibular signals in a subadditive way.
Affiliation(s)
- Bin Zhao
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Yi Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
|
30
|
Liu B, Tian Q, Gu Y. Robust vestibular self-motion signals in macaque posterior cingulate region. eLife 2021; 10:e64569. [PMID: 33827753 PMCID: PMC8032402 DOI: 10.7554/elife.64569]
Abstract
Self-motion signals, distributed ubiquitously across the parietal-temporal lobes, propagate to the limbic hippocampal system for vector-based navigation via hubs including the posterior cingulate cortex (PCC) and retrosplenial cortex (RSC). Although numerous studies have indicated that posterior cingulate areas are involved in spatial tasks, it is unclear how their neurons represent self-motion signals. Providing translation and rotation stimuli to macaques on a 6-degree-of-freedom motion platform, we discovered robust vestibular responses in PCC. A combined three-dimensional spatiotemporal model captured the data well and revealed multiple temporal components, including velocity, acceleration, jerk, and position. Compared with PCC, RSC contained moderate vestibular temporal modulations and lacked significant spatial tuning. Visual self-motion signals were much weaker than vestibular signals in both regions. We conclude that the macaque posterior cingulate region carries vestibular-dominant self-motion signals with plentiful temporal components that could be useful for path integration.
Affiliation(s)
- Bingyu Liu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Qingyang Tian
- CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
|
31
|
Rolls ET. Neurons including hippocampal spatial view cells, and navigation in primates including humans. Hippocampus 2021; 31:593-611. [PMID: 33760309 DOI: 10.1002/hipo.23324]
Abstract
A new theory is proposed of the mechanisms of navigation in primates, including humans, in which spatial view cells found in the primate hippocampus and parahippocampal gyrus are used to guide the individual from landmark to landmark. The navigation involves approach to each landmark in turn (taxis), using spatial view cells to identify the next landmark in the sequence, and does not require a topological map. Two other cell types found in primates, whole body motion cells and head direction cells, can be utilized in the spatial view cell navigational mechanism, but are not essential. If the landmarks become obscured, then the spatial view representations can be updated by self-motion (idiothetic) path integration, using spatial coordinate transform mechanisms in the primate dorsal visual system to transform from egocentric to allocentric spatial view coordinates. A continuous attractor network, time cells, or working memory is used in this approach to navigation to encode and recall the spatial view sequences involved. I also propose how navigation can be performed using a further type of neuron found in primates, allocentric-bearing-to-a-landmark neurons, in which changes of direction are made when a landmark reaches a particular allocentric bearing. This is useful if a landmark cannot be approached. The theories are made explicit in models of navigation, which are then illustrated by computer simulations. These types of navigation are contrasted with triangulation, which requires a topological map. It is proposed that the first strategy, utilizing spatial view cells, is used frequently in humans and is relatively simple, because primates have spatial view neurons that respond allocentrically to locations in spatial scenes. An advantage of this approach to navigation is that hippocampal spatial view neurons are also useful for episodic memory and for imagery.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
|
32
|
Raffi M, Trofè A, Perazzolo M, Meoni A, Piras A. Sensory Input Modulates Microsaccades during Heading Perception. Int J Environ Res Public Health 2021; 18:2865. [PMID: 33799672 PMCID: PMC8000400 DOI: 10.3390/ijerph18062865]
Abstract
Microsaccades are small eye movements produced during attempted fixation. During locomotion, the eyes scan the environment; the gaze is not always directed to the focus of expansion of the optic flow field. We sought to investigate whether microsaccadic activity is modulated by eye position during the viewing of radial optic flow stimuli, and whether the presence or absence of a proprioceptive input signal influences microsaccade characteristics during self-motion perception. We recorded oculomotor activity while subjects were either standing or sitting in front of a screen viewing optic flow stimuli that simulated specific heading directions with different gaze positions. We recorded five trials of each stimulus. Results showed that microsaccade duration, peak velocity, and rate were significantly modulated by the optic flow stimuli and trial sequence. The microsaccade rate increased in each condition from trial 1 to trial 5, and microsaccade peak velocity and duration were significantly different across trials. Analysis of microsaccade directions showed that the different combinations of optic flow and eye position evoked non-uniform microsaccade directions in the standing condition, with mean vectors in the upper-left quadrant of the visual field, uncorrelated with optic flow directions and eye positions. In the sitting condition, all stimuli evoked uniform microsaccade directions. These results indicate that the proprioceptive signals present when subjects stand create a different input that can alter eye-movement characteristics during heading perception.
Affiliation(s)
- Milena Raffi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Aurelio Trofè
- Department of Quality of Life, University of Bologna, 47921 Rimini, Italy
- Monica Perazzolo
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Andrea Meoni
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Alessandro Piras
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
|
33
|
The role of cognitive factors and personality traits in the perception of illusory self-motion (vection). Atten Percept Psychophys 2021; 83:1804-1817. [PMID: 33409903 PMCID: PMC8084801 DOI: 10.3758/s13414-020-02228-3]
Abstract
Vection is a perceptual phenomenon that describes the visually induced subjective sensation of self-motion in the absence of physical motion. Previous research has discussed the potential involvement of top-down cognitive mechanisms on vection. Here, we quantified how cognitive manipulations such as contextual information (i.e., expectation) and plausibility (i.e., chair configuration) alter vection. We also explored how individual traits such as field dependence, depersonalization, anxiety, and social desirability might be related to vection. Fifty-one healthy adults were exposed to an optic flow stimulus that consisted of horizontally moving black-and-white bars presented on three adjacent monitors to generate circular vection. Participants were divided into three groups and given experimental instructions designed to induce either strong, weak, or no expectation with regard to the intensity of vection. In addition, the configuration of the chair (rotatable or fixed) was modified during the experiment. Vection onset time, duration, and intensity were recorded. Results showed that expectation altered vection intensity, but only when the chair was in the rotatable configuration. Positive correlations for vection measures with field dependence and depersonalization, but no sex-related effects were found. Our results show that vection can be altered by cognitive factors and that individual traits can affect the perception of vection, suggesting that vection is not a purely perceptual phenomenon, but can also be affected by top-down mechanisms.
|
34
|
Lakshminarasimhan KJ, Avila E, Neyhart E, DeAngelis GC, Pitkow X, Angelaki DE. Tracking the Mind's Eye: Primate Gaze Behavior during Virtual Visuomotor Navigation Reflects Belief Dynamics. Neuron 2020; 106:662-674.e5. [PMID: 32171388 PMCID: PMC7323886 DOI: 10.1016/j.neuron.2020.02.023]
Abstract
To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behavior. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic navigation task. Humans and monkeys navigated to a remembered goal location in a virtual environment that provided optic flow but lacked explicit position cues. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. These results suggest that gaze dynamics play a key role in action selection during challenging visuomotor behaviors and may possibly serve as a window into the subject's dynamically evolving internal beliefs.
Affiliation(s)
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Eric Avila
- Center for Neural Science, New York University, New York, NY, USA
- Erin Neyhart
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
|
35
|
Rolls ET. Spatial coordinate transforms linking the allocentric hippocampal and egocentric parietal primate brain systems for memory, action in space, and navigation. Hippocampus 2019; 30:332-353. [PMID: 31697002 DOI: 10.1002/hipo.23171]
Abstract
A theory and model are described of spatial coordinate transforms in the dorsal visual system, through the parietal cortex, that enable an interface via the posterior cingulate and related retrosplenial cortex to allocentric spatial representations in the primate hippocampus. First, a new approach to coordinate transform learning in the brain is proposed, in which the traditional gain modulation is complemented by temporal trace rule competitive network learning. It is shown in a computational model that the new approach works much more precisely than gain modulation alone, by enabling neurons to represent the different combinations of signal and gain modulator more accurately. This understanding may have application to many brain areas where coordinate transforms are learned. Second, a set of coordinate transforms is proposed for the dorsal visual system/parietal areas that enables a representation to be formed in allocentric spatial view coordinates. The input stimulus is merely a stimulus at a given position in retinal space, and the gain modulation signals needed are eye position, head direction, and place, all of which are present in the primate brain. Neurons that encode the bearing to a landmark are involved in the coordinate transforms. Part of the importance here is that the coordinates of the allocentric view produced in this model are the same as those of spatial view cells that respond to allocentric view recorded in the primate hippocampus and parahippocampal cortex. The result is that information from the dorsal visual system can be used to update the spatial input to the hippocampus in the appropriate allocentric coordinate frame, including providing for idiothetic update to allow for self-motion. It is further shown how hippocampal spatial view cells could be useful for the transform from hippocampal allocentric coordinates to egocentric coordinates useful for actions in space and for navigation.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
|
36
|
Dowsett J, Herrmann CS, Dieterich M, Taylor PCJ. Shift in lateralization during illusory self-motion: EEG responses to visual flicker at 10 Hz and frequency-specific modulation by tACS. Eur J Neurosci 2019; 51:1657-1675. [PMID: 31408562 DOI: 10.1111/ejn.14543]
Abstract
Self-motion perception is a key aspect of higher vestibular processing, suggested to rely upon hemispheric lateralization and alpha-band oscillations. The first aim of this study was to test for any lateralization in the EEG alpha band during the illusory sense of self-movement (vection) induced by large optic flow stimuli. Visual stimuli flickered at alpha frequency (approx. 10 Hz) in order to produce steady state visually evoked potentials (SSVEPs), a robust EEG measure which allows probing the frequency-specific response of the cortex. The first main result was that differential lateralization of the alpha SSVEP response was found during vection compared with a matched random motion control condition, supporting the idea of lateralization of visual-vestibular function. Additionally, this effect was frequency-specific, not evident with lower frequency SSVEPs. The second aim of this study was to test for a causal role of the right hemisphere in producing this lateralization effect and to explore the possibility of selectively modulating the SSVEP response. Transcranial alternating current stimulation (tACS) was applied over the right hemisphere simultaneously with SSVEP recording, using a novel artefact removal strategy for combined tACS-EEG. The second main result was that tACS enhanced SSVEP amplitudes, and the effect of tACS was not confined to the right hemisphere. Subsequent control experiments showed the effect of tACS requires the flicker frequency and tACS frequency to be closely matched and tACS to be of sufficient intensity. Combined tACS-SSVEPs are a promising method for future investigation into the role of neural oscillations and for optimizing tACS.
Affiliation(s)
- James Dowsett
- Department of Neurology, University Hospital, LMU Munich, Munich, Germany; German Center for Vertigo and Balance Disorders, University Hospital, LMU Munich, Munich, Germany
- Christoph S Herrmann
- Experimental Psychology Lab, Center for Excellence "Hearing4all", European Medical School, University of Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Marianne Dieterich
- Department of Neurology, University Hospital, LMU Munich, Munich, Germany; German Center for Vertigo and Balance Disorders, University Hospital, LMU Munich, Munich, Germany; Graduate School of Systemic Neurosciences, LMU Munich, Munich, Germany; SyNergy - Munich Cluster for Systems Neurology, Munich, Germany
- Paul C J Taylor
- Department of Neurology, University Hospital, LMU Munich, Munich, Germany; German Center for Vertigo and Balance Disorders, University Hospital, LMU Munich, Munich, Germany; Graduate School of Systemic Neurosciences, LMU Munich, Munich, Germany
|