1
Uchimura M, Kumano H, Kitazawa S. Neural Transformation from Retinotopic to Background-Centric Coordinates in the Macaque Precuneus. J Neurosci 2024; 44:e0892242024. PMID: 39406517; PMCID: PMC11604138; DOI: 10.1523/jneurosci.0892-24.2024.
Abstract
Visual information is initially represented in retinotopic coordinates and later in craniotopic coordinates. Psychophysical evidence suggests that visual information is further represented in more general coordinates related to the external world; however, the neural basis of nonegocentric coordinates remains elusive. This study investigates the automatic transformation from egocentric to nonegocentric coordinates in the macaque precuneus (two males, one female), identified by a functional imaging study as a key area for nonegocentric representation. We found that 6.2% of neurons in the precuneus had receptive fields (RFs) anchored to the background rather than to the retina or the head, while 16% had traditional retinotopic RFs. Notably, these two types were not exclusive: many background-centric neurons initially encoded a stimulus's position in retinotopic coordinates (up to ∼90 ms from stimulus onset) but later shifted to background coordinates, peaking at ∼150 ms. Within the retinotopic code, the stimulus dominated the initial period, whereas the background dominated the later period. In the absence of a background, there was a dramatic surge in retinotopic information about the stimulus during the later phase, clearly delineating two distinct periods of retinotopic encoding: one focusing on the figure to be attended and another on the background. These findings suggest that the initial retinotopic information about the stimulus is combined with retinotopic information about the background at a subsequent stage, yielding a more stable representation of the stimulus relative to the background through time-division multiplexing.
Affiliation(s)
- Motoaki Uchimura
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan
- Hironori Kumano
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan
- Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi 409-3898, Japan
- Shigeru Kitazawa
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan
- Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
2
Peltier NE, Anzai A, Moreno-Bote R, DeAngelis GC. A neural mechanism for optic flow parsing in macaque visual cortex. Curr Biol 2024; 34:4983-4997.e9. PMID: 39389059; PMCID: PMC11537840; DOI: 10.1016/j.cub.2024.09.030.
Abstract
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
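The flow-parsing computation described in this abstract can be caricatured numerically: subtract, from an object's retinal motion, the local optic-flow vector that self-motion predicts at the object's retinal position. The sketch below is a minimal illustration, not the authors' model; the radial flow field, gain, and stimulus values are invented for the example.

```python
import numpy as np

# Schematic flow parsing: during forward self-motion, optic flow expands
# radially from a focus of expansion (FOE). Scene-relative object motion
# is estimated by subtracting the local self-motion flow vector from the
# object's retinal motion. All values here are toy numbers.
foe = np.array([0.0, 0.0])       # focus of expansion (deg)
expansion_rate = 0.5             # flow gain (1/s), assumed for illustration

def self_motion_flow(pos):
    """Radial flow vector predicted by self-motion at retinal position pos."""
    return expansion_rate * (pos - foe)

obj_pos = np.array([8.0, 0.0])   # object 8 deg right of the FOE
retinal_v = np.array([4.0, 1.0]) # measured retinal velocity (deg/s)

# Parsed (scene-relative) motion: retinal motion minus the local flow.
world_v = retinal_v - self_motion_flow(obj_pos)
# world_v → [0., 1.]: the object is perceived as moving straight up in the scene.
```

A peripheral flow field biasing MT responses, as reported here, would correspond to a partial (rather than complete) subtraction of the `self_motion_flow` term.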
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Rubén Moreno-Bote
- Center for Brain and Cognition & Department of Engineering, Universitat Pompeu Fabra, Barcelona 08002, Spain; Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona 08002, Spain
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
3
Jurewicz K, Sleezer BJ, Mehta PS, Hayden BY, Ebitz RB. Irrational choices via a curvilinear representational geometry for value. Nat Commun 2024; 15:6424. PMID: 39080250; PMCID: PMC11289086; DOI: 10.1038/s41467-024-49568-4.
Abstract
We make decisions by comparing values, but it is not yet clear how value is represented in the brain. Many models assume, if only implicitly, that the representational geometry of value is linear. However, in part because of a historical focus on noisy single neurons rather than neuronal populations, this hypothesis has not been rigorously tested. Here, we examine the representational geometry of value in the ventromedial prefrontal cortex (vmPFC), a part of the brain linked to economic decision-making, in two male rhesus macaques. We find that values are encoded along a curved manifold in vmPFC. This curvilinear geometry predicts a specific pattern of irrational decision-making: decision-makers will make worse choices when an irrelevant decoy option is worse in value than when it is better. We observe this type of irrational choice in behavior. Together, these results suggest not only that the representational geometry of value is nonlinear but also that this nonlinearity could impose bounds on rational decision-making.
Affiliation(s)
- Katarzyna Jurewicz
- Department of Neurosciences, Faculté de médecine, and Centre interdisciplinaire de recherche sur le cerveau et l'apprentissage, Université de Montréal, Montréal, QC, Canada
- Department of Physiology, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC, Canada
- Brianna J Sleezer
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
- Priyanka S Mehta
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
- Psychology Program, Department of Human Behavior, Justice, and Diversity, University of Wisconsin, Superior, Superior, WI, USA
- Benjamin Y Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- R Becket Ebitz
- Department of Neurosciences, Faculté de médecine, and Centre interdisciplinaire de recherche sur le cerveau et l'apprentissage, Université de Montréal, Montréal, QC, Canada
4
Nakayama R, Tanaka M, Kishi Y, Murakami I. Aftereffect of perceived motion trajectories. iScience 2024; 27:109626. PMID: 38623326; PMCID: PMC11016753; DOI: 10.1016/j.isci.2024.109626.
Abstract
If our visual system has a distinct computational process for motion trajectories, such a process may minimize redundancy and emphasize variation in object trajectories by adapting to the current statistics. Our experiments show that after adaptation to multiple objects traveling along trajectories with a common tilt, the trajectory of an object was perceived as tilting on the repulsive side. This trajectory aftereffect occurred irrespective of whether the tilt of the adapting stimulus was physical or an illusion from motion-induced position shifts and did not differ in size across the physical and illusory conditions. Moreover, when the perceived and physical tilts competed during adaptation, the trajectory aftereffect depended on the perceived tilt. The trajectory aftereffect transferred between hemifields and was not explained by motion-insensitive orientation adaptation or attention. These findings provide evidence for a trajectory-specific adaptable process that depends on higher-order representations after the integration of position and motion signals.
Affiliation(s)
- Ryohei Nakayama
- Department of Psychology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku 113-0033, Tokyo, Japan
- Mai Tanaka
- Department of Psychology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku 113-0033, Tokyo, Japan
- Yukino Kishi
- Department of Psychology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku 113-0033, Tokyo, Japan
- Ikuya Murakami
- Department of Psychology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku 113-0033, Tokyo, Japan
5
Sun Q, Zhan LZ, You FH, Dong XF. Attention affects the perception of self-motion direction from optic flow. iScience 2024; 27:109373. PMID: 38500831; PMCID: PMC10946324; DOI: 10.1016/j.isci.2024.109373.
Abstract
Many studies have demonstrated that attention affects the perception of many visual features. However, previous studies show conflicting results regarding the effect of attention on the perception of self-motion direction (i.e., heading) from optic flow. To address this question, we conducted three behavioral experiments and found that estimation accuracy for large headings (>14°) decreased with attentional load, that discrimination thresholds for these headings increased with attentional load, and that heading estimates were systematically compressed toward the focus of attention. The current study therefore demonstrates that attention affects heading perception from optic flow, showing that this perception is both information-driven and cognitive.
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, P.R. China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, P.R. China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Fan-Huan You
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Xiao-Fei Dong
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
6
Taghizadeh B, Fortmann O, Gail A. Position- and scale-invariant object-centered spatial localization in monkey frontoparietal cortex dynamically adapts to cognitive demand. Nat Commun 2024; 15:3357. PMID: 38637493; PMCID: PMC11026390; DOI: 10.1038/s41467-024-47554-4.
Abstract
Egocentric encoding is a well-known property of brain areas along the dorsal pathway. In contrast to previous experiments, which typically demanded egocentric spatial processing only during movement preparation, we designed a task in which two male rhesus monkeys memorized an on-the-object target position and then planned a reach to this position after the object re-occurred at a variable location and with a potentially different size. We found allocentric (in addition to egocentric) encoding in the dorsal-stream reach planning areas, the parietal reach region and dorsal premotor cortex, that was invariant with respect to the position and, remarkably, also the size of the object. The dynamic adjustment from predominantly allocentric encoding during visual memory to predominantly egocentric encoding during reach planning in the same brain areas, and often the same neurons, suggests that the prevailing frame of reference is less a question of brain area or processing stream than of cognitive demands.
Affiliation(s)
- Bahareh Taghizadeh
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran
- Ole Fortmann
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany
- Alexander Gail
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Leibniz ScienceCampus Primate Cognition, Göttingen, Germany
7
Thompson LW, Kim B, Rokers B, Rosenberg A. Hierarchical computation of 3D motion across macaque areas MT and FST. Cell Rep 2023; 42:113524. PMID: 38064337; PMCID: PMC10791528; DOI: 10.1016/j.celrep.2023.113524.
Abstract
Computing behaviorally relevant representations of three-dimensional (3D) motion from two-dimensional (2D) retinal signals is critical for survival. To ascertain where and how the primate visual system performs this computation, we recorded from the macaque middle temporal (MT) area and its downstream target, the fundus of the superior temporal sulcus (area FST). Area MT is a key site of 2D motion processing, but its role in 3D motion processing is controversial. The functions of FST remain highly underexplored. To distinguish representations of 3D motion from those of 2D retinal motion, we contrast responses to multiple motion cues during a motion discrimination task. The results reveal a hierarchical transformation whereby many FST but not MT neurons are selective for 3D motion. Modeling results further show how generalized, cue-invariant representations of 3D motion in FST may be created by selectively integrating the output of 2D motion selective MT neurons.
Affiliation(s)
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
- Bas Rokers
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
8
Bufacchi RJ, Battaglia-Mayer A, Iannetti GD, Caminiti R. Cortico-spinal modularity in the parieto-frontal system: A new perspective on action control. Prog Neurobiol 2023; 231:102537. PMID: 37832714; DOI: 10.1016/j.pneurobio.2023.102537.
Abstract
Classical neurophysiology suggests that the motor cortex (MI) has a unique role in action control. In contrast, this review presents evidence for multiple parieto-frontal spinal command modules that can bypass MI. Five observations support this modular perspective: (i) the statistics of cortical connectivity demonstrate functionally related clusters of cortical areas, defining functional modules in the premotor, cingulate, and parietal cortices; (ii) different corticospinal pathways originate from the above areas, each with a distinct range of conduction velocities; (iii) the activation time of each module varies depending on the task, and different modules can be activated simultaneously; (iv) a modular architecture with direct motor output is faster and less metabolically expensive than an architecture that relies on MI, given the slow connections between MI and other cortical areas; (v) lesions of the areas composing parieto-frontal modules have different effects from lesions of MI. Here we provide examples of six cortico-spinal modules and the functions they subserve: module 1) arm reaching, tool use and object construction; module 2) spatial navigation and locomotion; module 3) grasping and observation of hand and mouth actions; module 4) action initiation, motor sequences, time encoding; module 5) conditional motor association and learning, action plan switching and action inhibition; module 6) planning defensive actions. These modules can serve as a library of tools to be recombined when faced with novel tasks, and MI might serve as a recombinatory hub. In conclusion, the availability of locally stored information and multiple outflow paths supports the physiological plausibility of the proposed modular perspective.
Affiliation(s)
- R J Bufacchi
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy; International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai, China
- A Battaglia-Mayer
- Department of Physiology and Pharmacology, University of Rome, Sapienza, Italy
- G D Iannetti
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy; Department of Neuroscience, Physiology and Pharmacology, University College London (UCL), London, UK
- R Caminiti
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy
9
Grzeczkowski L, Shi Z, Rolfs M, Deubel H. Perceptual learning across saccades: Feature but not location specific. Proc Natl Acad Sci U S A 2023; 120:e2303763120. PMID: 37844238; PMCID: PMC10614914; DOI: 10.1073/pnas.2303763120.
Abstract
Perceptual learning is the ability to enhance perception through practice. The hallmark of perceptual learning is its specificity for the trained location and stimulus features, such as orientation. For example, training in discriminating a grating's orientation improves performance only at the trained location, not at other untrained locations. Perceptual learning has mostly been studied using stimuli presented briefly while observers maintained gaze at one location. In everyday life, however, stimuli are actively explored through eye movements, which results in successive projections of the same stimulus at different retinal locations. Here, we studied perceptual learning of orientation discrimination across saccades. Observers were trained to saccade to a peripheral grating and to discriminate an orientation change that occurred during the saccade. The results showed that training led to transsaccadic perceptual learning (TPL) and performance improvements that did not generalize to an untrained orientation. Remarkably, however, for the trained orientation we found a complete transfer of TPL to the untrained location in the opposite hemifield, suggesting highly flexible reference-frame encoding in TPL. Three control experiments in which participants were trained without saccades showed no such transfer, confirming that the location transfer was contingent upon eye movements. Moreover, performance at the trained location, but not at the untrained location, was also improved in an untrained fixation task. Our results suggest that TPL has both a location-specific component that arises before the eye movement and a saccade-related component that involves location generalization.
Affiliation(s)
- Lukasz Grzeczkowski
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, Munich 80802, Germany
- Department Psychologie, Humboldt-Universität zu Berlin, Berlin 12489, Germany
- Zhuanghua Shi
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, Munich 80802, Germany
- Martin Rolfs
- Department Psychologie, Humboldt-Universität zu Berlin, Berlin 12489, Germany
- Heiner Deubel
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, Munich 80802, Germany
10
Rosenberg A, Thompson LW, Doudlah R, Chang TY. Neuronal Representations Supporting Three-Dimensional Vision in Nonhuman Primates. Annu Rev Vis Sci 2023; 9:337-359. PMID: 36944312; DOI: 10.1146/annurev-vision-111022-123857.
Abstract
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision.
Affiliation(s)
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
11
Gao W, Shen J, Lin Y, Wang K, Lin Z, Tang H, Chen X. Sequential sparse autoencoder for dynamic heading representation in ventral intraparietal area. Comput Biol Med 2023; 163:107114. PMID: 37329620; DOI: 10.1016/j.compbiomed.2023.107114.
Abstract
To navigate in space, it is important to predict headings in real time from neural responses to vestibular and visual signals, and the ventral intraparietal area (VIP) is one of the critical brain areas. However, how heading perception is represented in VIP at the population level remains unexplored, and no commonly used method is well suited to decoding headings from VIP population responses, given the large spatiotemporal dynamics and heterogeneity of the neural activity. Here, responses were recorded from 210 VIP neurons in three rhesus monkeys performing a heading perception task. By modelling the spatial and temporal dynamics separately with sparse representations, we built a sequential sparse autoencoder (SSAE) to decode headings from the recorded population activity and sought to maximize decoding performance. The SSAE relies on a three-layer sparse autoencoder to extract temporal and spatial heading features from the dataset via unsupervised learning, and on a softmax classifier to decode the headings. Compared with other population decoding methods, the SSAE achieves a leading accuracy of 96.8% ± 2.1% and offers robustness and low storage and computational cost for real-time prediction. The SSAE model thus performs well in learning neurobiologically plausible features that carry dynamic navigational information.
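The decoding pipeline this abstract describes, unsupervised sparse feature extraction followed by a softmax readout, can be sketched in miniature. The code below is a schematic NumPy illustration, not the authors' SSAE: it uses a single-hidden-layer autoencoder with an L1 activity penalty, synthetic data standing in for the trial-by-neuron response matrix, and assumed layer sizes and learning rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a trials x neurons matrix of population responses,
# with heading classes shifting the mean response (toy data, not the VIP set).
n_trials, n_neurons, n_hidden, n_headings = 200, 50, 16, 5
labels = rng.integers(0, n_headings, n_trials)
X = rng.normal(size=(n_trials, n_neurons)) + labels[:, None] * 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Sparse autoencoder: reconstruct X through a small hidden layer,
# with an L1 penalty pushing hidden activations toward sparsity.
W1 = rng.normal(scale=0.1, size=(n_neurons, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_neurons))
lr, l1 = 0.01, 1e-3
for _ in range(500):
    H = sigmoid(X @ W1)              # hidden (sparse) feature activations
    err = H @ W2 - X                 # linear reconstruction error
    dW2 = H.T @ err / n_trials
    dH = err @ W2.T + l1 * np.sign(H)
    dW1 = X.T @ (dH * H * (1 - H)) / n_trials
    W1 -= lr * dW1
    W2 -= lr * dW2

# --- Softmax classifier decodes the heading class from hidden features.
H = sigmoid(X @ W1)
Wc = np.zeros((n_hidden, n_headings))
Y = np.eye(n_headings)[labels]
for _ in range(2000):
    P = np.exp(H @ Wc)
    P /= P.sum(axis=1, keepdims=True)
    Wc -= 0.1 * H.T @ (P - Y) / n_trials

accuracy = (np.argmax(H @ Wc, axis=1) == labels).mean()
```

The actual SSAE additionally models temporal structure with a separate sparse stage; this sketch only shows the autoencoder-plus-softmax skeleton.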
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
- Kejun Wang
- School of Software Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou, 310009, China
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
12
Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023. PMID: 36734278; DOI: 10.1093/cercor/bhac541.
Abstract
Gaze changes can misalign the spatial reference frames encoding visual and vestibular signals in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, we tested heading discrimination with visual, vestibular, and combined stimuli in a reaction-time task in which the reaction time was under the subjects' control. We found that gaze changes induced substantial biases in perceived heading and increased subjects' discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and perceived heading was biased in the direction opposite to the gaze shift. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and perceived heading was biased in the same direction as the gaze shift. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that the transformation of spatial reference frames may underlie these effects.
Collapse
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Jianing Han
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song
- Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Yukun Lu
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Huijia Zhan
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Qianbing Li
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Haoting Ge
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi
- Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
13
He D, Öğmen H. A neural model for vector decomposition and relative-motion perception. Vision Res 2023; 202:108142. [PMID: 36423519] [DOI: 10.1016/j.visres.2022.108142]
Abstract
The perception of motion not only depends on the detection of motion signals but also on choosing and applying reference-frames according to which motion is interpreted. Here we propose a neural model that implements the common-fate principle for reference-frame selection. The model starts with a retinotopic layer of directionally-tuned motion detectors. The Gestalt common-fate principle is applied to the activities of these detectors to implement in two neural populations the direction and the magnitude (speed) of the reference-frame. The output activities of retinotopic motion-detectors are decomposed using the direction of the reference-frame. The direction and magnitude of the reference-frame are then applied to these decomposed motion-vectors to generate activities that reflect relative-motion perception, i.e., the perception of motion with respect to the prevailing reference-frame. We simulated this model for classical relative motion stimuli, viz., the three-dot, rotating-wheel, and point-walker (biological motion) paradigms and found the model performance to be close to theoretical vector decomposition values. In the three-dot paradigm, the model made the prediction of perceived curved-trajectories for the target dot when its horizontal velocity was slower or faster than the flanking dots. We tested this prediction in two psychophysical experiments and found a good qualitative and quantitative agreement between the model and the data. Our results show that a simple neural network using solely motion information can account for the perception of group and relative motion.
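The decomposition step at the core of this model, removing the reference-frame motion extracted via common fate from each element's retinal motion, can be sketched numerically. This is a toy illustration, not the authors' neural implementation: common-fate extraction is reduced here to averaging the flanking dots' velocities, and all velocity values are made up.

```python
import numpy as np

# Retinal velocities (vx, vy) in the classic three-dot paradigm:
# two flanking dots translate horizontally; the target also moves vertically.
velocities = np.array([
    [2.0, 0.0],  # upper flanking dot
    [2.0, 0.0],  # lower flanking dot
    [2.0, 1.0],  # target dot
])

# Reference frame via common fate: here simply the motion shared by the
# flanking dots (the model derives it from directionally tuned populations).
reference = velocities[:2].mean(axis=0)

# Relative motion: retinal motion with the reference-frame motion removed.
relative = velocities - reference

print(relative[2])  # target relative to the frame: [0. 1.], purely vertical
```

With the common horizontal component removed, the target's motion relative to the frame is purely vertical, matching the classical percept of the target oscillating while the group translates.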
Affiliation(s)
- Dongcheng He
- Laboratory of Perceptual and Cognitive Dynamics, University of Denver, Denver, CO, USA; Department of Electrical & Computer Engineering, University of Denver, Denver, CO, USA; Ritchie School of Engineering & Computer Science, University of Denver, Denver, CO, USA
- Haluk Öğmen
- Laboratory of Perceptual and Cognitive Dynamics, University of Denver, Denver, CO, USA; Department of Electrical & Computer Engineering, University of Denver, Denver, CO, USA; Ritchie School of Engineering & Computer Science, University of Denver, Denver, CO, USA.
14
Falconbridge M, Hewitt K, Haille J, Badcock DR, Edwards M. The induced motion effect is a high-level visual phenomenon: Psychophysical evidence. Iperception 2022; 13:20416695221118111. [PMID: 36092511] [PMCID: PMC9459461] [DOI: 10.1177/20416695221118111]
Abstract
Induced motion is the illusory motion of a target away from the direction of motion of the unattended background. If, as the flow-parsing hypothesis suggests, it results from assigning background motion to self-motion and judging target motion relative to the scene, then the effect must be mediated at higher levels of the visual motion pathway, where self-motion is assessed. We provide evidence for a high-level mechanism in two broad ways. First, we show that the effect is insensitive to a set of low-level spatial aspects of the scene, namely, the spatial arrangement, the spatial frequency content, and the orientation content of the background relative to the target. Second, we show that the effect is the same whether the target and background are composed of the same kind of local elements, one-dimensional (1D) or two-dimensional (2D), or of different kinds. The latter finding is significant because 1D and 2D local elements are integrated by two different mechanisms, so the induced motion effect is likely to be mediated in a visual motion processing area that follows the two separate integration mechanisms. The medial superior temporal area in monkeys, and its equivalent in humans, is suggested as a viable site. We present a simple flow-parsing-inspired model and demonstrate a good fit to our data and to data from a previous induced motion study.
15
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. [PMID: 35642599] [PMCID: PMC9159750] [DOI: 10.7554/elife.74971]
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
16
Alexander AS, Tung JC, Chapman GW, Conner AM, Shelley LE, Hasselmo ME, Nitz DA. Adaptive integration of self-motion and goals in posterior parietal cortex. Cell Rep 2022; 38:110504. [PMID: 35263604] [PMCID: PMC9026715] [DOI: 10.1016/j.celrep.2022.110504]
Abstract
Rats readily switch between foraging and more complex navigational behaviors such as pursuit of other rats or prey. These tasks require vastly different tracking of multiple behaviorally significant variables including self-motion state. To explore whether navigational context modulates self-motion tracking, we examined self-motion tuning in posterior parietal cortex neurons during foraging versus visual target pursuit. Animals performing the pursuit task demonstrate predictive processing of target trajectories by anticipating and intercepting them. Relative to foraging, pursuit yields multiplicative gain modulation of self-motion tuning and enhances self-motion state decoding. Self-motion sensitivity in parietal cortex neurons is, on average, history dependent regardless of behavioral context, but the temporal window of self-motion integration extends during target pursuit. Finally, many self-motion-sensitive neurons conjunctively track the visual target position relative to the animal. Thus, posterior parietal cortex functions to integrate the location of navigationally relevant target stimuli into an ongoing representation of past, present, and future locomotor trajectories.
Affiliation(s)
- Andrew S Alexander
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA; Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA.
- Janet C Tung
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
- G William Chapman
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA
- Allison M Conner
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
- Laura E Shelley
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
- Michael E Hasselmo
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA
- Douglas A Nitz
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA.
17
Fujimoto K, Ashida H. Postural adjustment as a function of scene orientation. J Vis 2022; 22:1. [PMID: 35234839] [PMCID: PMC8899856] [DOI: 10.1167/jov.22.4.1]
Abstract
Visual orientation plays an important role in postural control, but the specific characteristics of the postural response to orientation remain unknown. In this study, we investigated the relationship between postural response and the subjective visual vertical (SVV) as a function of scene orientation. We presented a virtual room including everyday objects through a head-mounted display and measured head tilt around the naso-occipital axis. The room orientation varied from 165° counterclockwise to 180° clockwise around the center of the display in 15° increments. In a separate session, we also conducted a rod adjustment task to record the participant's SVV in the tilted room. We applied a weighted vector sum model to head tilt and SVV error and obtained the weight of three visual cues to orientation: frame, horizon, and polarity. We found significant contributions of all visual cues to head tilt and SVV error. For SVV error, frame cues made the largest contribution, whereas polarity cues made the smallest. For head tilt, there was no clear difference across visual cue types, although the order of contributions was similar to that for the SVV. These findings suggest that multiple visual cues to orientation are involved in postural control and imply different representations of vertical orientation across postural control and perception.
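The weighted-vector-sum idea can be sketched as follows: each cue votes for a tilt with a unit vector at its signaled angle, the votes are scaled by cue weights, and the estimate is the direction of the summed vector. This is a schematic sketch under assumed angles and weights, not the fitting procedure or parameter values from the study:

```python
import numpy as np

def weighted_vector_sum(angles_deg, weights):
    """Angle (deg) of the weighted sum of unit vectors at the given angles."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    w = np.asarray(weights, dtype=float)
    x = np.sum(w * np.cos(a))
    y = np.sum(w * np.sin(a))
    return np.rad2deg(np.arctan2(y, x))

# Hypothetical case: the room is tilted 30 deg, so frame, horizon, and
# polarity cues all signal 30 deg, while gravity-based cues signal upright.
cue_angles = [30.0, 30.0, 30.0, 0.0]   # frame, horizon, polarity, gravity
cue_weights = [0.4, 0.2, 0.1, 0.3]     # assumed relative weights

estimate = weighted_vector_sum(cue_angles, cue_weights)
# The estimate falls between upright (0 deg) and the room tilt (30 deg),
# pulled toward the room in proportion to the visual-cue weights.
```

Fitting the weights to measured head tilt versus measured SVV error, as the study does, then reveals how strongly each cue type drives each response.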
Affiliation(s)
- Kanon Fujimoto
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Hiroshi Ashida
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan
18
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated-a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 11201, USA
19
Zaidel A, Laurens J, DeAngelis GC, Angelaki DE. Supervised Multisensory Calibration Signals Are Evident in VIP But Not MSTd. J Neurosci 2021; 41:10108-10119. [PMID: 34716232] [PMCID: PMC8660052] [DOI: 10.1523/jneurosci.0135-21.2021]
Abstract
Multisensory plasticity enables our senses to dynamically adapt to each other and the external environment, a fundamental operation that our brain performs continuously. We searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) in 2 male rhesus macaques using a paradigm of supervised calibration. We report little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. In contrast, neural correlates of plasticity are found in higher-level multisensory VIP, an area with strong decision-related activity. Accordingly, we observed systematic shifts of VIP tuning curves, which were reflected in the choice-related component of the population response. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions. These results lay the foundation for understanding multisensory neural plasticity, applicable broadly to maintaining accuracy for sensorimotor tasks.
SIGNIFICANCE STATEMENT: Multisensory plasticity is a fundamental and continual function of the brain that enables our senses to adapt dynamically to each other and to the external environment. Yet, very little is known about the neuronal mechanisms of multisensory plasticity. In this study, we searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) using a paradigm of supervised calibration. We found little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. By contrast, neural correlates of plasticity were found in VIP, a higher-level multisensory area with strong decision-related activity. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions.
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Jean Laurens
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt 60528, Germany
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Dora E Angelaki
- Center for Neural Science and Tandon School of Engineering, New York University, New York, New York 10003
20
Thompson LW, Kim B, Zhu Z, Rokers B, Rosenberg A. Perspective Cues Make Eye-specific Contributions to 3-D Motion Perception. J Cogn Neurosci 2021; 34:192-208. [PMID: 34813655] [PMCID: PMC8692976] [DOI: 10.1162/jocn_a_01781]
Abstract
Robust 3-D visual perception is achieved by integrating stereoscopic and perspective cues. The canonical model describing the integration of these cues assumes that perspective signals sensed by the left and right eyes are indiscriminately pooled into a single representation that contributes to perception. Here, we show that this model fails to account for 3-D motion perception. We measured the sensitivity of male macaque monkeys to 3-D motion signaled by left-eye perspective cues, right-eye perspective cues, stereoscopic cues, and all three cues combined. The monkeys exhibited idiosyncratic differences in their biases and sensitivities for each cue, including left- and right-eye perspective cues, suggesting that the signals undergo at least partially separate neural processing. Importantly, sensitivity to combined cue stimuli was greater than predicted by the canonical model, which previous studies found to account for the perception of 3-D orientation in both humans and monkeys. Instead, 3-D motion sensitivity was best explained by a model in which stereoscopic cues were integrated with left- and right-eye perspective cues whose representations were at least partially independent. These results indicate that the integration of perspective and stereoscopic cues is a shared computational strategy across 3-D processing domains. However, they also reveal a fundamental difference in how left- and right-eye perspective signals are represented for 3-D orientation versus motion perception. This difference results in more effective use of available sensory information in the processing of 3-D motion than orientation and may reflect the temporal urgency of avoiding and intercepting moving objects.
21
Candy TR, Cormack LK. Recent understanding of binocular vision in the natural environment with clinical implications. Prog Retin Eye Res 2021; 88:101014. [PMID: 34624515] [PMCID: PMC8983798] [DOI: 10.1016/j.preteyeres.2021.101014]
Abstract
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Affiliation(s)
- T Rowan Candy
- School of Optometry, Programs in Vision Science, Neuroscience and Cognitive Science, Indiana University, 800 East Atwater Avenue, Bloomington, IN, 47405, USA.
- Lawrence K Cormack
- Department of Psychology, Institute for Neuroscience, and Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, 78712, USA.
22
Abekawa N, Gomi H, Diedrichsen J. Gaze control during reaching is flexibly modulated to optimize task outcome. J Neurophysiol 2021; 126:816-826. [PMID: 34320845] [DOI: 10.1152/jn.00134.2021]
Abstract
When reaching for an object with the hand, the gaze is usually directed at the target. In a laboratory setting, fixation is strongly maintained at the reach target until the reaching is completed, a phenomenon known as "gaze anchoring." While conventional accounts of such tight eye-hand coordination have often emphasized the internal synergetic linkage between both motor systems, more recent optimal control theories regard motor coordination as the adaptive solution to task requirements. We here investigated to what degree gaze control during reaching is modulated by task demands. We adopted a gaze-anchoring paradigm in which participants had to reach for a target location. During the reach, they additionally had to make a saccadic eye movement to a salient visual cue presented at locations other than the target. We manipulated the task demands by independently changing reward contingencies for saccade reaction time (RT) and reaching accuracy. On average, both saccade RTs and reach error varied systematically with reward condition, with reach accuracy improving when the saccade was delayed. The distribution of the saccade RTs showed two types of eye movements: fast saccades with short RTs, and voluntary saccades with longer RTs. Increased reward for high reach accuracy reduced the probability of fast saccades but left their latency unchanged. The results suggest that gaze anchoring acts through a suppression of fast saccades, a mechanism that can be adaptively adjusted to the current task demands.
NEW & NOTEWORTHY: During visually guided reaching, our eyes usually fixate the target and saccades elsewhere are delayed ("gaze anchoring"). We here show that the degree of gaze anchoring is flexibly modulated by the reward contingencies of saccade latency and reach accuracy. Reach error became larger when saccades occurred earlier. These results suggest that early saccades are costly for reaching and the brain modulates inhibitory online coordination from the hand to the eye system depending on task requirements.
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
- Jörn Diedrichsen
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
23
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626] [PMCID: PMC7688306] [DOI: 10.1523/eneuro.0259-20.2020]
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
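The confound this study probes can be reproduced with a toy flow field: under pure translation, flow vectors radiate from the focus of expansion (FOE), but adding a rotational flow component shifts the singularity away from the true heading. A minimal sketch under simplified assumptions (constant depth, small-angle eye rotation approximated as uniform image motion; not the stimuli or analysis used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(200, 2))  # image positions of dots
true_foe = np.array([0.2, 0.1])                 # image location of the heading
k = 1.0                                         # flow gain (depth-dependent in general)

flow_translation = k * (points - true_foe)      # radial flow from self-translation
flow_rotation = np.array([0.3, 0.0])            # eye rotation ~ uniform flow (small angles)
flow = flow_translation + flow_rotation

# Recover the flow singularity q from flow = k * (points - q):
# q = mean(points) - mean(flow) / k
q_pure = points.mean(axis=0) - flow_translation.mean(axis=0) / k
q_rot = points.mean(axis=0) - flow.mean(axis=0) / k

# q_pure recovers the true FOE; q_rot is displaced by -rotation/k, so a
# heading estimate that trusts the singularity is biased unless the
# rotational component is estimated (visually or extraretinally) and removed.
```

In this simplified setting the bias is exactly rotation/k; with real depth structure the rotational component is not uniform, which is where dynamic perspective and motion parallax cues become informative.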