1
Zhu Z, Kim B, Doudlah R, Chang TY, Rosenberg A. Differential clustering of visual and choice- and saccade-related activity in macaque V3A and CIP. J Neurophysiol 2024; 131:709-722. PMID: 38478896; PMCID: PMC11305645; DOI: 10.1152/jn.00285.2023.
Abstract
Neurons in sensory and motor cortices tend to aggregate in clusters with similar functional properties. Within the primate dorsal ("where") pathway, an important interface between three-dimensional (3-D) visual processing and motor-related functions consists of two hierarchically organized areas: V3A and the caudal intraparietal (CIP) area. In these areas, 3-D visual information, choice-related activity, and saccade-related activity converge, often at the single-neuron level. Characterizing the clustering of functional properties in areas with mixed selectivity, such as these, may help reveal organizational principles that support sensorimotor transformations. Here we quantified the clustering of visual feature selectivity, choice-related activity, and saccade-related activity by performing correlational and parametric comparisons of the responses of well-isolated, simultaneously recorded neurons in macaque monkeys. Each functional domain showed statistically significant clustering in both areas. However, there were also domain-specific differences in the strength of clustering across the areas. Visual feature selectivity and saccade-related activity were more strongly clustered in V3A than in CIP. In contrast, choice-related activity was more strongly clustered in CIP than in V3A. These differences in clustering may reflect the areas' roles in sensorimotor processing. Stronger clustering of visual and saccade-related activity in V3A may reflect a greater role in within-domain processing, as opposed to cross-domain synthesis. In contrast, stronger clustering of choice-related activity in CIP may reflect a greater role in synthesizing information across functional domains to bridge perception and action. NEW & NOTEWORTHY The occipital and parietal cortices of macaque monkeys are bridged by hierarchically organized areas V3A and CIP. These areas support 3-D visual transformations, carry choice-related activity during 3-D perceptual tasks, and possess saccade-related activity. This study quantifies the functional clustering of neuronal response properties within V3A and CIP for each of these domains. The findings reveal domain-specific cross-area differences in clustering that may reflect the areas' roles in sensorimotor processing.
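The correlational approach described in this abstract can be illustrated with a toy computation (a minimal sketch under assumed data shapes, not the authors' analysis pipeline; the Pearson-correlation measure and the simulated tuning curves are assumptions):

```python
import numpy as np

def pairwise_tuning_correlation(tuning):
    """Pearson correlation of tuning curves for every unique pair of
    simultaneously recorded neurons (rows = neurons, cols = stimulus conditions)."""
    r = np.corrcoef(tuning)            # neurons x neurons correlation matrix
    iu = np.triu_indices_from(r, k=1)  # indices of unique neuron pairs
    return r[iu]

# Toy data: 8 neurons x 24 stimulus conditions with a shared tuning component,
# mimicking functional clustering among simultaneously recorded neurons.
rng = np.random.default_rng(1)
shared = rng.standard_normal(24)
tuning = shared + 0.5 * rng.standard_normal((8, 24))
pair_r = pairwise_tuning_correlation(tuning)
print(pair_r.shape)       # (28,) -- 8*7/2 unique pairs
print(pair_r.mean() > 0)  # positive mean pairwise correlation indicates clustering
```

A permutation test on such pairwise correlations (e.g., shuffling neuron identities across recording sessions) would be one way to assess the statistical significance of clustering.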
Affiliation(s)
- Zikang Zhu
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
2
Talluri BC, Kang I, Lazere A, Quinn KR, Kaliss N, Yates JL, Butts DA, Nienborg H. Activity in primate visual cortex is minimally driven by spontaneous movements. Nat Neurosci 2023; 26:1953-1959. PMID: 37828227; PMCID: PMC10620084; DOI: 10.1038/s41593-023-01459-5.
Abstract
Organisms process sensory information in the context of their own moving bodies, an idea referred to as embodiment. This idea is important for developmental neuroscience, robotics and systems neuroscience. The mechanisms supporting embodiment are unknown, but a manifestation could be the observation in mice of brain-wide neuromodulation, including in the primary visual cortex, driven by task-irrelevant spontaneous body movements. We tested this hypothesis in macaque monkeys (Macaca mulatta), a primate model for human vision, by simultaneously recording visual cortex activity and facial and body movements. We also sought a direct comparison using an analogous approach to those used in mouse studies. Here we found that activity in the primate visual cortex (V1, V2 and V3/V3A) was associated with the animals' own movements, but this modulation was largely explained by the impact of the movements on the retinal image, that is, by changes in visual input. These results indicate that visual cortex in primates is minimally driven by spontaneous movements and may reflect species-specific sensorimotor strategies.
Affiliation(s)
- Bharath Chandra Talluri
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Incheol Kang
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Adam Lazere
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Katrina R Quinn
- Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Nicholas Kaliss
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Jacob L Yates
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA, USA
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Daniel A Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Hendrikje Nienborg
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
3
Rosenberg A, Thompson LW, Doudlah R, Chang TY. Neuronal Representations Supporting Three-Dimensional Vision in Nonhuman Primates. Annu Rev Vis Sci 2023; 9:337-359. PMID: 36944312; DOI: 10.1146/annurev-vision-111022-123857.
Abstract
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision.
Affiliation(s)
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
4
Baltaretu BR, Stevens WD, Freud E, Crawford JD. Occipital and parietal cortex participate in a cortical network for transsaccadic discrimination of object shape and orientation. Sci Rep 2023; 13:11628. PMID: 37468709; DOI: 10.1038/s41598-023-38554-3.
Abstract
Saccades change eye position and interrupt vision several times per second, necessitating neural mechanisms for continuous perception of object identity, orientation, and location. Neuroimaging studies suggest that occipital and parietal cortex play complementary roles for transsaccadic perception of intrinsic versus extrinsic spatial properties, e.g., dorsomedial occipital cortex (cuneus) is sensitive to changes in spatial frequency, whereas the supramarginal gyrus (SMG) is modulated by changes in object orientation. Based on this, we hypothesized that both structures would be recruited to simultaneously monitor object identity and orientation across saccades. To test this, we merged two previous neuroimaging protocols: 21 participants viewed a 2D object and then, after sustained fixation or a saccade, judged whether the shape or orientation of the re-presented object changed. We then performed a bilateral region-of-interest analysis on identified cuneus and SMG sites. As hypothesized, cuneus showed both saccade and feature (i.e., object orientation vs. shape change) modulations, and right SMG showed saccade-feature interactions. Further, the cuneus activity time course correlated with several other cortical saccade/visual areas, suggesting a 'functional network' for feature discrimination. These results confirm the involvement of occipital/parietal cortex in transsaccadic vision and support complementary roles in spatial versus identity updating.
Affiliation(s)
- B R Baltaretu
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada.
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada.
- Department of Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany.
- W Dale Stevens
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- E Freud
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- J D Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- School of Kinesiology and Health Sciences, York University, Toronto, ON, M3J 1P3, Canada
5
Lu X, Wang Q, Li X, Wang G, Chen Y, Li X, Li H. Connectivity reveals homology between the visual systems of the human and macaque brains. Front Neurosci 2023; 17:1207340. PMID: 37476839; PMCID: PMC10354265; DOI: 10.3389/fnins.2023.1207340.
Abstract
The visual systems of humans and nonhuman primates share many similarities in both anatomical and functional organization. Understanding the homology and differences between the two systems can provide important insights into the neural basis of visual perception and cognition. This research investigates the homology between the human and macaque visual systems based on connectivity, using diffusion tensor imaging and resting-state functional magnetic resonance imaging to construct structural and functional connectivity fingerprints of the visual systems in both species and to quantitatively analyze their connectivity patterns. By integrating multimodal magnetic resonance imaging, this research explored the homology and differences between the two systems. The results showed that 9 brain regions in the macaque visual system formed highly homologous mapping relationships with 11 brain regions in the human visual system; the corresponding regions showed high structural homology, and their functional organization was essentially conserved across species. Finally, this research generated a homology information map of the visual system for humans and macaques, providing a new perspective for subsequent cross-species analysis.
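The fingerprint comparison described here can be sketched as a similarity search over region-wise connectivity vectors (an illustrative sketch with assumed shapes and synthetic data, not the authors' pipeline; cosine similarity stands in for whatever similarity measure was actually used):

```python
import numpy as np

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_regions(fp_macaque, fp_human):
    """For each macaque region's connectivity fingerprint, find the most
    similar human region. Rows = regions, cols = shared connectivity targets."""
    matches = {}
    for i, fm in enumerate(fp_macaque):
        sims = [cosine_similarity(fm, fh) for fh in fp_human]
        matches[i] = int(np.argmax(sims))
    return matches

# Toy fingerprints: 9 macaque regions, 11 human regions, 20 shared targets.
# Homology is built in by construction for illustration.
rng = np.random.default_rng(2)
fp_human = rng.random((11, 20))
fp_macaque = fp_human[:9] + 0.05 * rng.random((9, 20))
matches = match_regions(fp_macaque, fp_human)
print(matches)  # each macaque region should map to its human counterpart
```

In practice the fingerprints would be rows of a region-by-target connectivity matrix derived from tractography or resting-state correlations, and the similarity measure would be chosen accordingly.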
Affiliation(s)
- Xia Lu
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Qianshan Wang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Xiaowen Li
- Shanxi Technology and Business College, Taiyuan, China
- Guolan Wang
- Shanxi Technology and Business College, Taiyuan, China
- Yifei Chen
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Xueqi Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Haifang Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Shanxi Technology and Business College, Taiyuan, China
6
Meng L, Ge K. Decoding Visual fMRI Stimuli from Human Brain Based on Graph Convolutional Neural Network. Brain Sci 2022; 12:1394. PMID: 36291327; PMCID: PMC9599823; DOI: 10.3390/brainsci12101394.
Abstract
Brain decoding predicts external stimulus information from recorded brain response activity, and visual input is one of the most important sources of external stimuli. Decoding functional magnetic resonance imaging (fMRI) responses to visual stimulation helps clarify the working mechanisms of the brain's visual functional regions. Traditional brain decoding algorithms cannot accurately extract stimulus features from fMRI. To address these shortcomings, this paper proposed a brain decoding algorithm based on a graph convolutional network (GCN). First, 11 regions of interest (ROIs) were selected according to the visual functional regions of the human brain, avoiding noise interference from non-visual regions; then, a deep three-dimensional convolutional neural network was designed to extract features from these 11 regions; next, the GCN was used to extract functional correlation features between the different visual regions. Furthermore, to avoid vanishing gradients when the graph convolutional network has too many layers, residual connections were adopted, which helped to integrate features across levels and improve the accuracy of the proposed GCN. The proposed algorithm was tested on a public dataset, reaching a recognition accuracy of 98.67% and outperforming the other state-of-the-art algorithms compared.
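The residual graph-convolution step described in this abstract can be sketched as follows (a minimal NumPy illustration of one layer, not the authors' implementation; the symmetric adjacency normalization, ReLU activation, and square weight matrix are assumptions made so the skip connection type-checks):

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)                    # degrees are >= 1 thanks to self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer_residual(H, A_norm, W):
    """One graph-convolution layer with a residual (skip) connection:
    H' = ReLU(A_hat H W) + H, which mitigates vanishing gradients in deep stacks."""
    out = np.maximum(A_norm @ H @ W, 0.0)
    return out + H

# Toy example: 11 ROIs (graph nodes), 16 features per node.
rng = np.random.default_rng(0)
A = (rng.random((11, 11)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops
H = rng.standard_normal((11, 16))
W = rng.standard_normal((16, 16))            # square so the residual shapes match
H_out = gcn_layer_residual(H, normalize_adjacency(A), W)
print(H_out.shape)  # (11, 16)
```

Keeping the weight matrix square lets the layer output be added directly to its input; real implementations often use a projection on the skip path when the feature dimension changes.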
7
Doudlah R, Chang TY, Thompson LW, Kim B, Sunkara A, Rosenberg A. Parallel processing, hierarchical transformations, and sensorimotor associations along the 'where' pathway. eLife 2022; 11:e78712. PMID: 35950921; PMCID: PMC9439678; DOI: 10.7554/eLife.78712.
Abstract
Visually guided behaviors require the brain to transform ambiguous retinal images into object-level spatial representations and implement sensorimotor transformations. These processes are supported by the dorsal ‘where’ pathway. However, the specific functional contributions of areas along this pathway remain elusive due in part to methodological differences across studies. We previously showed that macaque caudal intraparietal (CIP) area neurons possess robust 3D visual representations, carry choice- and saccade-related activity, and exhibit experience-dependent sensorimotor associations (Chang et al., 2020b). Here, we used a common experimental design to reveal parallel processing, hierarchical transformations, and the formation of sensorimotor associations along the ‘where’ pathway by extending the investigation to V3A, a major feedforward input to CIP. Higher-level 3D representations and choice-related activity were more prevalent in CIP than V3A. Both areas contained saccade-related activity that predicted the direction/timing of eye movements. Intriguingly, the time course of saccade-related activity in CIP aligned with the temporally integrated V3A output. Sensorimotor associations between 3D orientation and saccade direction preferences were stronger in CIP than V3A, and moderated by choice signals in both areas. Together, the results explicate parallel representations, hierarchical transformations, and functional associations of visual and saccade-related signals at a key juncture in the ‘where’ pathway.
Affiliation(s)
- Raymond Doudlah
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Ting-Yu Chang
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Lowell W Thompson
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Byounghoon Kim
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Ari Rosenberg
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
8
Leszczynski M, Chaieb L, Staudigl T, Enkirch SJ, Fell J, Schroeder CE. Neural activity in the human anterior thalamus during natural vision. Sci Rep 2021; 11:17480. PMID: 34471183; PMCID: PMC8410783; DOI: 10.1038/s41598-021-96588-x.
Abstract
In natural vision, humans and other primates explore the environment by active sensing, using saccadic eye movements to relocate the fovea and sample different bits of information multiple times per second. Saccades induce a phase reset of ongoing neuronal oscillations in primary and higher-order visual cortices and in the medial temporal lobe. As a result, neuron ensembles are shifted to a common state at the time visual input propagates through the system (i.e., just after fixation). The extent of the brain's circuitry that is modulated by saccades is not yet known. Here, we evaluate the possibility that saccadic phase reset impacts the anterior nuclei of the thalamus (ANT). Using recordings in the human thalamus of three surgical patients during natural vision, we found that saccades and visual stimulus onset both modulate neural activity, but with distinct field potential morphologies. Specifically, fixation-locked field potentials had a component that preceded saccade onset, followed by an early negativity around 50 ms after fixation onset, significantly faster than any response to visual stimulus presentation. The timing of these events suggests that the ANT is predictively modulated before the saccadic eye movement. We also found oscillatory phase concentration, peaking at 3-4 Hz, coincident with suppression of broadband high-frequency activity (BHA; 80-180 Hz), both locked to fixation onset, supporting the idea that neural oscillations in these nuclei are reorganized to a low-excitability state right after fixation onset. These findings show that during real-world natural visual exploration, neural dynamics in the human ANT are influenced by visual and oculomotor events, which supports the idea that the ANT, apart from its contribution to episodic memory, also plays a role in natural vision.
Affiliation(s)
- Marcin Leszczynski
- Department of Psychiatry, College of Physicians and Surgeons, Columbia University Medical Center, 1051 Riverside Drive Kolb Annex Rm 561, New York, NY, 10032, USA
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Leila Chaieb
- Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Tobias Staudigl
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Juergen Fell
- Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Charles E Schroeder
- Department of Psychiatry, College of Physicians and Surgeons, Columbia University Medical Center, 1051 Riverside Drive Kolb Annex Rm 561, New York, NY, 10032, USA
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
9
Abstract
Remapping is a property of some cortical and subcortical neurons that update their responses around the time of an eye movement to account for the shift of stimuli on the retina due to the saccade. Physiologically, remapping is traditionally tested by briefly presenting a single stimulus around the time of the saccade and looking at the onset of the response and the locations in space to which the neuron is responsive. Here we suggest that a better way to understand the functional role of remapping is to look at the time at which the neural signal emerges when saccades are made across a stable scene. Based on data obtained using this approach, we suggest that remapping in the lateral intraparietal area is sufficient to play a role in maintaining visual stability across saccades, whereas in the frontal eye field, remapped activity carries information that affects future saccadic choices and, in a separate subset of neurons, is used to maintain a map of locations in the scene that have been previously fixated.
Affiliation(s)
- James W Bisley
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, CA, USA
- Koorosh Mirpour
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Yelda Alkan
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
10
Roelke A, Vorstius C, Radach R, Hofmann MJ. Fixation-related NIRS indexes retinotopic occipital processing of parafoveal preview during natural reading. Neuroimage 2020; 215:116823. PMID: 32289457; DOI: 10.1016/j.neuroimage.2020.116823.
Abstract
While word frequency and predictability effects have been examined extensively, any evidence on interactive effects as well as parafoveal influences during whole sentence reading remains inconsistent and elusive. Novel neuroimaging methods utilize eye movement data to account for the hemodynamic responses of very short events such as fixations during natural reading. In this study, we used the rapid sampling frequency of near-infrared spectroscopy (NIRS) to investigate neural responses in the occipital and orbitofrontal cortex to word frequency and predictability. We observed increased activation in the right ventral occipital cortex when the fixated word N was of low frequency, which we attribute to an enhanced cost during saccade planning. Importantly, unpredictable (in contrast to predictable) low frequency words increased the activity in the left dorsal occipital cortex at the fixation of the preceding word N-1, presumably due to an upcoming breach of top-down modulated expectation. Opposite to studies that utilized a serial presentation of words (e.g. Hofmann et al., 2014), we did not find such an interaction in the orbitofrontal cortex, implying that top-down timing of cognitive subprocesses is not required during natural reading. We discuss the implications of an interactive parafoveal-on-foveal effect for current models of eye movements.
Affiliation(s)
- Andre Roelke
- General and Biological Psychology, University of Wuppertal, Max-Horkheimer-Str. 20, D-42119, Wuppertal, Germany.
- Christian Vorstius
- General and Biological Psychology, University of Wuppertal, Max-Horkheimer-Str. 20, D-42119, Wuppertal, Germany
- Ralph Radach
- General and Biological Psychology, University of Wuppertal, Max-Horkheimer-Str. 20, D-42119, Wuppertal, Germany
- Markus J Hofmann
- General and Biological Psychology, University of Wuppertal, Max-Horkheimer-Str. 20, D-42119, Wuppertal, Germany
11
Chang TY, Doudlah R, Kim B, Sunkara A, Thompson LW, Lowe ME, Rosenberg A. Functional links between sensory representations, choice activity, and sensorimotor associations in parietal cortex. eLife 2020; 9:e57968. PMID: 33078705; PMCID: PMC7641584; DOI: 10.7554/eLife.57968.
Abstract
Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
Affiliation(s)
- Ting-Yu Chang
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Meghan E Lowe
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
12
Grossberg S. The resonant brain: How attentive conscious seeing regulates action sequences that interact with attentive cognitive learning, recognition, and prediction. Atten Percept Psychophys 2019; 81:2237-2264. PMID: 31218601; PMCID: PMC6848053; DOI: 10.3758/s13414-019-01789-2.
Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface-shroud resonances support conscious seeing and action, whereas feature-category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and inferior parietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure-ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive-emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Room 213, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA, 02215, USA.
13
The neglected medial part of macaque area PE: segregated processing of reach depth and direction. Brain Struct Funct 2019; 224:2537-2557. [DOI: 10.1007/s00429-019-01923-8]
14
Abstract
Our vision depends upon shifting our high-resolution fovea to objects of interest in the visual field. Each saccade displaces the image on the retina, which should produce a chaotic scene with jerks occurring several times per second. It does not. This review examines how an internal signal in the primate brain (a corollary discharge) contributes to visual continuity across saccades. The article begins with a review of evidence for a corollary discharge in the monkey and evidence from inactivation experiments that it contributes to perception. The next section examines a specific neuronal mechanism for visual continuity, based on corollary discharge that is referred to as visual remapping. Both the basic characteristics of this anticipatory remapping and the factors that control it are enumerated. The last section considers hypotheses relating remapping to the perceived visual continuity across saccades, including remapping's contribution to perceived visual stability across saccades.
Affiliation(s)
- Robert H Wurtz
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892-4435, USA;
15
Choice-Related Activity during Visual Slant Discrimination in Macaque CIP But Not V3A. eNeuro 2019; 6:eN-NWR-0248-18. [PMID: 30923736] [PMCID: PMC6437654] [DOI: 10.1523/eneuro.0248-18.2019]
Abstract
Creating three-dimensional (3D) representations of the world from two-dimensional retinal images is fundamental to visually guided behaviors including reaching and grasping. A critical component of this process is determining the 3D orientation of objects. Previous studies have shown that neurons in the caudal intraparietal area (CIP) of the macaque monkey represent 3D planar surface orientation (i.e., slant and tilt). Here we compare the responses of neurons in areas V3A (which is implicated in 3D visual processing and precedes CIP in the visual hierarchy) and CIP to 3D-oriented planar surfaces. We then examine whether activity in these areas correlates with perception during a fine slant discrimination task in which the monkeys report if the top of a surface is slanted toward or away from them. Although we find that V3A and CIP neurons show similar sensitivity to planar surface orientation, significant choice-related activity during the slant discrimination task is rare in V3A but prominent in CIP. These results implicate both V3A and CIP in the representation of 3D surface orientation, and suggest a functional dissociation between the areas based on slant-related choice signals.
16
Goard MJ, Pho GN, Woodson J, Sur M. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions. eLife 2016; 5. [PMID: 27490481] [PMCID: PMC4974053] [DOI: 10.7554/elife.13764]
Abstract
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
Affiliation(s)
- Michael J Goard
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, United States; Department of Molecular, Cellular, Developmental Biology, University of California, Santa Barbara, Santa Barbara, United States; Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, United States
- Gerald N Pho
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, United States
- Jonathan Woodson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, United States
- Mriganka Sur
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, United States
17
Lescroart MD, Kanwisher N, Golomb JD. No Evidence for Automatic Remapping of Stimulus Features or Location Found with fMRI. Front Syst Neurosci 2016; 10:53. [PMID: 27378866] [PMCID: PMC4904027] [DOI: 10.3389/fnsys.2016.00053]
Abstract
The input to our visual system shifts every time we move our eyes. To maintain a stable percept of the world, visual representations must be updated with each saccade. Near the time of a saccade, neurons in several visual areas become sensitive to the regions of visual space that their receptive fields occupy after the saccade. This process, known as remapping, transfers information from one set of neurons to another, and may provide a mechanism for visual stability. However, it is not clear whether remapping transfers information about stimulus features in addition to information about stimulus location. To investigate this issue, we recorded blood-oxygen-level dependent (BOLD) functional magnetic resonance imaging (fMRI) responses while human subjects viewed images of faces and houses (two visual categories with many feature differences). Immediately after some image presentations, subjects made a saccade that moved the previously stimulated location to the opposite side of the visual field. We then used a combination of univariate analyses and multivariate pattern analyses to test whether information about stimulus location and stimulus features were remapped to the ipsilateral hemisphere after the saccades. We found no reliable indication of stimulus feature remapping in any region. However, we also found no reliable indication of stimulus location remapping, despite the fact that our paradigm was highly similar to previous fMRI studies of remapping. The absence of location remapping in our study precludes strong conclusions regarding feature remapping. However, these results also suggest that measurement of location remapping with fMRI depends strongly on the details of the experimental paradigm used. We highlight differences in our approach from the original fMRI studies of remapping, discuss potential reasons for the failure to generalize prior location remapping results, and suggest directions for future research.
Affiliation(s)
- Mark D Lescroart
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Nancy Kanwisher
- McGovern Center for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive and Brain Sciences, Ohio State University, Columbus, OH, USA
18
Grossberg S. How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3-D Vision by Laminar Circuits of Visual Cortex. Behav Cogn Neurosci Rev 2003; 2:47-76. [PMID: 17715598] [DOI: 10.1177/1534582303002001003]
Abstract
A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress toward explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sublamina. Here it is proposed how these layered circuits help to realize processes of development, learning, perceptual grouping, attention, and 3-D vision through a combination of bottom-up, horizontal, and top-down interactions. A main theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
19
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
20
Single-neuron activity and eye movements during human REM sleep and awake vision. Nat Commun 2015; 6:7884. [PMID: 26262924] [PMCID: PMC4866865] [DOI: 10.1038/ncomms8884]
Abstract
Are rapid eye movements (REMs) in sleep associated with visual-like activity, as during wakefulness? Here we examine single-unit activities (n=2,057) and intracranial electroencephalography across the human medial temporal lobe (MTL) and neocortex during sleep and wakefulness, and during visual stimulation with fixation. During sleep and wakefulness, REM onsets are associated with distinct intracranial potentials, reminiscent of ponto-geniculate-occipital waves. Individual neurons, especially in the MTL, exhibit reduced firing rates before REMs as well as transient increases in firing rate immediately after, similar to activity patterns observed upon image presentation during fixation without eye movements. Moreover, the selectivity of individual units is correlated with their response latency, such that units activated after a small number of images or REMs exhibit delayed increases in firing rates. Finally, the phase of theta oscillations is similarly reset following REMs in sleep and wakefulness, and after controlled visual stimulation. Our results suggest that REMs during sleep rearrange discrete epochs of visual-like processing as during wakefulness. Since the discovery of rapid eye movements (REMs), a critical question endures as to whether they represent time points at which visual-like processing is updated. Here the authors demonstrate that cortical activity during sleep REMs shares many properties with that observed during saccades and vision.
21
Mender BMW, Stringer SM. A self-organizing model of perisaccadic visual receptive field dynamics in primate visual and oculomotor system. Front Comput Neurosci 2015; 9:17. [PMID: 25717301] [PMCID: PMC4324147] [DOI: 10.3389/fncom.2015.00017]
Abstract
We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.
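The local associative learning rule described in this abstract can be illustrated with a deliberately simplified sketch (our own toy construction, not the authors' published network, and with invented dimensions): a Hebbian rule pairs pre-saccadic activity and a corollary-discharge signal with the post-saccadic input, so that after training the weights shift activity by the displacement each saccade causes.

```python
import numpy as np

# Toy sketch of visually guided associative remapping (an illustrative
# simplification, not the published model). Pre-saccadic activity is
# conjoined with a corollary-discharge (saccade identity) signal, and a
# local Hebbian rule associates that conjunction with the post-saccadic input.
n_pos, n_sacc = 10, 3                      # retinal positions, saccade vectors
W = np.zeros((n_pos, n_pos * n_sacc))      # post-saccadic <- (pre x saccade)
eta = 0.1

for _ in range(50):                        # repeated exposure to scenes plus saccades
    for pos in range(n_pos):
        for s in range(n_sacc):
            shift = s + 1                  # retinal displacement caused by saccade s
            pre = np.zeros(n_pos); pre[pos] = 1.0
            post = np.zeros(n_pos); post[(pos - shift) % n_pos] = 1.0
            # conjunction of pre-saccadic activity and corollary discharge
            ctx = np.outer(pre, np.eye(n_sacc)[s]).ravel()
            W += eta * np.outer(post, ctx)  # local associative (Hebbian) update

# After learning, the weights predictively remap a stimulus at position 5
# by the displacement of saccade 0 (a shift of 1):
pre = np.zeros(n_pos); pre[5] = 1.0
ctx = np.outer(pre, np.eye(n_sacc)[0]).ravel()
predicted = W @ ctx
```

Here `predicted` peaks at position 4: the learned weights anticipate where the stimulus will fall after the saccade, which is the essence of pre-saccadic remapping.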
Affiliation(s)
- Bedeho M W Mender
- Department of Experimental Psychology, Centre for Theoretical Neuroscience and Artificial Intelligence, University of Oxford, Oxford, UK
- Simon M Stringer
- Department of Experimental Psychology, Centre for Theoretical Neuroscience and Artificial Intelligence, University of Oxford, Oxford, UK
22
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. [PMID: 25642198] [PMCID: PMC4294135] [DOI: 10.3389/fpsyg.2014.01457]
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Karthik Srinivasan
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Arash Yazdanbakhsh
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
23
Chang HC, Grossberg S, Cao Y. Where's Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene. Front Integr Neurosci 2014; 8:43. [PMID: 24987339] [PMCID: PMC4060746] [DOI: 10.3389/fnint.2014.00043]
Abstract
The Where's Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where's Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC).
Affiliation(s)
- Hung-Cheng Chang
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
- Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
- Yongqiang Cao
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
24
Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements. Proc Natl Acad Sci U S A 2014; 111:7825-30. [PMID: 24821778] [DOI: 10.1073/pnas.1401370111]
Abstract
Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.
25
Gu XJ, Hu M, Li B, Hu XT. The role of contrast adaptation in saccadic suppression in humans. PLoS One 2014; 9:e86542. [PMID: 24466142] [PMCID: PMC3899276] [DOI: 10.1371/journal.pone.0086542]
Abstract
The idea that saccadic suppression has both retinal and extraretinal sources has long been established in previous studies. However, how these sources are implemented in local circuits remains unknown. Researchers have suggested that saccadic suppression is probably achieved by contrast gain control, but this possibility had never been directly tested. In this study, we manipulated contrast gain control by adapting observers to sinusoidal gratings of different contrasts. Presaccadic and fixational contrast thresholds were measured and compared to give estimates of saccadic suppression at different adaptation states. Our results reconfirmed selective saccadic suppression in the achromatic condition, and further showed that achromatic saccadic suppression diminished as contrast adaptation was accentuated, whereas no significant chromatic saccadic suppression was induced by greater contrast adaptation. Our data provide evidence for the involvement of contrast gain control in saccadic suppression in the achromatic channel. We also discuss how the negative correlation between contrast adaptation and saccadic suppression can be interpreted in terms of contrast gain control.
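The contrast-gain-control idea invoked in this abstract is often formalized with a Naka-Rushton contrast-response function, in which adaptation raises the semisaturation contrast and thereby raises the contrast needed to reach a fixed response criterion. The sketch below is a textbook simplification with invented parameter values, not the authors' fitted model:

```python
import math

def naka_rushton(c, c50, rmax=1.0, n=2.0):
    """Contrast-response function with divisive gain control.
    Adaptation is modeled, as a common simplification, by
    raising the semisaturation contrast c50."""
    return rmax * c ** n / (c ** n + c50 ** n)

def contrast_threshold(c50, criterion=0.1, n=2.0):
    """Contrast at which the response reaches a fixed criterion:
    solve criterion = c^n / (c^n + c50^n) for c."""
    return c50 * (criterion / (1.0 - criterion)) ** (1.0 / n)

unadapted = contrast_threshold(c50=0.05)  # pre-adaptation state
adapted = contrast_threshold(c50=0.15)    # adaptation shifts c50 rightward
```

Because the threshold scales linearly with the semisaturation contrast, tripling `c50` triples the measured threshold in this toy parameterization, illustrating how a single gain-control stage could link adaptation state to measured suppression.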
Affiliation(s)
- Xiao-Jing Gu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Ming Hu
- Department of Neurobiology and Anatomy, University of Texas–Houston Medical School, Houston, Texas, United States of America
- Bing Li
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Xin-Tian Hu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Key Laboratory of Animal Models and Human Disease Mechanisms, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, Yunnan, China
26
Suzuki M, Yamazaki Y. Predictive adjustment of the perceived direction of gaze during saccadic eye movements. Cogn Neurodyn 2013; 6:547-52. [PMID: 24294338] [DOI: 10.1007/s11571-011-9190-9]
Abstract
When we look at a stationary object, the perceived direction of gaze (where we are looking) is aligned with the physical direction of the eyes (where our eyes are oriented), by which the object is foveated. However, this alignment may not hold in a dynamic situation. Our experiments assessed the perceived locations of two brief stimuli (1 ms) displayed simultaneously at two different physical locations during a saccade. The first stimulus appeared at the instantaneous location to which the eyes were oriented, and the second always appeared at the location of the initial fixation point. When the timing of these stimuli was varied intra-saccadically, their perceived locations dissociated. The first stimuli were consistently perceived near the target that would be foveated at saccade termination. The second stimuli, initially perceived near the target location, shifted in the direction opposite to that of the saccade as their latency from the saccade increased. These results suggest that the perceived direction of gaze is adjusted independently of the physical orientation of the eyes during saccades. The spatial dissociation of the two stimuli may reflect sensorimotor control of gaze during saccades.
Affiliation(s)
- Masataka Suzuki
- Department of Psychology, Kinjo Gakuin University, Omori 2-1723, Moriyama, Nagoya, 463-8521 Japan
27
Hadjidimitrakis K, Bertozzi F, Breveglieri R, Bosco A, Galletti C, Fattori P. Common Neural Substrate for Processing Depth and Direction Signals for Reaching in the Monkey Medial Posterior Parietal Cortex. Cereb Cortex 2013; 24:1645-57. [DOI: 10.1093/cercor/bht021]
28
Foley NC, Grossberg S, Mingolla E. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding. Cogn Psychol 2012; 65:77-117. [PMID: 22425615] [DOI: 10.1016/j.cogpsych.2012.02.001]
Abstract
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia.
A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.
Affiliation(s)
- Nicholas C Foley
- Center for Adaptive Systems, Department of Cognitive and Neural Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
29
Reorganization of Oscillatory Activity in Human Parietal Cortex during Spatial Updating. Cereb Cortex 2012; 23:508-19. [DOI: 10.1093/cercor/bhr387]
30
Eye position encoding in three-dimensional space: integration of version and vergence signals in the medial posterior parietal cortex. J Neurosci 2012; 32:159-69. [PMID: 22219279 DOI: 10.1523/jneurosci.4028-11.2012] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Eye position signals are pivotal in the visuomotor transformations performed by the posterior parietal cortex (PPC), but to date there are few studies addressing the influence of vergence angle upon single PPC neurons. In the present study, we investigated the influence of vergence and version signals on single neurons of the medial PPC area V6A. Single-unit activity was recorded from V6A in two Macaca fascicularis fixating real targets in darkness. The fixation targets were placed at eye level and at different vergence and version angles within the peripersonal space. Few neurons were modulated by version or vergence only, while the majority of cells were affected by both signals. We advance here the hypothesis that gaze-modulated V6A cells are able to encode gazed positions in three-dimensional space. In single cells, version and vergence influenced the discharge with variable time course. In several cases, the two gaze variables influenced neural discharges during only part of the fixation time, but, more often, their influence persisted through large parts of it. Cells discharging for the first 400-500 ms of fixation could signal the arrival of gaze (and/or of the spotlight of attention) in a new position in the peripersonal space. Cells showing a more sustained activity during the fixation period could better signal the location in space of the gazed objects. Both signals are critical for the control of upcoming or ongoing arm movements, such as those needed to reach and grasp objects located in the peripersonal space.
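The claim that version and vergence together specify a gazed position in 3-D follows from simple binocular geometry: vergence fixes the fixation distance, version fixes its direction. A minimal sketch of that geometry, not from the paper; the 6 cm interocular distance and the symmetric-eye, eye-level-target assumptions are illustrative:

```python
import math

def gaze_point(version_deg, vergence_deg, iod_cm=6.0):
    """Fixation point in a head-centered plane (x: lateral, y: forward, cm),
    assuming symmetric eyes separated by iod_cm and a target at eye level."""
    half_vergence = math.radians(vergence_deg) / 2.0
    version = math.radians(version_deg)
    # Vergence sets the distance from the cyclopean eye; version sets direction.
    dist = (iod_cm / 2.0) / math.tan(half_vergence)
    return (dist * math.sin(version), dist * math.cos(version))
```

Because this mapping from (version, vergence) to position is invertible for targets in front of the observer, a population jointly modulated by both signals can in principle encode gazed 3-D locations, as the authors hypothesize.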
31
Tanaka M, Kunimatsu J. Contribution of the central thalamus to the generation of volitional saccades. Eur J Neurosci 2011; 33:2046-57. [PMID: 21645100 DOI: 10.1111/j.1460-9568.2011.07699.x] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Lesions in the motor thalamus can cause deficits in somatic movements. However, the involvement of the thalamus in the generation of eye movements has only recently been elucidated. In this article, we review recent advances into the role of the thalamus in eye movements. Anatomically, the anterior group of the intralaminar nuclei and paralaminar portion of the ventrolateral, ventroanterior and mediodorsal nuclei of the thalamus send massive projections to the frontal eye field and supplementary eye field. In addition, these parts of the thalamus, collectively known as the 'oculomotor thalamus', receive inputs from the cerebellum, the basal ganglia and virtually all stages of the saccade-generating pathways in the brainstem. In their pioneering work in the 1980s, Schlag and Schlag-Rey found a variety of eye movement-related neurons in the oculomotor thalamus, and proposed that this region might constitute a 'central controller' playing a role in monitoring eye movements and generating self-paced saccades. This hypothesis has been evaluated by recent experiments in non-human primates and by clinical observations of subjects with thalamic lesions. In addition, several recent studies have also addressed the involvement of the oculomotor thalamus in the generation of anti-saccades and the selection of targets for saccades. These studies have revealed the impact of subcortical signals on the higher-order cortical processing underlying saccades, and suggest the possibility of future studies using the oculomotor system as a model to explore the neural mechanisms of global cortico-subcortical loops and the neural basis of a local network between the thalamus and cortex.
Affiliation(s)
- Masaki Tanaka
- Department of Physiology, Hokkaido University School of Medicine, Sapporo 060-8638, Japan.
32
Attentional modulation in visual cortex is modified during perceptual learning. Neuropsychologia 2011; 49:3898-907. [PMID: 22019773 DOI: 10.1016/j.neuropsychologia.2011.10.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2011] [Revised: 10/03/2011] [Accepted: 10/07/2011] [Indexed: 11/21/2022]
Abstract
Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity in visual cortex. Theoretical treatments tend to invoke neuroplasticity that refines early sensory processing. An alternative possibility is that performance is improved because of an altered attentional strategy and that the changes in early visual areas reflect locally altered top-down attentional modulation. To test this idea, we have used functional MRI to examine changes in attentional modulation in visual cortex while participants learn an orientation discrimination task. By examining activity in visual cortex during the preparatory period when the participant has been cued to attend to an upcoming stimulus, we isolated the top-down modulatory signal received by the visual cortex. We show that this signal changes as learning progresses, possibly reflecting gradual automation of the task. By manipulating task difficulty, we show that the change mirrors performance, occurring most quickly for easier stimuli. The effects were seen only at the retinal locus of the stimulus, ruling out a generalized change in alertness. The results suggest that spatial attention changes during perceptual learning and that this may account for some of the concomitant changes seen in visual cortex.
33
A lack of anticipatory remapping of retinotopic receptive fields in the middle temporal area. J Neurosci 2011; 31:10432-6. [PMID: 21775588 DOI: 10.1523/jneurosci.5589-10.2011] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The middle temporal (MT) area has traditionally been thought to be a retinotopic area. However, recent functional magnetic resonance imaging and psychophysical evidence has suggested that human MT may have some spatiotopic processing. To gain an understanding of the neural mechanisms underlying this process, we recorded neurons from area MT in awake behaving animals performing a simple saccade task in which a spatially stable moving dot stimulus was presented for 500 ms in one of two locations: the presaccadic receptive field or the postsaccadic receptive field. MT neurons responded as if their receptive fields were purely retinotopic. When the stimulus was placed in the presaccadic receptive field, the response was elevated until the saccade took the stimulus out of the receptive field. When the stimulus was placed in the postsaccadic receptive field, the neuron only began its response after the end of the saccade. No evidence of presaccadic or anticipatory remapping was found. We conclude that gain fields are most likely to be responsible for the spatiotopic signal seen in area MT.
34
Cosmelli D, López V, Lachaux JP, López-Calderón J, Renault B, Martinerie J, Aboitiz F. Shifting visual attention away from fixation is specifically associated with alpha band activity over ipsilateral parietal regions. Psychophysiology 2011; 48:312-22. [PMID: 20663090 DOI: 10.1111/j.1469-8986.2010.01066.x] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
We studied brain activity during the displacement of attention in a modified visuo-spatial orienting paradigm. Using a behaviorally relevant no-shift condition as a control, we asked whether ipsi- or contralateral parietal alpha band activity is specifically related to covert shifts of attention. Cue-related event-related potentials revealed an attention directing anterior negativity (ADAN) contralateral to the shift of attention and P3 and contingent negative variation waveforms that were enhanced in both shift conditions as compared to the no-shift task. When attention was shifted away from fixation, alpha band activity over parietal regions ipsilateral to the attended hemifield was enhanced relative to the control condition, albeit with different dynamics in the upper and lower alpha subbands. Contralateral-to-attended parietal alpha band activity was indistinguishable from the no-shift task.
Affiliation(s)
- Diego Cosmelli
- Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile Centro Interdisciplinario de Neurociencia, Pontifica Universidad Católica de Chile, Santiago, Chile.
35
Prime SL, Vesia M, Crawford JD. Cortical mechanisms for trans-saccadic memory and integration of multiple object features. Philos Trans R Soc Lond B Biol Sci 2011; 366:540-53. [PMID: 21242142 DOI: 10.1098/rstb.2010.0184] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Constructing an internal representation of the world from successive visual fixations, i.e. separated by saccadic eye movements, is known as trans-saccadic perception. Research on trans-saccadic perception (TSP) has been traditionally aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature on TSP with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP.
Affiliation(s)
- Steven L Prime
- Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 2N2
36
Pola J. An explanation of perisaccadic compression of visual space. Vision Res 2011; 51:424-34. [DOI: 10.1016/j.visres.2010.12.010] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2009] [Revised: 12/15/2010] [Accepted: 12/21/2010] [Indexed: 11/30/2022]
37
Molecular mechanisms of working memory. Behav Brain Res 2011; 219:329-41. [PMID: 21232555 DOI: 10.1016/j.bbr.2010.12.039] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2010] [Accepted: 12/29/2010] [Indexed: 11/22/2022]
Abstract
Working memory is a process for temporary active maintenance of information, and the role of the prefrontal cortex in this memory has been known since the pioneering experiments of Fulton in the early 20th century. Sustained firing of prefrontal neurons during the delay period is considered the neural correlate of working memory. Evidence in the literature suggests the involvement of areas beyond the frontal lobe and illustrates that working memory involves parallel, distributed neuronal networks. Prefrontal cortex is part of a complex neural circuit that includes both cortical and subcortical components, and many of these regions play vital roles in working memory function. In this article, we review the current understanding of the neural mechanisms of memory maintenance in the brain.
38
Kilintari M, Raos V, Savaki HE. Grasping in the dark activates early visual cortices. Cereb Cortex 2010; 21:949-63. [PMID: 20833697 DOI: 10.1093/cercor/bhq175] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
We have previously demonstrated that the primary motor and somatosensory cortices of monkeys are somatotopically activated for action-observation as they are for action-generation, indicating that the recruitment of learned somatosensory-motor representations underlies the perception of others' actions. Here we examined the effects of seen and unseen actions on the early visual cortices, to determine whether stored visual representations are employed in addition to the somatosensory-motor ones. We used the quantitative (14)C-deoxyglucose method to map the activity throughout the cortex of the occipital operculum, lunate, and inferior occipital sulci of rhesus monkeys who reached to grasp a 3D object either in the light or in the dark, or who observed the same action executed by another subject. In all cases, the extrastriate areas V3d and V3A displayed marked activation. We suggest that these activations reflect processing of visuospatial information useful for the reaching component of the action, and 3D object-related information useful for the grasping component. We suggest that a memorized visual representation of the action supports action-recognition, as well as action-execution in complete darkness when the object and its environment are invisible. Accordingly, the internal representation that serves action-cognition is not purely somatosensory-motor but also includes a visual component.
Affiliation(s)
- Marina Kilintari
- Department of Basic Sciences, Faculty of Medicine, School of Health Sciences, University of Crete, Crete, 71003 Greece
39
Schroeder CE, Wilson DA, Radman T, Scharfman H, Lakatos P. Dynamics of Active Sensing and perceptual selection. Curr Opin Neurobiol 2010; 20:172-6. [PMID: 20307966 DOI: 10.1016/j.conb.2010.02.010] [Citation(s) in RCA: 384] [Impact Index Per Article: 27.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2010] [Accepted: 02/23/2010] [Indexed: 11/15/2022]
Abstract
Sensory processing is often regarded as a passive process in which biological receptors like photoreceptors and mechanoreceptors transduce physical energy into a neural code. Recent findings, however, suggest that: first, most sensory processing is active, and largely determined by motor/attentional sampling routines; second, owing to rhythmicity in the motor routine, as well as to its entrainment of ambient rhythms in sensory regions, sensory inflow tends to be rhythmic; third, attentional manipulation of rhythms in sensory pathways is instrumental to perceptual selection. These observations outline the essentials of an Active Sensing paradigm, and argue for increased emphasis on the study of sensory processes as specific to the dynamic motor/attentional context in which inputs are acquired.
Affiliation(s)
- Charles E Schroeder
- Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute for Psychiatric Research, USA.
40
Ogawa T, Komatsu H. Differential temporal storage capacity in the baseline activity of neurons in macaque frontal eye field and area V4. J Neurophysiol 2010; 103:2433-45. [PMID: 20220072 DOI: 10.1152/jn.01066.2009] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Previous studies have suggested that spontaneous fluctuations in neuronal activity reflect intrinsic functional brain architecture. Inspired by these findings, we analyzed baseline neuronal activity in the monkey frontal eye field (FEF; a visuomotor area) and area V4 (a visual area) during the fixation period of a cognitive behavioral task in the absence of any task-specific stimuli or behaviors. Specifically, we examined the temporal storage capacity of the instantaneous discharge rate in FEF and V4 neurons by calculating the correlation of the spike count in a bin with that in another bin during the baseline activity of a trial. We found that most FEF neurons fired significantly more (or less) in one bin if they fired more (or less) in another bin within a trial, even when these two time bins were separated by hundreds of milliseconds. By contrast, similar long time-lag correlations were observed in only a small fraction of V4 neurons, indicating that temporal correlations were considerably stronger in FEF compared with those in V4 neurons. Additional analyses revealed that the findings were not attributable to other task-related variables or ongoing behavioral performance, suggesting that the differences in temporal correlation strength reflect differences in intrinsic structural and functional architecture between visual and visuomotor areas. Thus FEF neurons probably play a greater role than V4 neurons in neural circuits responsible for temporal storage in activity.
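The bin-to-bin analysis described above — correlating a neuron's spike count in one bin with its count in another bin, across trials — can be sketched as follows. This is a hypothetical reconstruction, not the authors' code; the bin layout and the Poisson toy data in the usage below are assumptions:

```python
import numpy as np

def long_lag_correlation(counts, lag):
    """Mean across-trial Pearson correlation between spike counts in
    bin pairs separated by `lag` bins. counts: (n_trials, n_bins) array."""
    n_trials, n_bins = counts.shape
    rs = []
    for i in range(n_bins - lag):
        a, b = counts[:, i], counts[:, i + lag]
        if a.std() > 0 and b.std() > 0:      # skip degenerate (constant) bins
            rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))
```

Simulated counts with a shared trial-wise rate fluctuation (FEF-like, in the paper's terms) yield large long-lag correlations, whereas independent Poisson counts (V4-like) yield correlations near zero.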
Affiliation(s)
- Tadashi Ogawa
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan.
41
Abstract
Both space and time are grossly distorted during saccades. Here we show that the two distortions are strongly linked, and that both could be a consequence of the transient remapping mechanisms that affect visual neurons perisaccadically. We measured perisaccadic spatial and temporal distortions simultaneously by asking subjects to report both the perceived spatial location of a perisaccadic vertical bar (relative to a remembered ruler), and its perceived timing (relative to two sounds straddling the bar). During fixation and well before or after saccades, bars were localized veridically in space and in time. In different epochs of the perisaccadic interval, temporal perception was subject to different biases. At about the time of the saccadic onset, bars were temporally mislocalized 50-100 ms later than their actual presentation and spatially mislocalized toward the saccadic target. Importantly, the magnitude of the temporal distortions co-varied with the spatial localization bias and the two phenomena had similar dynamics. Within a brief period about 50 ms before saccadic onset, stimuli were perceived with shorter latencies than at other delays relative to saccadic onset, suggesting that the perceived passage of time transiently inverted its direction. Based on this result we could predict the inversion of perceived temporal order for two briefly flashed visual stimuli. We developed a model that simulates the perisaccadic transient change of neuronal receptive fields predicting well the reported temporal distortions. The key aspects of the model are the dynamics of the "remapped" activity and the use of decoder operators that are optimal during fixation, but are not updated perisaccadically.
42
Richard A, Churan J, Guitton DE, Pack CC. The geometry of perisaccadic visual perception. J Neurosci 2009; 29:10160-70. [PMID: 19675250 PMCID: PMC6664982 DOI: 10.1523/jneurosci.0511-09.2009] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2009] [Revised: 07/04/2009] [Accepted: 07/11/2009] [Indexed: 11/21/2022] Open
Abstract
Our ability to explore our surroundings requires a combination of high-resolution vision and frequent rotations of the visual axis toward objects of interest. Such gaze shifts are themselves a source of powerful retinal stimulation, and so the visual system appears to have evolved mechanisms to maintain perceptual stability during movements of the eyes in space. The mechanisms underlying this perceptual stability can be probed in the laboratory by briefly presenting a stimulus around the time of a saccadic eye movement and asking subjects to report its position. Under such conditions, there is a systematic misperception of the probes toward the saccade end point. This perisaccadic compression of visual space has been the subject of much research, but few studies have attempted to relate it to specific brain mechanisms. Here, we show that the magnitude of perceptual compression for a wide variety of probe stimuli and saccade amplitudes is quantitatively predicted by a simple heuristic model based on the geometry of retinotopic representations in the primate brain. Specifically, we propose that perisaccadic compression is determined by the distance between the probe and saccade end point on a map that has a logarithmic representation of visual space, similar to those found in numerous cortical and subcortical visual structures. Under this assumption, the psychophysical data on perisaccadic compression can be appreciated intuitively by imagining that, around the time of a saccade, the brain confounds nearby oculomotor and sensory signals while attempting to localize the position of objects in visual space.
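The paper's heuristic can be illustrated with a toy one-dimensional logarithmic map; the map constant, compression strength, and exponential falloff below are my own illustrative choices, not the authors' fitted model:

```python
import math

E0 = 1.0  # deg; log-map "foveal constant" (illustrative assumption)

def log_map(ecc):
    """1-D logarithmic cortical map of eccentricity (deg -> map units)."""
    return math.copysign(math.log(1.0 + abs(ecc) / E0), ecc)

def perceived_position(probe, target, strength=0.5):
    """Toy perisaccadic compression: pull the probe toward the saccade
    target by a fraction that falls off with their log-map distance."""
    d_map = abs(log_map(probe) - log_map(target))
    pull = strength * math.exp(-d_map)   # stronger pull when map-close
    return probe + pull * (target - probe)
```

On such a map a fixed retinal separation shrinks with eccentricity, so probes that are map-close to the saccade end point are pulled toward it most strongly, qualitatively reproducing perisaccadic compression.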
Affiliation(s)
- Alby Richard
- Montreal Neurological Institute, McGill University School of Medicine, Quebec, Canada.
43
Macaluso E. Orienting of spatial attention and the interplay between the senses. Cortex 2009; 46:282-97. [PMID: 19540475 DOI: 10.1016/j.cortex.2009.05.010] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2009] [Revised: 04/27/2009] [Accepted: 05/14/2009] [Indexed: 11/30/2022]
Abstract
Many everyday situations require combining complex sensory signals about the external world with ongoing goals and expectations. Here I examine the role of attention in this process and consider the underlying neural substrates. First, mechanisms of spatial attention in the visual modality are reviewed, emphasising the involvement of fronto-parietal cortex. Spatial attention takes into account endogenous factors, e.g., information about behavioural relevance, as well as signals arising from the external world (stimulus-driven control). Stimulus-driven control is thought to take place automatically and independently from endogenous factors. However, recent findings demonstrate that endogenous and stimulus-driven mechanisms co-operate, jointly contributing to the selection of the relevant spatial location. Next, I will turn to studies of multisensory spatial attention. These have shown that attention control in fronto-parietal cortex operates supramodally. Supramodal control exerts top-down influences onto sensory-specific areas, enhancing the processing of stimuli at the attended location irrespective of modality. Unlike unimodal visual attention, but in line with traditional views of multisensory integration, multisensory attention can operate in a fully automatic manner regardless of relevance and task-set. I discuss these findings in relation to functional/anatomical pathways that may mediate multisensory attention control, highlighting possible links between spatial attention and multisensory integration of space.
Affiliation(s)
- Emiliano Macaluso
- Neuroimaging Laboratory, Santa Lucia Foundation, via Ardeatina 306, Rome, Italy.
44
Geng JJ, Ruff CC, Driver J. Saccades to a remembered location elicit spatially specific activation in human retinotopic visual cortex. J Cogn Neurosci 2009; 21:230-45. [PMID: 18510442 DOI: 10.1162/jocn.2008.21025] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The possible impact upon human visual cortex from saccades to remembered target locations was investigated using functional magnetic resonance imaging (fMRI). A specific location in the upper-right or upper-left visual quadrant served as the saccadic target. After a delay of 2,400 msec, an auditory signal indicated whether to execute a saccade to that location (go trial) or to cancel the saccade and remain centrally fixated (no-go). Group fMRI analysis revealed activation specific to the remembered target location for executed saccades, in the contralateral lingual gyrus. No-go trials produced similar, albeit significantly reduced, effects. Individual retinotopic mapping confirmed that on go trials, quadrant-specific activations arose in those parts of ventral V1, V2, and V3 that coded the target location for the saccade, whereas on no-go trials, only the corresponding parts of V2 and V3 were significantly activated. These results indicate that a spatial-motor saccadic task (i.e., making an eye movement to a remembered location) is sufficient to activate retinotopic visual cortex spatially corresponding to the target location, and that this activation is also present (though reduced) when no saccade is executed. We discuss the implications of finding that saccades to remembered locations can affect early visual cortex, not just those structures conventionally associated with eye movements, in relation to recent ideas about attention, spatial working memory, and the notion that recently activated representations can be "refreshed" when needed.
Affiliation(s)
- Joy J Geng
- UCL Institute of Cognitive Neuroscience, UK.
45
View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cogn Psychol 2009; 58:1-48. [DOI: 10.1016/j.cogpsych.2008.05.001] [Citation(s) in RCA: 84] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2007] [Accepted: 05/06/2008] [Indexed: 11/22/2022]
46
Lee J, Lee C. Changes in orientation discrimination at the time of saccadic eye movements. Vision Res 2008; 48:2213-23. [PMID: 18625267 DOI: 10.1016/j.visres.2008.06.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2007] [Revised: 06/10/2008] [Accepted: 06/19/2008] [Indexed: 10/21/2022]
Abstract
Perceptual performance has been known to change around the time of saccadic eye movements. In the current study, we measured the accuracy and sensitivity of orientation discrimination for bar stimuli presented during fixation and before saccadic eye movements. Human participants compared the orientations of the test and reference bar stimuli with the head erect in a two-interval forced choice task. For the targets presented during steady fixation, the accuracy and sensitivity of orientation discrimination were better near the cardinal than oblique axes, a perceptual anisotropy known as the oblique effect. For the targets presented during the 100 ms interval immediately before a saccade was executed, the anisotropy decreased, mainly due to a reduction in sensitivity for cardinal orientations. Directing attention to the goal location of the impending saccade emulated the saccadic effects on orientation discrimination for targets at the saccadic goal, suggesting that the saccadic effects on orientation discrimination are partly mediated by the shift of spatial attention that accompanies the saccade. These results are in line with the "anti-oblique effect", whereby perceptual judgment of motion direction along oblique angles becomes relatively accurate for motion targets presented before saccadic eye movements [Lee, J., & Lee, C. (2005). Changes in visual motion perception before saccadic eye movements. Vision Research, 45(11), 1447-1457].
Affiliation(s)
- Jungah Lee
- Department of Psychology, Seoul National University, Kwanak, Seoul 151-742, Republic of Korea
47
Berman R, Colby C. Attention and active vision. Vision Res 2008; 49:1233-48. [PMID: 18627774 DOI: 10.1016/j.visres.2008.06.017] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2007] [Revised: 06/11/2008] [Accepted: 06/14/2008] [Indexed: 11/27/2022]
Abstract
Visual perception results from the interaction of incoming sensory signals and top down cognitive and motor signals. Here we focus on the representation of attended locations in parietal cortex and in earlier visual cortical areas. We review evidence that these spatial representations are modulated not only by selective attention but also by the intention to move the eyes. We describe recent experiments in monkey and human that elucidate the mechanisms and circuitry involved in updating, or remapping, the representations of salient stimuli. Two central ideas emerge. First, selective attention and remapping are closely intertwined, and together contribute to the percept of spatial stability. Second, remapping is accomplished not by a single area but by the participation of parietal, frontal and extrastriate cortex as well as subcortical structures. This neural circuitry is distinguished by significant redundancy and plasticity, suggesting that the updating of salient stimuli is fundamental for spatial stability and visuospatial behavior. We conclude that multiple processes and pathways contribute to active vision in the primate brain.
Affiliation(s)
- Rebecca Berman
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA.
48
Asymmetry of anticipatory activity in visual cortex predicts the locus of attention and perception. J Neurosci 2008; 27:14424-33. [PMID: 18160650 DOI: 10.1523/jneurosci.3759-07.2007] [Citation(s) in RCA: 91] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Humans can use advance information to direct spatial attention before stimulus presentation and respond more accurately to stimuli at the attended location compared with unattended locations. Likewise, spatially directed attention is associated with anticipatory activity in the portion of visual cortex representing the attended location. It is unknown, however, whether and how anticipatory signals predict the locus of spatial attention and perception. Here, we show that prestimulus, preparatory activity is highly correlated across regions representing attended and unattended locations. Comparing activity representing attended versus unattended locations, rather than measuring activity for only one location, dramatically improves the accuracy with which preparatory signals predict the locus of attention, largely by removing this positive correlation common across locations. In V3A, moreover, only the difference in activity between attended and unattended locations predicts whether upcoming visual stimuli will be accurately perceived. These results suggest that the locus of attention is coded in visual cortex by an asymmetry of anticipatory activity between attended and unattended locations and that this asymmetry predicts the accuracy of perception. This coding strategy may bias activity in downstream brain regions to represent the stimulus at the attended location.
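The statistical point here — that differencing attended and unattended responses removes fluctuations common to both locations and thereby sharpens prediction — can be demonstrated with simulated data; the signal and noise magnitudes below are arbitrary illustrative choices, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 2000
attend_left = rng.integers(0, 2, n_trials).astype(bool)

common = rng.normal(0.0, 1.0, n_trials)   # fluctuation shared by both locations
signal = 0.3                              # attention-specific anticipatory boost
left_roi = common + signal * attend_left + rng.normal(0.0, 0.3, n_trials)
right_roi = common + signal * (~attend_left) + rng.normal(0.0, 0.3, n_trials)

# Predict the attended side from one region alone vs. from the difference;
# subtracting the two regions cancels `common`, so the difference decodes better.
acc_single = np.mean((left_roi > np.median(left_roi)) == attend_left)
acc_diff = np.mean(((left_roi - right_roi) > 0) == attend_left)
```

Because the shared fluctuation dominates each region's variance, thresholding a single region barely beats chance, while the attended-minus-unattended difference predicts the locus far more reliably — the same logic the authors exploit.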
|
49
|
Iaria G, Fox CJ, Chen JK, Petrides M, Barton JJS. Detection of unexpected events during spatial navigation in humans: bottom-up attentional system and neural mechanisms. Eur J Neurosci 2008; 27:1017-25. [DOI: 10.1111/j.1460-9568.2008.06060.x] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
50
|
Pola J. A model of the mechanism for the perceived location of a single flash and two successive flashes presented around the time of a saccade. Vision Res 2007; 47:2798-813. [PMID: 17767942 DOI: 10.1016/j.visres.2007.07.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2006] [Revised: 07/03/2007] [Indexed: 10/22/2022]
Abstract
According to current accounts, the perceived location of a target flash presented in the dark around the time of a saccade derives largely from an extraretinal signal that begins to change before, and continues to change during and after, the saccade. In contrast to this view, this study offers a model suggesting that the perception of a single flash, or of two successive flashes, in association with a saccade results from the combined effects of retinal signal persistence of the flash and an extraretinal signal that begins concurrently with, or shortly after, the saccade. For a single flash, the retinal signal persistence interacting with the extraretinal signal is responsible for the perceived location of the flash. For two flashes with a short inter-flash interval, the temporal overlap of the first flash's persistence with the second flash's persistence is a major factor in determining the perceived location of both flashes and, as a consequence, the perceived separation between them.
Affiliation(s)
- Jordan Pola
- Department of Vision Sciences, State University of New York, State College of Optometry, New York, NY 10036, USA.
|