1
Fang W, Wang K, Zhang K, Qian J. Spatial attention based on 2D location and relative depth order modulates visual working memory in a 3D environment. Br J Psychol 2023; 114:112-131. [PMID: 36161427] [DOI: 10.1111/bjop.12599]
Abstract
The attentional effect on visual working memory (VWM) has been an active research topic over the past two decades. Studies show that VWM performance for an attended memory item can be improved by cueing its two-dimensional (2D) spatial location during retention. However, few studies have investigated the effect of attentional selection on VWM in a three-dimensional setting, and it remains unknown whether depth information can produce beneficial attentional effects on 2D visual representations similar to those of 2D spatial information. Here we conducted four experiments in which memory items were displayed at various stereoscopic depth planes, and examined the retro-cue effects of four types of cues: each cue indicated either the 2D location or the depth location of a memory item, in either physical form (directly pointing to a location) or symbolic form (numerically mapping onto a location). We found that retro-cue benefits were only observed for cues directly pointing to a 2D location, whereas a null effect was observed for cues directly pointing to a depth location. However, there was a retro-cue effect when cueing the relative depth order, though the effect was weaker than that for cueing the 2D location. The selective effect on VWM based on 2D spatial attention thus differs from that based on depth, and the divergence suggests that an object representation is primarily bound to its 2D spatial location, weakly bound to its depth order, but not bound to its metric depth location. This indicates that attentional selection based on memory for depth, particularly metric depth, is ineffective.
Affiliation(s)
- Wei Fang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China; Departments of Biomedical Sciences and Neuroscience, City University of Hong Kong, Hong Kong, China
- Kaiyue Wang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Ke Zhang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Jiehui Qian
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
2
3
Scholl B, Tepohl C, Ryan MA, Thomas CI, Kamasawa N, Fitzpatrick D. A binocular synaptic network supports interocular response alignment in visual cortical neurons. Neuron 2022; 110:1573-1584.e4. [PMID: 35123654] [PMCID: PMC9081247] [DOI: 10.1016/j.neuron.2022.01.023]
Abstract
In visual cortex, signals from the two eyes merge to form a coherent binocular representation. Here we investigate the synaptic interactions underlying the binocular representation of stimulus orientation in ferret visual cortex with in vivo calcium imaging of layer 2/3 neurons and their dendritic spines. Individual neurons with aligned somatic responses received a mixture of monocular and binocular synaptic inputs. Surprisingly, monocular pathways alone could not account for somatic alignment because ipsilateral monocular inputs poorly matched somatic preference. Binocular inputs exhibited different degrees of interocular alignment, and those with a high degree of alignment (congruent) had greater selectivity and somatic specificity. While congruent inputs were similar to others in measures of strength, simulations show that the number of active congruent inputs predicts aligned somatic output. Our study suggests that coherent binocular responses derive from connectivity biases that support functional amplification of aligned signals within a heterogeneous binocular intracortical network.
Affiliation(s)
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Clara Tepohl
- Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL, USA
- Melissa A Ryan
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL, USA
- Connon I Thomas
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL, USA
- Naomi Kamasawa
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL, USA
- David Fitzpatrick
- Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL, USA
4
Alvarez I, Hurley SA, Parker AJ, Bridge H. Human primary visual cortex shows larger population receptive fields for binocular disparity-defined stimuli. Brain Struct Funct 2021; 226:2819-2838. [PMID: 34347164] [PMCID: PMC8541985] [DOI: 10.1007/s00429-021-02351-3]
Abstract
The visual perception of 3D depth is underpinned by the brain's ability to combine signals from the left and right eyes to produce a neural representation of binocular disparity for perception and behaviour. Electrophysiological studies of binocular disparity over the past two decades have investigated the computational role of neurons in area V1 for binocular combination, while more recent neuroimaging investigations have focused on identifying specific roles for different extrastriate visual areas in depth perception. Here we investigate the population receptive field properties of neural responses to binocular information in striate and extrastriate cortical visual areas using ultra-high field fMRI. We measured BOLD fMRI responses while participants viewed retinotopic mapping stimuli defined by different visual properties: contrast, luminance, motion, and correlated and anti-correlated stereoscopic disparity. By fitting each condition with a population receptive field model, we quantitatively compared population receptive field sizes for disparity-specific stimulation. We found larger population receptive fields for disparity compared with contrast and luminance in area V1, the first stage of binocular combination, which likely reflects the binocular integration zone, an interpretation supported by simulations of the binocular energy model. A similar pattern was found in region LOC, where it may reflect the role of disparity as a cue for 3D shape. These findings provide insight into the binocular receptive field properties underlying processing for human stereoscopic vision.
Affiliation(s)
- Ivan Alvarez
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
- Samuel A Hurley
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
- Department of Radiology, University of Wisconsin, Madison, WI, 53705, USA
- Andrew J Parker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Institut für Biologie, Otto-von-Guericke Universität, 39120, Magdeburg, Germany
- Holly Bridge
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
5
Waz S, Liu Z. Evidence for strictly monocular processing in visual motion opponency and Glass pattern perception. Vision Res 2021; 186:103-111. [PMID: 34082396] [DOI: 10.1016/j.visres.2021.04.008]
Abstract
When presented with locally paired dots moving in opposite directions, motion selective neurons in the middle temporal cortex (MT) reduce firing while neurons in V1 are unaffected. This physiological effect is known as motion opponency. The current study used psychophysics to investigate the neural circuit underlying motion opponency. We asked whether opposing motion signals could arrive from different eyes into the receptive field of a binocular neuron while still maintaining motion opponency. We took advantage of prior findings that orientation discrimination of the motion axis (along which paired dots oscillate) is harder when dots move counter-phase than in-phase, an effect associated with motion opponency. We found that such an effect disappeared when paired dots originated from different eyes. This suggests that motion opponency, at some point, involves strictly monocular processing. This does not mean that motion opponency is entirely monocular. Further, we found that the effect of a Glass pattern disappeared under similar viewing conditions, suggesting that Glass pattern perception also involves some strictly monocular processing.
Affiliation(s)
- Sebastian Waz
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA 92697, USA; Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Zili Liu
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
6
Supple JA, Pinto-Benito D, Khoo C, Wardill TJ, Fabian ST, Liu M, Pusdekar S, Galeano D, Pan J, Jiang S, Wang Y, Liu L, Peng H, Olberg RM, Gonzalez-Bellido PT. Binocular Encoding in the Damselfly Pre-motor Target Tracking System. Curr Biol 2020; 30:645-656.e4. [DOI: 10.1016/j.cub.2019.12.031]
7
Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex. eNeuro 2019; 6:ENEURO.0362-18.2019. [PMID: 31285275] [PMCID: PMC6709213] [DOI: 10.1523/eneuro.0362-18.2019]
Abstract
Navigating through natural environments requires localizing objects along three distinct spatial axes. Information about position along the horizontal and vertical axes is available from an object’s position on the retina, while position along the depth axis must be inferred based on second-order cues such as the disparity between the images cast on the two retinae. Past work has revealed that object position in two-dimensional (2D) retinotopic space is robustly represented in visual cortex and can be robustly predicted using a multivariate encoding model, in which an explicit axis is modeled for each spatial dimension. However, no study to date has used an encoding model to estimate a representation of stimulus position in depth. Here, we recorded BOLD fMRI while human subjects viewed a stereoscopic random-dot sphere at various positions along the depth (z) and the horizontal (x) axes, and the stimuli were presented across a wider range of disparities (out to ∼40 arcmin) compared to previous neuroimaging studies. In addition to performing decoding analyses for comparison to previous work, we built encoding models for depth position and for horizontal position, allowing us to directly compare encoding between these dimensions. Our results validate this method of recovering depth representations from retinotopic cortex. Furthermore, we find convergent evidence that depth is encoded most strongly in dorsal area V3A.
8
de Best PB, Raz N, Dumoulin SO, Levin N. How Ocular Dominance and Binocularity Are Reflected by the Population Receptive Field Properties. Invest Ophthalmol Vis Sci 2018; 59:5301-5311. [PMID: 30398621] [DOI: 10.1167/iovs.18-24161]
Abstract
Purpose: The neural substrate of binocularity and sighting ocular dominance in humans is not clear. By utilizing the population receptive field (pRF) modeling technique, we explored whether these phenomena are associated with amplitude and pRF size differences.
Methods: The visual field maps of 13 subjects were scanned (3-T Skyra) while they viewed drifting bar stimuli. Both eyes (binocular condition), the dominant eye, and the nondominant eye (two monocular conditions) were stimulated in separate sessions. For each condition, pRF size and amplitude were assessed. Binocular summation ratios were calculated by dividing binocular values by mean monocular values (amplitude and pRF size).
Results: No differences in pRF size were seen between viewing conditions within each region, that is, either between binocular and monocular or between dominant and nondominant viewing conditions. Binocular amplitudes were higher than monocular amplitudes, but similar for the dominant and nondominant eyes. Binocular summation ratios derived from amplitudes were significantly higher than one (∼1.2), while those derived from pRF size were not. These effects were found in all studied areas along the visual hierarchy, starting in V1.
Conclusions: Neither amplitude nor pRF size shows an inter-eye difference, and therefore neither can explain the different roles of the dominant and nondominant eyes. Binocular vision, as compared to monocular vision, resulted in higher amplitudes while receptive field sizes were similar, suggesting increased binocular response intensity as the basis for the binocular summation phenomenon. Our results could be applicable in imaging studies of monocular disease and studies of nondisparity binocularity effects.
Affiliation(s)
- Pieter B de Best
- fMRI lab, Neurology Department, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Noa Raz
- fMRI lab, Neurology Department, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Netta Levin
- fMRI lab, Neurology Department, Hadassah Hebrew University Medical Center, Jerusalem, Israel
9
Turski J. Binocular system with asymmetric eyes. J Opt Soc Am A Opt Image Sci Vis 2018; 35:1180-1191. [PMID: 30110311] [DOI: 10.1364/josaa.35.001180]
Abstract
I elaborate binocular geometry with a novel eye model that incorporates the fovea's temporalward displacement and the misalignment of the cornea and the lens. The formulated binocular correspondence results in longitudinal horopters that are conic sections resembling empirical horopters. When the eye model's asymmetry parameters fall within the range observed in healthy eyes, the abathic distance also falls within its experimentally observed range. This range of abathic distances is similar to that of the vergence resting position distance. Further, the conic's orientation is specified by the eyes' version angle, integrating binocular geometry with eye movement. This integration opens the possibility of modeling 3D perceptual stability during physiological eye movements.
10
Ultra-high field MRI: Advancing systems neuroscience towards mesoscopic human brain function. Neuroimage 2018; 168:345-357. [DOI: 10.1016/j.neuroimage.2017.01.028]
11
Abstract
The visual system must recover important properties of the external environment if its host is to survive. Because the retinae are effectively two-dimensional but the world is three-dimensional (3D), the patterns of stimulation both within and across the eyes must be used to infer the distal stimulus (the environment) in all three dimensions. Moreover, animals and elements in the environment move, which means the input contains rich temporal information. Here, in addition to reviewing the literature, we discuss how and why prior work has focused on purported isolated systems (e.g., stereopsis) or cues (e.g., horizontal disparity) that do not necessarily map elegantly on to the computations and complex patterns of stimulation that arise when visual systems operate within the real world. We thus also introduce the binoptic flow field (BFF) as a description of the 3D motion information available in realistic environments, which can foster the use of ecologically valid yet well-controlled stimuli. Further, it can help clarify how future studies can more directly focus on the computations and stimulus properties the visual system might use to support perception and behavior in a dynamic 3D world.
Affiliation(s)
- Jonas Knöll
- The University of Texas at Austin, Texas 78757
12
Qian CS, Brascamp JW. How to Build a Dichoptic Presentation System That Includes an Eye Tracker. J Vis Exp 2017:56033. [PMID: 28930987] [PMCID: PMC5752173] [DOI: 10.3791/56033]
Abstract
The presentation of different stimuli to the two eyes, dichoptic presentation, is essential for studies involving 3D vision and interocular suppression. There is a growing literature on the unique experimental value of pupillary and oculomotor measures, especially for research on interocular suppression. Although obtaining eye-tracking measures would thus benefit studies that use dichoptic presentation, the hardware essential for dichoptic presentation (e.g. mirrors) often interferes with high-quality eye tracking, especially when using a video-based eye tracker. We recently described an experimental setup that combines a standard dichoptic presentation system with an infrared eye tracker by using infrared-transparent mirrors. The setup is compatible with standard monitors and eye trackers, easy to implement, and affordable (on the order of US$1,000). Relative to existing methods it has the benefits of not requiring special equipment and posing few limits on the nature and quality of the visual stimulus. Here we provide a visual guide to the construction and use of our setup.
Affiliation(s)
- Cheng S Qian
- Department of Psychology, Michigan State University
- Jan W Brascamp
- Department of Psychology, Michigan State University; Neuroscience Program, Michigan State University
13
Abstract
Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on the special role of 2D location. One such phenomenon is the spatial congruency bias (Golomb, Kupitz, & Thiemann, 2014), where 2D location biases judgments of object features but features do not bias location judgments. This paradigm has recently been used to compare different types of location information in terms of how much they bias different types of features. Here we used this paradigm to ask a related question: whether 2D and depth-from-disparity location bias localization judgments for each other. We found that presenting two objects in the same 2D location biased position-in-depth judgments, but presenting two objects at the same depth (disparity) did not bias 2D location judgments. We conclude that an object's 2D location may be automatically incorporated into perception of its depth location, but not vice versa, which is consistent with a fundamentally special role for 2D location in visual processing.
Affiliation(s)
- Nonie J. Finlayson
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
14
Higgins NC, McLaughlin SA, Da Costa S, Stecker GC. Sensitivity to an Illusion of Sound Location in Human Auditory Cortex. Front Syst Neurosci 2017; 11:35. [PMID: 28588457] [PMCID: PMC5440574] [DOI: 10.3389/fnsys.2017.00035]
Abstract
Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset—which alters the physical but not the perceptual nature of the spatial cue—did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
Affiliation(s)
- Nathan C Higgins
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, United States
- Susan A McLaughlin
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, United States
- Sandra Da Costa
- Biomedical Imaging Research Center (CIBM), School of Basic Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- G Christopher Stecker
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, United States
15
Finlayson NJ, Zhang X, Golomb JD. Differential patterns of 2D location versus depth decoding along the visual hierarchy. Neuroimage 2016; 147:507-516. [PMID: 28039760] [DOI: 10.1016/j.neuroimage.2016.12.039]
Abstract
Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy.
Affiliation(s)
- Nonie J Finlayson
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
- Xiaoli Zhang
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
16
Finlayson NJ, Golomb JD. Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth. Vision Res 2016; 127:49-56. [PMID: 27468654] [PMCID: PMC5035601] [DOI: 10.1016/j.visres.2016.07.003]
Abstract
A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information, not position-in-depth, seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location.
Affiliation(s)
- Nonie J Finlayson
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
17
18
Chrastil ER, Sherrill KR, Hasselmo ME, Stern CE. Which way and how far? Tracking of translation and rotation information for human path integration. Hum Brain Mapp 2016; 37:3636-55. [PMID: 27238897] [DOI: 10.1002/hbm.23265]
Abstract
Path integration, the constant updating of the navigator's knowledge of position and orientation during movement, requires both visuospatial knowledge and memory. This study aimed to develop a systems-level understanding of human path integration by examining the basic building blocks of path integration in humans. To achieve this goal, we used functional imaging to examine the neural mechanisms that support the tracking and memory of translational and rotational components of human path integration. Critically, and in contrast to previous studies, we examined movement in translation and rotation tasks with no defined end-point or goal. Navigators accumulated translational and rotational information during virtual self-motion. Activity in hippocampus, retrosplenial cortex (RSC), and parahippocampal cortex (PHC) increased during both translation and rotation encoding, suggesting that these regions track self-motion information during path integration. These results address current questions regarding distance coding in the human brain. By implementing a modified delayed match to sample paradigm, we also examined the encoding and maintenance of path integration signals in working memory. Hippocampus, PHC, and RSC were recruited during successful encoding and maintenance of path integration information, with RSC selective for tasks that required processing heading rotation changes. These data indicate distinct working memory mechanisms for translation and rotation, which are essential for updating neural representations of current location. The results provide evidence that hippocampus, PHC, and RSC flexibly track task-relevant translation and rotation signals for path integration and could form the hub of a more distributed network supporting spatial navigation.
Affiliation(s)
- Elizabeth R Chrastil
- Department of Psychological and Brain Sciences and Center for Memory and Brain, Boston University, Boston, Massachusetts; Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging
- Katherine R Sherrill
- Department of Psychological and Brain Sciences and Center for Memory and Brain, Boston University, Boston, Massachusetts; Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging
- Michael E Hasselmo
- Department of Psychological and Brain Sciences and Center for Memory and Brain, Boston University, Boston, Massachusetts
- Chantal E Stern
- Department of Psychological and Brain Sciences and Center for Memory and Brain, Boston University, Boston, Massachusetts; Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging
19
There and Back Again: Hippocampus and Retrosplenial Cortex Track Homing Distance during Human Path Integration. J Neurosci 2015; 35:15442-52. [PMID: 26586830] [DOI: 10.1523/jneurosci.1209-15.2015]
Abstract
Path integration, the updating of position and orientation during movement, often involves tracking a home location. Here, we examine processes that could contribute to successful location tracking in humans. In particular, we investigate a homing vector model of path integration, whereby a navigator continuously tracks a trajectory back to the home location. To examine this model, we developed a loop task for fMRI, in which participants viewed movement that circled back to a home location in a sparse virtual environment. In support of a homing vector system, hippocampus, retrosplenial cortex, and parahippocampal cortex were responsive to Euclidean distance from home. These results provide the first evidence of a constantly maintained homing signal in the human brain. In addition, hippocampus, retrosplenial cortex, and parahippocampal cortex, as well as medial prefrontal cortex, were recruited during successful path integration. These findings suggest that dynamic processes recruit hippocampus, retrosplenial cortex, and parahippocampal cortex in support of path integration, including a homing vector system that tracks movement relative to home.
Significance Statement: Path integration is the continual updating of position and orientation during navigation. Animal studies have identified place cells and grid cells as important for path integration, but underlying models of path integration in humans have rarely been studied. The results of our novel loop closure task are the first to suggest that a homing vector tracks Euclidean distance from the home location, supported by the hippocampus, retrosplenial cortex, and parahippocampal cortex. These findings suggest a potential homing vector mechanism supporting path integration, which recruits hippocampus and retrosplenial cortex to track movement relative to home. These results provide new avenues for computational and animal models by directing attention to homing vector models of path integration, which differ from current movement-tracking models.
20
Abstract
To provide a unitary view of the external world, signals from the two eyes must be combined: a new study pinpoints the location in the human brain where the requisite combination occurs.
Affiliation(s)
- Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK