26
Yazdanbakhsh A, Gagliardi C. Human egocentric position estimation. J Vis 2015. DOI: 10.1167/15.12.955.
27
Gagliardi C, Yazdanbakhsh A. Eye Gaze Position before, during and after Percept Switching of Bistable Visual Stimuli. J Vis 2015. DOI: 10.1167/15.12.206.
28
Qian J, Yazdanbakhsh A. A Neural Model of Distance-Dependent Percept of Object Size Constancy. PLoS One 2015; 10:e0129377. PMID: 26132106; PMCID: PMC4489391; DOI: 10.1371/journal.pone.0129377.
Abstract
Size constancy is one of the well-known visual phenomena that demonstrate perceptual stability by accounting for the effect of viewing distance on retinal image size. Although theories involving distance scaling to achieve size constancy have flourished based on psychophysical studies, its underlying neural mechanisms remain unknown. Single-cell recordings show that distance-dependent size-tuned cells are common along the ventral stream, from V1, V2, and V4 to IT. In addition, recent fMRI research demonstrates that an object's perceived size, associated with its perceived egocentric distance, modulates its retinotopic representation in V1. These results suggest that V1 contributes to size constancy and that its activity is possibly regulated by feedback of distance information from other brain areas. Here, we propose a neural model based on these findings. First, we construct an egocentric distance map in LIP by integrating horizontal disparity and vergence through gain-modulated MT neurons. Second, LIP neurons send modulatory feedback of distance information to size-tuned cells in V1, resulting in a spread of V1 cortical activity. This process provides V1 with distance-dependent size representations. The model supports the view that size constancy is preserved by scaling retinal image size to compensate for changes in perceived distance, and suggests a possible neural circuit capable of implementing this process.
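The distance-scaling principle the abstract describes can be illustrated with a minimal numerical sketch (a hedged illustration of size-distance scaling in general, not the neural model itself; the function names and values are ours):

```python
import math

def retinal_angle(physical_size_m, distance_m):
    """Angular size of an object on the retina, in degrees."""
    return math.degrees(2 * math.atan(physical_size_m / (2 * distance_m)))

def perceived_size(retinal_angle_deg, perceived_distance_m):
    """Size-distance scaling: perceived linear size is the retinal
    angular size scaled by perceived egocentric distance."""
    return 2 * perceived_distance_m * math.tan(math.radians(retinal_angle_deg) / 2)

# An object of fixed physical size 0.5 m viewed at increasing distances:
for d in (1.0, 2.0, 4.0):
    theta = retinal_angle(0.5, d)   # the retinal image shrinks with distance...
    s = perceived_size(theta, d)    # ...but the scaled size stays constant
    print(f"distance {d} m: retinal angle {theta:.2f} deg, perceived size {s:.2f} m")
```

With an accurate distance estimate the scaled size is constant at 0.5 m across all three distances, which is the invariance the model attributes to LIP feedback onto V1 size-tuned cells.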
29
Díaz-Santos M, Cao B, Yazdanbakhsh A, Norton DJ, Neargarder S, Cronin-Golomb A. Perceptual, cognitive, and personality rigidity in Parkinson's disease. Neuropsychologia 2015; 69:183-93. PMID: 25640973; PMCID: PMC4344854; DOI: 10.1016/j.neuropsychologia.2015.01.044.
Abstract
Parkinson's disease (PD) is associated with motor and non-motor rigidity symptoms (e.g., cognitive and personality). The question is raised as to whether rigidity in PD also extends to perception, and if so, whether perceptual, cognitive, and personality rigidities are correlated. Bistable stimuli were presented to 28 non-demented individuals with PD and 26 normal control adults (NC). Necker cube perception and binocular rivalry were examined during passive viewing, and the Necker cube was additionally used for two volitional-control conditions: Hold one percept in front, and Switch between the two percepts. Relative to passive viewing, PD were significantly less able than NC to reduce dominance durations in the Switch condition, indicating perceptual rigidity. Tests of cognitive flexibility and a personality questionnaire were administered to explore the association with perceptual rigidity. Cognitive flexibility was not correlated with perceptual rigidity for either group. Personality (novelty seeking) correlated with dominance durations on Necker passive viewing for PD but not NC. The results indicate the presence in mild-moderate PD of perceptual rigidity and suggest shared neural substrates with novelty seeking, but functional divergence from those supporting cognitive flexibility. The possibility is raised that perceptual rigidity may be a harbinger of cognitive inflexibility later in the disease course.
30
Díaz-Santos M, Cao B, Mauro SA, Yazdanbakhsh A, Neargarder S, Cronin-Golomb A. Effect of visual cues on the resolution of perceptual ambiguity in Parkinson's disease and normal aging. J Int Neuropsychol Soc 2015; 21:146-55. PMID: 25765890; PMCID: PMC5433847; DOI: 10.1017/s1355617715000065.
Abstract
Parkinson's disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented with a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that differ from those in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity.
31
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. PMID: 25642198; PMCID: PMC4294135; DOI: 10.3389/fpsyg.2014.01457.
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
32
Layton OW, Yazdanbakhsh A. A neural model of border-ownership from kinetic occlusion. Vision Res 2014; 106:64-80. PMID: 25448117; DOI: 10.1016/j.visres.2014.11.002.
Abstract
Camouflaged animals that have very similar textures to their surroundings are difficult to detect when stationary. However, when an animal moves, humans readily see a figure at a different depth than the background. How do humans perceive a figure breaking camouflage, even though the texture of the figure and its background may be statistically identical in luminance? We present a model that demonstrates how the primate visual system performs figure-ground segregation in extreme cases of breaking camouflage based on motion alone. Border-ownership signals develop as an emergent property in model V2 units whose receptive fields are nearby kinetically defined borders that separate the figure and background. Model simulations support border-ownership as a general mechanism by which the visual system performs figure-ground segregation, despite whether figure-ground boundaries are defined by luminance or motion contrast. The gradient of motion- and luminance-related border-ownership signals explains the perceived depth ordering of the foreground and background surfaces. Our model predicts that V2 neurons, which are sensitive to kinetic edges, are selective to border-ownership (magnocellular B cells). A distinct population of model V2 neurons is selective to border-ownership in figures defined by luminance contrast (parvocellular B cells). B cells in model V2 receive feedback from neurons in V4 and MT with larger receptive fields to bias border-ownership signals toward the figure. We predict that neurons in V4 and MT sensitive to kinetically defined figures play a crucial role in determining whether the foreground surface accretes, deletes, or produces a shearing motion with respect to the background.
33
Sojak V, Koolbergen DR, Bruggemans E, Yazdanbakhsh A, Kooij M, Hazekamp M. 031: A single-centre 37-year experience with reoperation for atrioventricular septal defect. Interact Cardiovasc Thorac Surg 2014. DOI: 10.1093/icvts/ivu276.31.
34
Layton OW, Mingolla E, Yazdanbakhsh A. Neural dynamics of feedforward and feedback processing in figure-ground segregation. Front Psychol 2014; 5:972. PMID: 25346703; PMCID: PMC4193330; DOI: 10.3389/fpsyg.2014.00972.
Abstract
Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes are exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. Their activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.
35
Qian J, Yazdanbakhsh A. A neural model of distance-dependent percept of object size constancy. J Vis 2014. DOI: 10.1167/14.10.1187.
36
Cao B, Yazdanbakhsh A. A novel 3D/dichoptic presentation system compatible with large field eye tracking. J Vis 2014. DOI: 10.1167/14.10.967.
37
Léveillé J, Myers E, Yazdanbakhsh A. Object-centered reference frames in depth as revealed by induced motion. J Vis 2014; 14:15. PMID: 24618108; DOI: 10.1167/14.3.15.
Abstract
An object-centric reference frame is a spatial representation in which objects or their parts are coded relative to others. The existence of object-centric representations is supported by the phenomenon of induced motion, in which the motion of an inducer frame in a particular direction induces motion in the opposite direction in a target dot. We report an experiment conducted with an induced-motion display in which a degree of slant is imparted to the inducer frame using either perspective or binocular disparity depth cues. Critically, the inducer frame oscillates perpendicularly to the line of sight, rather than moving in depth. Participants matched the perceived induced motion of the target dot in depth using a 3D rotatable rod. Although the frame did not move in depth, we found that subjects perceived the dot as moving in depth, either along the slanted frame or against it, when depth was given by perspective and disparity, respectively. Induced motion is thus not due solely to competition among populations of planar motion filters, but rather incorporates 3D scene constraints. We also discuss this finding in the context of the uncertainty related to various depth cues, and to the locality of representation of reference frames.
38
Koolbergen DR, Manshanden JSJ, Yazdanbakhsh A, Bouma BJ, Blom NA, Mulder BJ, Hazekamp M. 176: Reoperation for neo-aortic root pathology after the arterial switch operation. Interact Cardiovasc Thorac Surg 2013. DOI: 10.1093/icvts/ivt372.176.
39
Yazdanbakhsh A, Rijssen LBV, Koolbergen DR, Konig AM, Hazekamp M. 307: Long-term follow-up of tracheoplasty using autologous pericardial patch and strips of costal cartilage. Interact Cardiovasc Thorac Surg 2013. DOI: 10.1093/icvts/ivt372.307.
40
Abstract
Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within laminae and across multiple visual areas.
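For contrast with the multiscale account above, the classical statement of the aperture problem can be sketched numerically: a single oriented detector viewing an edge through an aperture reports only the velocity component along the edge's normal, and the textbook remedy is to intersect two such constraint lines (the explicit intersection-of-constraints computation the model avoids). This is a hedged baseline illustration with names and numbers of our own choosing, not the cited model:

```python
import numpy as np

def normal_speed(true_velocity, edge_normal):
    """A detector viewing a straight edge through an aperture measures
    only the speed along the edge's unit normal (the aperture problem)."""
    n = np.asarray(edge_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return float(np.dot(true_velocity, n))

def intersect_constraints(m1, n1, m2, n2):
    """Recover the full velocity v from two normal measurements by
    solving the two linear constraints v . n_i = m_i (unit normals)."""
    A = np.vstack([n1, n2])
    b = np.array([m1, m2])
    return np.linalg.solve(A, b)

v_true = np.array([2.0, 1.0])      # true pattern velocity (deg/s)
n1 = np.array([1.0, 0.0])          # vertical edge: horizontal normal
n2 = np.array([0.0, 1.0])          # horizontal edge: vertical normal
m1 = normal_speed(v_true, n1)      # each aperture sees one component only
m2 = normal_speed(v_true, n2)
v_rec = intersect_constraints(m1, n1, m2, n2)
print(v_rec)                       # recovers the full velocity [2. 1.]
```

The abstract's claim is that MT need not perform this explicit intersection; in the model, combining measurements taken at different spatial sampling scales across LGN, V1, and MT yields the disambiguated direction as an emergent property.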
41
Wurbs J, Mingolla E, Yazdanbakhsh A. Modeling a space-variant cortical representation for apparent motion. J Vis 2013; 13(10):2. PMID: 23922444; DOI: 10.1167/13.10.2.
Abstract
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
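The qualitative linear increase of Dmax with eccentricity described above can be summarized in a toy form: if Dmax corresponds to a roughly fixed span of V1 cortex, then mapping that span back into visual degrees with an inverse cortical magnification that grows linearly with eccentricity yields a linear Dmax. The magnification estimate and the cortical span below are illustrative assumptions of ours, not parameters of the cited model:

```python
def dmax_deg(ecc_deg, cortical_span_mm=5.0):
    """Toy linear Dmax model. M = 17.3 / (ecc + 0.75) mm/deg is a
    commonly cited human V1 magnification estimate; the 5 mm cortical
    span is an illustrative assumption, not a fitted value."""
    magnification_mm_per_deg = 17.3 / (ecc_deg + 0.75)
    return cortical_span_mm / magnification_mm_per_deg

for e in (0, 5, 10, 20):
    print(f"eccentricity {e:2d} deg -> Dmax ~ {dmax_deg(e):.2f} deg")
```

Because the inverse magnification is linear in eccentricity, equal eccentricity steps produce equal Dmax increments, mirroring the linear trend the model reproduces without invoking extrastriate areas.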
42
Jia N, Yazdanbakhsh A. Perisaccadic predictive remapping: a neural model of thalamo-cortical interactions. J Vis 2013. DOI: 10.1167/13.9.521.
43
Yazdanbakhsh A, Layton O. Multi-scale selectivity to figures in primate V4. J Vis 2013. DOI: 10.1167/13.9.711.
44
Ruda H, Mingolla E, Grossberg S, Yazdanbakhsh A. Modeling Hyperacuity Data with a Hierarchical Neural Vision Network and Modified Hebbian Learning. J Vis 2013. DOI: 10.1167/13.9.276.
45
Layton OW, Mingolla E, Yazdanbakhsh A. Dynamic coding of border-ownership in visual cortex. J Vis 2012; 12(13):8. DOI: 10.1167/12.13.8.
46
Yazdanbakhsh A, Layton O, Mingolla E. A neural model of border-ownership and motion in early vision. J Vis 2012. DOI: 10.1167/12.9.759.
47
Srinivasan K, Grossberg S, Yazdanbakhsh A. Predictive Remapping of Binocularly Fused Images under Saccadic Eye Movements. J Vis 2012. DOI: 10.1167/12.9.44.
48
Cao B, Mingolla E, Yazdanbakhsh A. The role of feedback and long-range horizontal connections in brightness-related responses in visual cortex: a computational model. J Vis 2012. DOI: 10.1167/12.9.1220.
49
Wurbs J, Mingolla E, Yazdanbakhsh A. Modeling a space-variant cortical representation for motion under continuous and phi motion conditions. J Vis 2012. DOI: 10.1167/12.9.762.
50
Gori S, Giora E, Yazdanbakhsh A, Mingolla E. The novelty of the "Accordion Grating Illusion". Neural Netw 2012; 39:52. PMID: 22951095; DOI: 10.1016/j.neunet.2012.07.008.