1
Huang X, Ghimire B, Chakrala AS, Wiesner S. Neural encoding of multiple motion speeds in visual cortical area MT. bioRxiv [Preprint] 2023:2023.04.08.532456. PMID: 37070082; PMCID: PMC10107747; DOI: 10.1101/2023.04.08.532456.
Abstract
Segmenting objects from each other and from their background is critical for vision. The speed at which objects move provides a salient cue for segmentation. However, how the visual system represents and differentiates multiple speeds is largely unknown. Here we investigated the neural encoding of multiple speeds of overlapping stimuli in the primate visual cortex. We first characterized the perceptual capacity of human and monkey subjects to segment spatially overlapping stimuli moving at different speeds. We then determined how neurons in the motion-sensitive middle temporal (MT) cortex of macaque monkeys encode multiple speeds. We made the novel finding that the responses of MT neurons to two speeds of overlapping stimuli showed a robust bias toward the faster speed component when both speeds were slow (≤ 20°/s). The faster-speed bias occurred even when a neuron had a slow preferred speed and responded more strongly to the slower component than to the faster component when each was presented alone. The bias emerged very early in the neuronal response and was robust over time and to manipulations of motion direction and attention. As the stimulus speeds increased, the faster-speed bias gave way to response averaging. Our finding can be explained by a modified divisive normalization model in which the weights for the speed components are proportional to the responses of a population of neurons elicited by the individual speeds. Our results suggest that this neuron population, referred to as the weighting pool, includes neurons with a broad range of speed preferences. As a result, the response weights for the speed components are determined by the stimulus speeds and are invariant to the speed preferences of individual neurons. Our findings help to define the neural encoding rule for multiple stimuli and provide new insight into the underlying neural mechanisms. The faster-speed bias would benefit behavioral tasks such as figure-ground segregation if figural objects tend to move faster than the background in the natural environment.
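The weighting scheme described in this abstract can be sketched in a few lines. Everything below is hypothetical: the log-Gaussian tuning curve, its width, and the composition of the weighting pool are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def tuning(pref_speeds, stimulus_speed, sigma=1.0):
    # Assumed log-Gaussian speed tuning (response to a single speed alone)
    return np.exp(-np.log2(pref_speeds / stimulus_speed) ** 2 / (2 * sigma ** 2))

def weighted_normalization(pref, s1, s2, pool_prefs):
    """Response of a neuron (preferred speed `pref`) to two overlapping
    speeds s1 and s2: a weighted average whose weights are the pooled
    population responses to each speed presented alone."""
    r1 = tuning(pref, s1)               # this neuron's response to s1 alone
    r2 = tuning(pref, s2)               # this neuron's response to s2 alone
    w1 = tuning(pool_prefs, s1).sum()   # weighting-pool response to s1
    w2 = tuning(pool_prefs, s2).sum()   # weighting-pool response to s2
    return (w1 * r1 + w2 * r2) / (w1 + w2)

# A weighting pool spanning a broad range of preferred speeds (1-64 deg/s)
pool = np.logspace(0, 6, 25, base=2.0)

# A slow-preferring neuron (2 deg/s) viewing overlapping 2.5 and 10 deg/s
# stimuli: although it responds more to 2.5 deg/s alone, the pooled weight
# for the faster speed pulls the bi-speed response toward that component.
r_both = weighted_normalization(2.0, 2.5, 10.0, pool)
```

With this toy pool the pooled response to the faster slow speed is slightly larger, so the bi-speed response falls below a plain average of the two single-speed responses, i.e. it is biased toward the faster component.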
Affiliation(s)
- Xin Huang
- Department of Neuroscience, University of Wisconsin-Madison, Wisconsin 53705, USA
- Bikalpa Ghimire
- Department of Neuroscience, University of Wisconsin-Madison, Wisconsin 53705, USA
- Steven Wiesner
- Department of Neuroscience, University of Wisconsin-Madison, Wisconsin 53705, USA

2
Speed tuning properties of mirror symmetry detection mechanisms. Sci Rep 2019;9:3431. PMID: 30837517; PMCID: PMC6400945; DOI: 10.1038/s41598-019-39064-x.
Abstract
The human visual system is often tasked with extracting image properties such as symmetry from rapidly moving objects and scenes. The extent to which motion-speed and symmetry-processing mechanisms interact is not known. Here we examine the speed-tuning properties of symmetry detection mechanisms using dynamic dot-patterns containing varying amounts of position and local motion-direction symmetry. We measured symmetry detection thresholds for stimuli in which symmetric and noise elements either drifted at different relative speeds, were relocated at different relative temporal frequencies, or were static. We also measured the percentage of correct responses under two stimulus conditions: a segregated condition, in which symmetric and noise elements drifted at different speeds, and a non-segregated condition, in which the symmetric elements drifted at two different speeds in equal proportions, as did the noise elements. We found that performance (i) improved gradually as the difference in relative speed between symmetric and noise elements increased, but was invariant across differences in relative temporal frequency/lifetime duration between symmetric and noise elements, and (ii) was higher in the segregated than in the non-segregated conditions, and in the moving than in the static conditions. We conclude that symmetry detection mechanisms are broadly tuned to speed, with speed-selective symmetry channels combining their outputs by probability summation.
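The concluding claim, that speed-selective channels combine outputs by probability summation, corresponds to a standard formula, sketched below. The channel detection probabilities are made-up numbers for illustration.

```python
def probability_summation(channel_ps):
    """High-threshold probability summation: the stimulus is detected if
    ANY of the independent channels detects it, so
    P(detect) = 1 - prod(1 - p_i)."""
    p_miss = 1.0
    for p in channel_ps:
        p_miss *= 1.0 - p
    return 1.0 - p_miss

# Two speed-tuned symmetry channels, each weakly driven (hypothetical
# values): combined sensitivity exceeds either channel alone.
p_combined = probability_summation([0.4, 0.4])   # 1 - 0.6 * 0.6 = 0.64
```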
3
Rocchi F, Ledgeway T, Webb BS. Criterion-free measurement of motion transparency perception at different speeds. J Vis 2018;18(4):5. PMID: 29614154; PMCID: PMC5886031; DOI: 10.1167/18.4.5.
Abstract
Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distributions of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out", three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of varying width removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of objective, forced-choice methods for revealing the mechanisms underlying motion transparency perception.
Affiliation(s)
- Francesca Rocchi
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
- Timothy Ledgeway
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
- Ben S Webb
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK

4
Poggel DA, Strasburger H, MacKeben M. Cueing attention by relative motion in the periphery of the visual field. Perception 2007;36(6):955-70. PMID: 17844962; DOI: 10.1068/p5752.
Abstract
Sudden changes in visual stimulation attract attention. An observer's body motion generates retinal flow-field patterns that carry information about the observer's own speed and trajectory and about the relative motion of other objects. We investigated the effectiveness of relative motion as an attentional cue and compared it with conventional cueing by the appearance of a frame in the far periphery of the visual field. In a group of ten subjects, contrast thresholds for perceiving the orientation of a static Gabor grating [a four-alternative non-forced-choice (4ANFC) task] were determined at 20°, 30°, 40°, and 60° eccentricity. Subsequently, near-threshold discrimination performance for Gabor-pattern orientation was measured at the same positions, without versus with a ring-shaped cue. The same Gabor patterns were then presented embedded in a random-dot flow field, and uncued discrimination performance was compared with performance after presentation of a relative-motion cue (RMC), i.e. a small random-dot field moving in the direction opposite to the flow field. Both the conventional ring cue and the RMC significantly increased discrimination performance at all test locations. With the parameters chosen for this study, the RMC was slightly less effective than the conventional cue, but its effects were somewhat more pronounced in the far periphery of the visual field. Thus, relative motion is a powerful cue for attracting attention to peripheral visual objects and improves performance as effectively as a conventional ring cue. The findings have practical relevance for everyday life, in particular for tasks such as driving and navigation.
Affiliation(s)
- Dorothe A Poggel
- Generation Research Program (GRP), Human Science Center, Ludwig-Maximilians-Universität München, Germany.

5
Farrell-Whelan M, Brooks KR. Differential processing: towards a unified model of direction and speed perception. Vision Res 2013;92:10-8. PMID: 23994486; DOI: 10.1016/j.visres.2013.08.010.
Abstract
In two experiments, we demonstrate a misperception of the velocity of a random-dot stimulus moving in the presence of a static line oriented obliquely to the direction of dot motion. As shown in previous studies, the perceived direction of the dots is shifted away from the orientation of the static line, with the size of the shift varying as a function of line orientation relative to dot direction (the statically induced direction illusion, or 'SDI'). In addition, we report a novel effect: perceived speed also varies as a function of relative line orientation, decreasing systematically as the angle is reduced from 90° to 0°. We propose that both illusions stem from the differential processing of object-relative and non-object-relative component velocities, with the latter perceptually underestimated with respect to the former by a constant ratio. Although previous proposals regarding the SDI have not allowed quantitative accounts, we present a unified formal model of perceived velocity (both direction and speed) with the magnitude of this ratio as the only free parameter. The model accounted for the angular repulsion of motion direction across line orientations and predicted the systematic decrease in perceived speed as the line's angle was reduced. Although fitting for direction and speed produced different best-fit values of the ratio of underestimation of non-object-relative to object-relative motion (with the ratio for speed larger than that for direction), this discrepancy may be due to differences in the psychophysical procedures used to measure direction and speed.
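One way to make the single-ratio account concrete is the decomposition sketched below. The assignment of the line-parallel component as the non-object-relative one, and the ratio value k = 0.7, are illustrative assumptions, not the authors' fitted parameterization.

```python
import math

def perceived_velocity(speed, angle_deg, k=0.7):
    """Decompose dot velocity relative to a static line at 0 deg:
    the line-parallel component (assumed non-object-relative) is
    underestimated by a constant ratio k < 1, while the perpendicular
    (object-relative) component is perceived veridically."""
    theta = math.radians(angle_deg)        # angle between dot motion and line
    v_par = k * speed * math.cos(theta)    # underestimated component
    v_perp = speed * math.sin(theta)       # veridical component
    perceived_speed = math.hypot(v_par, v_perp)
    perceived_dir = math.degrees(math.atan2(v_perp, v_par))
    return perceived_speed, perceived_dir

# At 90 deg there is no parallel component, so perception is veridical;
# as the angle shrinks toward 0 deg, perceived speed drops toward k * speed,
# and perceived direction is repelled away from the line's orientation.
s90, d90 = perceived_velocity(10.0, 90.0)
s30, d30 = perceived_velocity(10.0, 30.0)
```

Under these assumptions the sketch reproduces both reported effects: speed decreases monotonically as the dot/line angle shrinks from 90° to 0°, and direction is repelled away from the line.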
Affiliation(s)
- Max Farrell-Whelan
- Department of Psychology, Macquarie University, Sydney, New South Wales 2109, Australia.

6
Raudies F, Mingolla E, Neumann H. A model of motion transparency processing with local center-surround interactions and feedback. Neural Comput 2011;23:2868-914. PMID: 21851277; DOI: 10.1162/neco_a_00193.
Abstract
Motion transparency occurs when multiple coherent motions are perceived at one spatial location. Imagine, for instance, looking out of the window of a bus on a bright day: the world outside the window passes by while the movements of passengers inside the bus are reflected in the window. The overlay of both motions at the window leads to motion transparency, which is challenging to process. Noisy and ambiguous motion signals can be reduced using a competition mechanism among all encoded motions at one spatial location. Such competition, however, suppresses the multiple peak responses that encode different motions, as only the strongest response tends to survive. As a solution, we suggest a local center-surround competition over population-encoded motion directions and speeds. Similar motions are supported and dissimilar ones are separated, so that different motions are represented as multiple activations, as occurs in the case of motion transparency. Psychophysical findings, such as motion attraction and repulsion in motion transparency displays, can be explained by this local competition. Beyond this local competition mechanism, we show that feedback signals improve the processing of motion transparency. A discrimination task for transparent versus opaque motion is simulated, in which motion transparency is generated by superimposing large-field motion patterns of either varying size or varying motion coherence. The model's perceptual thresholds with and without feedback are calculated. We demonstrate that initially weak peak responses can be enhanced and stabilized by modulatory feedback signals from higher processing stages.
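The key idea of local, rather than global, competition over a population code can be illustrated with a toy direction population. The kernel widths, gains, and rectification below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def gaussian_bump(dirs, center, width):
    # Circular (wrap-around) Gaussian over direction, in degrees
    d = (dirs - center + 180.0) % 360.0 - 180.0
    return np.exp(-d ** 2 / (2 * width ** 2))

def center_surround(activity, dirs, exc_w=15.0, inh_w=60.0, inh_gain=0.3):
    """One pass of local center-surround interaction: each direction gets
    narrow excitation from similar directions and broader inhibition from
    dissimilar ones, followed by half-wave rectification."""
    out = np.empty_like(activity)
    for i, d in enumerate(dirs):
        exc = activity @ gaussian_bump(dirs, d, exc_w)
        inh = activity @ gaussian_bump(dirs, d, inh_w)
        out[i] = exc - inh_gain * inh
    return np.maximum(out, 0.0)

dirs = np.arange(0.0, 360.0, 5.0)
# Transparent-motion-like input: two direction populations 90 deg apart
activity = gaussian_bump(dirs, 45.0, 20.0) + gaussian_bump(dirs, 135.0, 20.0)
sharpened = center_surround(activity, dirs)
# Both peaks survive the LOCAL competition; a single global winner-take-all
# would have suppressed one of them.
```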
Affiliation(s)
- Florian Raudies
- Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, USA.

7
Braun DI, Schütz AC, Gegenfurtner KR. Localization of speed differences of context stimuli during fixation and smooth pursuit eye movements. Vision Res 2010;50:2740-9. DOI: 10.1016/j.visres.2010.07.028.
8
Raudies F, Neumann H. A model of neural mechanisms in monocular transparent motion perception. J Physiol Paris 2009;104:71-83. PMID: 19900543; DOI: 10.1016/j.jphysparis.2009.11.010.
Abstract
Transparent motion is perceived when multiple motions that differ in direction or speed are presented in the same part of visual space. Several psychophysical and physiological experiments have studied the conditions under which motion transparency occurs, but few computational mechanisms have been proposed that allow multiple motions to be segregated. We present a novel neural model that investigates the mechanisms underlying initial motion detection, the representations required for velocity coding, and the integration and segregation of motion stimuli that account for the perception of transparent motion. The model extends a previously developed architecture for neural computations along the dorsal pathway, particularly in cortical areas V1, MT, and MSTd. It emphasizes the role of feedforward cascade processing and of feedback from higher to earlier processing stages for selective feature enhancement and tuning. Our results demonstrate that the model reproduces several key psychophysical findings on perceptual motion transparency with random-dot stimuli. Moreover, the model can process transparent motion as well as opaque surface motion in real-world sequences of 3-D scenes. As a main thesis, we argue that the perception of transparent motion relies on the representation of multiple velocities at one spatial location; this feature, however, is necessary but not sufficient to perceive transparency. We suggest that the activations simultaneously representing multiple motions are subsequently integrated by separate mechanisms, leading to the segregation of the different overlapping segments.
Affiliation(s)
- Florian Raudies
- Institute of Neural Information Processing, University of Ulm, Germany.

9
Martín A, Barraza JF, Colombo EM. The effect of spatial layout on motion segmentation. Vision Res 2009;49:1613-9. PMID: 19336241; DOI: 10.1016/j.visres.2009.03.020.
Abstract
We present a series of experiments exploring the effect of stimulus spatial configuration on speed discrimination and on two different types of segmentation, using random-dot patterns. In the first experiment, we find that parsing the image decreases speed-discrimination thresholds, as first shown by Verghese and Stone [Verghese, P., & Stone, L. (1997). Spatial layout affects speed discrimination threshold. Vision Research, 37(4), 397-406; Verghese, P., & Stone, L. S. (1996). Perceived visual speed constrained by image segmentation. Nature, 381, 161-163] for sinusoidal gratings. In the second experiment, we study how the spatial configuration affects a subject's ability to localize an illusory contour defined by two surfaces moving at different speeds. The results show that the speed difference necessary to localize the contour decreases as the stimulus patches are separated. The third experiment involves transparency; our results show little or no effect for this condition. We explain the first and second experiments in the framework of the model of Bravo and Watamaniuk [Bravo, M., & Watamaniuk, S. (1995). Evidence for two speed signals: a coarse local signal for segregation and a precise global signal for discrimination. Vision Research, 35(12), 1691-1697], who proposed that motion computation consists of at least two stages: a first computation of coarse local speeds followed by an integration stage. We propose that the more precise speed estimate obtained from the integration stage is used to produce a new, refined segmentation of the image, perhaps through a feedback loop. Our data suggest that this third stage would not apply to the processing of transparency.
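The two-stage account attributed to Bravo and Watamaniuk above can be caricatured in a few lines; the split threshold and the sample values are invented for illustration.

```python
import statistics

def two_stage_speed(local_samples, threshold=2.0):
    """Stage 1: a coarse local signal segregates samples into slow and
    fast groups. Stage 2: integration (here, a plain mean) within each
    group yields a precise speed estimate per surface."""
    slow = [s for s in local_samples if s < threshold]
    fast = [s for s in local_samples if s >= threshold]
    slow_mean = statistics.mean(slow) if slow else None
    fast_mean = statistics.mean(fast) if fast else None
    return slow_mean, fast_mean

# Noisy local speed samples from two overlaid surfaces (~1 and ~3 deg/s)
samples = [0.9, 1.1, 1.0, 2.8, 3.2, 3.0]
slow_est, fast_est = two_stage_speed(samples)   # approximately 1.0 and 3.0
```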
Affiliation(s)
- Andrés Martín
- Departamento de Luminotecnia, Luz y Visión, FACET, Universidad Nacional de Tucumán, Av. Independencia 1800, San Miguel de Tucumán, Argentina.

11
Durant S, Zanker JM. Combining direction and speed for the localisation of visual motion defined contours. Vision Res 2008;48:1053-60. DOI: 10.1016/j.visres.2007.12.021.
12
Verghese P, McKee SP. Motion grouping impairs speed discrimination. Vision Res 2006;46:1540-6. PMID: 16168457; DOI: 10.1016/j.visres.2005.07.029.
Abstract
Discriminating between two speed signals is harder when they are seen as part of a single trajectory than when they appear as distinct entities. Observers were asked to judge which half of a display contained dots that were moving faster, under two main conditions: when dot motion appeared to continue across the boundary between the two halves, and when it moved parallel to the boundary. Speed-discrimination thresholds were elevated when motion in the two halves appeared to cross the boundary compared with when motion was parallel to the boundary. Extensive practice improved performance until speed discrimination in the two cases was virtually indistinguishable. The addition of noise caused the original effect to reappear; i.e., thresholds were again elevated when motion continued across the border. Our results suggest that the local differences in velocity on either side of the border are ignored when motion appears to cross it. Instead, the visual system seems to enforce an a priori assumption that motion continuing across a boundary belongs to a common motion path.
Affiliation(s)
- Preeti Verghese
- Smith Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA 94115, USA.

13
Masson GS. From 1D to 2D via 3D: dynamics of surface motion segmentation for ocular tracking in primates. J Physiol Paris 2005;98:35-52. PMID: 15477021; DOI: 10.1016/j.jphysparis.2004.03.017.
Abstract
In primates, tracking eye movements aid vision by stabilizing the images of a moving object of interest on the retinas. This sensorimotor transformation involves several stages of motion processing, from the local measurement of one-dimensional luminance changes up to the integration of first- and higher-order local motion cues into a global two-dimensional motion signal immune to antagonistic motion arising from the surround. The dynamics of this surface motion segmentation is reflected in the various components of the tracking responses, and its underlying neural mechanisms can be correlated with behaviour at both the single-cell and population levels. I review a series of behavioural studies demonstrating that the neural representation driving eye movements evolves over time, from a fast vector average of the outputs of linear and non-linear spatio-temporal filtering to a slower, progressively more accurate solution for global motion. Because the earliest ocular following is sensitive to binocular disparity, antagonistic visual motion from surfaces located at different depths is filtered out; global motion integration is thus restricted to the depth plane of the object to be tracked. Similar dynamics were found at the level of monkey extra-striate areas MT and MST, and I suggest that several parallel pathways along the motion stream, albeit with different latencies, are involved in building up this accurate surface motion representation. After 200-300 ms, most of the computational problems of early motion processing (the aperture problem, motion integration, motion segmentation) are solved, and eye velocity matches global object velocity to maintain a clear and steady retinal image.
Affiliation(s)
- Guillaume S Masson
- Institut de Neurosciences Physiologiques et Cognitives, Centre National de la Recherche Scientifique, 31 Chemin Joseph Aiguier, 13402 Marseille cedex 20, France.