1
Salisbury JM, Palmer SE. A dynamic scale-mixture model of motion in natural scenes. bioRxiv 2024:2023.10.19.563101. PMID: 37961311; PMCID: PMC10634686; DOI: 10.1101/2023.10.19.563101.
Abstract
Some of the most important tasks of visual and motor systems involve estimating the motion of objects and tracking them over time. Such systems evolved to meet the behavioral needs of the organism in its natural environment, and may therefore be adapted to the statistics of motion it is likely to encounter. By tracking the movement of individual points in movies of natural scenes, we begin to identify common properties of natural motion across scenes. As expected, objects in natural scenes move in a persistent fashion, with velocity correlations lasting hundreds of milliseconds. More subtly, but crucially, we find that the observed velocity distributions are heavy-tailed and can be modeled as a Gaussian scale-mixture. Extending this model to the time domain leads to a dynamic scale-mixture model, consisting of a Gaussian process multiplied by a positive scalar quantity with its own independent dynamics. Dynamic scaling of velocity arises naturally as a consequence of changes in object distance from the observer, and may approximate the effects of changes in other parameters governing the motion in a given scene. This modeling and estimation framework has implications for the neurobiology of sensory and motor systems, which need to cope with these fluctuations in scale in order to represent motion efficiently and drive fast and accurate tracking behavior.
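To make the model class in this abstract concrete, here is a minimal simulation sketch of a dynamic Gaussian scale-mixture velocity trace. The Ornstein-Uhlenbeck processes, the log-normal scale, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_velocity(n_steps, dt=0.01, tau_g=0.2, tau_s=1.0, sigma_log_s=0.5):
    """Sample a 1-D velocity trace v_t = s_t * g_t, where g_t is a
    Gaussian (Ornstein-Uhlenbeck) process and s_t = exp(z_t) is a
    positive scale with its own independent, slower dynamics."""
    g = np.zeros(n_steps)
    z = np.zeros(n_steps)
    for t in range(1, n_steps):
        g[t] = g[t - 1] - (dt / tau_g) * g[t - 1] + np.sqrt(2 * dt / tau_g) * rng.normal()
        z[t] = z[t - 1] - (dt / tau_s) * z[t - 1] + sigma_log_s * np.sqrt(2 * dt / tau_s) * rng.normal()
    return np.exp(z) * g

def excess_kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / ((x ** 2).mean() ** 2) - 3.0

v = simulate_velocity(200_000)
# A plain Gaussian process has excess kurtosis near 0; the scale
# mixture is heavy-tailed, as reported for natural motion.
print(excess_kurtosis(v) > 1.0)  # → True
```

The multiplicative scale is what produces the heavy tails: conditioned on a fixed scale the velocity is Gaussian, but marginalizing over the fluctuating scale mixes Gaussians of different widths.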
2
Westebbe L, Liang Y, Blaser E. The Accuracy and Precision of Memory for Natural Scenes: A Walk in the Park. Open Mind (Camb) 2024; 8:131-147. PMID: 38435706; PMCID: PMC10898787; DOI: 10.1162/opmi_a_00122.
Abstract
It is challenging to quantify the accuracy and precision of scene memory because it is unclear what 'space' scenes occupy (how can we quantify error when misremembering a natural scene?). To address this, we exploited the ecologically valid, metric space in which scenes occur and are represented: routes. In a delayed estimation task, participants briefly saw a target scene drawn from a video of an outdoor 'route loop', then used a continuous report wheel of the route to pinpoint the scene. Accuracy was high and unbiased, indicating there was no net boundary extension/contraction. Interestingly, precision was higher for routes that were more self-similar (as characterized by the half-life, in meters, of a route's Multiscale Structural Similarity index), consistent with previous work finding a 'similarity advantage' where memory precision is regulated according to task demands. Overall, scenes were remembered to within a few meters of their actual location.
Affiliation(s)
- Leo Westebbe, Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Yibiao Liang, Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Erik Blaser, Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
3
Muller KS, Matthis J, Bonnen K, Cormack LK, Huk AC, Hayhoe M. Retinal motion statistics during natural locomotion. eLife 2023; 12:e82410. PMID: 37133442; PMCID: PMC10156169; DOI: 10.7554/elife.82410.
Abstract
Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker's goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field.
Affiliation(s)
- Karl S Muller, Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Jonathan Matthis, Department of Biology, Northeastern University, Boston, United States
- Kathryn Bonnen, School of Optometry, Indiana University, Bloomington, United States
- Lawrence K Cormack, Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Alex C Huk, Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Mary Hayhoe, Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
4
Cardelli L, Tullo MG, Galati G, Sulpizio V. Effect of optic flow on spatial updating: insight from an immersive virtual reality study. Exp Brain Res 2023; 241:865-874. PMID: 36781456; DOI: 10.1007/s00221-023-06567-z.
Abstract
Self-motion information is required to keep track of where we are with respect to our environment (spatial updating). Visual signals such as optic flow provide relevant information about self-motion, especially in the absence of vestibular and/or proprioceptive cues generated by physical movement. However, the role of optic flow in spatial updating is still debated. A virtual reality system based on a head-mounted display was used to give participants a sensation of self-motion within a naturalistic environment in the absence of physical movement. We asked participants to keep track of the spatial position of a target during simulated self-motion while we manipulated the availability of optic flow coming from the lower part of the environment (ground plane). In each trial, the ground could be a green lawn (optic flow ON) or covered in snow (optic flow OFF). We observed that the lack of optic flow from the ground had a detrimental effect on spatial updating. Furthermore, we explored the interaction between optic flow availability and different characteristics of self-motion: increasing self-motion speed had a detrimental effect on spatial updating, especially in the absence of optic flow, while self-motion direction (leftward, forward, rightward) and path (translational and curvilinear) had no statistically significant effect. Overall, we demonstrated that, in the absence of some idiothetic cues, the optic flow provided by the ground plays a dominant role in self-motion estimation and, hence, in the ability to update the spatial relationships between one's own position and the positions of surrounding objects.
Affiliation(s)
- Lisa Cardelli, Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy
- Maria Giulia Tullo, Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy; Department of Translational and Precision Medicine, Sapienza University, Rome, Italy
- Gaspare Galati, Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio, Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
|
5
|
Bonnen K. Motion vision: Fish swimming to see. Curr Biol 2023; 33:R30-R32. PMID: 36626861; DOI: 10.1016/j.cub.2022.11.027.
Abstract
Self-motion generates optic flow, a visual motion signal used by many organisms for navigation and self-stabilization. A new study quantitatively demonstrates how environmental structure and current behavioral state explain the spatial biases observed in zebrafish optomotor responses.
Affiliation(s)
- Kathryn Bonnen, School of Optometry, Indiana University, 800 Atwater Avenue, Bloomington, IN 47405, USA
6
Sedigh-Sarvestani M, Fitzpatrick D. What and Where: Location-Dependent Feature Sensitivity as a Canonical Organizing Principle of the Visual System. Front Neural Circuits 2022; 16:834876. PMID: 35498372; PMCID: PMC9039279; DOI: 10.3389/fncir.2022.834876.
Abstract
Traditionally, functional representations in early visual areas are conceived as retinotopic maps preserving ego-centric spatial location information while ensuring that other stimulus features are uniformly represented for all locations in space. Recent results challenge this framework of relatively independent encoding of location and features in the early visual system, emphasizing location-dependent feature sensitivities that reflect specialization of cortical circuits for different locations in visual space. Here we review the evidence for such location-specific encoding including: (1) systematic variation of functional properties within conventional retinotopic maps in the cortex; (2) novel periodic retinotopic transforms that dramatically illustrate the tight linkage of feature sensitivity, spatial location, and cortical circuitry; and (3) retinotopic biases in cortical areas, and groups of areas, that have been defined by their functional specializations. We propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual experience, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. Future studies are necessary to discover mechanisms underlying joint encoding of location and functional information, how this relates to behavior, emerges during development, and varies across species.
7
Steinmetz ST, Layton OW, Powell NV, Fajen BR. A Dynamic Efficient Sensory Encoding Approach to Adaptive Tuning in Neural Models of Optic Flow Processing. Front Comput Neurosci 2022; 16:844289. PMID: 35431848; PMCID: PMC9011806; DOI: 10.3389/fncom.2022.844289.
Abstract
This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that model performance with dynamic tuning yielded more accurate, shorter latency heading estimates compared to the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
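One common concrete reading of efficient sensory encoding, re-placing tuning-curve centers to match the recently observed stimulus distribution, can be sketched as follows. The quantile-placement rule and all values here are illustrative assumptions; the paper's actual update mechanism may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def retune(n_units, recent_stimuli):
    """Place tuning-curve centers at equally spaced quantiles of the
    recently observed stimulus distribution, so each unit responds
    about equally often (one standard reading of efficient coding)."""
    qs = (np.arange(n_units) + 0.5) / n_units
    return np.quantile(recent_stimuli, qs)

# Hypothetical slow vs. fast optic-flow epochs (speeds in deg/s):
# the centers spread out to track the current range of speeds.
slow_epoch = rng.exponential(scale=2.0, size=5000)
fast_epoch = rng.exponential(scale=20.0, size=5000)
print(retune(5, slow_epoch).max() < retune(5, fast_epoch).max())  # → True
```

Continually re-running such an update as new stimulus values arrive keeps the population sensitive across a time-varying speed distribution, which is the behavior the model's dynamic tuning is designed to capture.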
Affiliation(s)
- Scott T. Steinmetz, Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States (corresponding author)
- Oliver W. Layton, Computer Science Department, Colby College, Waterville, ME, United States
- Nathaniel V. Powell, Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen, Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
8
Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022; 18:e1009575. PMID: 35192614; PMCID: PMC8896712; DOI: 10.1371/journal.pcbi.1009575.
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR) a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measure this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and find features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specifies the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
Author summary: We recorded the full-body kinematics and binocular gaze of humans walking through real-world natural environments and estimated visual motion (optic flow) using both computational video analysis and geometric simulation. Contrary to established theories of the role of optic flow in the control of locomotion, we found that eye-movement-free, head-centric optic flow is highly unstable due to the complex phasic trajectory of the head during natural locomotion, rendering it an unlikely candidate for heading perception. In contrast, retina-centered optic flow consisted of a regular pattern of outflowing motion centered on the fovea. Retinal optic flow contained highly consistent patterns that specified the walker's trajectory relative to the point of fixation, providing powerful retinotopic cues that may be used for the visual control of locomotion in natural environments. This examination of optic flow in real-world contexts suggests a need to re-evaluate existing theories of the role of optic flow in the visual control of action during natural behavior.
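The curl and divergence measurements described in this abstract can be illustrated with finite differences on a synthetic flow field. The field below is a toy expansion-plus-rotation pattern, not the paper's measured retinal flow:

```python
import numpy as np

# Retina-centered grid in degrees; the fovea is the grid center.
y, x = np.mgrid[-10:10:81j, -10:10:81j]
spacing = 0.25  # grid step in degrees (20 deg / 80 intervals)

def flow(rot=0.0):
    """Radial expansion (u, v) = (x, y), plus an optional
    rotational component (-y, x) scaled by `rot`."""
    return x - rot * y, y + rot * x

def curl_div(u, v, spacing):
    """Curl and divergence of a 2-D vector field via central differences."""
    du_dy, du_dx = np.gradient(u, spacing)  # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, spacing)
    return dv_dx - du_dy, du_dx + dv_dy

u, v = flow(rot=0.5)
curl, div = curl_div(u, v, spacing)

fovea = (40, 40)  # middle of the 81x81 grid
# For this linear field the finite differences are exact:
# div = 1 + 1 = 2, curl = 0.5 - (-0.5) = 1.
print(round(float(div[fovea]), 2), round(float(curl[fovea]), 2))  # → 2.0 1.0
```

In the paper's framing, nonzero foveal curl signals a trajectory that is not aimed at the gaze point, while the location of peak divergence tracks the instantaneous overground velocity vector; the toy field above simply shows how those two operators separate the rotational and expansive components.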
Affiliation(s)
- Jonathan Samir Matthis, Department of Biology, Northeastern University, Boston, Massachusetts, United States of America (corresponding author)
- Karl S. Muller, Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen, School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe, Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
9
Self-motion illusions from distorted optic flow in multifocal glasses. iScience 2022; 25:103567. PMID: 34988405; PMCID: PMC8693457; DOI: 10.1016/j.isci.2021.103567.
Abstract
Progressive addition lenses (PALs) are ophthalmic lenses that correct presbyopia by providing improvements of near and far vision in different areas of the lens, but they distort the periphery of the wearer's field of view. Distortion-related difficulties reported by PAL wearers include unnatural self-motion perception. Visual self-motion perception is guided by optic flow, the pattern of retinal motion produced by self-motion. We tested the influence of PAL distortions on optic flow-based heading estimation using a model of heading perception and a virtual reality-based psychophysical experiment. The model predicted changes of heading estimation along a vertical axis, depending on visual field size and gaze direction. Consistent with this prediction, participants experienced upward deviations of self-motion when gaze through the periphery of the lens was simulated, but not for gaze through the center. We conclude that PALs may lead to illusions of self-motion which could be remedied by a careful gaze strategy.
Highlights:
- Multifocal lenses impair vision of spectacle wearers with gaze-dependent distortions
- A model of heading perception from distorted optic flow suggests a misperception
- Heading perception was tested with a virtual reality-based simulation of distortions
- Distortions lead to gaze direction-dependent illusions in perceived vertical heading
10
Maltempo T, Pitzalis S, Bellagamba M, Di Marco S, Fattori P, Galati G, Galletti C, Sulpizio V. Lower visual field preference for the visuomotor control of limb movements in the human dorsomedial parietal cortex. Brain Struct Funct 2021; 226:2989-3005. PMID: 33738579; PMCID: PMC8541995; DOI: 10.1007/s00429-021-02254-3.
Abstract
Visual cues coming from the lower visual field (VF) play an important role in the visual guidance of upper- and lower-limb movements. A recently described region situated in the dorsomedial parietal cortex, area hPEc (Pitzalis et al. in NeuroImage 202:116092, 2019), might have a role in integrating visually derived information with somatomotor signals to guide limb interaction with the environment. In macaque, it has been demonstrated that PEc receives visual information mostly from the lower visual field but, to date, there has been no systematic investigation of VF preference in the newly defined human homologue of macaque area PEc (hPEc). Here we examined the VF preferences of hPEc while participants performed a visuomotor task implying spatially directed delayed eye-, hand- and foot-movements towards different spatial locations within the VF. By analyzing data as a function of the different target locations towards which upcoming movements were planned (and then executed), we observed an asymmetry in the vertical dimension of the VF in area hPEc, with this area being more strongly activated by limb movements directed towards visual targets located in the lower compared to the upper VF. This result confirms the view, first advanced in the macaque monkey, that PEc is involved in processing visual information to guide body interaction with the external environment, including locomotion. We also observed a contralateral dominance for the lower VF preference in the foot-selective somatomotor cortex anterior to hPEc. This result might reflect the role of this cortex (which includes areas PE and S-I) in providing highly topographically organized signals, likely useful for achieving an appropriate foot posture during locomotion.
Affiliation(s)
- Teresa Maltempo, Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Sabrina Pitzalis, Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Martina Bellagamba, Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Sara Di Marco, Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Via dei Marsi 78, 00185, Rome, Italy
- Patrizia Fattori, Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Gaspare Galati, Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Via dei Marsi 78, 00185, Rome, Italy
- Claudio Galletti, Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Valentina Sulpizio, Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy; Department of Psychology, "Sapienza" University of Rome, Via dei Marsi 78, 00185, Rome, Italy
11
Durant S, Zanker JM. The combined effect of eye movements improve head centred local motion information during walking. PLoS One 2020; 15:e0228345. PMID: 31999777; PMCID: PMC6992003; DOI: 10.1371/journal.pone.0228345.
Abstract
Eye movements play multiple roles in human behaviour: small stabilizing movements are important for keeping the image of the scene steady during locomotion, whilst large scanning movements search for relevant information. It has been proposed that eye-movement-induced retinal motion interferes with the estimation of self-motion based on optic flow. We investigated the effect of eye movements on retinal motion information during walking. Observers walked towards a target, wearing eye-tracking glasses that simultaneously recorded the scene ahead and tracked the movements of both eyes. By realigning the frames of the recording from the scene ahead relative to the centre of gaze, we could mimic the input received by the retina (retinocentric coordinates) and compare this to the input received by the scene camera (head-centred coordinates). We asked which of these coordinate frames resulted in the least noisy motion information. Motion noise was calculated by finding the error between the optic flow signal and a noise-free motion expansion pattern. We found that eye movements improved the optic flow information available, even when large diversions away from the target were made.
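The frame-realignment idea described in this abstract can be sketched as a gaze-centered crop of each camera frame. The function name and setup are hypothetical; the study worked with recordings from eye-tracking glasses:

```python
import numpy as np

def to_retinocentric(frame, gaze_xy, out_size):
    """Crop a head-centered camera frame around the tracked gaze point
    so that the gaze lands at the patch center, mimicking retinal
    (retinocentric) coordinates. Illustrative only: assumes the gaze
    is far enough from the frame border for a full crop."""
    gx, gy = gaze_xy
    h = out_size // 2
    return frame[gy - h:gy + h, gx - h:gx + h]

# A fixated feature stays at the patch center across frames, so its
# retinocentric motion is nulled even though its head-centered
# position changes between frames.
frame_a = np.zeros((100, 100)); frame_a[30, 40] = 1.0  # feature at (x=40, y=30)
frame_b = np.zeros((100, 100)); frame_b[70, 55] = 1.0  # feature moved, gaze follows
patch_a = to_retinocentric(frame_a, (40, 30), 20)
patch_b = to_retinocentric(frame_b, (55, 70), 20)
print(patch_a[10, 10] == 1.0 and patch_b[10, 10] == 1.0)  # → True
```

Comparing optic flow computed on the realigned patches against flow computed on the raw frames is the essence of the retinocentric-versus-head-centred comparison the study reports.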
Affiliation(s)
- Szonya Durant, Department of Psychology, University of London, Egham, England, United Kingdom (corresponding author)
- Johannes M. Zanker, Department of Psychology, University of London, Egham, England, United Kingdom
12
Steering Transforms the Cortical Representation of Self-Movement from Direction to Destination. J Neurosci 2015; 35:16055-63. PMID: 26658859; DOI: 10.1523/jneurosci.2368-15.2015.
Abstract
Steering demands rapid responses to heading deviations and uses optic flow to redirect self-movement toward the intended destination. We trained monkeys in a naturalistic steering paradigm and recorded dorsal medial superior temporal area (MSTd) cortical neuronal responses to the visual motion and spatial location cues in optic flow. We found that neuronal responses to the initial heading direction are dominated by the optic flow's global radial pattern cue. Responses to subsequently imposed heading deviations are dominated by the local direction of motion cue. Finally, as the monkey steers its heading back to the goal location, responses are dominated by the spatial location cue, the screen location of the flow field's center of motion. We conclude that MSTd responses are not rigidly linked to specific stimuli, but rather are transformed by the task relevance of cues that guide performance in learned, naturalistic behaviors.
Significance statement: Unplanned heading changes trigger lifesaving steering back to a goal. Conventionally, such behaviors are thought of as cortical sensory-motor reflex arcs. We find that a more reciprocal process underlies such cycles of perception and action, rapidly transforming visual processing to suit each stage of the task. When monkeys monitor their simulated self-movement, dorsal medial superior temporal area (MSTd) neurons represent their current heading direction. When monkeys steer to recover from an unplanned change in heading direction, MSTd shifts toward representing the goal location. We hypothesize that this transformation reflects the reweighting of bottom-up visual motion signals and top-down spatial location signals, reshaping MSTd's response properties through task-dependent interactions with adjacent cortical areas.
13
Mannion DJ. Sensitivity to the visual field origin of natural image patches in human low-level visual cortex. PeerJ 2015; 3:e1038. PMID: 26131378; PMCID: PMC4485252; DOI: 10.7717/peerj.1038.
Abstract
Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex is specialised for visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in the visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation); the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.
14
Treacherous Pavements: Paving Slab Patterns Modify Intended Walking Directions. PLoS One 2015; 10:e0130034. PMID: 26067491; PMCID: PMC4465974; DOI: 10.1371/journal.pone.0130034.
Abstract
Current understanding in locomotion research is that, for humans, navigating natural environments relies heavily on visual input; in contrast, walking on even ground in man-made obstacle and hazard-free environments is so highly automated that visual information derived from floor patterns should not affect locomotion and in particular have no impact on the direction of travel. The vision literature on motion perception would suggest otherwise; specifically that oblique floor patterns may induce substantial veering away from the intended direction of travel due to the so-called aperture problem. Here, we tested these contrasting predictions by letting participants walk over commonly encountered floor patterns (paving slabs) and investigating participants’ ability to walk “straight ahead” for different pattern orientations. We show that, depending on pattern orientation, participants veered considerably over the measured travel distance (up to 8% across trials), in line with predictions derived from the literature on motion perception. We argue that these findings are important to the study of locomotion, and, if also observed in real world environments, might have implications for architectural design.
15
Temporal statistics of natural image sequences generated by movements with insect flight characteristics. PLoS One 2014; 9:e110386. PMID: 25340761; PMCID: PMC4207754; DOI: 10.1371/journal.pone.0110386.
Abstract
Many flying insects, such as flies, wasps, and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and thereby reduce the computational effort needed to extract information about the environment from the retinal image flow. Because of the distinctive dynamics of this active flight and gaze strategy, the present study systematically analyzes the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast, and spatial frequency that are up to two orders of magnitude larger than those caused by translational movements at velocities characteristic of insects. Distinct changes in image parameters during translations are caused only by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition than for small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on the signal processing performed by the insect visual system under behavioral conditions in natural environments.
|
16
|
Abstract
Visual motion direction ambiguities due to edge-aperture interaction might be resolved by speed priors, but scant empirical data support this hypothesis. We measured optic flow and gaze positions of walking mothers and the infants they carried. Empirically derived motion priors for infants are vertically elongated and shifted upward relative to those of mothers. Skewed normal distributions fitted to the estimated retinal speeds peak at values above 20°/sec.
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, U.S.A.
|
17
|
Paradis AL, Morel S, Seriès P, Lorenceau J. Speeding up the brain: when spatial facilitation translates into latency shortening. Front Hum Neurosci 2012; 6:330. [PMID: 23267321 PMCID: PMC3525934 DOI: 10.3389/fnhum.2012.00330] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2012] [Accepted: 11/28/2012] [Indexed: 11/24/2022] Open
Abstract
Waves of activity following a focal stimulation are reliably observed to spread across the cortical tissue. The origin of these waves remains unclear, and the underlying mechanisms and function are still debated. In this study, we ask whether waves of activity modulate the magnetoencephalography (MEG) signals recorded in humans during visual stimulation with Gabor patches sequentially flashed along a vertical path, eliciting a perception of vertical apparent motion. Building upon the functional properties of long-range horizontal connections, proposed to contribute to spreading activity, we specifically probe the amplitude and latency of MEG responses as a function of Gabor contrast and orientation. The results indicate that, in the left hemisphere, the response amplitude is enhanced and the half-height response latency is shortened for co-aligned as compared to misaligned Gabor patches at a low but not at a high contrast. Based on these findings, we develop a biologically plausible computational model that performs a “spike time alignment” of the responses to elongated contours with varying contrast, endowing them with a phase advance relative to misaligned contours.
Affiliation(s)
- Anne-Lise Paradis
- UPMC Univ Paris 06, UMR-S975 UMR 7225, Centre de Recherche en Neuroscience Equipe Cogimage, Paris, France ; Inserm U 975, Centre de Recherche en Neuroscience Equipe Cogimage, Paris, France ; CNRS UMR 7225, Centre de Recherche en Neuroscience Equipe Cogimage, Paris, France ; ICM Equipe Cogimage, Paris, France
|
18
|
Raudies F, Mingolla E, Neumann H. Active gaze control improves optic flow-based segmentation and steering. PLoS One 2012; 7:e38446. [PMID: 22719889 PMCID: PMC3375264 DOI: 10.1371/journal.pone.0038446] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2012] [Accepted: 05/07/2012] [Indexed: 11/30/2022] Open
Abstract
An observer traversing an environment actively relocates gaze to fixate objects. Evidence suggests that gaze is frequently directed toward the center of an object treated as a target, but more likely toward the edges of an object that appears as an obstacle. We suggest that this difference in gaze might be motivated by the specific patterns of optic flow generated by fixating either the center or the edge of an object. To support this suggestion, we derive an analytical model showing that tangentially fixating the outer surface of an obstacle leads to strong flow discontinuities that can be used for flow-based segmentation, whereas fixating the target center while gaze and heading are locked, without head, body, or eye rotations, gives rise to a symmetric expansion flow centered on the point being approached, which facilitates steering toward the target. We conclude that gaze control incorporates ecological constraints to improve the robustness of steering and collision avoidance by actively generating flow patterns appropriate to the task.
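The symmetric expansion flow described above is easy to sketch: under pure translation toward the fixated point, each image point moves radially away from the focus of expansion (FOE) at a speed proportional to its distance. A minimal sketch under a simplified planar-image, constant-depth assumption; `rate` is a hypothetical stand-in for translation speed over distance.

```python
def expansion_flow(points, foe, rate):
    """Flow field for pure translation toward the fixated point: each
    image point moves radially away from the focus of expansion at a
    speed proportional to its distance from it. Sketch of the symmetric
    expansion case only, not the paper's full analytical model."""
    fx, fy = foe
    return {(x, y): (rate * (x - fx), rate * (y - fy)) for (x, y) in points}
```

The flow vanishes exactly at the FOE, which is why locking gaze to the target reduces steering to nulling the flow at the fixated point.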
Affiliation(s)
- Florian Raudies
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, Massachusetts, United States of America.
|
19
|
Durant S, Zanker JM. Variation in the local motion statistics of real-life optic flow scenes. Neural Comput 2012; 24:1781-805. [PMID: 22428592 DOI: 10.1162/neco_a_00294] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Optic flow motion patterns can be a rich source of information about our own movement and about the structure of the environment we are moving in. We investigate the information available to the brain under real operating conditions by analyzing video sequences generated by physically moving a camera through various typical human environments. We consider to what extent the motion signal maps generated by a biologically plausible, two-dimensional array of correlation-based motion detectors (2DMD) not only depend on egomotion, but also reflect the spatial setup of such environments. We analyzed the local motion outputs by extracting the relative amounts of detected directions and comparing the spatial distribution of the motion signals to that of idealized optic flow. Using a simple template matching estimation technique, we are able to extract the focus of expansion and find relatively small errors that are distributed in characteristic patterns in different scenes. This shows that all types of scenes provide suitable motion information for extracting egomotion despite the substantial levels of noise affecting the motion signal distributions, which we attribute to the sparse nature of optic flow and the presence of camera jitter. However, there are large differences in the shape of the direction distributions between different types of scenes; in particular, man-made office scenes are heavily dominated by directions along the cardinal axes, which is much less apparent in outdoor forest scenes. Further examination of motion magnitudes at different scales and the location of motion information in a scene revealed different patterns across different scene categories. This suggests that self-motion patterns are not only relevant for deducing heading direction and speed but also provide a rich information source for scene structure, and could be important for the rapid formation of the gist of a scene under normal human locomotion.
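The template-matching idea above can be sketched as follows: score each candidate focus-of-expansion location by how well the observed local motion directions match an ideal radial-expansion template centered there, then take the best-scoring candidate. This is an illustrative sketch of the matching principle, not the paper's 2DMD pipeline; the data layout is an assumption.

```python
import math

def estimate_foe(directions, candidates):
    """Template-matching estimate of the focus of expansion (FOE).
    `directions` maps image positions (x, y) to local motion directions
    in radians; `candidates` is an iterable of candidate FOE points.
    Each candidate is scored by the mean cosine similarity between the
    observed directions and a radial-expansion template centered on it."""
    def score(foe):
        fx, fy = foe
        total, n = 0.0, 0
        for (x, y), d in directions.items():
            if (x, y) == (fx, fy):
                continue  # template direction is undefined at the FOE itself
            template = math.atan2(y - fy, x - fx)  # ideal radial direction
            total += math.cos(d - template)
            n += 1
        return total / n if n else float("-inf")
    return max(candidates, key=score)
```

On a noiseless synthetic expansion field the true FOE scores exactly 1.0; noise and sparse signals, as in the camera sequences, lower and flatten the score surface.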
Affiliation(s)
- Szonya Durant
- Department of Psychology, Royal Holloway University of London, Egham, Surrey SW116HJ, UK.
|
20
|
Yu CP, Page WK, Gaborski R, Duffy CJ. Receptive field dynamics underlying MST neuronal optic flow selectivity. J Neurophysiol 2010; 103:2794-807. [PMID: 20457855 DOI: 10.1152/jn.01085.2009] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Optic flow informs moving observers about their heading direction. Neurons in monkey medial superior temporal (MST) cortex show heading-selective responses to optic flow and planar direction-selective responses to patches of local motion. We recorded MST neuronal responses to a 90° × 90° optic flow display and to a 3 × 3 array of local motion patches covering the same area. Our goal was to test the hypothesis that the optic flow responses reflect the sum of the local motion responses. The local motion responses of each neuron were modeled as mixtures of Gaussians, combining the effects of two Gaussian response functions derived using a genetic algorithm, and then used to predict that neuron's optic flow responses. Some neurons showed good correspondence between the local motion models and their optic flow responses; others showed substantial differences. We used the genetic algorithm to modulate the relative strength of each local motion segment's responses, accommodating interactions between segments that might modulate their relative efficacy during co-activation by global patterns of optic flow. These gain-modulated models showed uniformly better fits to the optic flow responses, suggesting that co-activation of receptive field segments alters neuronal response properties. We tested this hypothesis by simultaneously presenting local motion stimuli at two different sites. These two-segment stimuli revealed that interactions between response segments have direction- and location-specific effects that can account for aspects of optic flow selectivity. We conclude that MST's optic flow selectivity reflects dynamic interactions between spatially distributed local planar motion response mechanisms.
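The modeling step above can be sketched as follows: each receptive-field segment is a gain-scaled mixture of two direction-tuned Gaussians, and the predicted optic-flow response is the sum over segments. The parameter layout and function names here are illustrative assumptions; in the paper the parameters and gains are fitted with a genetic algorithm.

```python
import math

def circ_gauss(d, mu, sigma):
    """Direction-tuned Gaussian on a circular axis (degrees)."""
    dd = (d - mu + 180.0) % 360.0 - 180.0  # wrapped angular difference
    return math.exp(-dd * dd / (2.0 * sigma * sigma))

def segment_response(direction, params, gain=1.0):
    """One receptive-field segment: a gain-scaled mixture of two
    direction-tuned Gaussians, each (weight, preferred_dir, width)."""
    (w1, mu1, s1), (w2, mu2, s2) = params
    return gain * (w1 * circ_gauss(direction, mu1, s1)
                   + w2 * circ_gauss(direction, mu2, s2))

def predicted_flow_response(local_dirs, segment_params, gains):
    """The tested hypothesis: the optic-flow response as the sum of
    (gain-modulated) local segment responses."""
    return sum(segment_response(d, p, g)
               for d, p, g in zip(local_dirs, segment_params, gains))
```

Setting all gains to 1 recovers the plain summation hypothesis; letting the gains vary per segment corresponds to the gain-modulated models that fit the data better.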
Affiliation(s)
- Chen Ping Yu
- Department of Computer Sciences, Rochester Institute of Technology Rochester, Rochester, New York, USA
|
21
|
Efficient coding correlates with spatial frequency tuning in a model of V1 receptive field organization. Vis Neurosci 2009; 26:21-34. [PMID: 19203427 DOI: 10.1017/s0952523808080966] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Efficient coding has been proposed to play an essential role in early visual processing. While several approaches have used an objective function to optimize a particular aspect of efficient coding, such as minimizing mutual information or maximizing sparseness, here we explore how different estimates of efficient coding in a model with nonlinear dynamics and Hebbian learning determine the similarity of model receptive fields to V1 data with respect to spatial tuning. Our simulation results indicate that most measures of efficient coding correlate with the similarity of model receptive fields to V1 data; that is, optimizing an estimate of efficient coding increases the similarity of the model data to experimental data. However, the degree of correlation varies across the different estimates of efficient coding, and in particular, the variance in the firing pattern of each cell does not predict the similarity of model and experimental data.
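One widely used estimate of the sparseness aspect of efficient coding is the Treves-Rolls measure; a minimal sketch is below. Whether this matches any of the paper's specific estimates is not claimed here, and the function name is an assumption.

```python
def treves_rolls_sparseness(responses):
    """Treves-Rolls sparseness of a response vector: 1.0 for perfectly
    uniform firing, approaching 1/n when a single response carries all
    the activity. A common sparseness estimate in efficient-coding
    work; not asserted to be the paper's measure."""
    n = len(responses)
    mean_r = sum(responses) / n
    mean_r2 = sum(r * r for r in responses) / n
    return (mean_r ** 2) / mean_r2 if mean_r2 else 0.0
```

Maximizing sparseness in this sense drives the value toward 1/n, i.e., toward few strongly active units per stimulus.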
|
22
|
Abstract
Statistically efficient processing schemes focus the resources of a signal processing system on the range of statistically probable signals. Relying on the statistical properties of retinal motion signals during ego-motion, we propose a nonlinear processing scheme for retinal flow. It maximizes the mutual information between the visual input and its neural representation and distributes the processing load uniformly over the neural resources. We derive predictions for the receptive fields of motion-sensitive neurons in velocity space. The properties of these receptive fields are tightly connected to their position in the visual field and to their preferred retinal velocity, and their velocity tuning shows characteristics of neurons in the motion processing pathway of the primate brain.
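For a scalar signal and a noiseless channel, maximizing mutual information while spreading the load uniformly amounts to histogram equalization: encode each value through the empirical cumulative distribution of its inputs. A sketch of that infomax principle, not the paper's actual derivation; the function names are illustrative.

```python
import bisect

def equalizing_code(speed_samples):
    """Infomax-style encoder for a scalar retinal speed: map each value
    through the empirical CDF of the observed speed distribution, so
    encoded values are approximately uniform on [0, 1] and neural
    resources are loaded evenly. Histogram-equalization sketch only."""
    ranked = sorted(speed_samples)
    n = len(ranked)
    def encode(v):
        # fraction of observed speeds <= v (empirical CDF)
        return bisect.bisect_right(ranked, v) / n
    return encode
```

Heavy-tailed speed distributions thus get many coding levels at slow, common speeds and few at rare, fast ones, which is the qualitative signature such schemes predict for velocity tuning.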
Affiliation(s)
- Dirk Calow
- Department of Psychology, Westfalische Wilhelms University, Munster, Germany.
|