1. Muller KS, Matthis J, Bonnen K, Cormack LK, Huk AC, Hayhoe M. Retinal motion statistics during natural locomotion. eLife 2023; 12:e82410. PMID: 37133442; PMCID: PMC10156169; DOI: 10.7554/elife.82410.
Abstract
Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker's goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field.
Affiliation(s)
- Karl S Muller
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Jonathan Matthis
- Department of Biology, Northeastern University, Boston, United States
- Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, United States
- Lawrence K Cormack
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Alex C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Mary Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
2. Visual navigation: properties, acquisition and use of views. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2022. PMID: 36515743; DOI: 10.1007/s00359-022-01599-2.
Abstract
Panoramic views offer information on heading direction and on location to visually navigating animals. This review covers the properties of panoramic views and the information they provide to navigating animals, irrespective of image representation. Heading direction can be retrieved by alignment matching between memorized and currently experienced views, and a gradient descent in image differences can lead back to the location at which a view was memorized (positional image matching). Central place foraging insects, such as ants, bees and wasps, conduct distinctly choreographed learning walks and learning flights upon first leaving their nest that are likely to be designed to systematically collect scene memories tagged with information provided by path integration on the direction of and the distance to the nest. Equally, traveling along routes, ants have been shown to engage in scanning movements, in particular when routes are unfamiliar, again suggesting a systematic process of acquiring and comparing views. The review discusses what we know and do not know about how view memories are represented in the brain of insects, how they are acquired and how they are subsequently used for traveling along routes and for pinpointing places.
3. Hughes AE, Griffiths D, Troscianko J, Kelley LA. The evolution of patterning during movement in a large-scale citizen science game. Proc Biol Sci 2021; 288:20202823. PMID: 33434457; PMCID: PMC7892415; DOI: 10.1098/rspb.2020.2823.
Abstract
The motion dazzle hypothesis posits that high contrast geometric patterns can cause difficulties in tracking a moving target and has been argued to explain the patterning of animals such as zebras. Research to date has only tested a small number of patterns, offering equivocal support for the hypothesis. Here, we take a genetic programming approach to allow patterns to evolve based on their fitness (time taken to capture) and thus find the optimal strategy for providing protection when moving. Our ‘Dazzle Bug’ citizen science game tested over 1.5 million targets in a touch screen game at a popular visitor attraction. Surprisingly, we found that targets lost pattern elements during evolution and became closely background matching. Modelling results suggested that targets with lower motion energy were harder to catch. Our results indicate that low contrast, featureless targets offer the greatest protection against capture when in motion, challenging the motion dazzle hypothesis.
Affiliation(s)
- Anna E Hughes
- Department of Psychology, University of Essex, Wivenhoe House, Colchester CO4 3SQ, UK
- Jolyon Troscianko
- Centre for Life and Environmental Sciences, University of Exeter, Penryn Campus, Penryn TR10 9FE, UK
- Laura A Kelley
- Centre for Life and Environmental Sciences, University of Exeter, Penryn Campus, Penryn TR10 9FE, UK
4. Fu Q, Yue S. Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds. Biol Cybern 2020; 114:443-460. PMID: 32623517; PMCID: PMC7554016; DOI: 10.1007/s00422-020-00841-x.
Abstract
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which makes them excellent paradigms from which to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: (1) the proposed model articulates the formation of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which work effectively to suppress irrelevant background motion or distractors and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China
- Computational Intelligence Lab/Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK
- Shigang Yue
- Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China
- Computational Intelligence Lab/Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK
5. Durant S, Zanker JM. The combined effect of eye movements improve head centred local motion information during walking. PLoS One 2020; 15:e0228345. PMID: 31999777; PMCID: PMC6992003; DOI: 10.1371/journal.pone.0228345.
Abstract
Eye movements play multiple roles in human behaviour: small stabilizing movements are important for keeping the image of the scene steady during locomotion, whilst large scanning movements search for relevant information. It has been proposed that eye-movement-induced retinal motion interferes with the estimation of self-motion based on optic flow. We investigated the effect of eye movements on retinal motion information during walking. Observers walked towards a target, wearing eye-tracking glasses that simultaneously recorded the scene ahead and tracked the movements of both eyes. By realigning the frames of the recording from the scene ahead relative to the centre of gaze, we could mimic the input received by the retina (retinocentric coordinates) and compare this to the input received by the scene camera (head-centred coordinates). We asked which of these coordinate frames resulted in the least noisy motion information. Motion noise was calculated as the error between the optic flow signal and a noise-free motion expansion pattern. We found that eye movements improved the optic flow information available, even when large diversions away from the target were made.
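The gaze-realignment step this abstract describes can be sketched as a simple re-centring of each frame on the tracked gaze point. The function name, patch size, and NaN fill policy below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def to_retinocentric(frame, gaze_xy, out_shape=(64, 64)):
    """Re-centre a head-centred frame on the tracked gaze point.

    frame    : 2D array, the scene-camera image (head-centred).
    gaze_xy  : (col, row) pixel coordinates of gaze in the frame.
    out_shape: size of the retinocentric patch to cut out.

    Pixels falling outside the frame are filled with NaN so they can
    be excluded from later motion statistics.
    """
    h, w = out_shape
    gx, gy = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    out = np.full(out_shape, np.nan)
    # source window in the original frame, clipped to its bounds
    top, left = gy - h // 2, gx - w // 2
    src_t, src_l = max(top, 0), max(left, 0)
    src_b = min(top + h, frame.shape[0])
    src_r = min(left + w, frame.shape[1])
    if src_b > src_t and src_r > src_l:
        out[src_t - top:src_b - top, src_l - left:src_r - left] = \
            frame[src_t:src_b, src_l:src_r]
    return out

# toy frame with a marker at the gaze point
frame = np.zeros((100, 100))
frame[30, 70] = 1.0
patch = to_retinocentric(frame, gaze_xy=(70, 30))
print(patch[32, 32])  # the gaze point lands at the patch centre -> 1.0
```

Applying this per frame turns the head-centred video into the retinocentric sequence on which the two coordinate frames' motion noise can be compared.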
Affiliation(s)
- Szonya Durant
- Department of Psychology, University of London, Egham, England, United Kingdom
- Johannes M. Zanker
- Department of Psychology, University of London, Egham, England, United Kingdom
6. Fu Q, Wang H, Hu C, Yue S. Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review. Artif Life 2019; 25:263-311. PMID: 31397604; DOI: 10.1162/artl_a_00297.
Abstract
Motion perception is a critical capability underpinning many aspects of insects' lives, including avoiding predators and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modelling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, refined over millions of years of evolution, offer solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insect visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and hardware realization of these bio-inspired motion perception models.
Affiliation(s)
- Qinbing Fu
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems
- Hongxin Wang
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems
- Cheng Hu
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems
- Shigang Yue
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems
7.
8. Mannion DJ. Sensitivity to the visual field origin of natural image patches in human low-level visual cortex. PeerJ 2015; 3:e1038. PMID: 26131378; PMCID: PMC4485252; DOI: 10.7717/peerj.1038.
Abstract
Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex is specialised for visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in the visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation); the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.
9.
Abstract
Visual motion direction ambiguities due to edge-aperture interaction might be resolved by speed priors, but scant empirical data support this hypothesis. We measured optic flow and gaze positions of walking mothers and the infants they carried. Empirically derived motion priors for infants are vertically elongated and shifted upward relative to mothers. Skewed normal distributions fitted to estimated retinal speeds peak at values above 20°/sec.
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, U.S.A.
10. Durant S, Zanker JM. Variation in the local motion statistics of real-life optic flow scenes. Neural Comput 2012; 24:1781-1805. PMID: 22428592; DOI: 10.1162/neco_a_00294.
Abstract
Optic flow motion patterns can be a rich source of information about our own movement and about the structure of the environment we are moving in. We investigate the information available to the brain under real operating conditions by analyzing video sequences generated by physically moving a camera through various typical human environments. We consider to what extent the motion signal maps generated by a biologically plausible, two-dimensional array of correlation-based motion detectors (2DMD) not only depend on egomotion, but also reflect the spatial setup of such environments. We analyzed the local motion outputs by extracting the relative amounts of detected directions and comparing the spatial distribution of the motion signals to that of idealized optic flow. Using a simple template matching estimation technique, we are able to extract the focus of expansion and find relatively small errors that are distributed in characteristic patterns in different scenes. This shows that all types of scenes provide suitable motion information for extracting egomotion despite the substantial levels of noise affecting the motion signal distributions, attributed to the sparse nature of optic flow and the presence of camera jitter. However, there are large differences in the shape of the direction distributions between different types of scenes; in particular, man-made office scenes are heavily dominated by directions in the cardinal axes, which is much less apparent in outdoor forest scenes. Further examination of motion magnitudes at different scales and the location of motion information in a scene revealed different patterns across different scene categories. This suggests that self-motion patterns are not only relevant for deducing heading direction and speed but also provide a rich information source for scene structure and could be important for the rapid formation of the gist of a scene under normal human locomotion.
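The template-matching estimate of the focus of expansion mentioned above can be sketched as a search over candidate positions, scoring each candidate's idealized radial-expansion template against the measured flow. The grid search and cosine-similarity score here are plausible stand-ins for the paper's actual comparison, not its implementation:

```python
import numpy as np

def foe_by_template_matching(flow, candidates):
    """Estimate the focus of expansion (FOE) of a 2D flow field.

    flow      : array (H, W, 2) of local motion vectors (dx, dy).
    candidates: iterable of (x, y) candidate FOE positions.

    For each candidate, build an idealized expansion template whose
    vectors point radially away from it, and score the mean cosine
    similarity between template and measured flow directions.
    """
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W]
    norms = np.linalg.norm(flow, axis=2)
    valid = norms > 1e-9                     # ignore zero-flow pixels
    best, best_score = None, -np.inf
    for cx, cy in candidates:
        tx, ty = xs - cx, ys - cy            # radial template vectors
        tnorm = np.hypot(tx, ty)
        ok = valid & (tnorm > 1e-9)
        cos = (flow[..., 0] * tx + flow[..., 1] * ty)[ok] / (norms[ok] * tnorm[ok])
        score = cos.mean()
        if score > best_score:
            best, best_score = (cx, cy), score
    return best

# synthetic expansion flow centred at (12, 7) on a 16x24 grid
H, W = 16, 24
ys, xs = np.mgrid[0:H, 0:W]
flow = np.dstack([xs - 12.0, ys - 7.0])
cands = [(x, y) for x in range(W) for y in range(H)]
print(foe_by_template_matching(flow, cands))  # -> (12, 7)
```

Because the score uses only flow directions, it tolerates the sparse, noisy magnitudes the abstract describes.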
Affiliation(s)
- Szonya Durant
- Department of Psychology, Royal Holloway University of London, Egham, Surrey SW116HJ, UK
11. Fleishman LJ, Pallus AC. Motion perception and visual signal design in Anolis lizards. Proc Biol Sci 2010; 277:3547-3554. PMID: 20591869; PMCID: PMC2982240; DOI: 10.1098/rspb.2010.0742.
Abstract
Anolis lizards communicate with displays consisting of motion of the head and body. Early portions of long-distance displays require movements that are effective at eliciting the attention of potential receivers. We studied signal-motion efficacy using a two-dimensional visual-motion detection (2DMD) model consisting of a grid of correlation-type elementary motion detectors. This 2DMD model has been shown to accurately predict Anolis lizard behavioural response. We tested different patterns of artificially generated motion and found that an abrupt 0.3° shift of position in less than 100 ms is optimal. We quantified motion in displays of 25 individuals from five species. Four species employ near-optimal movement patterns. We tested displays of these species using the 2DMD model on scenes with and without moderate wind. Display movements can easily be detected, even in the presence of windblown vegetation. The fifth species does not typically use the most effective display movements and display movements cannot be discerned by the 2DMD model in the presence of windblown vegetation. A number of Anolis species use abrupt up-and-down head movements approximately 10 mm in amplitude in displays, and these movements appear to be extremely effective for stimulating the receiver visual system.
Affiliation(s)
- Leo J Fleishman
- Department of Biological Sciences, Union College, Schenectady, NY 12308, USA
12. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2009; 196:1-13. PMID: 19908049; DOI: 10.1007/s00359-009-0487-7.
Abstract
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be driven more strongly by relevant motion (predators, prey, conspecifics) than by irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
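A minimal sketch of the correlation-type EMD such grids are built from: a Hassenstein-Reichardt detector that low-pass delays one input, correlates it with a neighbouring input, and subtracts the mirror-symmetric term. The discrete-time filter and grating demo are illustrative; the paper's fitted values (0.3° spacing, 0.1 s time constant) appear only as the parameter defaults:

```python
import numpy as np

def emd_response(signal, dt=0.01, tau=0.1, spacing=1):
    """Correlation-type (Hassenstein-Reichardt) elementary motion detector.

    signal : array (T, N), luminance over time at N adjacent photoreceptors.
    tau    : low-pass time constant in seconds (0.1 s in the fitted model).
    spacing: photoreceptor separation in samples (maps to 0.3 deg in the
             fitted model; here it is just an index offset).

    Returns the opponent output per detector pair: positive for motion in
    the preferred direction (toward higher indices), negative for null.
    """
    alpha = dt / (tau + dt)                  # first-order low-pass coefficient
    lp = np.zeros_like(signal)
    for t in range(1, signal.shape[0]):      # recursive low-pass (the delay)
        lp[t] = lp[t - 1] + alpha * (signal[t] - lp[t - 1])
    a, b = signal[:, :-spacing], signal[:, spacing:]
    la, lb = lp[:, :-spacing], lp[:, spacing:]
    # delay-and-correlate in both directions, then subtract (opponency)
    return la * b - lb * a

# a rightward-drifting sine grating should give a net positive response
t = np.arange(0, 1, 0.01)[:, None]
x = np.arange(40)[None, :]
grating = np.sin(2 * np.pi * (0.1 * x - 2.0 * t))
resp = emd_response(grating)
print(resp.mean() > 0)  # True: preferred (rightward) motion
```

Tiling such detectors over a 2D pixel grid gives the kind of model that was run on the natural-motion videos.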
13. Image motion environments: background noise for movement-based animal signals. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2008; 194:441-456. DOI: 10.1007/s00359-008-0317-3.
14.
Abstract
Statistically efficient processing schemes focus the resources of a signal processing system on the range of statistically probable signals. Relying on the statistical properties of retinal motion signals during ego-motion we propose a nonlinear processing scheme for retinal flow. It maximizes the mutual information between the visual input and its neural representation, and distributes the processing load uniformly over the neural resources. We derive predictions for the receptive fields of motion sensitive neurons in the velocity space. The properties of the receptive fields are tightly connected to their position in the visual field, and to their preferred retinal velocity. The velocity tuning properties show characteristics of properties of neurons in the motion processing pathway of the primate brain.
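One standard way to realize the uniform-load, output-entropy-maximizing code described above is histogram equalization: mapping each stimulus through the empirical cumulative distribution of the stimulus ensemble so that every response unit is used equally often. The sketch below illustrates that principle on a skewed speed distribution; it is an illustration of the idea, not the authors' model:

```python
import numpy as np

def equalizing_code(samples, n_units=10):
    """Assign response units so each unit is used equally often.

    Mapping each stimulus value through the empirical CDF of the
    stimulus distribution and binning the result uniformly yields a
    code in which every unit covers an equal share of stimuli: a
    uniform processing load, as maximizing output entropy requires.
    """
    order = np.argsort(np.argsort(samples))        # rank of each sample
    cdf = (order + 0.5) / len(samples)             # empirical CDF in (0, 1)
    return np.minimum((cdf * n_units).astype(int), n_units - 1)

rng = np.random.default_rng(0)
# skewed input, qualitatively like retinal speed distributions
speeds = rng.lognormal(mean=0.0, sigma=1.0, size=10000)
units = equalizing_code(speeds, n_units=10)
counts = np.bincount(units, minlength=10)
print(counts)  # every unit covers an equal share of the samples
```

The bin edges induced by this mapping are narrow where speeds are common and wide where they are rare, which is the qualitative prediction for the velocity tuning of the receptive fields.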
Affiliation(s)
- Dirk Calow
- Department of Psychology, Westfälische Wilhelms University, Münster, Germany
15. Calow D, Lappe M. Local statistics of retinal optic flow for self-motion through natural sceneries. Network (Bristol, England) 2007; 18:343-374. PMID: 18360939; DOI: 10.1080/09548980701642277.
Abstract
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
Affiliation(s)
- Dirk Calow
- Department of Psychology, Westfälische Wilhelms University, Fliednerstr. 21, 48149 Münster, Germany
16. Peters RA, Hemmi JM, Zeil J. Signaling against the Wind: Modifying Motion-Signal Structure in Response to Increased Noise. Curr Biol 2007; 17:1231-1234. PMID: 17614279; DOI: 10.1016/j.cub.2007.06.035.
Abstract
Animal signals are optimized for particular signaling environments [1-3]. While signaling, senders often choose favorable conditions that ensure reliable detection and transmission [4-8], suggesting that they are sensitive to changes in signal efficacy. Recent evidence has also shown that animals will increase the amplitude or intensity of their acoustic signals at times of increased environmental noise [9-11]. The nature of these adjustments provides important insights into sensory processing. However, only a single piece of correlative evidence for signals defined by movement suggests that visual-signal design depends on ambient motion noise [12]. Here we show experimentally for the first time that animals communicating with movement will adjust their displays when environmental motion noise increases. Surprisingly, under sustained wind conditions, the Australian lizard Amphibolurus muricatus changed the structure and increased the duration of its introductory tail flicking, rather than increasing signaling speed. The way these lizards restructure the alerting component of their movement-based aggressive display in the presence of increased motion noise highlights the challenge we face in understanding motion-detection mechanisms under natural operating conditions.
Affiliation(s)
- Richard A Peters
- Centre for Visual Sciences and Research School of Biological Sciences, Australian National University, Canberra ACT 2601, Australia
17. Begley JR, Arbib MA. Salamander locomotion-induced head movement and retinal motion sensitivity in a correlation-based motion detector model. Network (Bristol, England) 2007; 18:101-128. PMID: 17852753; DOI: 10.1080/09548980701452875.
Abstract
We report on a computational model of retinal motion sensitivity based on correlation-based motion detectors. We simulate object motion detection in the presence of retinal slip caused by the salamander's head movements during locomotion. Our study offers new insights into object motion sensitive ganglion cells in the salamander retina. A sigmoidal transformation of the spatially and temporally filtered retinal image substantially improves the sensitivity of the system in detecting a small target moving in place against a static natural background in the presence of comparatively large, fast simulated eye movements, but is detrimental to the direction-selectivity of the motion detector. The sigmoid has insignificant effects on detector performance in simulations of slow, high contrast laboratory stimuli. These results suggest that the sigmoid reduces the system's noise sensitivity.
Affiliation(s)
- Jeffrey R Begley
- Computer Science Department, University of Southern California, 941 W 37th Place, Los Angeles, CA 90089-0781, USA
18. Stürzl W, Zeil J. Depth, contrast and view-based homing in outdoor scenes. Biol Cybern 2007; 96:519-531. PMID: 17443340; DOI: 10.1007/s00422-007-0147-3.
Abstract
Panoramic image differences can be used for view-based homing under natural outdoor conditions, because they increase smoothly with distance from a reference location (Zeil et al., J Opt Soc Am A 20(3):450-469, 2003). The particular shape, slope and depth of such image difference functions (IDFs) recorded at any one place, however, depend on a number of factors that so far have only been qualitatively identified. Here we show how the shape of difference functions depends on the depth structure and the contrast of natural scenes, by quantifying the depth distribution of different outdoor scenes and by comparing it to the difference functions calculated with differently processed panoramic images, which were recorded at the same locations. We find (1) that IDFs and catchment areas become systematically wider as the average distance of objects increases, (2) that simple image processing operations -- like subtracting the local mean, difference-of-Gaussian filtering and local contrast normalization -- make difference functions robust against changes in illumination and the spurious effects of shadows, and (3) by comparing depth-dependent translational and depth-independent rotational difference functions, we show that IDFs of contrast-normalized snapshots are predominantly determined by the depth-structure and possibly also by occluding contours in a scene. We propose a model for the shape of IDFs as a tool for quantitative comparisons between the shapes of these functions in different scenes.
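The image difference functions (IDFs) at the heart of this account can be sketched as RMS pixel differences between each view and a reference snapshot. The toy transect below (a patch sliding across a uniform scene) is an illustrative stand-in for panoramic views recorded along a path:

```python
import numpy as np

def rms_image_difference(img, ref):
    """Root-mean-square pixel difference between a view and a reference snapshot."""
    return float(np.sqrt(np.mean((img.astype(float) - ref.astype(float)) ** 2)))

def idf_along_transect(views, ref_index):
    """Image difference function along a transect: RMS difference of each
    view from the view stored at ref_index. Near the reference location
    this rises smoothly with distance, which is what makes gradient
    descent on image differences a viable homing strategy."""
    ref = views[ref_index]
    return [rms_image_difference(v, ref) for v in views]

# toy transect: a bright patch sliding across an otherwise uniform scene,
# standing in for the view changing as the camera moves away from home
views = []
for shift in range(7):
    v = np.zeros((8, 32))
    v[:, shift * 2:shift * 2 + 4] = 1.0
    views.append(v)
diffs = idf_along_transect(views, ref_index=3)
print(diffs[3])  # 0.0 at the reference location itself
```

In the toy data the difference is zero at the reference view, grows for neighbouring views, and saturates once views no longer overlap, mirroring the catchment-area shape the paper analyses.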
Affiliation(s)
- Wolfgang Stürzl
- ARC Centre of Excellence in Vision Science and Centre for Visual Sciences, Research School of Biological Sciences, The Australian National University, Canberra, ACT, Australia
19. Boeddeker N, Lindemann JP, Egelhaaf M, Zeil J. Responses of blowfly motion-sensitive neurons to reconstructed optic flow along outdoor flight paths. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2005; 191:1143-1155. PMID: 16133502; DOI: 10.1007/s00359-005-0038-9.
Abstract
The retinal image flow a blowfly experiences in its daily life on the wing is determined by both the structure of the environment and the animal's own movements. To understand the design of visual processing mechanisms, there is thus a need to analyse the performance of neurons under natural operating conditions. To this end, we recorded flight paths of flies outdoors and reconstructed what they had seen, by moving a panoramic camera along exactly the same paths. The reconstructed image sequences were later replayed on a fast, panoramic flight simulator to identified, motion sensitive neurons of the so-called horizontal system (HS) in the lobula plate of the blowfly, which are assumed to extract self-motion parameters from optic flow. We show that under real life conditions HS-cells not only encode information about self-rotation, but are also sensitive to translational optic flow and, thus, indirectly signal information about the depth structure of the environment. These properties do not require an elaboration of the known model of these neurons, because the natural optic flow sequences generate--at least qualitatively--the same depth-related response properties when used as input to a computational HS-cell model and to real neurons.
Affiliation(s)
- N Boeddeker
- Lehrstuhl Neurobiologie, Universität Bielefeld, Postfach 10 01 31, 33501 Bielefeld, Germany