1. Wu Z, Guo A. Bioinspired figure-ground discrimination via visual motion smoothing. PLoS Comput Biol 2023; 19:e1011077. PMID: 37083880; PMCID: PMC10155969; DOI: 10.1371/journal.pcbi.1011077.
Abstract
Flies detect and track moving targets among visual clutter, a process that relies mainly on visual motion. Visual motion is computed along the pathway from the retina to the T4/T5 cells. The computation of local directional motion was formulated as the elementary movement detector (EMD) model more than half a century ago. Target detection, or figure-ground discrimination, can then be cast as extracting the boundary between a target and the background from the motion discontinuities in the output of a retinotopic array of EMDs. Individual EMDs cannot measure true velocities, however, because they are sensitive to pattern properties such as luminance contrast and spatial frequency content. It remains unclear how local directional motion signals are further integrated to enable figure-ground discrimination. Here, we present a computational model inspired by fly motion vision. Simulations suggest that the heavily fluctuating output of an EMD array is naturally surmounted by a lobula network, which is hypothesized to lie downstream of the local motion detectors and to comprise parallel pathways with distinct directional selectivity. The lobula network carries out a spatiotemporal smoothing operation on visual motion, especially across time, enabling the segmentation of moving figures from the background. The model qualitatively reproduces experimental observations of the visually evoked response characteristics of one type of lobula columnar (LC) cell, and is further shown to be robust to natural scene variability. Our results suggest that the lobula is involved in local motion-based target detection.
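The correlator-plus-smoothing pipeline this abstract describes can be sketched in NumPy. This is an illustrative Hassenstein-Reichardt-style EMD array with a temporal box filter standing in for the hypothesized lobula smoothing; the function names, the time constant, and the window size are our own assumptions, not the paper's implementation.

```python
import numpy as np

def emd_array(stimulus, tau=3.0):
    """Array of correlation-type elementary motion detectors (EMDs).

    stimulus: 2-D luminance array of shape (time, space).
    tau: time constant, in frames, of the first-order low-pass filter
         that delays one arm of each detector.
    Returns shape (time, space-1); positive values signal rightward motion.
    """
    lp = np.zeros_like(stimulus, dtype=float)
    a = 1.0 / tau
    for t in range(1, len(stimulus)):            # discrete exponential low-pass
        lp[t] = lp[t - 1] + a * (stimulus[t] - lp[t - 1])
    # opponent subunits: (delayed left x direct right) - (direct left x delayed right)
    return lp[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * lp[:, 1:]

def smooth_time(emd_out, window=15):
    """Temporal box filter: the kind of smoothing that tames the heavily
    fluctuating raw EMD output (an illustrative stand-in for the proposed
    lobula network, not its actual dynamics)."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, "same"), 0, emd_out)
```

For a drifting random-dot pattern the array's mean output follows the motion direction while frame-by-frame values fluctuate with the local pattern; smoothing across time damps those fluctuations without destroying the direction signal.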
Affiliation(s)
- Zhihua Wu
- School of Life Sciences, Shanghai University, Shanghai, China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Aike Guo
- School of Life Sciences, Shanghai University, Shanghai, China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- International Academic Center of Complex Systems, Advanced Institute of Natural Sciences, Beijing Normal University at Zhuhai, Zhuhai, Guangdong, China
- University of Chinese Academy of Sciences, Beijing, China
2. Mano O, Creamer MS, Badwan BA, Clark DA. Predicting individual neuron responses with anatomically constrained task optimization. Curr Biol 2021; 31:4062-4075.e4. PMID: 34324832; DOI: 10.1016/j.cub.2021.06.090.
Abstract
Artificial neural networks trained to solve sensory tasks can develop statistical representations that match those in biological circuits. However, it remains unclear whether they can reproduce properties of individual neurons. Here, we investigated how artificial networks predict individual neuron properties in the visual motion circuits of the fruit fly Drosophila. We trained anatomically constrained networks to predict movement in natural scenes, solving the same inference problem as fly motion detectors. Units in the artificial networks adopted many properties of analogous individual neurons, even though they were not explicitly trained to match these properties. Among these properties was the split into ON and OFF motion detectors, which is not predicted by classical motion detection models. The match between model and neurons was closest when models were trained to be robust to noise. These results demonstrate how anatomical, task, and noise constraints can explain properties of individual neurons in a small neural network.
Affiliation(s)
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA

3. Wang H, Fu Q, Wang H, Baxter P, Peng J, Yue S. A bioinspired angular velocity decoding neural network model for visually guided flights. Neural Netw 2021; 136:180-193. PMID: 33494035; DOI: 10.1016/j.neunet.2020.12.008.
Abstract
Efficient and robust motion perception systems are important prerequisites for achieving visually guided flight in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as the honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we use this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flight. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform, the model is further tested in tunnel-centering and terrain-following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee uses the proposed angular velocity control schemes to navigate accurately through a patterned tunnel and to maintain a suitable distance from an undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, demonstrating the model's potential for implementation in micro air vehicles that carry only visual sensors.
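One way to picture the role of a separate texture pathway: divide the pooled EMD motion output by a pooled texture/contrast estimate, so that pattern contrast cancels out of the decoded value. This is a minimal NumPy sketch of the normalization idea only; the paper's actual model is a trained neural network, and `tau`, `skip`, and the high-pass texture measure here are assumptions of ours.

```python
import numpy as np

def decode_angular_velocity(stimulus, tau=3.0, skip=20):
    """Sketch: EMD motion output normalised by a texture estimate.

    Both the correlator output and the high-pass 'texture' energy scale
    with the square of pattern contrast, so their ratio is contrast
    invariant -- the core benefit of a separate texture pathway.
    stimulus: 2-D luminance array of shape (time, space).
    """
    lp = np.zeros_like(stimulus, dtype=float)
    a = 1.0 / tau
    for t in range(1, len(stimulus)):            # delaying low-pass filter
        lp[t] = lp[t - 1] + a * (stimulus[t] - lp[t - 1])
    motion = lp[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * lp[:, 1:]
    texture = (stimulus - lp) ** 2               # high-pass energy, contrast-dependent
    return motion[skip:].sum() / (texture[skip:].sum() + 1e-12)
```

Feeding the same drifting grating at full and at half contrast yields (nearly) identical decoded values, whereas the raw correlator output would differ roughly fourfold.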
Affiliation(s)
- Huatian Wang
- Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK; Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Qinbing Fu
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
- Hongxin Wang
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
- Paul Baxter
- Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
- Jigen Peng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China; Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Shigang Yue
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK

4. Kirkels LAMH, Zhang W, Duijnhouwer J, van Wezel RJA. Opto-locomotor reflexes of mice to reverse-phi stimuli. J Vis 2020; 20:7. PMID: 32097483; PMCID: PMC7343431; DOI: 10.1167/jov.20.2.7.
Abstract
In a reverse-phi stimulus, the luminance contrast of moving dots is reversed at each displacement step. Under those conditions, the dots are perceived as moving in the direction opposite to their actual displacement. In this study, we investigated whether mice respond oppositely to phi and reverse-phi stimuli. Mice ran head-fixed on a Styrofoam ball floating on pressurized air at the center of a large dome, while we projected random dot patterns that were displaced rightward or leftward as either phi or reverse-phi stimuli. For phi stimuli, changes in direction caused the mice to compensate reflexively, adjusting their running direction toward the direction of the displaced pattern. We show that for reverse-phi stimuli mice compensate in the direction opposite to the displacement of the dots, in accordance with the direction humans perceive for reverse-phi stimuli.
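The sign reversal can be reproduced with a bare-bones correlator model: feed a drifting dot pattern, and the same displacement with contrast inverted on every step, into a Hassenstein-Reichardt array. An illustrative sketch with arbitrary parameters, not the stimulus code used in the study:

```python
import numpy as np

def hr_mean_response(stimulus, tau=2.0, skip=5):
    """Mean output of an opponent Hassenstein-Reichardt correlator array.

    stimulus: 2-D luminance array of shape (time, space).
    Positive return value = net rightward motion signal.
    """
    lp = np.zeros_like(stimulus, dtype=float)
    a = 1.0 / tau
    for t in range(1, len(stimulus)):            # delaying low-pass filter
        lp[t] = lp[t - 1] + a * (stimulus[t] - lp[t - 1])
    out = lp[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * lp[:, 1:]
    return out[skip:].mean()

rng = np.random.default_rng(0)
dots = rng.choice([-1.0, 1.0], size=256)         # one row of random dots
T = 120
# phi: rigid rightward displacement, 1 px per frame
phi = np.stack([np.roll(dots, t) for t in range(T)])
# reverse-phi: same displacement, contrast inverted on every step
reverse_phi = np.stack([(-1.0) ** t * np.roll(dots, t) for t in range(T)])
```

The phi sequence yields a positive (rightward) mean response and the reverse-phi sequence a negative one, matching the reversed percept and the reversed opto-locomotor reflex reported here.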
5. Dynamic Signal Compression for Robust Motion Vision in Flies. Curr Biol 2020; 30:209-221.e8. PMID: 31928873; DOI: 10.1016/j.cub.2019.10.035.
Abstract
Sensory systems need to reliably extract information from highly variable natural signals. Flies, for instance, use optic flow to guide their course and are remarkably adept at estimating image velocity regardless of image statistics. Current circuit models, however, cannot account for this robustness. Here, we demonstrate that the Drosophila visual system reduces input variability by rapidly adjusting its sensitivity to local contrast conditions. We exhaustively map functional properties of neurons in the motion detection circuit and find that local responses are compressed by surround contrast. The compressive signal is fast, integrates spatially, and derives from neural feedback. Training convolutional neural networks on estimating the velocity of natural stimuli shows that this dynamic signal compression can close the performance gap between model and organism. Overall, our work represents a comprehensive mechanistic account of how neural systems attain the robustness to carry out survival-critical tasks in challenging real-world environments.
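The gain-control principle can be illustrated with simple arithmetic: divide each local motion signal by a function of surround RMS contrast, and the spread between high- and low-contrast responses shrinks. A sketch only; the divisive form `1 / (1 + k * c)` and the constant `k` are our assumptions, not the paper's fitted circuit model.

```python
import numpy as np

def rms_contrast(patch):
    """RMS contrast of a luminance patch (std / mean)."""
    return patch.std() / (abs(patch.mean()) + 1e-12)

def compress(local_motion, surround, k=5.0):
    """Divisive compression of a local motion signal by surround contrast."""
    return local_motion / (1.0 + k * rms_contrast(surround))

x = np.arange(100)
surround_hi = 1.0 + 0.9 * np.sin(2 * np.pi * x / 20.0)   # high-contrast scene
surround_lo = 1.0 + 0.3 * np.sin(2 * np.pi * x / 20.0)   # low-contrast scene
# a correlator's raw output grows roughly with contrast squared:
raw_hi, raw_lo = 0.9 ** 2, 0.3 ** 2
```

After compression the two responses remain ordered but their ratio is much closer to one, which is the sense in which the operation "closes the gap" between pattern statistics and a stable velocity estimate.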
6. Differential Tuning to Visual Motion Allows Robust Encoding of Optic Flow in the Dragonfly. J Neurosci 2019; 39:8051-8063. PMID: 31481434; DOI: 10.1523/jneurosci.0143-19.2019.
Abstract
Visual cues provide an important means for aerial creatures to ascertain their self-motion through the environment. In many insects, including flies, moths, and bees, wide-field motion-sensitive neurons in the third optic ganglion are thought to underlie such motion encoding; however, these neurons respond robustly over only limited speed ranges. The task is more complicated for some species of dragonflies that switch between extended periods of hovering flight and fast-moving pursuit of prey and conspecifics, requiring motion detection over a broad range of velocities. Since little is known about motion processing in these insects, we performed intracellular recordings from hawking emerald dragonflies (Hemicordulia spp.) and identified a diverse group of motion-sensitive neurons that we named lobula tangential cells (LTCs). Following prolonged visual stimulation with drifting gratings, we observed significant differences in both the temporal and spatial tuning of LTCs. Cluster analysis of these changes confirmed several groups of LTCs with distinctive spatiotemporal tuning. These differences were associated with variation in velocity tuning in response to translated natural scenes. LTCs with different velocity tuning ranges and optima may underlie how a broad range of motion velocities is encoded. In the hawking dragonfly, changes in LTC tuning over time are therefore likely to support its extensive range of behaviors, from hovering to fast pursuits.

SIGNIFICANCE STATEMENT: Understanding how animals navigate the world is an inherently difficult and interesting problem. Insects are useful models for understanding the neuronal mechanisms underlying these activities; neurons that encode wide-field motion have previously been identified in insects such as flies, hawkmoths, and butterflies. Like some dipteran flies, dragonflies exhibit complex aerobatic behaviors, such as hovering, patrolling, and aerial combat. However, dragonflies lack the halteres that support such diverse behavior in flies. To understand how dragonflies might address this problem using only visual cues, we recorded from their wide-field motion-sensitive neurons. We found that these neurons differ strongly in the ways they respond to sustained motion, allowing them collectively to encode the very broad range of velocities experienced during diverse behavior.
7. Salazar-Gatzimas E, Chen J, Creamer MS, Mano O, Mandel HB, Matulis CA, Pottackal J, Clark DA. Direct Measurement of Correlation Responses in Drosophila Elementary Motion Detectors Reveals Fast Timescale Tuning. Neuron 2017; 92:227-239. PMID: 27710784; DOI: 10.1016/j.neuron.2016.09.017.
Abstract
Animals estimate visual motion by integrating light intensity information over time and space. The integration requires nonlinear processing, which makes motion estimation circuitry sensitive to specific spatiotemporal correlations that signify visual motion. Classical models of motion estimation weight these correlations to produce direction-selective signals. However, the correlational algorithms they describe have not been directly measured in elementary motion-detecting neurons (EMDs). Here, we employed stimuli to directly measure responses to pairwise correlations in Drosophila's EMD neurons, T4 and T5. Activity in these neurons was required for behavioral responses to pairwise correlations and was predictive of those responses. The pattern of neural responses in the EMDs was inconsistent with one classical model of motion detection, and the timescale and selectivity of correlation responses constrained the temporal filtering properties in potential models. These results reveal how neural responses to pairwise correlations drive visual behavior in this canonical motion-detecting circuit.
Affiliation(s)
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Holly B Mandel
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Joseph Pottackal
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA

8. Li J, Lindemann JP, Egelhaaf M. Peripheral Processing Facilitates Optic Flow-Based Depth Perception. Front Comput Neurosci 2016; 10:111. PMID: 27818631; PMCID: PMC5073142; DOI: 10.3389/fncom.2016.00111.
Abstract
Flying insects, such as flies or bees, rely on consistent information about the depth structure of the environment when performing flight maneuvers in cluttered natural surroundings, for behaviors including collision avoidance, approaching targets, and spatial navigation. Insects are thought to obtain depth information visually from the retinal image displacements ("optic flow") during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it remains an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially given the vast range of light intensities encountered in natural environments. We address this question by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependence of EMDs and effectively enhances the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.
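The benefit of removing the overall light level before the motion-detection stage can be sketched by stripping the mean luminance from each frame ahead of an EMD array: the direction signal survives while the large offset-driven fluctuations disappear. This mean subtraction is a crude stand-in for the LMC band-pass characteristic, with parameters of our own choosing:

```python
import numpy as np

def emd_array(stimulus, tau=3.0):
    """Correlation-type EMD array over a (time, space) luminance array."""
    lp = np.zeros_like(stimulus, dtype=float)
    a = 1.0 / tau
    for t in range(1, len(stimulus)):            # delaying low-pass filter
        lp[t] = lp[t - 1] + a * (stimulus[t] - lp[t - 1])
    return lp[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * lp[:, 1:]

def lmc_like(stimulus):
    """Crude large-monopolar-cell stand-in: subtract each frame's mean
    luminance so only contrast deviations reach the motion detectors."""
    return stimulus - stimulus.mean(axis=1, keepdims=True)
```

With a drifting grating riding on a bright background, the raw EMD output is dominated by offset-driven fluctuations, while the mean-subtracted input gives a clean, nearly constant direction signal.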
Affiliation(s)
- Jinglin Li
- Department of Neurobiology and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany

10. Bertrand OJN, Lindemann JP, Egelhaaf M. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes. PLoS Comput Biol 2015; 11:e1004339. PMID: 26583771; PMCID: PMC4652890; DOI: 10.1371/journal.pcbi.1004339.
Abstract
Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, whether exploring or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and we test the interaction of this model with goal-driven behavior. Insects such as flies and bees actively separate the rotational and translational optic flow components through behavior, i.e., by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e., during intersaccadic phases, contains information on the depth structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model that extracts the depth structure from translational optic flow by using local properties of a spherical eye, and from it computes a motion direction for the agent that ensures collision avoidance. Flying insects are thought to measure optic flow with correlation-type elementary motion detectors, whose responses depend not only on velocity but also on the texture and contrast of objects, and which therefore do not measure object velocity veridically. We therefore initially used geometrically determined optic flow as input to the collision avoidance algorithm, showing that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. When the algorithm was then tested with bio-inspired correlation-type elementary motion detectors providing its input, it still led to successful collision avoidance and, in addition, replicated the characteristics of insect collision avoidance behavior. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of components of insect navigation behavior.
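The steering step of such an algorithm can be sketched as a center-of-mass computation over a nearness map: sum the nearness-weighted viewing directions and head the opposite way. This is a simplified reading of the paper's collision-avoidance direction; the exact weighting and the spherical-eye geometry in the paper may differ.

```python
import numpy as np

def avoidance_direction(nearness, azimuth):
    """Heading that points away from the nearness-weighted average
    direction of the surroundings (center of mass of nearness)."""
    vx = np.sum(nearness * np.cos(azimuth))
    vy = np.sum(nearness * np.sin(azimuth))
    return np.arctan2(-vy, -vx)

# 72 viewing directions around the azimuth; an obstacle 90 degrees to the left
azimuth = np.linspace(-np.pi, np.pi, 72, endpoint=False)
nearness = np.ones_like(azimuth)                 # uniform distant background
nearness[np.abs(azimuth - np.pi / 2) < 0.3] += 5.0
heading = avoidance_direction(nearness, azimuth)  # ~ -pi/2: steer right
```

Because the uniform background vectors cancel, only the nearby obstacle biases the sum, and the agent turns directly away from it.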
Affiliation(s)
- Martin Egelhaaf
- Neurobiologie & CITEC, Bielefeld University, Bielefeld, Germany

11. Ullrich TW, Kern R, Egelhaaf M. Influence of environmental information in natural scenes and the effects of motion adaptation on a fly motion-sensitive neuron during simulated flight. Biol Open 2014; 4:13-21. PMID: 25505148; PMCID: PMC4295162; DOI: 10.1242/bio.20149449.
Abstract
Gaining information about the spatial layout of natural scenes is a challenging task that flies need to solve, especially when moving at high velocities. A group of motion-sensitive cells in the lobula plate of flies is thought to represent information about self-motion as well as about the environment. Relevant environmental features include the nearness of structures, which influences retinal velocity during translational self-motion, and brightness contrast. We recorded the responses of the H1 cell, an individually identifiable lobula plate tangential cell, during stimulation with image sequences simulating translational motion through natural sceneries with a variety of depth structures. Across scenes, the cell's response correlated with the average nearness of environmental structures within large parts of its receptive field, but not with the brightness contrast of the stimuli. As a consequence of motion adaptation resulting from repeated translation through the environment, the time-dependent response modulations induced by the spatial structure of the environment increased relative to the background activity of the cell. These results support the hypothesis that some lobula plate tangential cells serve not only as sensors of self-motion but also as part of a neural system that processes information about the spatial layout of natural scenes.
Affiliation(s)
- Thomas W Ullrich
- Department of Neurobiology, Bielefeld University, Universitätsstrasse 25, 33615 Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1/Zehlendorfer Damm 201, 33619 Bielefeld, Germany
- Roland Kern
- Department of Neurobiology, Bielefeld University, Universitätsstrasse 25, 33615 Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1/Zehlendorfer Damm 201, 33619 Bielefeld, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Universitätsstrasse 25, 33615 Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1/Zehlendorfer Damm 201, 33619 Bielefeld, Germany

12. Egelhaaf M, Kern R, Lindemann JP. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains. Front Neural Circuits 2014; 8:127. PMID: 25389392; PMCID: PMC4211400; DOI: 10.3389/fncir.2014.00127.
Abstract
Despite their miniature brains, insects such as flies, bees, and wasps are able to navigate through cluttered environments with highly aerobatic flight maneuvers. They rely on the spatial information contained in the retinal motion patterns induced on the eyes while moving around ("optic flow") to accomplish this extraordinary performance. To do so, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases in which the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is contained only in the optic flow generated by translatory motion. However, the motion detectors that are widespread in biological systems do not represent the velocity of optic flow vectors veridically, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of biological motion detection mechanisms. In contrast, from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments, we conclude that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates the environment, in a computationally parsimonious way, into behaviorally relevant nearby objects and, in many behavioral contexts, less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects perform extraordinarily well even with a computationally simple motion detection mechanism.
Affiliation(s)
- Martin Egelhaaf
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
- Roland Kern
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
- Jens Peter Lindemann
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany

13. Schwegmann A, Lindemann JP, Egelhaaf M. Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis. Front Comput Neurosci 2014; 8:83. PMID: 25136314; PMCID: PMC4118023; DOI: 10.3389/fncom.2014.00083.
Abstract
Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way.
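The core relationship, absolute EMD responses tracking nearness during constant-speed translation, can be sketched directly: the same texture slipping across the retina at the speed of a near object versus a distant one drives the detector array with different motion energy. The parameters below are illustrative, not the paper's simulation setup:

```python
import numpy as np

def emd_array(stimulus, tau=3.0):
    """Correlation-type EMD array over a (time, space) luminance array."""
    lp = np.zeros_like(stimulus, dtype=float)
    a = 1.0 / tau
    for t in range(1, len(stimulus)):            # delaying low-pass filter
        lp[t] = lp[t - 1] + a * (stimulus[t] - lp[t - 1])
    return lp[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * lp[:, 1:]

# constant forward speed: a near object's image slips across the retina at
# 2 px/frame, the distant background's at 0.25 px/frame (same texture)
t = np.arange(200)[:, None]
x = np.arange(80)[None, :]
near = np.sin(2 * np.pi * (x - 2.0 * t) / 16.0)
far = np.sin(2 * np.pi * (x - 0.25 * t) / 16.0)
energy_near = np.abs(emd_array(near)[20:]).mean()
energy_far = np.abs(emd_array(far)[20:]).mean()
```

The motion energy is higher for the near, fast-slipping texture, which is the sense in which the absolute EMD output highlights contours of nearby objects.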
Affiliation(s)
- Martin Egelhaaf
- Department of Neurobiology and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany

14. O'Carroll DC, Barnett PD, Nordström K. Temporal and spatial adaptation of transient responses to local features. Front Neural Circuits 2012; 6:74. PMID: 23087617; PMCID: PMC3474938; DOI: 10.3389/fncir.2012.00074.
Abstract
Interpreting visual motion within the natural environment is a challenging task, particularly considering that natural scenes vary enormously in brightness, contrast, and spatial structure. The performance of current models for the detection of self-generated optic flow depends critically on these very parameters, yet animals manage to navigate successfully within a broad range of scenes. Within global scenes, local areas with more salient features are common. Recent work has highlighted the influence that local, salient features have on the encoding of optic flow, but it has been difficult to quantify how local transient responses affect responses to subsequent features and thus contribute to the global neural response. To investigate this in more detail we used experimenter-designed stimuli and recorded intracellularly from motion-sensitive neurons. We limited the stimulus to a small, vertically elongated strip, to investigate local and global neural responses to pairs of local “doublet” features that were designed to interact with each other in the temporal and spatial domain. We show that the passage of a high-contrast doublet feature produces a complex transient response from local motion detectors, consistent with predictions of a simple computational model. In the neuron, the passage of a high-contrast feature induces a local reduction in responses to subsequent low-contrast features. However, this neural contrast gain reduction appears to be recruited only when features stretch vertically (i.e., orthogonal to the direction of motion) across at least several aligned neighboring ommatidia; horizontal displacement of the components of elongated features abolishes the local adaptation effect. It is thus likely that features in natural scenes with vertically aligned edges, such as tree trunks, recruit the greatest amount of response suppression. This property could emphasize the local responses to such features over those of nearby texture within the scene.
Affiliation(s)
- David C O'Carroll
- Adelaide Centre for Neuroscience Research, School of Medical Sciences, The University of Adelaide Adelaide, SA, Australia
15
Hennig P, Egelhaaf M. Neuronal encoding of object and distance information: a model simulation study on naturalistic optic flow processing. Front Neural Circuits 2012; 6:14. [PMID: 22461769 PMCID: PMC3309705 DOI: 10.3389/fncir.2012.00014]
Abstract
We developed a model of the input circuitry of the FD1 cell, an identified motion-sensitive interneuron in the blowfly's visual system. The model circuit successfully reproduces the FD1 cell's most conspicuous property: its larger responses to objects than to spatially extended patterns. The model circuit also mimics the time-dependent responses of FD1 to dynamically complex naturalistic stimuli, shaped by the blowfly's saccadic flight and gaze strategy: the FD1 responses are enhanced when, as a consequence of self-motion, a nearby object crosses the receptive field during intersaccadic intervals. Moreover, the model predicts that these object-induced responses are superimposed by pronounced pattern-dependent fluctuations during movements on virtual test flights in a three-dimensional environment with systematic modifications of the environmental patterns. Hence, the FD1 cell is predicted not to detect objects defined by the spatial layout of the environment unambiguously, but to be sensitive also to objects distinguished by textural features. These ambiguous detection abilities suggest an encoding of information about objects, irrespective of the features by which the objects are defined, by a population of cells, with the FD1 cell presumably playing a prominent role in such an ensemble.
Affiliation(s)
- Martin Egelhaaf
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, Bielefeld, Germany
16
Meyer HG, Lindemann JP, Egelhaaf M. Pattern-dependent response modulations in motion-sensitive visual interneurons: a model study. PLoS One 2011; 6:e21488. [PMID: 21760894 PMCID: PMC3132178 DOI: 10.1371/journal.pone.0021488]
Abstract
Even if a stimulus pattern moves at a constant velocity across the receptive field of motion-sensitive neurons, such as lobula plate tangential cells (LPTCs) of flies, the response amplitude modulates over time. The amplitude of these response modulations is related to local pattern properties of the moving retinal image. On the one hand, pattern-dependent response modulations have previously been interpreted as 'pattern-noise', because they deteriorate the neuron's ability to provide unambiguous velocity information. On the other hand, these modulations might also provide the system with valuable information about the textural properties of the environment. We analyzed the influence of the size and shape of receptive fields by simulations of four versions of LPTC models consisting of arrays of elementary motion detectors of the correlation type (EMDs). These models have previously been suggested to account for many aspects of LPTC response properties. Pattern-dependent response modulations decrease with an increasing number of EMDs included in the receptive field of the LPTC models, since spatial changes within the visual field are smoothed out by the summation of spatially displaced EMD responses. This effect depends on the shape of the receptive field: it is more pronounced, for a given total size, the more elongated the receptive field is along the direction of motion. Large elongated receptive fields improve the quality of velocity signals. However, if motion signals need to be localized, velocity coding is poor, but the signal provides potentially useful local pattern information. These modelling results suggest that motion vision by correlation-type movement detectors is subject to uncertainty: you cannot obtain both an unambiguous and a localized velocity signal from the output of a single cell. Hence, the size and shape of receptive fields of motion-sensitive neurons should be matched to their potential computational task.
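The smoothing effect of spatial pooling described in this abstract can be sketched with an idealized correlation-type (Hassenstein-Reichardt) EMD array. The one-frame delay, the random drifting texture, and the array size below are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def emd_array(frames, delay=1):
    """Array of correlation-type (Hassenstein-Reichardt) EMDs along one axis.

    frames: 2D array (time x space). Each EMD multiplies the delayed signal
    of one photoreceptor with the undelayed signal of its neighbour, in two
    mirror-symmetric arms, and subtracts the arms (preferred minus null).
    """
    f = np.asarray(frames, dtype=float)
    a, b = f[:, :-1], f[:, 1:]           # neighbouring photoreceptor signals
    return a[:-delay] * b[delay:] - b[:-delay] * a[delay:]

# A random texture drifting rightward at 1 pixel per frame:
rng = np.random.default_rng(0)
pattern = rng.random(64)
x = np.arange(64)
t = np.arange(200)[:, None]
frames = pattern[(x - t) % 64]

local = emd_array(frames)                # heavily pattern-modulated local signals
pooled = local.mean(axis=1)              # spatial summation across the array

# Pooling across space smooths out the pattern-dependent modulations:
assert pooled.std() < 0.1 * local[:, 0].std()
assert pooled.mean() > 0                 # net response in the preferred direction
```

The single-detector trace fluctuates with the texture even though the velocity is constant, while the pooled signal is nearly steady, which is the trade-off between localized and unambiguous velocity signals that the abstract describes.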
Affiliation(s)
- Hanno Gerd Meyer
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany.
17
Shoemaker PA, Hyslop AM, Humbert JS. Optic flow estimation on trajectories generated by bio-inspired closed-loop flight. Biol Cybern 2011; 104:339-350. [PMID: 21626306 DOI: 10.1007/s00422-011-0436-8]
Abstract
We generated panoramic imagery by simulating a fly-like robot carrying an imaging sensor, moving in free flight through a virtual arena bounded by walls, and containing obstructions. Flight was conducted under closed-loop control by a bio-inspired algorithm for visual guidance with feedback signals corresponding to the true optic flow that would be induced on an imager (computed by known kinematics and position of the robot relative to the environment). The robot had dynamics representative of a housefly-sized organism, although simplified to two-degree-of-freedom flight to generate uniaxial (azimuthal) optic flow on the retina in the plane of travel. Surfaces in the environment contained images of natural and man-made scenes that were captured by the moving sensor. Two bio-inspired motion detection algorithms and two computational optic flow estimation algorithms were applied to sequences of image data, and their performance as optic flow estimators was evaluated by estimating the mutual information between outputs and true optic flow in an equatorial section of the visual field. Mutual information for individual estimators at particular locations within the visual field was surprisingly low (less than 1 bit in all cases) and considerably poorer for the bio-inspired algorithms than for the man-made computational algorithms. However, mutual information between weighted sums of these signals and comparable sums of the true optic flow showed significant increases for the bio-inspired algorithms, whereas such improvement did not occur for the computational algorithms. Such summation is representative of the spatial integration performed by wide-field motion-sensitive neurons in the third optic ganglion of flies.
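Scoring an estimator by the mutual information between its output and the true optic flow can be done with a simple histogram estimate. The bin count and the Gaussian toy signals below are assumptions chosen for illustration, not the paper's stimuli or estimators:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                               # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of X
    py = pxy.sum(axis=0, keepdims=True)            # marginal of Y
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
true_flow = rng.standard_normal(10_000)
# A noisy local estimate vs. a cleaner, spatially summed estimate:
noisy_estimate = true_flow + 2.0 * rng.standard_normal(10_000)
summed_estimate = true_flow + 0.5 * rng.standard_normal(10_000)

mi_noisy = mutual_information(true_flow, noisy_estimate)
mi_summed = mutual_information(true_flow, summed_estimate)
assert mi_noisy < 1.0          # a weak local estimator carries under 1 bit
assert mi_summed > mi_noisy    # reducing noise raises the information
```

With these noise levels the ordering mirrors the paper's finding: individual noisy signals are poor carriers of velocity information, and summation improves the picture.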
18
Babies B, Lindemann JP, Egelhaaf M, Möller R. Contrast-independent biologically inspired motion detection. Sensors (Basel) 2011; 11:3303-26. [PMID: 22163800 PMCID: PMC3231623 DOI: 10.3390/s110303303]
Abstract
Optic flow, i.e., retinal image movement resulting from ego-motion, is a crucial source of information used for obstacle avoidance and course control in flying insects. Optic flow analysis may prove promising for mobile robotics although it is currently not among the standard techniques. Insects have developed a computationally cheap analysis mechanism for image motion. Detailed computational models, the so-called elementary motion detectors (EMDs), describe motion detection in insects. However, the technical application of EMDs is complicated by the strong effect of local pattern contrast on their motion response. Here we present augmented versions of an EMD, the (s)cc-EMDs, which normalise their responses for contrast and thereby reduce the sensitivity to contrast changes. Thus, velocity changes of moving natural images are reflected more reliably in the detector response. The (s)cc-EMDs can easily be implemented in hardware and software and can be a valuable novel visual motion sensor for mobile robots.
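The contrast dependence that the (s)cc-EMDs address can be caricatured by dividing a plain correlation detector's output by the input power. This simple power-based normaliser is a stand-in chosen for illustration only, not the paper's actual (s)cc-EMD circuit:

```python
import numpy as np

def emd_response(signal_a, signal_b, delay=1):
    """Mean output of one correlation-type EMD for two input time series."""
    r = (signal_a[:-delay] * signal_b[delay:]
         - signal_b[:-delay] * signal_a[delay:])
    return r.mean()

def normalised_emd_response(signal_a, signal_b, delay=1, eps=1e-9):
    """Divisively contrast-normalised variant: response / mean input power."""
    power = 0.5 * (signal_a ** 2 + signal_b ** 2).mean()
    return emd_response(signal_a, signal_b, delay) / (power + eps)

t = np.arange(500)

def inputs(contrast):
    # Two photoreceptors viewing a drifting sinusoid with a phase offset:
    a = contrast * np.sin(2 * np.pi * t / 20)
    b = contrast * np.sin(2 * np.pi * (t - 2) / 20)
    return a, b

low, high = inputs(0.1), inputs(1.0)

# The plain EMD output grows with the square of contrast...
assert emd_response(*high) / emd_response(*low) > 50
# ...while the normalised response is (nearly) contrast-invariant:
ratio = normalised_emd_response(*high) / normalised_emd_response(*low)
assert 0.9 < ratio < 1.1
```

The quadratic contrast dependence of the raw detector is exactly what makes its velocity estimates ambiguous; dividing out a contrast-energy measure is one way to recover a response that tracks velocity changes more reliably.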
Collapse
Affiliation(s)
- Birthe Babies
- Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, D-33594 Bielefeld, Germany
- Computer Engineering Group, Faculty of Technology, Bielefeld University, D-33594 Bielefeld, Germany
- Jens Peter Lindemann
- Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, D-33594 Bielefeld, Germany
- Department of Neurobiology, Faculty of Biology, Bielefeld University, D-33594 Bielefeld, Germany
- Martin Egelhaaf
- Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, D-33594 Bielefeld, Germany
- Department of Neurobiology, Faculty of Biology, Bielefeld University, D-33594 Bielefeld, Germany
- Ralf Möller
- Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, D-33594 Bielefeld, Germany
- Computer Engineering Group, Faculty of Technology, Bielefeld University, D-33594 Bielefeld, Germany
19
Barnett PD, Nordström K, O'Carroll DC. Motion adaptation and the velocity coding of natural scenes. Curr Biol 2010; 20:994-9. [PMID: 20537540 DOI: 10.1016/j.cub.2010.03.072]
Abstract
Estimating relative velocity in the natural environment is challenging because natural scenes vary greatly in contrast and spatial structure. Widely accepted correlation-based models for elementary motion detectors (EMDs) are sensitive to contrast and spatial structure and consequently generate ambiguous estimates of velocity. Identified neurons in the third optic lobe of the hoverfly can reliably encode the velocity of natural images largely independent of contrast, despite receiving inputs directly from arrays of such EMDs. This contrast invariance suggests an important role for additional neural processes in robust encoding of image motion. However, it remains unclear which neural processes are contributing to contrast invariance. By recording from horizontal system neurons in the hoverfly lobula, we show two activity-dependent adaptation mechanisms acting as near-ideal normalizers for images of different contrasts that would otherwise produce highly variable response magnitudes. Responses to images that are initially weak neural drivers are boosted over several hundred milliseconds. Responses to images that are initially strong neural drivers are reduced over longer time scales. These adaptation mechanisms appear to be matched to higher-order natural image statistics, reconciling the neurons' accurate encoding of image velocity with the inherent ambiguity of correlation-based motion detectors.
Affiliation(s)
- Paul D Barnett
- Discipline of Physiology, School of Medical Sciences, The University of Adelaide, Adelaide SA 5005, Australia
20
Robust models for optic flow coding in natural scenes inspired by insect biology. PLoS Comput Biol 2009; 5:e1000555. [PMID: 19893631 PMCID: PMC2766641 DOI: 10.1371/journal.pcbi.1000555]
Abstract
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic-range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system exceeds the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. This algorithm has applications in the development of miniature autonomous systems for defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision-avoidance sensors.
21
A bee in the corridor: centering and wall-following. Naturwissenschaften 2008; 95:1181-7. [PMID: 18813898 DOI: 10.1007/s00114-008-0440-6]
Abstract
In an attempt to better understand the mechanism underlying lateral collision avoidance in flying insects, we trained honeybees (Apis mellifera) to fly through a large (95-cm wide) flight tunnel. We found that, depending on the entrance and feeder positions, honeybees would either center along the corridor midline or fly along one wall. Bees kept following one wall even when a major (150-cm long) part of the opposite wall was removed. These findings cannot be accounted for by the "optic flow balance" hypothesis that has been put forward to explain the bees' typical "centering response" observed in narrower corridors. Both centering and wall-following behaviors are well accounted for, however, by a control scheme called the lateral optic flow regulator, i.e., a feedback system that strives to maintain the unilateral optic flow constant. The power of this control scheme is that it would allow the bee to guide itself visually in a corridor without having to measure its speed or distance from the walls.
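The lateral optic-flow regulator described here is a feedback loop that needs neither a speed nor a range measurement. A minimal Euler-integrated sketch follows; the gain, set-point, and geometry are arbitrary illustrative values, not behavioural fits from the study:

```python
def simulate_wall_following(speed=1.0, corridor_width=0.95, set_point=3.0,
                            gain=0.05, steps=2000, dt=0.01):
    """Lateral optic-flow regulator: hold the unilateral optic flow
    omega = speed / distance-to-wall at a fixed set-point by adjusting
    lateral position. Neither speed nor distance is measured directly."""
    y = 0.10  # initial distance from the followed wall (m)
    for _ in range(steps):
        omega = speed / y                       # perceived unilateral flow (rad/s)
        y += gain * (omega - set_point) * dt    # steer away if flow is too high
        y = min(max(y, 0.01), corridor_width - 0.01)
    return y

# The bee settles where speed / y equals the set-point, i.e. y = 1/3 m here:
final = simulate_wall_following()
assert abs(final - 1.0 / 3.0) < 0.01
# Flying faster shifts the equilibrium farther from the wall, same set-point:
assert simulate_wall_following(speed=2.0) > final
```

The equilibrium distance scales with flight speed, which is why the same regulator can produce either wall-following near one wall or centering, depending on initial conditions and geometry.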
22
A model for the detection of moving targets in visual clutter inspired by insect physiology. PLoS One 2008; 3:e2784. [PMID: 18665213 PMCID: PMC2464731 DOI: 10.1371/journal.pone.0002784]
Abstract
We present a computational model for target discrimination based on intracellular recordings from neurons in the fly visual system. Determining how insects detect and track small moving features, often against cluttered moving backgrounds, is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as ‘small target motion detectors’ (STMD), that respond robustly to moving features, even when the velocity of the target is matched to the background (i.e. with no relative motion cues). We recorded from intermediate-order neurons in the fly visual system that are well suited as a component along the target detection pathway. This full-wave rectifying, transient cell (RTC) reveals independent adaptation to luminance changes of opposite signs (suggesting separate ON and OFF channels) and fast adaptive temporal mechanisms, similar to other cell types previously described. From these physiological data we have created a numerical model for target discrimination. This model includes nonlinear filtering based on the fly optics, the photoreceptors, the first-order interneurons (Large Monopolar Cells), and the newly derived parameters for the RTC. We show that our RTC-based target detection model is well matched to properties described for the STMDs, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear ‘matched filter’ to successfully detect most targets from the background. Importantly, this model can explain this type of feature discrimination without the need for relative motion cues.
23
Mah EL, Brinkworth RSA, O'Carroll DC. Implementation of an elaborated neuromorphic model of a biological photoreceptor. Biol Cybern 2008; 98:357-369. [PMID: 18327606 DOI: 10.1007/s00422-008-0222-4]
Abstract
We describe here an elaborated neuromorphic model based on the photoreceptors of flies and realised in both software simulation and hardware using discrete circuit components. The design of the model is based on optimisations and further elaborations to the mathematical model initially developed by van Hateren and Snippe that has been shown to accurately simulate biological responses under both steady-state and limited dynamic conditions. The model includes an adaptive time constant, nonlinear adaptive gain control, logarithmic saturation and a nonlinear adaptive frequency response mechanism. It consists of a linear phototransduction stage, a dynamic filter stage, two divisive feedback loops and a static nonlinearity. In order to test the biological accuracy of the model, impulses and step responses were used to test and evaluate the steady-state characteristics of both the biological (fly) and artificial (new neuromorphic model) photoreceptors. These tests showed that the model has faithfully captured most of the essential characteristics of the insect photoreceptor cells. The model showed a decreasing response to impulsive stimuli when the background intensity was increased, indicating that the circuit adapted to background luminance in order to improve the overall operating range and better encode the contrast of the stimulus rather than luminance. The model also showed the same change in its frequency response characteristics as the biological photoreceptors over a luminance range of 70,000 cd/m², with the corner frequency of the circuit ranging from 10 to 90 Hz depending on the current state of adaptation. Complex naturalistic experiments further demonstrated the robustness of the model in real-world scenarios, and the model correlated closely with the biological photoreceptors, with an r² value exceeding 0.83. Our model could act as an excellent platform for future experiments that could be carried out in scenarios where in vivo intracellular recording from biological photoreceptors would be impractical or impossible, or as a front-end for an artificial imaging system.
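The divisive-feedback structure that lets such a model encode contrast rather than absolute luminance can be sketched as follows. This single low-pass divisive loop is a deliberate caricature of the van Hateren and Snippe architecture (one loop instead of two, arbitrary time constant), intended only to show the mechanism:

```python
import numpy as np

def adaptive_photoreceptor(luminance, dt=1e-3, tau=0.1):
    """Toy divisive-feedback photoreceptor: output = input / slow average,
    so the steady-state response encodes contrast rather than luminance."""
    state = float(luminance[0])              # slow luminance estimate
    out = np.empty(len(luminance))
    for i, lum in enumerate(luminance):
        out[i] = lum / (state + 1e-9)        # divisive gain control
        state += dt / tau * (lum - state)    # low-pass update of the estimate
    return out

# Identical 20% contrast steps on backgrounds 1000x apart:
dim = np.concatenate([np.full(500, 1.0), np.full(500, 1.2)])
bright = np.concatenate([np.full(500, 1000.0), np.full(500, 1200.0)])
r_dim = adaptive_photoreceptor(dim)
r_bright = adaptive_photoreceptor(bright)

# The transient responses match despite the huge luminance difference,
# and both decay back toward baseline as the loop re-adapts:
assert abs(r_dim[510] - r_bright[510]) < 0.01
assert abs(r_dim[-1] - 1.0) < 0.01 and abs(r_bright[-1] - 1.0) < 0.01
```

This scale invariance, response determined by contrast rather than background intensity, is the property the hardware model above reproduces across its 70,000 cd/m² operating range.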
Affiliation(s)
- Eng-Leng Mah
- Discipline of Physiology, School of Molecular and Biomedical Science and the Centre for Biomedical Engineering, The University of Adelaide, Adelaide, SA, 5005, Australia.
24
Franceschini N, Ruffier F, Serres J. A bio-inspired flying robot sheds light on insect piloting abilities. Curr Biol 2007; 17:329-35. [PMID: 17291757 DOI: 10.1016/j.cub.2006.12.032]
Abstract
When insects are flying forward, the image of the ground sweeps backward across their ventral viewfield and forms an "optic flow," which depends on both the groundspeed and the groundheight. To explain how these animals manage to avoid the ground by using this visual motion cue, we suggest that insect navigation hinges on a visual-feedback loop we have called the optic-flow regulator, which controls the vertical lift. To test this idea, we built a micro-helicopter equipped with an optic-flow regulator and a bio-inspired optic-flow sensor. This fly-by-sight micro-robot can perform exacting tasks such as take-off, level flight, and landing. Our control scheme accounts for many hitherto unexplained findings published during the last 70 years on insects' visually guided performances; for example, it accounts for the fact that honeybees descend in a headwind, land with a constant slope, and drown when travelling over mirror-smooth water. Our control scheme explains how insects manage to fly safely without any of the instruments used onboard aircraft to measure the groundheight, groundspeed, and descent speed. An optic-flow regulator is quite simple in terms of its neural implementation and just as appropriate for insects as it would be for aircraft.
Affiliation(s)
- Nicolas Franceschini
- Biorobotics Lab, Movement and Perception Institute, Centre National de la Recherche Scientifique and University of the Mediterranean, 163 Avenue de Luminy, CP938, Marseille F-13288, cedex 9, France.
25
Douglass JK, Strausfeld NJ. Diverse speed response properties of motion sensitive neurons in the fly's optic lobe. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2006; 193:233-47. [PMID: 17106704 DOI: 10.1007/s00359-006-0185-7]
Abstract
Speed and acceleration are fundamental components of visual motion that animals can use to interpret the world. Behavioral studies have established that insects discriminate speed largely independently of contrast and spatial frequency, and physiological recordings suggest that a subset of premotor descending neurons is in this sense speed-selective. Neural substrates and mechanisms of speed selectivity in insects, however, are unknown. Using blow flies Phaenicia sericata, intracellular recordings and dye-fills were obtained from medulla and lobula complex neurons which, though not necessarily speed-selective themselves, are positioned to participate in circuits that produce speed-selectivity in descending neurons. Stimulation with sinusoidally varied grating motion (0-200°/s) provided a range of instantaneous velocities and accelerations. The resulting speed response profiles are indicative of four distinct speed ranges, supporting the hypothesis that the spatiotemporal tuning of mid-level neurons contains sufficient diversity to account for the emergence of speed selectivity at the descending neuron level. This type of mechanism has been proposed to explain speed discrimination in both insects and mammals, but has seemed less likely for insects due to possible constraints on small brains. Two additional recordings are suggestive of acceleration-selectivity, a potentially useful visual capability that is of uncertain functional significance for arthropods.
Affiliation(s)
- John K Douglass
- Arizona Research Laboratories, Division of Neurobiology, 611 Gould-Simpson Bldg., University of Arizona, Tucson, AZ 85721, USA.
26
Straw AD, Warrant EJ, O'Carroll DC. A 'bright zone' in male hoverfly (Eristalis tenax) eyes and associated faster motion detection and increased contrast sensitivity. J Exp Biol 2006; 209:4339-54. [PMID: 17050849 DOI: 10.1242/jeb.02517]
Abstract
Eyes of the hoverfly Eristalis tenax are sexually dimorphic such that males have a fronto-dorsal region of large facets. In contrast to other large flies in which large facets are associated with a decreased interommatidial angle to form a dorsal 'acute zone' of increased spatial resolution, we show that a dorsal region of large facets in males appears to form a 'bright zone' of increased light capture without substantially increased spatial resolution. Theoretically, more light allows for increased performance in tasks such as motion detection. To determine the effect of the bright zone on motion detection, local properties of wide field motion detecting neurons were investigated using localized sinusoidal gratings. The pattern of local preferred directions of one class of these cells, the HS cells, in Eristalis is similar to that reported for the blowfly Calliphora. The bright zone seems to contribute to local contrast sensitivity; high contrast sensitivity exists in portions of the receptive field served by large diameter facet lenses of males and is not observed in females. Finally, temporal frequency tuning is also significantly faster in this frontal portion of the world, particularly in males, where it overcompensates for the higher spatial-frequency tuning and shifts the predicted local velocity optimum to higher speeds. These results indicate that increased retinal illuminance due to the bright zone of males is used to enhance contrast sensitivity and speed motion detector responses. Additionally, local neural properties vary across the visual world in a way not expected if HS cells serve purely as matched filters to measure yaw-induced visual motion.
Affiliation(s)
- Andrew D Straw
- Discipline of Physiology, School of Molecular and Biomedical Science, The University of Adelaide, SA 5005, Australia.