1
Smithers SP, Brett MF, How MJ, Scott-Samuel NE, Roberts NW. Fiddler crabs (Afruca tangeri) detect second-order motion in both intensity and polarization. Commun Biol 2024; 7:1255. PMID: 39362984; PMCID: PMC11450093; DOI: 10.1038/s42003-024-06953-5.
Abstract
Motion vision is vital for a wide range of animal behaviors. Fiddler crabs, for example, rely heavily on motion to detect the movement of avian predators. They are known to detect first-order motion using both intensity information (defined by spatiotemporal correlations in luminance) and polarization information (defined by spatiotemporal correlations in the degree and/or angle of polarization). However, little is known about their ability to detect second-order motion, another important form of motion information, defined by spatiotemporal correlations in higher-order image properties such as contrast. In this work we used behavioral experiments to test how fiddler crabs (Afruca tangeri) responded to both second-order intensity and polarization stimuli. The crabs responded to a number of different intensity-based second-order stimuli. Furthermore, they also responded to second-order polarization stimuli, a behaviorally relevant condition comparable to an unpolarized flying bird viewed against a polarized sky. The detection of second-order motion in polarization is, to the best of our knowledge, the first demonstration of this ability in any animal. This discovery therefore opens a new dimension in our understanding of how animals use polarization vision for target detection, and of the broader importance of second-order motion detection for animal behavior.
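The distinction between first- and second-order motion that runs through this abstract can be made concrete with a small sketch. The stimulus below is a generic textbook-style contrast-modulated construction, not the crab stimuli from the paper; all sizes and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def second_order_stimulus(n_frames=64, width=128, bar_width=16, speed=2):
    """Drifting contrast-defined bar: a static zero-mean noise carrier whose
    local contrast (not mean luminance) is raised inside a moving window.
    A first-order detector tracking luminance correlations sees no coherent
    drift; only the contrast envelope (a second-order property) moves."""
    carrier = rng.choice([-1.0, 1.0], size=width)   # static binary noise
    frames = np.empty((n_frames, width))
    for t in range(n_frames):
        envelope = np.full(width, 0.2)              # low-contrast background
        idx = np.arange(t * speed, t * speed + bar_width) % width
        envelope[idx] = 1.0                         # high-contrast moving window
        frames[t] = envelope * carrier              # mean luminance ~0 everywhere
    return frames

stim = second_order_stimulus()
# Spatial mean luminance of each frame stays near zero (no luminance cue),
# while the variance inside the window exceeds the background.
```

The same envelope-on-carrier idea carries over to polarization: replace the luminance carrier with a polarization pattern and modulate its degree or angle.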
Affiliation(s)
- Samuel P Smithers: School of Biological Sciences, University of Bristol, Bristol Life Sciences Building, Bristol, UK; Department of Psychology, Northeastern University, Boston, MA, USA
- Maisie F Brett: School of Biological Sciences, University of Bristol, Bristol Life Sciences Building, Bristol, UK
- Martin J How: School of Biological Sciences, University of Bristol, Bristol Life Sciences Building, Bristol, UK
- Nicholas W Roberts: School of Biological Sciences, University of Bristol, Bristol Life Sciences Building, Bristol, UK
2
Duan W, Zhang Y, Zhang X, Yang J, Shan H, Liu L, Wei H. A Visual Pathway into Central Complex for High-Frequency Motion-Defined Bars in Drosophila. J Neurosci 2023; 43:4821-4836. PMID: 37290936; PMCID: PMC10312062; DOI: 10.1523/jneurosci.0128-23.2023.
Abstract
Relative motion separates a camouflaged target from a same-textured background, thereby enabling discrimination of a motion-defined object. Ring (R) neurons are critical components of the Drosophila central complex, which has been implicated in multiple visually guided behaviors. Using two-photon calcium imaging in female flies, we demonstrated that a specific population of R neurons that innervate the superior domain of the bulb neuropil, termed superior R neurons, encoded a motion-defined bar with high spatial frequency content. Upstream superior tuberculo-bulbar (TuBu) neurons transmitted visual signals by releasing acetylcholine at synapses with superior R neurons. Blocking TuBu or R neurons impaired tracking performance of the bar, revealing their importance in motion-defined feature encoding. Additionally, presentation of a low spatial frequency luminance-defined bar evoked consistent excitation in R neurons of the superior bulb, whereas either excited or inhibited responses were evoked in the inferior bulb. The distinct properties of the responses to the two bar stimuli indicate a functional division between the bulb subdomains. Moreover, physiological and behavioral tests with restricted driver lines suggest that R4d neurons play a vital role in tracking motion-defined bars. We conclude that the central complex receives motion-defined features via a visual pathway from superior TuBu to R neurons and might encode different visual features via distinct response patterns at the population level, thereby driving visually guided behaviors.

Significance Statement: Animals can discriminate a motion-defined object that is indistinguishable from a same-textured background until it moves, but little is known about the underlying neural mechanisms. In this study, we identified that R neurons and their upstream partners, TuBu neurons, innervating the superior bulb of the Drosophila central brain are involved in the discrimination of high-frequency motion-defined bars. Our study provides new evidence that R neurons receive multiple visual inputs from distinct upstream neurons, indicating a population coding mechanism by which the fly central brain discriminates diverse visual features. These results advance the effort to unravel the neural substrates of visually guided behaviors.
Affiliation(s)
- Wenlan Duan: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100039, China
- Yihao Zhang: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100039, China
- Xin Zhang: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100039, China
- Jihua Yang: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100039, China
- Heying Shan: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Li Liu: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100039, China; Chinese Academy of Sciences Key Laboratory of Mental Health, Beijing 100101, China
- Hongying Wei: State Key Laboratory of Brain and Cognitive Science, Chinese Academy of Sciences Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100039, China
3
Abstract
Multisensory integration is synergistic: input from one sensory modality can modulate the behavioral response to another. Work in flies has shown that a small visual object presented in the periphery elicits innate aversive steering responses in flight, likely representing an approaching threat. Object aversion is switched to approach when the object is paired with a plume of food odor. The 'open-loop' design of prior work facilitated the observation of this changing valence. How does odor influence visual object responses when an animal has naturally active control over its visual experience? In this study, we use closed-loop feedback conditions, in which a fly's steering effort is coupled to the angular velocity of the visual stimulus, to confirm that flies steer toward, or 'fixate', a long vertical stripe on the visual midline. They tend either to steer away from, or 'antifixate', a small object, or to disengage active visual control, which manifests as uncontrolled object 'spinning' within this experimental paradigm. Adding a plume of apple cider vinegar decreases the probability of both antifixation and spinning, while increasing the probability of frontal fixation for objects of any size, including a typically aversive small object.
Affiliation(s)
- Karen Y Cheng: UCLA Department of Integrative Biology and Physiology, Los Angeles, CA, USA
- Mark A Frye: UCLA Department of Integrative Biology and Physiology, Los Angeles, CA, USA
4
Ji X, Yuan D, Wei H, Cheng Y, Wang X, Yang J, Hu P, Gestrich JY, Liu L, Zhu Y. Differentiation of Theta Visual Motion from Fourier Motion Requires LC16 and R18C12 Neurons in Drosophila. iScience 2020; 23:101041. PMID: 32325414; PMCID: PMC7176990; DOI: 10.1016/j.isci.2020.101041.
Abstract
Many animals perceive features of higher-order visual motion that are beyond the spatiotemporal correlations of luminance that define first-order motion. Although the neural mechanisms of first-order motion detection have become well understood in recent years, those underlying higher-order motion perception remain unclear. Here, we established a paradigm to assess the detection of theta motion, a type of higher-order motion, in freely walking Drosophila. Behavioral screening using this paradigm identified two clusters of neurons in the central brain, designated R18C12, which were required for the perception of theta motion but not of first-order motion. Furthermore, theta motion-activated R18C12 neurons were structurally and functionally located downstream of visual projection neurons in the lobula, the lobula columnar cells LC16, which activated R18C12 neurons via acetylcholine (ACh) acting on muscarinic acetylcholine receptors (mAChRs). The current study provides new insights into LC neurons and the neuronal mechanisms underlying visual information processing in complex natural scenes.

Highlights:
- Perception of theta motion requires LC16 and R18C12 neurons
- R18C12 neurons are activated by theta motion
- R18C12 neurons form synaptic connections with LC16 neurons
- LC16 neurons activate R18C12 neurons through ACh acting on mAChRs
Affiliation(s)
- Xiaoxiao Ji: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Deliang Yuan: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Hongying Wei: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Yaxin Cheng: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Xinwei Wang: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Jihua Yang: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Pengbo Hu: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Julia Yvonne Gestrich: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Li Liu: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China; CAS Key Laboratory of Mental Health, Beijing 100101, P. R. China
- Yan Zhu: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
5
Second-order cues to figure motion enable object detection during prey capture by praying mantises. Proc Natl Acad Sci U S A 2019; 116:27018-27027. PMID: 31818943; DOI: 10.1073/pnas.1912310116.
Abstract
Detecting motion is essential for animals to perform a wide variety of functions. To do so, animals can exploit motion cues, including first-order cues, such as correlations in luminance over time, and second-order cues, derived from correlations in higher-order visual statistics. Since first-order motion cues are typically sufficient for motion detection, it is unclear why sensitivity to second-order motion has evolved in animals, including insects. Here, we investigate the role of second-order motion in prey capture by praying mantises. We show that prey detection uses second-order motion cues to detect figure motion. We further present a model of prey detection based on second-order motion sensitivity, resulting from a layer of position detectors feeding into a second layer of elementary motion detectors. Mantis stereopsis, in contrast, does not require figure motion and is explained by a simpler model that uses only the first layer in both eyes. Second-order motion cues thus enable prey motion to be detected even when the prey perfectly matches the average background luminance, and independently of the elementary motion of any of its parts. Subsequent to prey detection, processes such as stereopsis can then determine the distance to the prey. We thus demonstrate how second-order motion mechanisms enable ecologically relevant behaviors, such as detecting camouflaged targets, and provide input for other visual functions, including stereopsis and target tracking.
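The two-layer architecture described (position detectors feeding elementary motion detectors) can be sketched in a toy form. This is a generic reconstruction from the abstract's description, not the authors' implementation: layer 1 is taken as a full-wave rectified temporal-change signal, layer 2 as a standard Hassenstein-Reichardt opponent correlator, and the stimulus parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def emd_opponent(signal):
    """Hassenstein-Reichardt opponent correlator over a (time x space) array:
    the delayed sample at x multiplies the undelayed sample at x+1 (rightward
    arm) minus the mirror-image leftward arm. Positive = net rightward motion."""
    right = signal[:-1, :-1] * signal[1:, 1:]
    left = signal[:-1, 1:] * signal[1:, :-1]
    return float((right - left).sum())

def second_order_drift(n_frames=300, width=64, win=8):
    """Flickering noise everywhere; only the window of high-amplitude flicker
    drifts rightward. Luminance is uncorrelated from frame to frame, so the
    figure carries essentially no first-order motion signal."""
    frames = np.empty((n_frames, width))
    for t in range(n_frames):
        amp = np.full(width, 0.1)
        amp[np.arange(t, t + win) % width] = 1.0
        frames[t] = amp * rng.choice([-1.0, 1.0], size=width)
    return frames

stim = second_order_drift()
raw = emd_opponent(stim)                 # correlator alone: ~no net signal
layer1 = np.abs(np.diff(stim, axis=0))   # layer 1: rectified temporal change
second_order = emd_opponent(layer1)      # layer 2 on layer 1: rightward signal
```

Applying the correlator to raw luminance yields only noise, while applying it after the rectifying first layer recovers the rightward drift of the flicker envelope, which is the essence of the paper's prey-detection model.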
6
Shoemaker PA. Neural Network Model for Detection of Edges Defined by Image Dynamics. Front Comput Neurosci 2019; 13:76. PMID: 31787888; PMCID: PMC6854273; DOI: 10.3389/fncom.2019.00076.
Abstract
Insects can detect the presence of discrete objects in their visual fields based on a range of differences in spatiotemporal characteristics between the images of object and background. This includes, but is not limited to, relative motion. Evidence suggests that edge detection is an integral part of this capability, and this study examines the ability of a bio-inspired processing model to detect the presence of boundaries between two regions of a one-dimensional visual field, based on general differences in image dynamics. The model consists of two parts. The first is an early vision module inspired by insect visual processing, which implements adaptive photoreception, ON and OFF channels with transient and sustained characteristics, and delayed and undelayed signal paths. This is replicated for a number of photoreceptors in a small linear array. It is followed by an artificial neural network trained to discriminate the presence vs. absence of an edge based on the array output signals. Input data are derived from natural imagery and feature both static and moving edges between regions with moving texture, flickering texture, and static patterns in all possible combinations. The model can discriminate the presence of edges, stationary or moving, at rates far higher than chance. The resources required (numbers of neurons and visual signals) are realistic relative to those available in the insect second optic ganglion, where the bulk of such processing is likely to take place.
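The early-vision module described (adaptive photoreception, ON and OFF channels with transient and sustained characteristics) can be sketched generically. This is an illustrative reconstruction from the abstract, not the paper's model; the filter constants and the frame-difference transient are invented choices:

```python
import numpy as np

def ema(x, alpha):
    """Exponential moving average along time (axis 0): a cheap low-pass filter."""
    y = np.empty_like(x)
    acc = x[0]
    for t in range(len(x)):
        acc = alpha * x[t] + (1 - alpha) * acc
        y[t] = acc
    return y

def early_vision(lum, alpha_adapt=0.05, alpha_sustain=0.3):
    """Toy insect-style front end: adaptive photoreception (subtract a slow
    running mean of luminance), half-wave rectified ON/OFF channels, and for
    each channel a sustained (low-passed) and a transient (frame-difference)
    signal path. All constants are illustrative."""
    adapted = lum - ema(lum, alpha_adapt)          # light adaptation
    channels = {}
    for name, ch in (("on", np.maximum(adapted, 0.0)),
                     ("off", np.maximum(-adapted, 0.0))):
        channels[name + "_sustained"] = ema(ch, alpha_sustain)
        channels[name + "_transient"] = np.vstack(
            [np.zeros_like(ch[:1]), np.diff(ch, axis=0)])
    return channels

# A luminance step at t = 50 drives the ON channels; a dark step would drive OFF.
lum = np.zeros((100, 4))
lum[50:] = 1.0
out = early_vision(lum)
```

In the paper's scheme, signals like these from a small photoreceptor array would feed the trained classifier that reports edge presence.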
Affiliation(s)
- Patrick A Shoemaker: Computational Science Research Center, San Diego State University, San Diego, CA, United States
7
Olfactory and Neuromodulatory Signals Reverse Visual Object Avoidance to Approach in Drosophila. Curr Biol 2019; 29:2058-2065.e2. PMID: 31155354; DOI: 10.1016/j.cub.2019.05.010.
Abstract
Behavioral reactions of animals to environmental sensory stimuli are sometimes reflexive and stereotyped but can also vary depending on contextual conditions. Engaging in active foraging or flight provokes a reversal in the valence of carbon dioxide responses from aversion to approach in Drosophila [1, 2], whereas mosquitoes encountering this same chemical cue show enhanced approach toward a small visual object [3]. Sensory plasticity in insects has been broadly attributed to the action of biogenic amines, which modulate behaviors such as olfactory learning, aggression, feeding, and egg laying [4-14]. Octopamine acts rapidly upon the onset of flight to modulate the response gain of directionally selective motion-detecting neurons in Drosophila [15]. How the action of biogenic amines might couple sensory modalities to each other or to locomotor states remains poorly understood. Here, we use a visual flight simulator [16] equipped for odor delivery [17] to confirm that flies avoid a small contrasting visual object in odorless air [18] but that the same animals reverse their preference to approach in the presence of attractive food odor. An aversive odor does not reverse object aversion. Optogenetic activation of either octopaminergic neurons or directionally selective motion-detecting neurons that express octopamine receptors elicits visual valence reversal in the absence of odor. Our results suggest a parsimonious model in which odor-activated octopamine release excites the motion detection pathway to increase the saliency of either a small object or a bar, eliciting tracking responses to both visual features.
8
Keleş MF, Mongeau JM, Frye MA. Object features and T4/T5 motion detectors modulate the dynamics of bar tracking by Drosophila. J Exp Biol 2019; 222:jeb190017. PMID: 30446539; DOI: 10.1242/jeb.190017.
Abstract
Visual objects can be discriminated by static spatial features such as luminance or by dynamic features such as relative movement. Flies track a solid dark vertical bar moving on a bright background, a behavioral reaction so strong that, for a rigidly tethered fly, the steering trajectory is phase-advanced relative to the moving bar, apparently in anticipation of its future position. By contrast, flickering bars that generate no coherent motion, or bars with a surface texture that moves in the direction opposite to the bar, generate steering responses that lag behind the stimulus. It remains unclear how the spatial properties of a bar influence behavioral response dynamics. Here, we show that a dark bar defined by its luminance contrast against a uniform background drives a co-directional steering response that is phase-advanced relative to the response to a textured bar defined only by its motion relative to a stationary textured background. The textured bar drives an initial contra-directional turn and phase-locked tracking. The qualitatively distinct response dynamics could indicate parallel visual processing of a luminance-defined versus a motion-defined object. Calcium imaging shows that T4/T5 motion-detecting neurons are more responsive to a solid dark bar than to a motion-defined bar. Genetically blocking T4/T5 neurons eliminates the phase-advanced co-directional response to the luminance-defined bar, leaving the orientation response largely intact. We conclude that T4/T5 neurons mediate a co-directional optomotor response to a luminance-defined bar, thereby driving phase-advanced wing kinematics, whereas separate, as yet unknown visual pathways elicit the contra-directional orientation response.
Affiliation(s)
- Mehmet F Keleş: Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA 90095-7239, USA
- Jean-Michel Mongeau: Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA 90095-7239, USA
- Mark A Frye: Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA 90095-7239, USA
9
Toepfer F, Wolf R, Heisenberg M. Multi-stability with ambiguous visual stimuli in Drosophila orientation behavior. PLoS Biol 2018; 16:e2003113. PMID: 29438378; PMCID: PMC5826666; DOI: 10.1371/journal.pbio.2003113.
Abstract
It is widely accepted for humans and higher animals that vision is an active process in which the organism interprets the stimulus. To find out whether this also holds for lower animals, we designed an ambiguous motion stimulus that serves as a multi-stable perception paradigm in Drosophila behavior. Confronted with a uniform panoramic texture in a closed-loop situation during stationary flight, flies adjust their yaw torque to stabilize their virtual self-rotation. To make the visual input ambiguous, we added a second texture. Both textures were given rotatory biases to move in opposite directions at a constant relative angular velocity. The results indicate that the fly then had three possible frames of reference for self-rotation: either of the two motion components, as well as the integrated motion vector of the two. In this ambiguous stimulus situation, the flies generated a continuous sequence of behaviors, each one adjusted to one or another of the three references.

These results show that the visual system of the fly can separate the individual components of a transparent motion stimulus, and that this kind of stimulus is ambiguous to the fly. The extent to which the fly shows component selectivity in its behavior depends on several properties of the stimulus, such as pattern contrast and element density. The alternations between the different behaviors exhibit a stochasticity reminiscent of the temporal dynamics of human multi-stable perception.
Affiliation(s)
- Reinhard Wolf: Rudolf Virchow Center, University of Wuerzburg, Germany
10
Mongeau JM, Frye MA. Drosophila Spatiotemporally Integrates Visual Signals to Control Saccades. Curr Biol 2017; 27:2901-2914.e2. PMID: 28943085; DOI: 10.1016/j.cub.2017.08.035.
Abstract
Like many visually active animals, including humans, flies generate both smooth and rapid saccadic movements to stabilize their gaze. How rapid body saccades and smooth movement interact for simultaneous object pursuit and gaze stabilization is not understood. We directly observed these interactions in magnetically tethered Drosophila free to rotate about the yaw axis. A moving bar elicited sustained bouts of saccades following the bar, with surprisingly little smooth movement. By contrast, a moving panorama elicited robust smooth movement interspersed with occasional optomotor saccades. The amplitude, angular velocity, and torque transients of bar-fixation saccades were finely tuned to the speed of bar motion and were triggered by a threshold in the temporal integral of the bar error angle rather than its absolute retinal position error. Optomotor saccades were tuned to the dynamics of panoramic image motion and were triggered by a threshold in the integral of velocity over time. A hybrid control model based on integrated motion cues simulates saccade trigger and dynamics. We propose a novel algorithm for tuning fixation saccades in flies.
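The trigger rule suggested by these findings (saccades fire on a threshold in the time-integral of error, not on instantaneous error) can be sketched as an integrate-and-reset model. The threshold, time step, and reset behavior below are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def saccade_times(error_angle, dt=0.01, threshold=0.45):
    """Integrate-to-threshold saccade trigger: accumulate the bar's error
    angle over time; when the integral's magnitude crosses threshold, a
    saccade fires and the integrator resets."""
    acc, times = 0.0, []
    for i, err in enumerate(error_angle):
        acc += err * dt
        if abs(acc) >= threshold:
            times.append(i * dt)
            acc = 0.0
    return times

# A faster-drifting bar produces a larger sustained error angle, so the
# integral reaches threshold sooner and saccades fire more frequently,
# consistent with saccade dynamics being tuned to bar speed.
slow = saccade_times(np.full(100, 10.0))   # constant 10 deg error
fast = saccade_times(np.full(100, 20.0))   # constant 20 deg error
```

The same skeleton covers the optomotor case by integrating panoramic image velocity instead of bar error angle.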
Affiliation(s)
- Jean-Michel Mongeau: Department of Integrative Biology and Physiology, University of California, Los Angeles, Los Angeles, CA 90095-7239, USA
- Mark A Frye: Department of Integrative Biology and Physiology, University of California, Los Angeles, Los Angeles, CA 90095-7239, USA
11
Abstract
Many animals rely on visual figure-ground discrimination to aid in navigation and to draw attention to salient features such as conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure-ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula, one of the four primary neuropiles of the fly optic lobe, performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain groups these sets of figure-ground stimuli in a manner homologous to the behavior: "figure-like" stimuli are coded similarly to one another, and "ground-like" stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection.
12
Aptekar JW, Keles MF, Mongeau JM, Lu PM, Frye MA, Shoemaker PA. Method and software for using m-sequences to characterize parallel components of higher-order visual tracking behavior in Drosophila. Front Neural Circuits 2014; 8:130. PMID: 25400550; PMCID: PMC4215624; DOI: 10.3389/fncir.2014.00130.
Abstract
A moving visual figure may contain first-order signals defined by variation in mean luminance; second-order signals defined by constant mean luminance and variation in the luminance envelope; or higher-order signals that cannot be estimated by taking higher moments of the luminance distribution. Separating these properties of a moving figure to experimentally probe the visual subsystems that encode them is technically challenging and has led to debate over the mechanisms of visual object detection by flies. Our prior work took a white-noise systems identification approach, using a commercially available electronic display system, to characterize the spatial variation in the temporal dynamics of two distinct subsystems for first- and higher-order components of visual figure tracking. The method relied on single-pixel displacements of two visual stimuli according to two binary maximum-length shift register sequences (m-sequences) and cross-correlation of each m-sequence with time-varying flight steering measurements. The resultant spatio-temporal action fields (STAFs) represent temporal impulse responses parameterized by the azimuthal location of the visual figure: one STAF for first-order and another for higher-order components of compound stimuli. Here we review m-sequence and reverse-correlation procedures, then describe our application in detail, provide Matlab code, validate the STAFs, and demonstrate their utility and robustness by predicting the results of other published experimental procedures. This method has demonstrated how two relatively modest innovations on classical white-noise analysis, the inclusion of space as a way to organize response kernels and the use of linear decoupling to measure the responses to two channels of visual information simultaneously, could substantially improve our basic understanding of visual processing in the fly.
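The core numerical step (m-sequence stimulation plus cross-correlation to recover an impulse response) can be sketched independently of the behavioral rig. This is a generic reconstruction with a made-up kernel, not the authors' Matlab code; it relies on the m-sequence's nearly ideal circular autocorrelation (n at lag 0, -1 elsewhere):

```python
import numpy as np

def mls(nbits=7, taps=(7, 6)):
    """Maximum-length sequence (+/-1) from a Fibonacci LFSR. taps=(7, 6)
    corresponds to the primitive polynomial x^7 + x^6 + 1, giving a
    sequence of length 2**7 - 1 = 127."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]          # XOR the tapped register bits
        seq.append(1.0 if state[-1] else -1.0)
        state = [fb] + state[:-1]       # shift the feedback bit in
    return np.array(seq)

m = mls()                                      # length-127 binary m-sequence
kernel = np.array([0.0, 1.0, 0.6, 0.3, 0.1])   # made-up impulse response
n = len(m)

# System output: circular convolution of the m-sequence input with the kernel
# (standing in for the measured steering response to m-sequence displacements).
out = np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(kernel, n)))

# Reverse correlation: circularly cross-correlate output with input and
# normalize. The m-sequence autocorrelation property means this recovers the
# kernel up to a small DC bias of order 1/n.
est = np.real(np.fft.ifft(np.fft.fft(out) * np.conj(np.fft.fft(m)))) / n
```

Driving two stimulus channels with two different m-sequences and cross-correlating against each separately is what allows the first-order and higher-order kernels to be decoupled from a single experiment.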
Collapse
Affiliation(s)
- Jacob W Aptekar
- Department of Integrative Biology and Physiology, Howard Hughes Medical Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Mehmet F Keles
- Department of Integrative Biology and Physiology, Howard Hughes Medical Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Jean-Michel Mongeau
- Department of Integrative Biology and Physiology, Howard Hughes Medical Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Patrick M Lu
- Department of Integrative Biology and Physiology, Howard Hughes Medical Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Mark A Frye
- Department of Integrative Biology and Physiology, Howard Hughes Medical Institute, University of California, Los Angeles, Los Angeles, CA, USA
13
Fox JL, Aptekar JW, Zolotova NM, Shoemaker PA, Frye MA. Figure-ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses. J Exp Biol 2014; 217:558-69. [PMID: 24198267 PMCID: PMC3922833 DOI: 10.1242/jeb.097220] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input.
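A STAF as described above is simply a stack of temporal impulse responses indexed by the azimuthal position of the figure. The sketch below illustrates that organization with fabricated kernels; the azimuth grid, gains, and latencies are invented for illustration and do not reproduce the measured behavioral data.

```python
import numpy as np

# A STAF organizes one temporal impulse response per azimuthal figure
# position into a 2D array (azimuth x time). Illustrative values only:
# each "measured" kernel is a decaying exponential whose gain and
# latency vary smoothly with azimuth, standing in for behavioral data.
azimuths = np.arange(-120, 121, 15)   # degrees, hypothetical spacing
t = np.arange(0, 0.5, 0.01)           # seconds after the impulse

staf = np.zeros((len(azimuths), len(t)))
for i, az in enumerate(azimuths):
    gain = np.cos(np.radians(az))         # frontal positions respond most
    delay = 0.05 + 0.0005 * abs(az)       # latency grows peripherally
    staf[i] = gain * np.exp(-np.clip(t - delay, 0, None) / 0.1) * (t >= delay)

# Typical readouts: response amplitude and latency as functions of azimuth.
peak_gain = staf.max(axis=1)
peak_latency = t[np.abs(staf).argmax(axis=1)]
```

Reading spatial tuning off the rows (gain per azimuth) and temporal dynamics off the columns (kernel time course) is what allows the figure-tracking and ground-stabilization subsystems to be compared directly.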
Affiliation(s)
- Jessica L Fox
- Howard Hughes Medical Institute and Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA 90095-7239, USA
14
Abstract
A compact genome and a tiny brain make Drosophila the prime model for understanding the neural substrate of behavior. Neurogenetic efforts to reveal the neural circuits underlying Drosophila vision started about half a century ago, and the field is now booming with sophisticated genetic tools, rich behavioral assays, and, importantly, a growing number of scientists joining from different backgrounds. This review briefly covers the structural anatomy of the Drosophila visual system, the animal's visual behaviors, the genes involved in assembling these circuits, the new and powerful techniques, and the challenges ahead for ultimately identifying the general principles of biological computation in the brain.
A typical brain uses a great many compact neural circuits to collect and process information from the internal biological and external environmental worlds and generates motor commands for observable behaviors. The fruit fly Drosophila melanogaster, despite its miniature body and tiny brain, can survive in almost any corner of the world.1 It can find food, court mates, fight rival conspecifics, avoid predators, and, amazingly, fly without crashing into trees. Drosophila vision and its underlying neuronal machinery have been a key research model for neurogeneticists for at least half a century.2 Given the effort invested in the visual system, this animal model is likely to offer the first full understanding of how visual information is computed by a multicellular organism. Furthermore, research in Drosophila has revealed many genes that play crucial roles in the formation of functional brains across species. The architectural similarities between the visual systems of Drosophila and vertebrates at the molecular, cellular, and network levels suggest that new principles discovered at the circuit level about the relationship between neurons and behavior in Drosophila will also contribute greatly to our understanding of how bigger brains work.3 I start with the anatomy of the Drosophila visual system, which surprisingly still contains many uncharted areas.
Affiliation(s)
- Yan Zhu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
15
Abstract
In higher-order motion stimuli, the direction of object motion does not follow the direction of luminance change. Such stimuli could be generated by the wing movements of a flying butterfly and further complicated by its motion in and out of shadows. Human subjects readily perceive the direction of higher-order motion, although this stands in stark contrast to prevailing motion vision models. Flies and humans compute motion in similar ways, and because flies behaviorally track bars containing higher-order motion cues, they are an attractive model system for investigating the neurophysiology underlying higher-order motion sensitivity. We here use intracellular electrophysiology of motion-vision-sensitive neurons in the hoverfly lobula plate to quantify responses to stimuli containing higher-order motion. We show that motion sensitivity can be broken down into two separate streams, directionally coding for elementary motion and figure motion, respectively, and that responses to Fourier and theta motion can be predicted from these. The sensitivity is affected both by the stimulus's time course and by the neuron's underlying receptive field. Responses to preferred-direction theta motion are sexually dimorphic and particularly robust along the visual midline.