1
DiBianca S, Jeka J, Reimann H. Visual motion detection thresholds can be reliably measured during walking and standing. Front Hum Neurosci 2023; 17:1239071. PMID: 38021240; PMCID: PMC10665501; DOI: 10.3389/fnhum.2023.1239071.
Abstract
Introduction In upright standing and walking, the motion of the body relative to the environment is estimated from a combination of visual, vestibular, and somatosensory cues. Associations between vestibular or somatosensory impairments and balance problems are well established, but less is known about whether visual motion detection thresholds affect upright balance control. Typically, visual motion threshold values are measured while sitting, with the head fixated to eliminate self-motion. In this study we investigated whether visual motion detection thresholds: (1) can be reliably measured during standing and walking in the presence of natural self-motion; and (2) differ between standing and walking. Methods Twenty-nine subjects stood and walked on a self-paced, instrumented treadmill inside a virtual visual environment projected on a large dome. Participants performed a two-alternative forced-choice experiment in which they discriminated between a counterclockwise ("left") and clockwise ("right") rotation of a visual scene. A 6-down 1-up adaptive staircase algorithm was used to adjust the amplitude of the rotation. A psychometric fit to the participants' binary responses provided an estimate of the detection threshold. Results We found strong correlations between the repeated measurements in both the walking (R = 0.84, p < 0.001) and the standing condition (R = 0.73, p < 0.001), as well as good agreement between the repeated measures in Bland-Altman plots. Average thresholds during walking (mean = 1.04°, SD = 0.43°) were significantly higher than during standing (mean = 0.73°, SD = 0.47°). Conclusion Visual motion detection thresholds can be reliably measured during both walking and standing, and thresholds are higher during walking.
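The 6-down 1-up procedure described in this abstract is a standard weighted up-down staircase: the stimulus is made harder only after a run of correct responses and easier after any error. A minimal sketch of that logic, not the authors' implementation; the starting amplitude, step size, and trial count here are illustrative:

```python
def staircase(respond, start=4.0, step=0.5, n_down=6, n_trials=120):
    """N-down 1-up adaptive staircase.

    `respond(amplitude)` returns True when the observer judges the
    rotation direction correctly. The amplitude is reduced after
    `n_down` consecutive correct responses and raised after any error,
    so the track converges on a high-accuracy point of the
    psychometric function.
    """
    amplitude, run, history = start, 0, []
    for _ in range(n_trials):
        correct = respond(amplitude)
        history.append((amplitude, correct))
        if correct:
            run += 1
            if run == n_down:              # six in a row -> make it harder
                amplitude = max(amplitude - step, 0.0)
                run = 0
        else:                              # any error -> make it easier
            amplitude += step
            run = 0
    return history
```

As in the study, the staircase only governs stimulus placement; the threshold estimate itself comes from a psychometric function fitted afterwards to the collected binary responses.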
Affiliation(s)
- Stephen DiBianca
- Coordination of Balance and Posture, Kinesiology and Applied Physiology, Biomechanics and Movement Science, University of Delaware, Newark, DE, United States
2
Chen J, Gish CM, Fransen JW, Salazar-Gatzimas E, Clark DA, Borghuis BG. Direct comparison reveals algorithmic similarities in fly and mouse visual motion detection. iScience 2023; 26:107928. PMID: 37810236; PMCID: PMC10550730; DOI: 10.1016/j.isci.2023.107928.
Abstract
Evolution has equipped vertebrates and invertebrates with neural circuits that selectively encode visual motion. While similarities between the computations performed by these circuits in mouse and fruit fly have been noted, direct experimental comparisons have been lacking. Because molecular mechanisms and neuronal morphology in the two species are distinct, we directly compared motion encoding in these two species at the algorithmic level, using matched stimuli and focusing on a pair of analogous neurons: the mouse ON starburst amacrine cell (ON SAC) and Drosophila T4 neurons. We find that the cells share similar spatiotemporal receptive field structures, sensitivity to spatiotemporal correlations, and tuning to sinusoidal drifting gratings, but differ in their responses to apparent motion stimuli. Both neuron types showed responses to summed sinusoids that deviate from existing models of motion processing in these cells, at once underscoring the similarities in their processing and identifying response features that remain to be explained.
Affiliation(s)
- Juyue Chen
- Interdepartmental Neurosciences Program, Yale University, New Haven, CT 06511, USA
- Caitlin M Gish
- Department of Physics, Yale University, New Haven, CT 06511, USA
- James W Fransen
- Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, KY 40202, USA
- Damon A Clark
- Interdepartmental Neurosciences Program, Yale University, New Haven, CT 06511, USA
- Department of Physics, Yale University, New Haven, CT 06511, USA
- Department of Molecular, Cellular, Developmental Biology, Yale University, New Haven, CT 06511, USA
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Bart G Borghuis
- Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, KY 40202, USA
3
Fu Q. Motion perception based on ON/OFF channels: A survey. Neural Netw 2023; 165:1-18. PMID: 37263088; DOI: 10.1016/j.neunet.2023.05.031.
Abstract
Motion perception is an essential ability for animals and artificial intelligence systems to interact effectively and safely with surrounding objects and environments. Biological visual systems, which have evolved over hundreds of millions of years, are efficient and robust at motion perception, whereas artificial vision systems are far from such capability. This paper argues that the gap can be significantly reduced by formulating ON/OFF channels in motion perception models that separately encode luminance increment (ON) and decrement (OFF) responses within the receptive field. Such a signal-bifurcating structure has been found in the neural systems of many animal species, indicating that early motion information is split and processed in segregated pathways. However, the corresponding biological substrates and the necessity for artificial vision systems have never been elucidated together, leaving open questions about the uniqueness and advantages of ON/OFF channels for building dynamic vision systems that address real-world challenges. This paper highlights the importance of ON/OFF channels in motion perception by surveying current progress covering both neuroscience and computational modelling work, along with applications. Compared to related literature, this paper for the first time provides insights into how ON/OFF channels can implement different selectivities for looming, translating, and small-target motion, in keeping with the soundness and robustness of biological principles. Finally, existing challenges and future trends for this biologically plausible computational structure are discussed in connection with machine learning and advanced vision sensors such as event-driven cameras.
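At its simplest, the ON/OFF bifurcation the survey describes is a half-wave rectification of the temporal luminance change: increments feed one channel, decrements the other. A minimal sketch under that assumption (the function name and test signal are illustrative, not from the survey):

```python
import numpy as np

def on_off_channels(luminance):
    """Split a 1-D luminance time series into ON (brightening) and OFF
    (darkening) channels by half-wave rectifying the temporal derivative."""
    d = np.diff(luminance, prepend=luminance[0])  # per-frame luminance change
    on = np.maximum(d, 0.0)    # responds only to luminance increments
    off = np.maximum(-d, 0.0)  # responds only to luminance decrements
    return on, off

# A step up then down excites each channel in turn
on, off = on_off_channels(np.array([0.0, 1.0, 1.0, 0.0]))
```

Downstream motion detectors can then correlate within or across these two nonnegative channels, which is the design choice the survey examines.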
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China
4
Abstract
How neurons detect the direction of motion is a prime example of neural computation: Motion vision is found in the visual systems of virtually all sighted animals, it is important for survival, and it requires interesting computations with well-defined linear and nonlinear processing steps, yet the whole process is of moderate complexity. The genetic methods available in the fruit fly Drosophila and the charting of a connectome of its visual system have led to rapid progress and unprecedented detail in our understanding of how neurons compute the direction of motion in this organism. The picture that emerged incorporates not only the identity, morphology, and synaptic connectivity of each neuron involved but also its neurotransmitters, its receptors, and their subcellular localization. Together with the neurons' membrane potential responses to visual stimulation, this information provides the basis for a biophysically realistic model of the circuit that computes the direction of visual motion.
Affiliation(s)
- Alexander Borst
- Max Planck Institute for Biological Intelligence, Martinsried, Germany
- Lukas N Groschner
- Max Planck Institute for Biological Intelligence, Martinsried, Germany
5
Wen P, Landy MS, Rokers B. Identifying cortical areas that underlie the transformation from 2D retinal to 3D head-centric motion signals. Neuroimage 2023; 270:119909. PMID: 36801370; PMCID: PMC10061442; DOI: 10.1016/j.neuroimage.2023.119909.
Abstract
Accurate motion perception requires that the visual system integrate the 2D retinal motion signals received by the two eyes into a single representation of 3D motion. However, most experimental paradigms present the same stimulus to the two eyes, signaling motion limited to a 2D fronto-parallel plane. Such paradigms are unable to dissociate the representation of 3D head-centric motion signals (i.e., 3D object motion relative to the observer) from the associated 2D retinal motion signals. Here, we used stereoscopic displays to present separate motion signals to the two eyes and examined their representation in visual cortex using fMRI. Specifically, we presented random-dot motion stimuli that specified various 3D head-centric motion directions. We also presented control stimuli, which matched the motion energy of the retinal signals but were inconsistent with any 3D motion direction. We decoded motion direction from BOLD activity using a probabilistic decoding algorithm. We found that 3D motion direction signals can be reliably decoded in three major clusters in the human visual system. Critically, in early visual cortex (V1-V3), we found no significant difference in decoding performance between stimuli specifying 3D motion directions and the control stimuli, suggesting that these areas represent the 2D retinal motion signals rather than 3D head-centric motion itself. In voxels in and surrounding hMT and IPS0, however, decoding performance was consistently superior for stimuli that specified 3D motion directions compared to control stimuli. Our results reveal the parts of the visual processing hierarchy that are critical for the transformation of retinal into 3D head-centric motion signals and suggest a role for IPS0 in their representation, in addition to its sensitivity to 3D object structure and static depth.
Affiliation(s)
- Puti Wen
- Psychology, New York University Abu Dhabi, United Arab Emirates
- Michael S Landy
- Department of Psychology and Center for Neural Science, New York University, United States
- Bas Rokers
- Psychology, New York University Abu Dhabi, United Arab Emirates; Department of Psychology and Center for Neural Science, New York University, United States
6
Egelhaaf M. Optic flow based spatial vision in insects. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2023. PMID: 36609568; DOI: 10.1007/s00359-022-01610-w.
Abstract
The optic flow, i.e., the displacement of retinal images of objects in the environment induced by self-motion, is an important source of spatial information, especially for fast-flying insects. Spatial information over a wide range of distances, from the animal's immediate surroundings to several hundred metres or even kilometres, is necessary for mediating behaviours such as landing manoeuvres, collision avoidance in spatially complex environments, learning environmental object constellations and path integration in spatial navigation. To facilitate the processing of spatial information, the complexity of the optic flow is often reduced by active vision strategies. These result in translations and rotations being largely separated by a saccadic flight and gaze mode. Only the translational components of the optic flow contain spatial information. In the first step of optic flow processing, an array of local motion detectors provides a retinotopic spatial proximity map of the environment. This local motion information is then processed in parallel neural pathways in a task-specific manner and used to control the different components of spatial behaviour. A particular challenge here is that the distance information extracted from the optic flow does not represent distances unambiguously; rather, distances are scaled by the animal's speed of locomotion. Possible ways of coping with this ambiguity are discussed.
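The speed-distance ambiguity mentioned at the end of the abstract follows directly from the geometry of translational optic flow: the angular velocity of a scene point depends only on the ratio of translation speed to distance. A small illustrative sketch of that relationship (the function and parameter names are hypothetical, not from the paper):

```python
import numpy as np

def translational_flow(speed, distance, azimuth):
    """Angular velocity (rad/s) of a point at `distance` seen at viewing
    angle `azimuth` (rad, 0 = direction of travel) during pure translation."""
    return speed * np.sin(azimuth) / distance

# The same retinal flow arises from doubled speed at doubled distance:
# optic flow constrains only the ratio speed/distance ("nearness" scaling).
a = translational_flow(1.0, 2.0, np.pi / 2)
b = translational_flow(2.0, 4.0, np.pi / 2)
```

This is why, as the review notes, distances extracted from optic flow come scaled by locomotion speed unless that speed is known from another source.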
Affiliation(s)
- Martin Egelhaaf
- Neurobiology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Universitätsstraße 25, 33615, Bielefeld, Germany
7
Sawant Y, Kundu JN, Radhakrishnan VB, Sridharan D. A Midbrain Inspired Recurrent Neural Network Model for Robust Change Detection. J Neurosci 2022; 42:8262-8283. PMID: 36123120; PMCID: PMC9653281; DOI: 10.1523/jneurosci.0164-22.2022.
Abstract
We present a biologically inspired recurrent neural network (RNN) that efficiently detects changes in natural images. The model features sparse, topographic connectivity (st-RNN), closely modeled on the circuit architecture of a "midbrain attention network." We deployed the st-RNN in a challenging change blindness task, in which changes must be detected in a discontinuous sequence of images. Compared with a conventional RNN, the st-RNN learned 9x faster and achieved state-of-the-art performance with 15x fewer connections. An analysis of low-dimensional dynamics revealed putative circuit mechanisms, including a critical role for a global inhibitory (GI) motif, for successful change detection. The model reproduced key experimental phenomena, including midbrain neurons' sensitivity to dynamic stimuli, neural signatures of stimulus competition, as well as hallmark behavioral effects of midbrain microstimulation. Finally, the model accurately predicted human gaze fixations in a change blindness experiment, surpassing state-of-the-art saliency-based methods. The st-RNN provides a novel deep learning model for linking neural computations underlying change detection with psychophysical mechanisms.

SIGNIFICANCE STATEMENT For adaptive survival, our brains must be able to accurately and rapidly detect changing aspects of our visual world. We present a novel deep learning model, a sparse, topographic recurrent neural network (st-RNN), that mimics the neuroanatomy of an evolutionarily conserved "midbrain attention network." The st-RNN achieved robust change detection in challenging change blindness tasks, outperforming conventional RNN architectures. The model also reproduced hallmark experimental phenomena, both neural and behavioral, reported in seminal midbrain studies. Lastly, the st-RNN outperformed state-of-the-art models at predicting human gaze fixations in a laboratory change blindness experiment. Our deep learning model may provide important clues about key mechanisms by which the brain efficiently detects changes.
Affiliation(s)
- Yash Sawant
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
- Jogendra Nath Kundu
- Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012, India
- Devarajan Sridharan
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
- Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560012, India
8
Barnatan Y, Tomsic D, Cámera A, Sztarker J. Matched function of the neuropil processing optic flow in flies and crabs: the lobula plate mediates optomotor responses in Neohelice granulata. Proc Biol Sci 2022; 289:20220812. PMID: 35975436; PMCID: PMC9382210; DOI: 10.1098/rspb.2022.0812.
Abstract
When an animal rotates (whether it is an arthropod, a fish, a bird or a human) a drift of the visual panorama occurs over its retina, termed optic flow. The image is stabilized by compensatory behaviours (driven by the movement of the eyes, head or the whole body depending on the animal) collectively termed optomotor responses. The dipteran lobula plate has been consistently linked with optic flow processing and the control of optomotor responses. Crabs have a neuropil similarly located and interconnected in the optic lobes, therefore referred to as a lobula plate too. Here we show that the crabs' lobula plate is required for normal optomotor responses, since the response was lost or severely impaired in animals whose lobula plate had been lesioned. The effect was behaviour-specific, since avoidance responses to approaching visual stimuli were not affected. Crabs require simpler optic flow processing than flies (because they move more slowly and in two-dimensional rather than three-dimensional space), so their lobula plates are relatively smaller. Nonetheless, they perform the same essential role in the visual control of behaviour. Our findings add a fundamental piece to the current debate on the evolutionary relationship between the lobula plates of insects and crustaceans.
Affiliation(s)
- Yair Barnatan
- Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET-Universidad de Buenos Aires, Universidad de Buenos Aires, Pabellón II, Ciudad Universitaria, 1428 Buenos Aires, Argentina
- Daniel Tomsic
- Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET-Universidad de Buenos Aires, Universidad de Buenos Aires, Pabellón II, Ciudad Universitaria, 1428 Buenos Aires, Argentina
- Departamento de Fisiología, Biología Molecular y Celular Dr. Héctor Maldonado, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón II, Ciudad Universitaria, 1428 Buenos Aires, Argentina
- Alejandro Cámera
- Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET-Universidad de Buenos Aires, Universidad de Buenos Aires, Pabellón II, Ciudad Universitaria, 1428 Buenos Aires, Argentina
- Julieta Sztarker
- Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET-Universidad de Buenos Aires, Universidad de Buenos Aires, Pabellón II, Ciudad Universitaria, 1428 Buenos Aires, Argentina
- Departamento de Fisiología, Biología Molecular y Celular Dr. Héctor Maldonado, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón II, Ciudad Universitaria, 1428 Buenos Aires, Argentina
9
Stöckl A, Grittner R, Taylor G, Rau C, Bodey AJ, Kelber A, Baird E. Allometric scaling of a superposition eye optimizes sensitivity and acuity in large and small hawkmoths. Proc Biol Sci 2022; 289:20220758. PMID: 35892218; PMCID: PMC9326294; DOI: 10.1098/rspb.2022.0758.
Abstract
Animals vary widely in body size within and across species. This has consequences for the function of organs and body parts in both large and small individuals. How these scale, in relation to body size, reveals evolutionary investment strategies, often resulting in trade-offs between functions. Eyes exemplify these trade-offs, as they are limited by their absolute size in two key performance features: sensitivity and spatial acuity. Due to their size polymorphism, insect compound eyes are ideal models for studying the allometric scaling of eye performance. Previous work on apposition compound eyes revealed that allometric scaling led to poorer spatial resolution and visual sensitivity in small individuals, across a range of insect species. Here, we used X-ray microtomography to investigate allometric scaling in superposition compound eyes, the second most common eye type in insects, for the first time. Our results reveal a novel strategy to cope with the trade-off between sensitivity and spatial acuity, as we show that the eyes of the hummingbird hawkmoth retain an optimal balance between these performance measures across all body sizes.
Affiliation(s)
- Anna Stöckl
- Behavioral Physiology and Sociobiology (Zoology II), University of Würzburg, Würzburg, Germany
- Rebecca Grittner
- Behavioral Physiology and Sociobiology (Zoology II), University of Würzburg, Würzburg, Germany
- Gavin Taylor
- Institute for Globally Distributed Open Research and Education (IGDORE), Ribeirão Preto, Brazil
- Christoph Rau
- Diamond Light Source, Harwell Science and Innovation Campus, Didcot, UK
- Andrew J. Bodey
- Diamond Light Source, Harwell Science and Innovation Campus, Didcot, UK
- Almut Kelber
- Department of Biology, Lund University, Lund, Sweden
- Emily Baird
- Department of Zoology, Stockholm University, Stockholm, Sweden
10
Ben-Ami S, Gupta P, Yadav M, Shah P, Talwar G, Paswan S, Ganesh S, Troje NF, Sinha P. Human (but not animal) motion can be recognized at first sight - after treatment for congenital blindness. Neuropsychologia 2022; 174:108307. PMID: 35752267; DOI: 10.1016/j.neuropsychologia.2022.108307.
Abstract
The long-standing nativist vs. empiricist debate asks a foundational question in epistemology: does our knowledge arise through experience, or is it available innately? Studies that probe the sensitivity of newborns and patients recovering from congenital blindness are central in informing this dialogue. One of the most robust sensitivities our visual system possesses is to 'biological motion': the movement patterns of humans and other vertebrates. Various biological motion perception skills (such as distinguishing between movement of human and non-human animals, or between upright and inverted human movement) become evident within the first months of life. The mechanisms of acquiring these capabilities, and specifically the contribution of visual experience to their development, are still under debate. We had the opportunity to directly examine the role of visual experience in biological motion perception, by testing what level of sensitivity is present immediately upon onset of sight following years of congenital visual deprivation. Two congenitally blind patients who underwent sight-restorative cataract-removal surgery late in life (at the ages of 7 and 20 years) were tested before and after sight restoration. The patients were shown displays of walking humans, pigeons, and cats, and asked to describe what they saw. Visual recognition of movement patterns emerged immediately upon eye-opening following surgery, when the patients spontaneously began to identify human, but not animal, biological motion. This recognition ability was evident contemporaneously for upright and inverted human displays. These findings suggest that visual recognition of human motion patterns may not critically depend on visual experience, as it was evident upon first exposure to unobstructed sight in patients with very limited prior visual exposure, and furthermore, was not limited to the typical (upright) orientation of humans in real-life settings.
Affiliation(s)
- Shlomit Ben-Ami
- MIT Department of Brain and Cognitive Sciences, Cambridge, MA, USA; Sagol School of Neuroscience, School of Psychological Sciences, Tel-Aviv University, Tel-Aviv, Israel; Minducate Science of Learning Research and Innovation Center, Tel-Aviv University, Tel Aviv, Israel
- Priti Gupta
- The Project Prakash Center, Delhi, India; Amarnath and Shashi Khosla School of Information Technology, Indian Institute of Technology, Delhi, India
- Saroj Paswan
- The Project Prakash Center, Delhi, India; Department of Ophthalmology, Dr. Shroff's Charity Eye Hospital, Delhi, India
- Suma Ganesh
- Department of Ophthalmology, Dr. Shroff's Charity Eye Hospital, Delhi, India
- Pawan Sinha
- MIT Department of Brain and Cognitive Sciences, Cambridge, MA, USA
11
An Artificial Visual System for Motion Direction Detection Based on the Hassenstein–Reichardt Correlator Model. Electronics 2022. DOI: 10.3390/electronics11091423.
Abstract
The perception of motion direction is essential for the survival of visual animals. Despite various theoretical and biophysical investigations that have been conducted to elucidate directional selectivity at the neural level, the systemic mechanism of motion direction detection remains elusive. Here, we develop an artificial visual system (AVS) based on the core computation of the Hassenstein–Reichardt correlator (HRC) model for global motion direction detection. With reference to the biological investigations of Drosophila, we first describe a local motion-sensitive, directionally detective neuron that only responds to ON motion signals with high pattern contrast in a particular direction. Then, we use the full-neurons scheme motion direction detection mechanism to detect the global motion direction based on our previous research. The mechanism enables our AVS to detect multiple directions in a two-dimensional view, and the global motion direction is inferred from the outputs of all local motion-sensitive directionally detective neurons. To verify the reliability of our AVS, we conduct a series of experiments and compare its performance with the time-considered convolutional neural network (CNN) and the EfficientNetB0 under the same conditions. The experimental results demonstrated that our system is reliable in detecting the direction of motion, and among the three models, our AVS has better motion direction detection capabilities.
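The HRC computation at the core of this model is delay-and-correlate: each photoreceptor's delayed signal is multiplied with its neighbour's instantaneous signal, and the two mirror-symmetric half-correlators are subtracted. A minimal discrete-time sketch, not the paper's implementation; a pure one-frame delay stands in for the model's temporal filter, and all names and parameters are illustrative:

```python
import numpy as np

def hrc_response(left, right, delay=1):
    """Hassenstein-Reichardt correlator over two photoreceptor signals.

    Correlates each input's delayed copy with the neighbouring input's
    instantaneous signal and subtracts the mirror-symmetric branch.
    Positive output indicates left-to-right (preferred-direction) motion.
    """
    l_delayed = np.roll(left, delay)    # delay approximated by a frame shift
    r_delayed = np.roll(right, delay)
    l_delayed[:delay] = 0.0             # discard wrapped-around samples
    r_delayed[:delay] = 0.0
    return np.mean(l_delayed * right - r_delayed * left)

# A pattern moving left-to-right reaches `left` one frame before `right`
t = np.arange(20)
stim = np.sin(2 * np.pi * t / 10.0)
rightward = hrc_response(stim, np.roll(stim, 1))  # preferred direction
leftward = hrc_response(np.roll(stim, 1), stim)   # null direction
```

The sign of the output encodes direction, which is the property the paper's full-neurons scheme aggregates across many local detectors to infer global motion direction.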
12
A novel motion direction detection mechanism based on dendritic computation of direction-selective ganglion cells. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.108205.
13
Grittner R, Baird E, Stöckl A. Spatial tuning of translational optic flow responses in hawkmoths of varying body size. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2021; 208:279-296. PMID: 34893928; PMCID: PMC8934765; DOI: 10.1007/s00359-021-01530-1.
Abstract
To safely navigate their environment, flying insects rely on visual cues, such as optic flow. Which cues insects can extract from their environment depends closely on the spatial and temporal response properties of their visual system. These in turn can vary between individuals that differ in body size. How optic flow-based flight control depends on the spatial structure of visual cues, and how this relationship scales with body size, has previously been investigated in insects with apposition compound eyes. Here, we characterised the visual flight control response limits and their relationship to body size in an insect with superposition compound eyes: the hummingbird hawkmoth Macroglossum stellatarum. We used the hawkmoths' centring response in a flight tunnel as a readout for their reception of translational optic flow stimuli of different spatial frequencies. We show that their responses cut off at different spatial frequencies when translational optic flow was presented on either one or both tunnel walls. Combined with differences in flight speed, this suggests that their flight control was primarily limited by their temporal rather than spatial resolution. We also observed strong individual differences in flight performance, but no correlation between the spatial response cutoffs and body or eye size.
Affiliation(s)
- Rebecca Grittner
- Behavioral Physiology and Sociobiology (Zoology II), University of Würzburg, Würzburg, Germany
- Emily Baird
- Department of Zoology, Stockholm University, Stockholm, Sweden
- Anna Stöckl
- Behavioral Physiology and Sociobiology (Zoology II), University of Würzburg, Würzburg, Germany
14
Mano O, Creamer MS, Badwan BA, Clark DA. Predicting individual neuron responses with anatomically constrained task optimization. Curr Biol 2021; 31:4062-4075.e4. PMID: 34324832; DOI: 10.1016/j.cub.2021.06.090.
Abstract
Artificial neural networks trained to solve sensory tasks can develop statistical representations that match those in biological circuits. However, it remains unclear whether they can reproduce properties of individual neurons. Here, we investigated how artificial networks predict individual neuron properties in the visual motion circuits of the fruit fly Drosophila. We trained anatomically constrained networks to predict movement in natural scenes, solving the same inference problem as fly motion detectors. Units in the artificial networks adopted many properties of analogous individual neurons, even though they were not explicitly trained to match these properties. Among these properties was the split into ON and OFF motion detectors, which is not predicted by classical motion detection models. The match between model and neurons was closest when models were trained to be robust to noise. These results demonstrate how anatomical, task, and noise constraints can explain properties of individual neurons in a small neural network.
Collapse
Affiliation(s)
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA
| | - Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
| | - Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
| | - Damon A Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.
15
Li J, Niemeier M, Kern R, Egelhaaf M. Disentangling of Local and Wide-Field Motion Adaptation. Front Neural Circuits 2021; 15:713285. [PMID: 34531728 PMCID: PMC8438216 DOI: 10.3389/fncir.2021.713285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Accepted: 08/11/2021] [Indexed: 11/21/2022] Open
Abstract
In flying insects, motion adaptation has been attributed a pivotal functional role in optic flow-based spatial vision. Ongoing motion enhances the representation of spatial discontinuities in the visual pathway; during translational locomotion, such discontinuities manifest themselves as velocity discontinuities in the retinal optic flow pattern. There is evidence for different spatial scales of motion adaptation at the different visual processing stages. Motion adaptation is thought to take place, on the one hand, on a retinotopic basis at the level of local motion-detecting neurons and, on the other hand, at the level of wide-field neurons pooling the output of many of these local motion detectors. So far, local and wide-field adaptation could not be analyzed separately, since conventional motion stimuli jointly affect both adaptive processes. We therefore designed a novel stimulus paradigm based on two types of motion stimuli that had the same overall strength but differed in that one led to local motion adaptation while the other did not. We recorded intracellularly the activity of a particular wide-field motion-sensitive neuron, the horizontal system equatorial cell (HSE), in blowflies. The experimental data were interpreted based on a computational model of the visual motion pathway that included the spatially pooling HSE cell. By comparing the differences between recorded and modeled HSE-cell responses induced by the two types of motion adaptation, the major characteristics of local and wide-field adaptation could be pinpointed. Wide-field adaptation was shown to depend strongly on the activation level of the cell and, thus, on the direction of motion. In contrast, local motion adaptation reduces the response gain to a similar extent regardless of the direction of motion. This direction-independent adaptation differs fundamentally from the well-known adaptive adjustment of response gain to the prevailing overall stimulus level, which is considered essential for an efficient signal representation by neurons with a limited operating range. Direction-independent adaptation is argued to result from the joint activity of local motion-sensitive neurons with different preferred directions, leading to a representation of the local motion direction that is independent of the overall direction of global motion.
Affiliation(s)
- Jinglin Li
- Neurobiology, Bielefeld University, Bielefeld, Germany
- Roland Kern
- Neurobiology, Bielefeld University, Bielefeld, Germany
16
Abstract
Time is largely a hidden variable in vision. It is the condition for seeing interesting things such as spatial forms and patterns, colours and movements in the external world, and yet is not meant to be noticed in itself. Temporal aspects of visual processing have received comparatively little attention in research. Temporal properties have been made explicit mainly in measurements of resolution and integration in simple tasks such as detection of spatially homogeneous flicker or light pulses of varying duration. Only through a mechanistic understanding of their basis in retinal photoreceptors and circuits can such measures guide modelling of natural vision in different species and illuminate functional and evolutionary trade-offs. Temporal vision research would benefit from bridging traditions that speak different languages. Towards that goal, I here review studies from the fields of human psychophysics, retinal physiology and neuroethology, with a focus on fundamental constraints set by early vision. Summary: Simple measures of temporal vision such as the critical flicker frequency can be useful for modelling natural vision only if their relationship to photoreceptor responses and retinal processing is understood.
Affiliation(s)
- Kristian Donner
- Molecular and Integrative Biosciences Research Programme, Faculty of Biological and Environmental Sciences, University of Helsinki, 00014 Helsinki, Finland
17
De Agrò M, Rößler DC, Kim K, Shamble PS. Perception of biological motion by jumping spiders. PLoS Biol 2021; 19:e3001172. [PMID: 34264925 PMCID: PMC8282030 DOI: 10.1371/journal.pbio.3001172] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 06/11/2021] [Indexed: 11/26/2022] Open
Abstract
The body of most creatures is composed of interconnected joints. During motion, the spatial location of these joints changes, but they must maintain their distances to one another, effectively moving semirigidly. This pattern, termed "biological motion" in the literature, can be used as a visual cue, enabling many animals (including humans) to distinguish animate from inanimate objects. Crucially, even artificially created scrambled stimuli, with no recognizable structure but that maintain semirigid movement patterns, are perceived as animate. However, to date, biological motion perception has only been reported in vertebrates. Given their highly developed visual system and complex visual behaviors, we investigated the capability of jumping spiders to discriminate biological from nonbiological motion using point-light display stimuli. These kinds of stimuli maintain motion information while being devoid of structure. By constraining spiders on a spherical treadmill, we simultaneously presented 2 point-light displays with specific dynamic traits and registered their preference by observing which pattern they turned toward. Spiders clearly demonstrated the ability to discriminate between biological motion and random stimuli, but curiously turned preferentially toward the latter. However, they showed no preference between biological and scrambled displays, results that match responses produced by vertebrates. Crucially, spiders turned toward the stimuli when these were visible only to the lateral eyes, evidence that this task may be eye specific. This represents the first demonstration of biological motion recognition in an invertebrate, posing crucial questions about the evolutionary history of this ability and about complex visual processing in nonvertebrate systems.
Affiliation(s)
- Massimo De Agrò
- John Harvard Distinguished Science Fellows Program, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Zoology, Regensburg University, Regensburg, Germany
- Daniela C. Rößler
- John Harvard Distinguished Science Fellows Program, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts, United States of America
- Kris Kim
- John Harvard Distinguished Science Fellows Program, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts, United States of America
- Paul S. Shamble
- John Harvard Distinguished Science Fellows Program, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts, United States of America
18
Billah MA, Faruque IA. Bioinspired Visuomotor Feedback in a Multiagent Group/Swarm Context. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2020.3033703] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
19
Abstract
Young children are adept at several types of scientific reasoning, yet older children and adults have difficulty mastering formal scientific ideas and practices. Why do “little scientists” often become scientifically illiterate adults? We address this question by examining the role of intuition in learning science, both as a body of knowledge and as a method of inquiry. Intuition supports children's understanding of everyday phenomena but conflicts with their ability to learn physical and biological concepts that defy firsthand observation, such as molecules, forces, genes, and germs. Likewise, intuition supports children's causal learning but provides little guidance on how to navigate higher-order constraints on scientific induction, such as the control of variables or the coordination of theory and data. We characterize the foundations of children's intuitive understanding of the natural world, as well as the conceptual scaffolds needed to bridge these intuitions with formal science.
Affiliation(s)
- Andrew Shtulman
- Department of Psychology, Occidental College, Los Angeles, California 91104, USA
- Caren Walker
- Department of Psychology, University of California, San Diego, La Jolla, California 92093, USA
20
Fu Q, Yue S. Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds. Biol Cybern 2020; 114:443-460. [PMID: 32623517 PMCID: PMC7554016 DOI: 10.1007/s00422-020-00841-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Accepted: 06/19/2020] [Indexed: 06/03/2023]
Abstract
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect moving targets in highly variable environments during flight, making them excellent paradigms from which to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on recent physiological research. The proposed visual system model features bio-plausible ON and OFF pathways and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: (1) the proposed model articulates the formation of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion and distractors and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as the global response of the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model and demonstrated its preference for faster-moving, higher-contrast and larger targets embedded in cluttered moving backgrounds.
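The split into ON and OFF pathways mentioned in this abstract is commonly modelled by half-wave rectifying a temporally filtered luminance signal, so that brightness increments and decrements are handled by separate correlator ensembles. A minimal sketch of that splitting step (the simple derivative filter and the step stimulus are illustrative assumptions, not the parameters of the Fu and Yue model):

```python
import numpy as np

def split_on_off(luminance):
    """Split a luminance time series into ON and OFF channels by
    half-wave rectifying its temporal derivative (a crude stand-in
    for the band-pass filtering in the fly's lamina neurons)."""
    d = np.diff(np.asarray(luminance, dtype=float))
    on = np.maximum(d, 0.0)    # brightness increments only
    off = np.maximum(-d, 0.0)  # brightness decrements only
    return on, off

# A step up in brightness, then a step down:
on, off = split_on_off([0.0, 0.0, 1.0, 1.0, 0.0])
```

Each channel then feeds its own array of local correlators, which is what lets the full model treat moving bright and dark edges separately.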
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China.
- Computational Intelligence Lab/Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK.
- Shigang Yue
- Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China.
- Computational Intelligence Lab/Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK.
21
Zavatone-Veth JA, Badwan BA, Clark DA. A minimal synaptic model for direction selective neurons in Drosophila. J Vis 2020; 20:2. [PMID: 32040161 PMCID: PMC7343402 DOI: 10.1167/jov.20.2.2] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023] Open
Abstract
Visual motion estimation is a canonical neural computation. In Drosophila, recent advances have identified anatomic and functional circuitry underlying direction-selective computations. Models with varying levels of abstraction have been proposed to explain specific experimental results but have rarely been compared across experiments. Here we use the wealth of available anatomical and physiological data to construct a minimal, biophysically inspired synaptic model for Drosophila’s first-order direction-selective T4 cells. We show how this model relates mathematically to classical models of motion detection, including the Hassenstein-Reichardt correlator model. We used numerical simulation to test how well this synaptic model could reproduce measurements of T4 cells across many datasets and stimulus modalities. These comparisons include responses to sinusoid gratings, to apparent motion stimuli, to stochastic stimuli, and to natural scenes. Without fine-tuning this model, it sufficed to reproduce many, but not all, response properties of T4 cells. Since this model is flexible and based on straightforward biophysical properties, it provides an extensible framework for developing a mechanistic understanding of T4 neural response properties. Moreover, it can be used to assess the sufficiency of simple biophysical mechanisms to describe features of the direction-selective computation and identify where our understanding must be improved.
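The Hassenstein-Reichardt correlator that this synaptic model is related to mathematically can be captured in a few lines: each input is delayed and multiplied with its undelayed neighbor, and the two mirror-symmetric arms are subtracted, yielding a signed, direction-selective output. A minimal sketch (the shift-based delay and the two-step edge stimulus are illustrative simplifications, not the paper's biophysical T4 model):

```python
import numpy as np

def hrc_response(a, b, delay=1):
    """Full Hassenstein-Reichardt correlator: the delayed arm of each
    input is multiplied with the neighbor's undelayed signal, and the
    two mirror-symmetric products are subtracted, giving a signed,
    direction-selective output."""
    a_d = np.roll(a, delay)  # delay approximated by a discrete shift
    b_d = np.roll(b, delay)
    a_d[:delay] = 0.0        # discard samples wrapped around by roll
    b_d[:delay] = 0.0
    return a_d * b - b_d * a

# A bright edge moving from input A to input B (preferred direction):
t = np.arange(10)
a = (t >= 2).astype(float)  # edge reaches photoreceptor A at t = 2
b = (t >= 3).astype(float)  # ...and neighbor B one time step later
preferred = hrc_response(a, b).sum()
null = hrc_response(b, a).sum()  # same edge, opposite direction
```

Replacing the pure delay with a low-pass filter and the multiplications with synaptic nonlinearities is, roughly, what separates biophysically grounded T4 models from this skeleton.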
22
Bach M, Atala-Gérard L. The Rotating Snakes Illusion Is a Straightforward Consequence of Nonlinearity in Arrays of Standard Motion Detectors. Iperception 2020; 11:2041669520958025. [PMID: 33149875 PMCID: PMC7585899 DOI: 10.1177/2041669520958025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 08/17/2020] [Indexed: 11/29/2022] Open
Abstract
The Rotating Snakes illusion is a motion illusion based on repeating, asymmetric luminance patterns. Recently, we found certain gray-value conditions where a weak illusory motion occurs in the opposite direction. Of the four existing models for explaining the illusion, only one also explains this unexpected perceived opposite direction. We here present a simple new model, without free parameters, based on an array of standard correlation-type motion detectors with a subsequent nonlinearity (e.g., saturation) before summing the detector outputs. The model predicts (a) the pattern-appearance motion illusion for steady fixation, (b) an illusion under the real-world situation of saccades across or near the pattern (pattern shift), (c) a relative maximum of illusory motion for the same gray values where it is found psychophysically, and (d) the opposite illusion for certain luminance values. We submit that the new model's sparseness of assumptions justifies adding it as a fifth model to explain this illusion.
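The model's key ingredient, a compressive nonlinearity applied to each detector output before spatial summation, can be illustrated with a toy set of detector responses: a linear sum of one strong response and several weak opposing responses cancels exactly, whereas the saturated sum does not, leaving a net (illusory) motion signal. The numbers below are illustrative only, not fitted to the psychophysical data:

```python
import numpy as np

# Detector outputs whose *linear* sum cancels exactly: one strong
# response in one direction and several weak ones in the other, as the
# asymmetric luminance steps of the Snakes pattern would produce.
outputs = np.array([2.0, -0.5, -0.5, -0.5, -0.5])

linear_sum = outputs.sum()               # cancels: no net motion signal
saturated_sum = np.tanh(outputs).sum()   # compress first, then sum
```

Because tanh compresses the strong response more than the weak ones, the saturated sum is nonzero, which is exactly how the summed detector array can signal motion for a stationary pattern.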
Affiliation(s)
- Michael Bach
- Eye Center, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Lea Atala-Gérard
- Eye Center, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
23
24
Hsu SJ, Cheng B. Retinal slip compensation of pitch-constrained blue bottle flies flying in a flight mill. J Exp Biol 2020; 223:jeb210104. [PMID: 32371444 DOI: 10.1242/jeb.210104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Accepted: 04/23/2020] [Indexed: 11/20/2022]
Abstract
In the presence of wind or background image motion, flies are able to maintain a constant retinal slip velocity by regulating flight speed to the extent permitted by their locomotor capacity. Here we investigated the retinal slip compensation of tethered blue bottle flies (Calliphora vomitoria) flying semi-freely along an annular corridor in a magnetically levitated flight mill enclosed by two motorized cylindrical walls. We perturbed the flies' retinal slip by spinning the cylindrical walls, generating bilaterally averaged retinal slip perturbations from -0.3 to 0.3 m s-1 (or -116.4 to 116.4 deg s-1). When the perturbation was less than ∼0.1 m s-1 (38.4 deg s-1), the flies successfully compensated for the perturbations and maintained a constant retinal slip velocity by adjusting their airspeed by up to 20%. However, with greater retinal slip perturbation, the flies' compensation became saturated as their airspeed plateaued, indicating that they were unable to further maintain a constant retinal slip velocity. The compensation gain, i.e. the ratio of airspeed compensation to retinal slip perturbation, depended on the spatial frequency of the grating patterns and was largest at 12 m-1 (0.04 deg-1).
Affiliation(s)
- Shih-Jung Hsu
- Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802, USA
- Bo Cheng
- Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802, USA
25
Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron 2020; 106:388-403.e18. [DOI: 10.1016/j.neuron.2020.01.040] [Citation(s) in RCA: 90] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2019] [Revised: 10/17/2019] [Accepted: 01/27/2020] [Indexed: 01/08/2023]
26
Ji X, Yuan D, Wei H, Cheng Y, Wang X, Yang J, Hu P, Gestrich JY, Liu L, Zhu Y. Differentiation of Theta Visual Motion from Fourier Motion Requires LC16 and R18C12 Neurons in Drosophila. iScience 2020; 23:101041. [PMID: 32325414 PMCID: PMC7176990 DOI: 10.1016/j.isci.2020.101041] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/09/2020] [Accepted: 04/01/2020] [Indexed: 11/19/2022] Open
Abstract
Many animals perceive features of higher-order visual motion that are beyond the spatiotemporal correlations of luminance defined in first-order motion. Although the neural mechanisms of first-order motion detection have become well understood in recent years, those underlying higher-order motion perception remain unclear. Here, we established a paradigm to assess the detection of theta motion, a type of higher-order motion, in freely walking Drosophila. Behavioral screening using this paradigm identified two clusters of neurons in the central brain, designated R18C12, which were required for perception of theta motion but not for first-order motion. Furthermore, theta motion-activated R18C12 neurons were structurally and functionally located downstream of visual projection neurons in the lobula, the lobula columnar cells LC16, which activated R18C12 neurons via acetylcholine (ACh) acting on muscarinic acetylcholine receptors (mAChRs). The current study provides new insights into LC neurons and the neuronal mechanisms underlying visual information processing in complex natural scenes.
- Perception of theta motion requires LC16 and R18C12 neurons
- R18C12 neurons are activated by theta motion
- R18C12 neurons form synaptic connections with LC16 neurons
- LC16 neurons activate R18C12 neurons through ACh acting on mAChRs
Affiliation(s)
- Xiaoxiao Ji
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Deliang Yuan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Hongying Wei
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Yaxin Cheng
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Xinwei Wang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Jihua Yang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Pengbo Hu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Julia Yvonne Gestrich
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China
- Li Liu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China; CAS Key Laboratory of Mental Health, Beijing 100101, P. R. China.
- Yan Zhu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Chaoyang District, Beijing 100101, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing 100049, P. R. China.
27
Meyer HG, Klimeck D, Paskarbeit J, Rückert U, Egelhaaf M, Porrmann M, Schneider A. Resource-efficient bio-inspired visual processing on the hexapod walking robot HECTOR. PLoS One 2020; 15:e0230620. [PMID: 32236111 PMCID: PMC7112198 DOI: 10.1371/journal.pone.0230620] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Accepted: 03/04/2020] [Indexed: 11/26/2022] Open
Abstract
Emulating the highly resource-efficient processing of visual motion information in the brain of flying insects, a bio-inspired controller for collision avoidance and navigation was implemented on a novel, integrated System-on-Chip-based hardware module. The hardware module is used to control visually-guided navigation behavior of the stick insect-like hexapod robot HECTOR. By leveraging highly parallelized bio-inspired algorithms to extract nearness information from visual motion in dynamically reconfigurable logic, HECTOR is able to navigate to predefined goal positions without colliding with obstacles. The system drastically outperforms CPU- and graphics card-based implementations in terms of speed and resource efficiency, making it suitable to be also placed on fast moving robots, such as flying drones.
Affiliation(s)
- Hanno Gerd Meyer
- Research Group Biomechatronics, CITEC, Bielefeld University, Bielefeld, Germany
- Department of Neurobiology and CITEC, Bielefeld University, Bielefeld, Germany
- Biomechatronics and Embedded Systems Group, Faculty of Engineering and Mathematics, University of Applied Sciences, Bielefeld, Germany
- Daniel Klimeck
- Cognitronics and Sensor Systems Group, CITEC, Bielefeld University, Bielefeld, Germany
- Jan Paskarbeit
- Research Group Biomechatronics, CITEC, Bielefeld University, Bielefeld, Germany
- Ulrich Rückert
- Cognitronics and Sensor Systems Group, CITEC, Bielefeld University, Bielefeld, Germany
- Martin Egelhaaf
- Department of Neurobiology and CITEC, Bielefeld University, Bielefeld, Germany
- Mario Porrmann
- Computer Engineering Group, Osnabrück University, Osnabrück, Germany
- Axel Schneider
- Research Group Biomechatronics, CITEC, Bielefeld University, Bielefeld, Germany
- Biomechatronics and Embedded Systems Group, Faculty of Engineering and Mathematics, University of Applied Sciences, Bielefeld, Germany
28
Tehrani-Saleh A, Adami C. Can Transfer Entropy Infer Information Flow in Neuronal Circuits for Cognitive Processing? Entropy 2020; 22:e22040385. [PMID: 33286159 PMCID: PMC7516857 DOI: 10.3390/e22040385] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 03/11/2020] [Accepted: 03/25/2020] [Indexed: 11/16/2022]
Abstract
How cognitive neural systems process information is largely unknown, in part because of how difficult it is to accurately follow the flow of information from sensors via neurons to actuators. Measuring the flow of information is different from measuring correlations between firing neurons, for which several measures are available, foremost among them the Shannon information, which is an undirected measure. Several information-theoretic notions of "directed information" have been used to successfully detect the flow of information in some systems, in particular in the neuroscience community. However, recent work has shown that directed information measures such as transfer entropy can sometimes inadequately estimate information flow, or even fail to identify manifest directed influences, especially if neurons contribute in a cryptographic manner to influence the effector neuron. Because it is unclear how often such cryptic influences emerge in cognitive systems, the usefulness of transfer entropy measures to reconstruct information flow is unknown. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks (motion detection and sound localization). Besides counting the frequency of problematic logic gates, we also test whether transfer entropy applied to an activity time-series recorded from behaving digital brains can infer information flow, compared to a ground-truth model of direct influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy will sometimes fail to infer directed information when it exists, and sometimes suggest a causal connection when there is none. However, the extent of incorrect inference strongly depends on the cognitive task considered. 
These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and quantifying their relevance in any given nervous system.
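Transfer entropy of the kind evaluated in this study can be computed directly from empirical state counts. A minimal sketch for discrete time series with history length 1 (a plug-in estimator with no bias correction; the lagged-copy example is illustrative, not one of the paper's evolved circuits):

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Plug-in estimate of TE(src -> dst) with history length 1:
    how much src's past improves prediction of dst's next value
    beyond what dst's own past already provides."""
    n = len(dst) - 1
    trip = Counter(zip(dst[1:], dst[:-1], src[:-1]))  # (d1, d0, s0)
    yx = Counter(zip(dst[:-1], src[:-1]))             # (d0, s0)
    yy = Counter(zip(dst[1:], dst[:-1]))              # (d1, d0)
    y0 = Counter(dst[:-1])                            # (d0,)
    te = 0.0
    for (d1, d0, s0), c in trip.items():
        p_full = c / yx[(d0, s0)]       # p(d1 | d0, s0)
        p_self = yy[(d1, d0)] / y0[d0]  # p(d1 | d0)
        te += (c / n) * log2(p_full / p_self)
    return te

# dst copies src with one step of lag, so information flows src -> dst:
src = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
dst = [0] + src[:-1]
```

The cryptographic cases discussed above (e.g. an XOR of two inputs) are precisely where such pairwise estimators break down, since no single source predicts the effector on its own.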
Affiliation(s)
- Ali Tehrani-Saleh
- Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA;
- BEACON Center for the Study of Evolution, Michigan State University, East Lansing, MI 48824, USA
- Christoph Adami
- BEACON Center for the Study of Evolution, Michigan State University, East Lansing, MI 48824, USA
- Department of Microbiology & Molecular Genetics, Michigan State University, East Lansing, MI 48824, USA
- Department of Physics & Astronomy, Michigan State University, East Lansing, MI 48824, USA
29
Matulis CA, Chen J, Gonzalez-Suarez AD, Behnia R, Clark DA. Heterogeneous Temporal Contrast Adaptation in Drosophila Direction-Selective Circuits. Curr Biol 2020; 30:222-236.e6. [PMID: 31928874 PMCID: PMC7003801 DOI: 10.1016/j.cub.2019.11.077] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 11/06/2019] [Accepted: 11/26/2019] [Indexed: 11/23/2022]
Abstract
In visual systems, neurons adapt both to the mean light level and to the range of light levels, or the contrast. Contrast adaptation has been studied extensively, but it remains unclear how it is distributed among neurons in connected circuits, and how early adaptation affects subsequent computations. Here, we investigated temporal contrast adaptation in neurons across Drosophila's visual motion circuitry. Several ON-pathway neurons showed strong adaptation to changes in contrast over time. One of these neurons, Mi1, showed almost complete adaptation on fast timescales, and experiments ruled out several potential mechanisms for its adaptive properties. When contrast adaptation reduced the gain in ON-pathway cells, it was accompanied by decreased motion responses in downstream direction-selective cells. Simulations show that contrast adaptation can substantially improve motion estimates in natural scenes. The benefits are larger for ON-pathway adaptation, which helps explain the heterogeneous distribution of contrast adaptation in these circuits.
Affiliation(s)
- Catherine A Matulis
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA
- Rudy Behnia
- Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Damon A Clark
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, 260 Whitney Avenue, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, 333 Cedar Street, New Haven, CT 06510, USA.
30
How fly neurons compute the direction of visual motion. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2019; 206:109-124. [PMID: 31691093 PMCID: PMC7069908 DOI: 10.1007/s00359-019-01375-9] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Revised: 10/16/2019] [Accepted: 10/23/2019] [Indexed: 10/25/2022]
Abstract
Detecting the direction of image motion is a fundamental component of visual computation, essential for the survival of the animal. However, at the level of individual photoreceptors, the direction in which the image is shifting is not explicitly represented. Rather, directional motion information needs to be extracted from the photoreceptor array by comparing the signals of neighboring units over time. The exact nature of this process as implemented in the visual system of the fruit fly Drosophila melanogaster has been studied in great detail, and much progress has recently been made in determining the neural circuits giving rise to directional motion information. The results reveal the following: (1) Motion information is computed in parallel ON and OFF pathways. (2) Within each pathway, T4 (ON) and T5 (OFF) cells are the first neurons to represent the direction of motion. Four subtypes of T4 and T5 cells exist, each sensitive to one of the four cardinal directions. (3) The core process of direction selectivity as implemented on the dendrites of T4 and T5 cells comprises both an enhancement of signals for motion along their preferred direction and a suppression of signals for motion along the opposite direction. This combined strategy ensures a high degree of direction selectivity right at the first stage where the direction of motion is computed. (4) At the subsequent processing stage, tangential cells spatially integrate direct excitation from ON- and OFF-selective T4 and T5 cells and indirect inhibition from bi-stratified LPi cells activated by neighboring T4/T5 terminals, thus generating flow-field-selective responses.
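Point (3), the pairing of preferred-direction enhancement with null-direction suppression, can be illustrated with a minimal three-input detector (a didactic sketch under assumed filter constants and a multiplicative/divisive nonlinearity; it is not a quantitative T4 model):

```python
import numpy as np

def lowpass(x, tau):
    """First-order low-pass filter, standing in for the delay line."""
    y = np.zeros_like(x)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + (x[t - 1] - y[t - 1]) / tau
    return y

def three_arm_detector(a, b, c, tau=8.0):
    """Toy T4-like unit over three neighboring inputs a, b, c:
    delayed input from the preferred side (a) multiplicatively
    enhances the center (b), while delayed input from the null
    side (c) divisively suppresses it."""
    return lowpass(a, tau) * b / (1.0 + lowpass(c, tau))

def grating(phase_step, t):
    """Rectified drifting sine sampled at three neighboring points."""
    return [np.maximum(0.0, np.sin(0.1 * t - i * phase_step)) for i in range(3)]

t = np.arange(1000)
a, b, c = grating(+0.5, t)            # pattern drifts a -> b -> c (preferred)
r_pd = three_arm_detector(a, b, c).mean()
a, b, c = grating(-0.5, t)            # pattern drifts c -> b -> a (null)
r_nd = three_arm_detector(a, b, c).mean()
# mean response is larger for preferred- than for null-direction motion
```

Because enhancement and suppression act together, the preferred/null response ratio is higher than either mechanism alone would produce, matching the abstract's point about direction selectivity arising at the first computing stage.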
31. Chen J, Mandel HB, Fitzgerald JE, Clark DA. Asymmetric ON-OFF processing of visual motion cancels variability induced by the structure of natural scenes. eLife 2019; 8:e47579. [PMID: 31613221] [PMCID: PMC6884396] [DOI: 10.7554/elife.47579]
Abstract
Animals detect motion using a variety of visual cues that reflect regularities in the natural world. Experiments in animals across phyla have shown that motion percepts incorporate both pairwise and triplet spatiotemporal correlations that could theoretically benefit motion computation. However, it remains unclear how visual systems assemble these cues to build accurate motion estimates. Here, we used systematic behavioral measurements of fruit fly motion perception to show how flies combine local pairwise and triplet correlations to reduce variability in motion estimates across natural scenes. By generating synthetic images with statistics controlled by maximum entropy distributions, we show that the triplet correlations are useful only when images have light-dark asymmetries that mimic natural ones. This suggests that asymmetric ON-OFF processing is tuned to the particular statistics of natural scenes. Since all animals encounter the world's light-dark asymmetries, many visual systems are likely to use asymmetric ON-OFF processing to improve motion estimation.
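The pairwise and triplet spatiotemporal correlations discussed here can be estimated directly from an image sequence. The sketch below uses one generic correlator of each order (the specific correlators and sign conventions probed in the paper may differ):

```python
import numpy as np

def motion_correlators(movie):
    """Estimate one pairwise and one triplet spatiotemporal correlator
    from a movie of shape (time, space), after mean subtraction."""
    c = movie - movie.mean()
    # pairwise: contrast at (x, t) times contrast one pixel rightward,
    # one frame later -- a rightward-motion signal
    pair = np.mean(c[:-1, :-1] * c[1:, 1:])
    # triplet: two neighboring points at time t, one displaced point at t+1
    triplet = np.mean(c[:-1, :-1] * c[:-1, 1:] * c[1:, 1:])
    return pair, triplet

# a noise-free binary pattern drifting rightward at 1 px/frame
rng = np.random.default_rng(1)
row = rng.choice([-1.0, 1.0], size=64)
movie = np.stack([np.roll(row, t) for t in range(32)])
pair, triplet = motion_correlators(movie)
# pair is near 1 for this pattern; the triplet term stays near 0 because
# this symmetric +/-1 pattern lacks the light-dark asymmetry of natural scenes
```

This illustrates the abstract's point: for stimuli without light-dark asymmetry, odd-order (triplet) correlators carry little signal, whereas natural scenes make them informative.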
Affiliation(s)
- Juyue Chen: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Holly B Mandel: Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- James E Fitzgerald: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Damon A Clark: Interdepartmental Neuroscience Program, Yale University, New Haven, United States; Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States; Department of Physics, Yale University, New Haven, United States; Department of Neuroscience, Yale University, New Haven, United States
32. Zanker JM. Prey Capture: Becoming Invisible When You Move. Curr Biol 2019; 29:R875-R877. [DOI: 10.1016/j.cub.2019.07.076]
33. Daly IM, How MJ, Partridge JC, Roberts NW. Gaze stabilization in mantis shrimp in response to angled stimuli. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2019; 205:515-527. [PMID: 31093738] [PMCID: PMC6647723] [DOI: 10.1007/s00359-019-01341-5]
Abstract
Gaze stabilization is a fundamental aspect of vision and almost all animals shift their eyes to compensate for any self-movement relative to the external environment. When it comes to mantis shrimp, however, the situation becomes complicated due to the complexity of their visual system and their range of eye movements. The stalked eyes of mantis shrimp can independently move left and right, and up and down, whilst simultaneously rotating about the axis of the eye stalks. Despite the large range of rotational freedom, mantis shrimp nevertheless show a stereotypical gaze stabilization response to horizontal motion of a wide-field, high-contrast stimulus. This response is often accompanied by pitch (up-down) and torsion (about the eye stalk) rotations which, surprisingly, have no effect on the performance of yaw (side-to-side) gaze stabilization. This unusual feature of mantis shrimp vision suggests that their neural circuitry for detecting motion is radially symmetric and immune to the confounding effects of torsional self-motion. In this work, we reinforce this finding, demonstrating that the yaw gaze stabilization response of the mantis shrimp is robust to the ambiguous motion cues arising from the motion of striped visual gratings in which the angle of a grating is offset from its direction of travel.
Affiliation(s)
- Ilse M Daly: School of Biological Sciences, University of Bristol, Tyndall Avenue, Bristol, BS8 1TQ, UK
- Martin J How: School of Biological Sciences, University of Bristol, Tyndall Avenue, Bristol, BS8 1TQ, UK
- Julian C Partridge: Oceans Institute, University of Western Australia, 35 Stirling Highway (M470), Crawley, WA 6009, Australia
- Nicholas W Roberts: School of Biological Sciences, University of Bristol, Tyndall Avenue, Bristol, BS8 1TQ, UK
34. Abbas W, Masip Rodo D. Computer Methods for Automatic Locomotion and Gesture Tracking in Mice and Small Animals for Neuroscience Applications: A Survey. Sensors (Basel) 2019; 19:E3274. [PMID: 31349617] [PMCID: PMC6696321] [DOI: 10.3390/s19153274]
Abstract
Neuroscience has traditionally relied on manually observing laboratory animals in controlled environments. Researchers usually record animals behaving freely or in a restrained manner and then annotate the data manually. Manual annotation is undesirable for three reasons: (i) it is time-consuming, (ii) it is prone to human error, and (iii) no two human annotators agree completely, so it is not reproducible. Consequently, automated annotation of such data has gained traction because it is efficient and replicable. Automatic annotation of neuroscience data usually relies on computer vision and machine learning techniques. In this article, we cover most of the approaches researchers have taken to locomotion and gesture tracking of a specific class of laboratory animals, namely rodents. We divide these papers into categories based on the hardware they use and the software approach they take, and summarize their strengths and weaknesses.
Affiliation(s)
- Waseem Abbas: Multimedia and Telecommunications Department, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
- David Masip Rodo: Multimedia and Telecommunications Department, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
35. Dynamic nonlinearities enable direction opponency in Drosophila elementary motion detectors. Nat Neurosci 2019; 22:1318-1326. [PMID: 31346296] [PMCID: PMC6748873] [DOI: 10.1038/s41593-019-0443-y]
Abstract
Direction-selective neurons respond to visual motion in a preferred direction. They are direction-opponent if they are also inhibited by motion in the opposite direction. In flies and vertebrates, direction opponency has been observed in second-order direction-selective neurons, which achieve this opponency by subtracting signals from first-order direction-selective cells with opposite directional tunings. Here, we report direction opponency in Drosophila that emerges in first-order direction-selective neurons, the elementary motion detectors T4 and T5. This opponency persists when synaptic output from these cells is blocked, suggesting that it arises from feedforward, not feedback, computations. These observations exclude a broad class of linear-nonlinear models that have been proposed to describe direction-selective computations. However, they are consistent with models that include dynamic nonlinearities. Simulations of opponent models suggest that direction opponency in first-order motion detectors improves motion discriminability by suppressing noise generated by the local structure of natural scenes.
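The classical second-stage subtraction scheme that the paper contrasts with its findings, in which two mirror-symmetric Hassenstein-Reichardt half-detectors are subtracted to produce opponency, can be sketched as follows (filter constants and stimuli are illustrative assumptions):

```python
import numpy as np

def lowpass(x, tau=10.0):
    """First-order low-pass acting as the correlator's delay filter."""
    y = np.zeros_like(x)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + (x[t - 1] - y[t - 1]) / tau
    return y

def opponent_hrc(left, right):
    """Classic Hassenstein-Reichardt correlator: each half-detector
    multiplies a delayed input with its undelayed neighbor, and an
    opponent stage subtracts the mirror-symmetric half. This is the
    textbook subtraction scheme, distinct from opponency arising
    within first-order detectors themselves."""
    rightward = lowpass(left) * right   # tuned to left -> right motion
    leftward = lowpass(right) * left    # tuned to right -> left motion
    return rightward - leftward

t = np.arange(1000)
r_right = opponent_hrc(np.sin(0.1 * t), np.sin(0.1 * t - 0.5)).mean()
r_left = opponent_hrc(np.sin(0.1 * t - 0.5), np.sin(0.1 * t)).mean()
# mean output is positive for rightward and negative for leftward drift
```

The signed output is excited by one direction and suppressed below baseline by the other, the defining property of direction opponency.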
36. Moscatelli A, Scotto CR, Ernst MO. Illusory changes in the perceived speed of motion derived from proprioception and touch. J Neurophysiol 2019; 122:1555-1565. [PMID: 31314634] [DOI: 10.1152/jn.00719.2018]
Abstract
In vision, the perceived velocity of a moving stimulus differs depending on whether we pursue it with the eyes or not: A stimulus moving across the retina with the eyes stationary is perceived as being faster compared with a stimulus of the same physical speed that the observer pursues with the eyes, while its retinal motion is zero. This effect is known as the Aubert-Fleischl phenomenon. Here, we describe an analog phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion only (i.e., motion across the skin), while keeping the hand stationary in the world, or from kinesthesia only by tracking the stimulus with a guided arm movement, such that the tactile motion on the finger was zero (i.e., only finger motion but no movement across the skin). Participants overestimated the velocity of the stimulus determined from tactile motion compared with kinesthesia, in analogy with the visual Aubert-Fleischl phenomenon. In two follow-up experiments, we manipulated the stimulus noise by changing the texture of the touched surface. As in the visual phenomenon, this significantly affected the strength of the illusion. This study supports the hypothesis of shared computations for motion processing between vision and touch. NEW & NOTEWORTHY In vision, the perceived velocity of a moving stimulus is different depending on whether we pursue it with the eyes or not, an effect known as the Aubert-Fleischl phenomenon. We describe an analog phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion or by pursuing it with the hand. Participants overestimated the stimulus velocity measured from tactile motion compared with kinesthesia, in analogy with the visual Aubert-Fleischl phenomenon.
Affiliation(s)
- Alessandro Moscatelli: Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy; Cognitive Interaction Technology-Cluster of Excellence, Bielefeld University, Bielefeld, Germany
- Cecile R Scotto: Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers-Université de Tours-Centre National de la Recherche Scientifique, Poitiers, France; Cognitive Interaction Technology-Cluster of Excellence, Bielefeld University, Bielefeld, Germany
- Marc O Ernst: Applied Cognitive Psychology, Ulm University, Ulm, Germany; Cognitive Interaction Technology-Cluster of Excellence, Bielefeld University, Bielefeld, Germany
37. Barnatan Y, Tomsic D, Sztarker J. Unidirectional Optomotor Responses and Eye Dominance in Two Species of Crabs. Front Physiol 2019; 10:586. [PMID: 31156462] [PMCID: PMC6532708] [DOI: 10.3389/fphys.2019.00586]
Abstract
Animals, from invertebrates to humans, stabilize the panoramic optic flow through compensatory movements of the eyes, the head, or the whole body, a behavior known as the optomotor response (OR). The same optic flow moved clockwise or anticlockwise elicits equivalent compensatory right or left turning movements, respectively. However, if stimulated monocularly, many animals show a unique effective direction of motion, i.e., a unidirectional OR. This phenomenon has been reported in various species of mammals, birds, reptiles, and amphibians, but among invertebrates it has only been tested in flies, where the directional sensitivity is opposite to that found in vertebrates. Although the OR has been extensively investigated in crabs, directional sensitivity has never been analyzed. Here, we present results of behavioral experiments aimed at exploring the directional sensitivity of the OR in two crab species belonging to different families: the varunid mud crab Neohelice granulata and the ocypodid fiddler crab Uca uruguayensis. By using different conditions of visual perception (binocular, left or right monocular) and directions of flow field motion (clockwise, anticlockwise), we found in both species that in monocular conditions the OR is effectively displayed only with progressive (front-to-back) motion stimulation. Binocularly elicited responses were direction insensitive and significantly weaker than monocular responses. These results coincide with those described in flies and suggest a commonality in the circuit underlying this behavior among arthropods. Additionally, we found a remarkable eye dominance for the OR, associated with the size of the larger claw. This is more evident in the fiddler crab, in which the size difference between the two claws is extreme.
Affiliation(s)
- Yair Barnatan: Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET, Universidad de Buenos Aires, Buenos Aires, Argentina
- Daniel Tomsic: Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET, Universidad de Buenos Aires, Buenos Aires, Argentina; Departamento de Fisiología, Biología Molecular y Celular Dr. Héctor Maldonado, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
- Julieta Sztarker: Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE) CONICET, Universidad de Buenos Aires, Buenos Aires, Argentina; Departamento de Fisiología, Biología Molecular y Celular Dr. Héctor Maldonado, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
38. Taylor GJ, Tichit P, Schmidt MD, Bodey AJ, Rau C, Baird E. Bumblebee visual allometry results in locally improved resolution and globally improved sensitivity. eLife 2019; 8:e40613. [PMID: 30803484] [PMCID: PMC6391067] [DOI: 10.7554/elife.40613]
Abstract
The quality of visual information that is available to an animal is limited by the size of its eyes. Differences in eye size can be observed even between closely related individuals, yet we understand little about how this affects vision. Insects are good models for exploring the effects of size on visual systems because many insect species exhibit size polymorphism. Previous work has been limited by difficulties in determining the 3D structure of eyes. We have developed a novel method based on x-ray microtomography to measure the 3D structure of insect eyes and to calculate predictions of their visual capabilities. We used our method to investigate visual allometry in the bumblebee Bombus terrestris and found that size affects specific aspects of vision, including binocular overlap, optical sensitivity, and dorsofrontal visual resolution. This reveals that differential scaling between eye areas provides flexibility that improves the visual capabilities of larger bumblebees.
Bees fly through complex environments in search of nectar from flowers. They are aided in this quest by excellent eyesight. Scientists have extensively studied the eyesight of honeybees to learn more about how such tiny eyes work and how they process and learn visual information. Less is known about the honeybee’s larger cousins, the bumblebees, which are also important pollinators. Bumblebees come in different sizes and one question scientists have is how eye size affects vision. Bigger bumblebees are known to have bigger eyes, and bigger eyes are usually better. But which aspects of vision are improved in larger eyes is not clear. For example, does the size of a bee’s eyes affect how large their field of view is, or how sensitive they are to light? Or does it impact their visual acuity, a measurement of the smallest objects the eye can see?
Scaling up an eye would likely improve all these aspects of sight slightly, but changes in a small area of the eye might more drastically improve some parts of vision. Now, Taylor et al. show that larger bumblebees with bigger eyes have better vision than their smaller counterparts. In the experiments, a technique called microtomography was used to measure the 3D structure of bumblebee eyes. The measurements were then applied to build 3D models of the bumblebee eyes, and computational geometry was used to calculate the sensitivity, acuity, and viewing direction across the entire surface of each model eye. Taylor et al. found that larger bees had improved ability to see small objects in front or slightly above them. They had a bigger area of overlap between the sight in both eyes when they looked forward and up. They were also more sensitive to light across the eye. The experiments show that improvements in eyesight with larger size are very specific and likely help larger bees to adapt to their environment. Behavioral studies could help scientists better understand how these changes help bigger bees and how the traits evolved. These findings might also help engineers trying to design miniature cameras to help small, flying autonomous vehicles navigate. Bees fly through complex environments and face challenges similar to those small flying vehicles would face. Emulating the design of bee eyes and how they change with size might lead to the development of better cameras for these vehicles.
Affiliation(s)
- Pierre Tichit: Department of Biology, Lund University, Lund, Sweden
- Marie D Schmidt: Department of Biology, Lund University, Lund, Sweden; Westphalian University of Applied Sciences, Bocholt, Germany
- Emily Baird: Department of Biology, Lund University, Lund, Sweden; Department of Zoology, Stockholm University, Stockholm, Sweden
39. Fu Q, Wang H, Hu C, Yue S. Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review. Artif Life 2019; 25:263-311. [PMID: 31397604] [DOI: 10.1162/artl_a_00297]
Abstract
Motion perception is a critical capability underpinning many aspects of insects' lives, including avoiding predators and foraging. Numerous motion detectors have been identified in insect visual pathways. Computational modeling of these motion detectors has not only provided effective solutions for artificial intelligence but has also advanced the understanding of complex biological visual systems. Refined over millions of years of evolution, these biological mechanisms offer robust building blocks for constructing dynamic vision systems for future intelligent machines. This article reviews computational motion perception models originating from biological research on insect visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modeling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realization of these bio-inspired motion perception models.
Affiliation(s)
- Qinbing Fu: Guangzhou University, School of Mechanical and Electrical Engineering, Machine Life and Intelligence Research Centre; University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
- Hongxin Wang: University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
- Cheng Hu: Guangzhou University, School of Mechanical and Electrical Engineering, Machine Life and Intelligence Research Centre; University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
- Shigang Yue: Guangzhou University, School of Mechanical and Electrical Engineering, Machine Life and Intelligence Research Centre; University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
40. Creamer MS, Mano O, Clark DA. Visual Control of Walking Speed in Drosophila. Neuron 2018; 100:1460-1473.e6. [PMID: 30415994] [PMCID: PMC6405217] [DOI: 10.1016/j.neuron.2018.10.028]
Abstract
An animal's self-motion generates optic flow across its retina, and it can use this visual signal to regulate its orientation and speed through the world. While orientation control has been studied extensively in Drosophila and other insects, much less is known about the visual cues and circuits that regulate translational speed. Here, we show that flies regulate walking speed with an algorithm that is tuned to the speed of visual motion, causing them to slow when visual objects are nearby. This regulation does not depend strongly on the spatial structure or the direction of visual stimuli, making it algorithmically distinct from the classic computation that controls orientation. Despite the different algorithms, the visual circuits that regulate walking speed overlap with those that regulate orientation. Taken together, our findings suggest that walking speed is controlled by a hierarchical computation that combines multiple motion detectors with distinct tunings. VIDEO ABSTRACT.
Affiliation(s)
- Matthew S Creamer: Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Omer Mano: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Damon A Clark: Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA
41. Salazar-Gatzimas E, Agrochao M, Fitzgerald JE, Clark DA. The Neuronal Basis of an Illusory Motion Percept Is Explained by Decorrelation of Parallel Motion Pathways. Curr Biol 2018; 28:3748-3762.e8. [PMID: 30471993] [DOI: 10.1016/j.cub.2018.10.007]
Abstract
Both vertebrates and invertebrates perceive illusory motion, known as "reverse-phi," in visual stimuli that contain sequential luminance increments and decrements. However, increment (ON) and decrement (OFF) signals are initially processed by separate visual neurons, and parallel elementary motion detectors downstream respond selectively to the motion of light or dark edges, often termed ON- and OFF-edges. It remains unknown how and where ON and OFF signals combine to generate reverse-phi motion signals. Here, we show that each of Drosophila's elementary motion detectors encodes motion by combining both ON and OFF signals. Their pattern of responses reflects combinations of increments and decrements that co-occur in natural motion, serving to decorrelate their outputs. These results suggest that the general principle of signal decorrelation drives the functional specialization of parallel motion detection channels, including their selectivity for moving light or dark edges.
Affiliation(s)
- Emilio Salazar-Gatzimas: Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06511, USA
- Margarida Agrochao: Department of Molecular Cellular and Developmental Biology, Yale University, 219 Prospect Street, New Haven, CT 06511, USA
- James E Fitzgerald: Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Damon A Clark: Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06511, USA; Department of Molecular Cellular and Developmental Biology, Yale University, 219 Prospect Street, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA
42. Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation. Neural Netw 2018; 106:127-143. [PMID: 30059829] [DOI: 10.1016/j.neunet.2018.04.001]
Abstract
Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e., the lobula giant movement detector (LGMD1), in order to provide effective solutions for enhancing selectivity for looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways separate dark and light looming features for parallel spatiotemporal computations. This is effective for perceiving a potential collision with approaching dark or light objects; such a bio-plausible structure also distinguishes the LGMD1's collision selectivity from that of its neighbouring looming detector, the LGMD2. The SFA mechanism enhances the LGMD1's collision selectivity for approaching objects over receding and translating stimuli, a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, demonstrating its potential for building fast and reliable neuromorphic sensors for collision detection.
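The two mechanisms named here, an ON/OFF split by half-wave rectification and spike frequency adaptation that favors growing over sustained drive, can be sketched as follows (a toy illustration; the parameter values and the divisive form of the adaptation are assumptions, not the paper's equations):

```python
import numpy as np

def on_off_split(luminance_change):
    """First stage: half-wave rectify luminance changes into
    parallel ON (brightening) and OFF (darkening) channels."""
    on = np.maximum(luminance_change, 0.0)
    off = np.maximum(-luminance_change, 0.0)
    return on, off

def spike_frequency_adaptation(drive, g_inc=0.3, tau=20.0):
    """Toy SFA: output builds an adaptation variable g that
    divisively reduces later output, so sustained (translating)
    drive adapts away while growing (looming-like) drive keeps
    the late response high. Parameter values are assumptions."""
    g, out = 0.0, []
    for d in drive:
        r = d / (1.0 + g)
        g += g_inc * r - g / tau   # grows with activity, decays with tau
        out.append(r)
    return np.array(out)

on, off = on_off_split(np.array([0.4, -0.7, 0.2]))            # channel split
r_translate = spike_frequency_adaptation(np.ones(200))         # sustained drive
r_loom = spike_frequency_adaptation(np.linspace(0, 2, 200))    # growing drive
# the sustained response adapts away; the looming-like response does not
```

The late response to the growing drive exceeds the late response to the sustained drive, which is the basis of the improved selectivity for approach over translation described in the abstract.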
43. Buatois A, Flumian C, Schultheiss P, Avarguès-Weber A, Giurfa M. Transfer of Visual Learning Between a Virtual and a Real Environment in Honey Bees: The Role of Active Vision. Front Behav Neurosci 2018; 12:139. [PMID: 30057530] [PMCID: PMC6053632] [DOI: 10.3389/fnbeh.2018.00139]
Abstract
To study visual learning in honey bees, we developed a virtual reality (VR) system in which the movements of a tethered bee walking stationary on a spherical treadmill update the visual panorama presented in front of it (closed-loop conditions), thus creating an experience of immersion within a virtual environment. In parallel, we developed a small Y-maze with interchangeable end-boxes, which allowed a freely walking bee to be returned repeatedly to the starting point of the maze for repeated decision recording. Using conditioning and transfer experiments between the VR setup and the Y-maze, we studied the extent to which movement freedom and active vision are crucial for learning a simple color discrimination. Approximately 57% of the bees learned the visual discrimination in both conditions. Transfer from VR to the maze significantly improved the bees’ performance: 75% of bees that had chosen the CS+ continued doing so, and 100% of bees that had chosen the CS− reversed their choice in favor of the CS+. In contrast, no improvement was seen for these two groups of bees during the reciprocal transfer from the Y-maze to VR. In this case, bees exhibited inconsistent choices in the VR setup. The asymmetric transfer between contexts indicates that the information learned in each environment may be different despite the similar learning success. Moreover, it shows that reducing the possibility of active vision and movement freedom in the passage from the maze to the VR impairs the expression of visual learning, while increasing them in the reciprocal transfer improves it. Our results underline the active nature of visual processing in bees and allow discussing the developments required for immersive VR experiences in insects.
Affiliation(s)
- Alexis Buatois, Clara Flumian, Patrick Schultheiss, Aurore Avarguès-Weber, Martin Giurfa
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
|
44
|
Stöckl AL, O'Carroll D, Warrant EJ. Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths. Proc Biol Sci 2017. [PMID: 28637860 DOI: 10.1098/rspb.2017.0880] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors; and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta, and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies.
Affiliation(s)
- A L Stöckl, D O'Carroll, E J Warrant
- Department of Biology, University of Lund, Sölvegatan 35, 22362 Lund, Sweden
|
45
|
Gruntman E, Romani S, Reiser MB. Simple integration of fast excitation and offset, delayed inhibition computes directional selectivity in Drosophila. Nat Neurosci 2018; 21:250-257. [PMID: 29311742 PMCID: PMC5967973 DOI: 10.1038/s41593-017-0046-4] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 11/06/2017] [Indexed: 02/07/2023]
Abstract
A neuron that extracts directionally selective motion information from upstream signals lacking this selectivity must compare visual responses from spatially offset inputs. Distinguishing among prevailing algorithmic models for this computation requires measuring fast neuronal activity and inhibition. In the Drosophila melanogaster visual system, a fourth-order neuron-T4-is the first cell type in the ON pathway to exhibit directionally selective signals. Here we use in vivo whole-cell recordings of T4 to show that directional selectivity originates from simple integration of spatially offset fast excitatory and slow inhibitory inputs, resulting in a suppression of responses to the nonpreferred motion direction. We constructed a passive, conductance-based model of a T4 cell that accurately predicts the neuron's response to moving stimuli. These results connect the known circuit anatomy of the motion pathway to the algorithmic mechanism by which the direction of motion is computed.
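The passive, conductance-based mechanism described in this abstract can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters, not the paper's fitted T4 model: all constants, the conductance time courses, and the function name `simulate` are illustrative, and the spatial offset of the inputs is reduced here to a pure timing difference between a fast excitatory and a delayed, slower inhibitory conductance.

```python
# Toy passive point-neuron model: membrane potential driven by leak,
# excitatory, and inhibitory conductances (all parameters assumed).
import numpy as np

def simulate(g_exc, g_inh, dt=0.1, C=1.0, g_leak=0.1,
             E_leak=-60.0, E_exc=0.0, E_inh=-70.0):
    """Forward-Euler integration of
    C dV/dt = g_leak(E_leak - V) + g_exc(t)(E_exc - V) + g_inh(t)(E_inh - V).
    """
    V = np.full(len(g_exc), E_leak)
    for i in range(1, len(g_exc)):
        I = (g_leak * (E_leak - V[i - 1])
             + g_exc[i - 1] * (E_exc - V[i - 1])
             + g_inh[i - 1] * (E_inh - V[i - 1]))
        V[i] = V[i - 1] + dt * I / C
    return V

# Preferred-direction-like timing: excitation arrives before inhibition,
# so the cell depolarizes before the shunting inhibition takes hold.
t = np.arange(0, 100, 0.1)
g_exc = 0.3 * np.exp(-((t - 30) / 5) ** 2)    # fast excitatory pulse
g_inh = 0.3 * np.exp(-((t - 50) / 10) ** 2)   # delayed, slower inhibition
V = simulate(g_exc, g_inh)
print(V.max() > -60.0)  # depolarized above rest before inhibition suppresses it
```

Swapping the two pulse times mimics the nonpreferred direction, where inhibition precedes excitation and suppresses the depolarization, which is the core of the suppression mechanism the abstract describes.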
Affiliation(s)
- Eyal Gruntman, Sandro Romani, Michael B Reiser
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
|
46
|
Forbes PA, Chen A, Blouin JS. Sensorimotor control of standing balance. HANDBOOK OF CLINICAL NEUROLOGY 2018; 159:61-83. [DOI: 10.1016/b978-0-444-63916-5.00004-5] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/08/2023]
|
47
|
Clark DA, Demb JB. Parallel Computations in Insect and Mammalian Visual Motion Processing. Curr Biol 2016; 26:R1062-R1072. [PMID: 27780048 DOI: 10.1016/j.cub.2016.08.003] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world.
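The correlation-based motion-estimation algorithm this abstract refers to (a Hassenstein-Reichardt-type correlator) can be sketched in a few lines. This is the generic textbook construction under assumed parameters, not code from the cited review; the crude shift-based delay filter, the signal shapes, and the function name are illustrative.

```python
# Minimal Hassenstein-Reichardt correlator sketch (illustrative parameters).
import numpy as np

def reichardt_correlator(left, right, tau=5):
    """Directionally selective signal from two spatially offset inputs.

    left, right : 1-D arrays of light-intensity samples at two points.
    tau         : delay (in samples) applied to one arm of each subunit.
    """
    def delay(x, tau):
        # crude delay filter: shift the signal by tau samples, zero-padded
        d = np.zeros_like(x)
        d[tau:] = x[:-tau]
        return d

    # Each subunit multiplies a delayed copy of one input with the undelayed
    # other input; subtracting the mirror-image subunit yields a signal whose
    # sign reports the direction of motion.
    return delay(left, tau) * right - left * delay(right, tau)

# A grating moving from the left detector toward the right one produces a
# sustained positive output (the same signal reaches `right` 5 samples later).
t = np.arange(200)
left = np.sin(2 * np.pi * t / 40)
right = np.sin(2 * np.pi * (t - 5) / 40)
out = reichardt_correlator(left, right, tau=5)
print(out[50:].mean() > 0)
```

Reversing the roles of `left` and `right` flips the sign of the time-averaged output, which is the direction selectivity that the weighted pairwise correlations in the abstract formalize.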
Affiliation(s)
- Damon A Clark
- Department of Molecular, Cellular, and Developmental Biology and Department of Physics, Yale University, New Haven, CT 06511, USA
- Jonathan B Demb
- Department of Ophthalmology and Visual Science and Department of Cellular and Molecular Physiology, Yale University, New Haven, CT 06511, USA
|
48
|
Salazar-Gatzimas E, Chen J, Creamer MS, Mano O, Mandel HB, Matulis CA, Pottackal J, Clark DA. Direct Measurement of Correlation Responses in Drosophila Elementary Motion Detectors Reveals Fast Timescale Tuning. Neuron 2016; 92:227-239. [PMID: 27710784 DOI: 10.1016/j.neuron.2016.09.017] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2016] [Revised: 05/22/2016] [Accepted: 08/29/2016] [Indexed: 10/20/2022]
Abstract
Animals estimate visual motion by integrating light intensity information over time and space. The integration requires nonlinear processing, which makes motion estimation circuitry sensitive to specific spatiotemporal correlations that signify visual motion. Classical models of motion estimation weight these correlations to produce direction-selective signals. However, the correlational algorithms they describe have not been directly measured in elementary motion-detecting neurons (EMDs). Here, we employed stimuli to directly measure responses to pairwise correlations in Drosophila's EMD neurons, T4 and T5. Activity in these neurons was required for behavioral responses to pairwise correlations and was predictive of those responses. The pattern of neural responses in the EMDs was inconsistent with one classical model of motion detection, and the timescale and selectivity of correlation responses constrained the temporal filtering properties in potential models. These results reveal how neural responses to pairwise correlations drive visual behavior in this canonical motion-detecting circuit.
Affiliation(s)
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Holly B Mandel
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Joseph Pottackal
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA
|
49
|
Bengochea M, Berón de Astrada M, Tomsic D, Sztarker J. A crustacean lobula plate: Morphology, connections, and retinotopic organization. J Comp Neurol 2017; 526:109-119. [PMID: 28884472 DOI: 10.1002/cne.24322] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2017] [Revised: 08/25/2017] [Accepted: 08/28/2017] [Indexed: 02/03/2023]
Abstract
The lobula plate is part of the lobula complex, the third optic neuropil, in the optic lobes of insects. It has been extensively studied in dipterous insects, where its role in processing flow-field motion information used for controlling optomotor responses was discovered early. Recently, a lobula plate was also found in malacostracan crustaceans. Here, we provide the first detailed description of the neuroarchitecture, the input and output connections, and the retinotopic organization of the lobula plate in a crustacean, the crab Neohelice granulata, using a variety of histological methods that include reduced silver staining and mass staining with dextran-conjugated dyes. The lobula plate of this crab is a small elongated neuropil. It receives separated retinotopic inputs from columnar neurons of the medulla and the lobula. In the anteroposterior plane, the neuropil possesses four layers defined by the arborizations of such columnar inputs. Medulla projecting neurons arborize mainly in two of these layers, one on each side, while input neurons arriving from the lobula branch only in one. The neuropil contains at least two classes of tangential elements, one connecting with the lateral protocerebrum and the other exiting the optic lobes toward the supraesophageal ganglion. The number of layers in the crab's lobula plate, the retinotopic connections received from the medulla and from the lobula, and the presence of large tangential neurons exiting the neuropil reflect the general structure of the insect lobula plate and hence support the notion of an evolutionarily conserved function for this neuropil.
Affiliation(s)
- Mercedes Bengochea, Martín Berón de Astrada, Daniel Tomsic, Julieta Sztarker
- Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Fisiología, Biología Molecular y Celular. CONICET-Universidad de Buenos Aires, Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE), Buenos Aires, Argentina
|
50
|
Strother JA, Wu ST, Wong AM, Nern A, Rogers EM, Le JQ, Rubin GM, Reiser MB. The Emergence of Directional Selectivity in the Visual Motion Pathway of Drosophila. Neuron 2017; 94:168-182.e10. [PMID: 28384470 DOI: 10.1016/j.neuron.2017.03.010] [Citation(s) in RCA: 104] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2016] [Revised: 12/22/2016] [Accepted: 03/08/2017] [Indexed: 01/19/2023]
Abstract
The perception of visual motion is critical for animal navigation, and flies are a prominent model system for exploring this neural computation. In Drosophila, the T4 cells of the medulla are directionally selective and necessary for ON motion behavioral responses. To examine the emergence of directional selectivity, we developed genetic driver lines for the neuron types with the most synapses onto T4 cells. Using calcium imaging, we found that these neuron types are not directionally selective and that selectivity arises in the T4 dendrites. By silencing each input neuron type, we identified which neurons are necessary for T4 directional selectivity and ON motion behavioral responses. We then determined the sign of the connections between these neurons and T4 cells using neuronal photoactivation. Our results indicate a computational architecture for motion detection that is a hybrid of classic theoretical models.
Affiliation(s)
- James A Strother, Shiuan-Tze Wu, Allan M Wong, Aljoscha Nern, Edward M Rogers, Jasmine Q Le, Gerald M Rubin, Michael B Reiser
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
|