1
Schoepe T, Janotte E, Milde MB, Bertrand OJN, Egelhaaf M, Chicca E. Finding the gap: neuromorphic motion-vision in dense environments. Nat Commun 2024; 15:817. [PMID: 38280859 PMCID: PMC10821932 DOI: 10.1038/s41467-024-45063-y]
Abstract
Animals have evolved mechanisms to travel safely and efficiently within different habitats. On a journey in dense terrains animals avoid collisions and cross narrow passages while controlling an overall course. Multiple hypotheses target how animals solve challenges faced during such travel. Here we show that a single mechanism enables safe and efficient travel. We developed a robot inspired by insects. It has remarkable capabilities to travel in dense terrain, avoiding collisions, crossing gaps and selecting safe passages. These capabilities are accomplished by a neuromorphic network steering the robot toward regions of low apparent motion. Our system leverages knowledge about vision processing and obstacle avoidance in insects. Our results demonstrate how insects might safely travel through diverse habitats. We anticipate our system to be a working hypothesis to study insects' travels in dense terrains. Furthermore, it illustrates that we can design novel hardware systems by understanding the underlying mechanisms driving behaviour.
Affiliation(s)
- Thorben Schoepe
- Peter Grünberg Institut 15, Forschungszentrum Jülich, Aachen, Germany.
- Faculty of Technology and Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany.
- Bio-Inspired Circuits and Systems (BICS) Lab, Zernike Institute for Advanced Materials, University of Groningen, Groningen, Netherlands.
- CogniGron (Groningen Cognitive Systems and Materials Center), University of Groningen, Groningen, Netherlands.
- Ella Janotte
- Event Driven Perception for Robotics, Italian Institute of Technology, iCub facility, Genoa, Italy
- Moritz B Milde
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, Australia
- Martin Egelhaaf
- Neurobiology, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Elisabetta Chicca
- Faculty of Technology and Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany
- Bio-Inspired Circuits and Systems (BICS) Lab, Zernike Institute for Advanced Materials, University of Groningen, Groningen, Netherlands
- CogniGron (Groningen Cognitive Systems and Materials Center), University of Groningen, Groningen, Netherlands
2
Wu Z, Guo A. Bioinspired figure-ground discrimination via visual motion smoothing. PLoS Comput Biol 2023; 19:e1011077. [PMID: 37083880 PMCID: PMC10155969 DOI: 10.1371/journal.pcbi.1011077]
Abstract
Flies detect and track moving targets among visual clutter, and this process mainly relies on visual motion. Visual motion is analyzed or computed with the pathway from the retina to T4/T5 cells. The computation of local directional motion was formulated as an elementary movement detector (EMD) model more than half a century ago. Solving target detection or figure-ground discrimination problems can be equivalent to extracting boundaries between a target and the background based on the motion discontinuities in the output of a retinotopic array of EMDs. Individual EMDs cannot measure true velocities, however, due to their sensitivity to pattern properties such as luminance contrast and spatial frequency content. It remains unclear how local directional motion signals are further integrated to enable figure-ground discrimination. Here, we present a computational model inspired by fly motion vision. Simulations suggest that the heavily fluctuating output of an EMD array is naturally surmounted by a lobula network, which is hypothesized to be downstream of the local motion detectors and have parallel pathways with distinct directional selectivity. The lobula network carries out a spatiotemporal smoothing operation for visual motion, especially across time, enabling the segmentation of moving figures from the background. The model qualitatively reproduces experimental observations in the visually evoked response characteristics of one type of lobula columnar (LC) cell. The model is further shown to be robust to natural scene variability. Our results suggest that the lobula is involved in local motion-based target detection.
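The elementary movement detector (EMD) this abstract builds on is conventionally formalized as a Hassenstein-Reichardt correlator. As a minimal illustrative sketch (ours, not the authors' model; the filter constant and function names are our own choices):

```python
def lowpass(signal, alpha=0.5):
    # First-order low-pass filter: serves as the EMD's delay line.
    out, y = [], 0.0
    for s in signal:
        y += alpha * (s - y)
        out.append(y)
    return out

def emd_response(left, right, alpha=0.5):
    # Hassenstein-Reichardt correlator: the delayed left input is
    # multiplied with the undelayed right input (preferred direction),
    # and the mirror-symmetric arm is subtracted (null direction).
    dl, dr = lowpass(left, alpha), lowpass(right, alpha)
    return [a * r - b * l for a, r, b, l in zip(dl, right, dr, left)]

# A brightness edge moving left-to-right (left photoreceptor leads)
# yields a net positive response; the reversed motion, a negative one.
rightward = sum(emd_response([1, 1, 1, 1], [0, 1, 1, 1]))
leftward = sum(emd_response([0, 1, 1, 1], [1, 1, 1, 1]))
```

The multiplicative output also shows why, as the abstract notes, individual EMDs cannot measure true velocities: the response scales with signal amplitude, and hence with luminance contrast and spatial frequency content.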
Affiliation(s)
- Zhihua Wu
- School of Life Sciences, Shanghai University, Shanghai, China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Aike Guo
- School of Life Sciences, Shanghai University, Shanghai, China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- International Academic Center of Complex Systems, Advanced Institute of Natural Sciences, Beijing Normal University at Zhuhai, Zhuhai, Guangdong, China
- University of Chinese Academy of Sciences, Beijing, China
3
Egelhaaf M. Optic flow based spatial vision in insects. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2023. [PMID: 36609568 DOI: 10.1007/s00359-022-01610-w]
Abstract
The optic flow, i.e., the displacement of retinal images of objects in the environment induced by self-motion, is an important source of spatial information, especially for fast-flying insects. Spatial information over a wide range of distances, from the animal's immediate surroundings over several hundred metres to kilometres, is necessary for mediating behaviours, such as landing manoeuvres, collision avoidance in spatially complex environments, learning environmental object constellations and path integration in spatial navigation. To facilitate the processing of spatial information, the complexity of the optic flow is often reduced by active vision strategies. These result in translations and rotations being largely separated by a saccadic flight and gaze mode. Only the translational components of the optic flow contain spatial information. In the first step of optic flow processing, an array of local motion detectors provides a retinotopic spatial proximity map of the environment. This local motion information is then processed in parallel neural pathways in a task-specific manner and used to control the different components of spatial behaviour. A particular challenge here is that the distance information extracted from the optic flow does not represent the distances unambiguously, but these are scaled by the animal's speed of locomotion. Possible ways of coping with this ambiguity are discussed.
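The speed-scaling ambiguity discussed at the end of the abstract follows directly from the geometry of translational optic flow; a sketch under standard assumptions (the function and parameter names are ours):

```python
from math import sin, radians

def translational_flow(speed, distance, azimuth_deg):
    # Angular velocity (rad/s) of a point at `distance` (m), viewed at
    # `azimuth_deg` from the direction of travel, for self-motion at
    # `speed` (m/s): flow = v * sin(theta) / d. Only the ratio v/d is
    # recoverable, so distances are scaled by the animal's speed.
    return speed * sin(radians(azimuth_deg)) / distance

# Doubling speed and distance together leaves the flow unchanged: the
# retinotopic "proximity map" encodes relative nearness, not metric depth.
```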
Affiliation(s)
- Martin Egelhaaf
- Neurobiology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Universitätsstraße 25, 33615, Bielefeld, Germany.
4
Melville-Smith A, Finn A, Uzair M, Brinkworth RSA. Exploration of motion inhibition for the suppression of false positives in biologically inspired small target detection algorithms from a moving platform. Biol Cybern 2022; 116:661-685. [PMID: 36305942 PMCID: PMC9691501 DOI: 10.1007/s00422-022-00950-9]
Abstract
Detecting small moving targets against a cluttered background in visual data is a challenging task. The main problems include spatio-temporal target contrast enhancement, background suppression and accurate target segmentation. When targets are at great distances from a non-stationary camera, the difficulty of these challenges increases. In such cases the moving camera can introduce large spatial changes between frames, which may cause issues in temporal algorithms; furthermore, targets can approach a single pixel, thereby affecting spatial methods. Previous literature has shown that biologically inspired methods, based on the vision systems of insects, are robust to such conditions. It has also been shown that the use of divisive optic-flow inhibition with these methods enhances the detectability of small targets. However, the location within the visual pathway at which the inhibition should be applied remained ambiguous. In this paper, we investigated the tunings of some of the optic-flow filters and the use of a nonlinear transform on the optic-flow signal to modify motion responses for the purpose of suppressing false positives and enhancing small target detection. Additionally, we looked at multiple locations within the biologically inspired vision (BIV) algorithm where inhibition could further enhance detection performance, and at driving the nonlinear transform with a global motion estimate. To get a better understanding of how the BIV algorithm performs, we compared it to other state-of-the-art target detection algorithms and looked at how their performance can be enhanced with the optic-flow inhibition. Our explicit use of the nonlinear inhibition allows for the incorporation of a wider dynamic range of inhibiting signals, along with spatio-temporal filter refinement, which further increases target-background discrimination in the presence of camera motion.
Extensive experiments show that our proposed approach achieves an improvement of 25% over linearly conditioned inhibition schemes and 2.33 times the detection performance of the BIV model without inhibition. Moreover, our approach achieves between 10 and 104 times better detection performance compared to any conventional state-of-the-art moving object detection algorithm applied to the same highly cluttered and moving scenes. Applying the nonlinear inhibition to other algorithms showed that their performance can be increased by up to 22 times. These findings show that optic-flow-based signal suppression should be applied to enhance target detection from moving platforms. Furthermore, they indicate where best to look for evidence of such signals within the insect brain.
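The divisive optic-flow inhibition with a nonlinear transform described above can be sketched in a few lines (a toy illustration; the power-law form and the `gain`/`expo` values are our assumptions, not the paper's fitted parameters):

```python
def inhibited_response(target_signal, flow_magnitude, gain=2.0, expo=2.0):
    # Divisive inhibition: a nonlinearly transformed wide-field motion
    # estimate suppresses the small-target signal, so self-motion-induced
    # background clutter (strong local flow) is attenuated while genuine
    # small targets (little local flow) pass through largely unchanged.
    return target_signal / (1.0 + gain * flow_magnitude ** expo)
```

Raising `expo` above 1 widens the dynamic range of inhibiting signals relative to a linearly conditioned scheme, which is the motivation the abstract gives for the nonlinear transform.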
Affiliation(s)
- Aaron Melville-Smith
- Defence and Systems Institute, UniSA STEM, University of South Australia, Adelaide, SA 5095, Australia
- Anthony Finn
- Defence and Systems Institute, UniSA STEM, University of South Australia, Adelaide, SA 5095, Australia
- Muhammad Uzair
- Defence and Systems Institute, UniSA STEM, University of South Australia, Adelaide, SA 5095, Australia
5
Li J, Niemeier M, Kern R, Egelhaaf M. Disentangling of Local and Wide-Field Motion Adaptation. Front Neural Circuits 2021; 15:713285. [PMID: 34531728 PMCID: PMC8438216 DOI: 10.3389/fncir.2021.713285]
Abstract
In flying insects, motion adaptation has been attributed a pivotal functional role in spatial vision based on optic flow. Ongoing motion enhances the representation in the visual pathway of spatial discontinuities, which manifest themselves as velocity discontinuities in the retinal optic flow pattern during translational locomotion. There is evidence for different spatial scales of motion adaptation at the different visual processing stages. Motion adaptation is supposed to take place, on the one hand, on a retinotopic basis at the level of local motion-detecting neurons and, on the other hand, at the level of wide-field neurons pooling the output of many of these local motion detectors. So far, local and wide-field adaptation could not be analyzed separately, since conventional motion stimuli jointly affect both adaptive processes. Therefore, we designed a novel stimulus paradigm based on two types of motion stimuli that had the same overall strength but differed in that one led to local motion adaptation while the other did not. We recorded intracellularly the activity of a particular wide-field motion-sensitive neuron, the horizontal system equatorial cell (HSE), in blowflies. The experimental data were interpreted based on a computational model of the visual motion pathway, which included the spatially pooling HSE-cell. By comparing the difference between the recorded and modeled HSE-cell responses induced by the two types of motion adaptation, the major characteristics of local and wide-field adaptation could be pinpointed. Wide-field adaptation could be shown to strongly depend on the activation level of the cell and, thus, on the direction of motion. In contrast, the response gain is reduced by local motion adaptation to a similar extent independent of the direction of motion.
This direction-independent adaptation differs fundamentally from the well-known adaptive adjustment of response gain according to the prevailing overall stimulus level that is considered essential for an efficient signal representation by neurons with a limited operating range. Direction-independent adaptation is discussed to result from the joint activity of local motion-sensitive neurons of different preferred directions and to lead to a representation of the local motion direction that is independent of the overall direction of global motion.
Affiliation(s)
- Jinglin Li
- Neurobiology, Bielefeld University, Bielefeld, Germany
- Roland Kern
- Neurobiology, Bielefeld University, Bielefeld, Germany
6
Sensitivity to expression levels underlies differential dominance of a putative null allele of the Drosophila tβh gene in behavioral phenotypes. PLoS Biol 2021; 19:e3001228. [PMID: 33970909 PMCID: PMC8136860 DOI: 10.1371/journal.pbio.3001228]
Abstract
The biogenic amine octopamine (OA) and its precursor tyramine (TA) are involved in controlling a plethora of different physiological and behavioral processes. The tyramine-β-hydroxylase (tβh) gene encodes the enzyme catalyzing the last synthesis step from TA to OA. Here, we report differential dominance (from recessive to overdominant) of the putative null tβhnM18 allele in 2 behavioral measures in Buridan’s paradigm (walking speed and stripe deviation) and in proboscis extension (sugar sensitivity) in the fruit fly Drosophila melanogaster. The behavioral analysis of transgenic tβh expression experiments in mutant and wild-type flies as well as of OA and TA receptor mutants revealed a complex interaction of both aminergic systems. Our analysis suggests that the different neuronal networks responsible for the 3 phenotypes show differential sensitivity to tβh gene expression levels. The evidence suggests that this sensitivity is brought about by a TA/OA opponent system modulating the involved neuronal circuits. This conclusion has important implications for standard transgenic techniques commonly used in functional genetics. Differential dominance occurs when genes associated with several phenotypes (pleiotropic genes) show different modes of inheritance (e.g., recessive, dominant or overdominant) depending on the phenotype. This study reveals that differential sensitivity to gene expression levels can mediate differential dominance, which can be a significant challenge for standard transgenic techniques commonly used to elucidate gene function.
7
Wang H, Fu Q, Wang H, Baxter P, Peng J, Yue S. A bioinspired angular velocity decoding neural network model for visually guided flights. Neural Netw 2021; 136:180-193. [PMID: 33494035 DOI: 10.1016/j.neunet.2020.12.008]
Abstract
Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.
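For a drifting grating, the angular velocity this model decodes is the ratio of temporal to spatial frequency, and a texture (spatial-frequency) estimate is exactly what lets a model report velocity rather than the confounded EMD output. A minimal sketch of the relationship (ours, not the paper's network):

```python
def grating_angular_velocity(temporal_freq_hz, spatial_freq_cpd):
    # Angular velocity (deg/s) = temporal frequency (Hz) divided by
    # spatial frequency (cycles/deg). An EMD alone peaks at a fixed
    # temporal frequency, so without a separate spatial-frequency
    # (texture) estimate its response confounds velocity with the
    # grating's pattern wavelength and contrast.
    return temporal_freq_hz / spatial_freq_cpd
```

This is why the abstract emphasizes independence from the spatial frequency and contrast of the gratings: dividing out the texture estimate removes the pattern-dependence of the raw EMD response.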
Affiliation(s)
- Huatian Wang
- Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK; Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Qinbing Fu
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
- Hongxin Wang
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
- Paul Baxter
- Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
- Jigen Peng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China; Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Shigang Yue
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
8
Doussot C, Bertrand OJN, Egelhaaf M. Visually guided homing of bumblebees in ambiguous situations: A behavioural and modelling study. PLoS Comput Biol 2020; 16:e1008272. [PMID: 33048938 PMCID: PMC7553325 DOI: 10.1371/journal.pcbi.1008272]
Abstract
Returning home is a crucial task accomplished daily by many animals, including humans. Because of their tiny brains, insects, like bees or ants, are good study models for efficient navigation strategies. Bees and ants are known to rely mainly on learned visual information about the nest surroundings to pinpoint their barely visible nest entrance. During the return, when the actual sight of the insect matches the learned information, the insect is easily guided home. Occasionally, modifications to the visual environment may take place while the insect is on a foraging trip. Here, we addressed the ecologically relevant question of how bumblebees' homing is affected by such a situation. In an artificial setting, we habituated bees to be guided to their nest by two constellations of visual cues. After habituation, these cues were displaced during foraging trips into a conflict situation. We recorded bumblebees' return flights in such circumstances and investigated where they searched for their nest entrance depending on the degree of displacement between the two visually relevant cues. Bumblebees mostly searched at the fictive nest location as indicated by either cue constellation, but never at a compromise location between them. We compared these experimental results to the predictions of different types of homing models. We found that models guiding an agent by a single holistic view of the nest surroundings could not account for the bumblebees' search behaviour in cue-conflict situations. Instead, homing models relying on multiple views were sufficient. We could further show that homing models required fewer views and became more robust to height changes if optic flow-based spatial information was encoded and learned, rather than just brightness information. Returning home sounds trivial, but returning to a concealed underground location like a burrow is less easy. For buff-tailed bumblebees, this task is routine.
After collecting pollen in gardens or flowered meadows, bees must return to their underground nest to feed the queen's larvae. The nest entrance is almost invisible to a returning bee; therefore, it guides its flight by information about the surrounding visual environment. Since the seminal work of Tinbergen, many experiments have focused on how visual information guides foraging insects back home. In these experiments, returning foragers were confronted with a coherent displacement of the entire nest surroundings, hence leading the bees to a unique new location. But in nature, the objects constituting the visual environment may be displaced in a disorderly way, as some are more exposed than others to factors such as wind. In our study, we moved objects in a tricky way to create two fictitious nest entrances. The bees searched at the fictitious nest entrances, but never in between. The distance between the fictitious nests affected the bees' search. Finally, we could predict the search location by using bio-inspired homing models that are potentially interesting for implementation in autonomous robots.
Affiliation(s)
- Charlotte Doussot
- Neurobiology, Faculty of Biology, Universität Bielefeld, Germany
- Martin Egelhaaf
- Neurobiology, Faculty of Biology, Universität Bielefeld, Germany
9
Wang H, Peng J, Zheng X, Yue S. A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds. IEEE Trans Neural Netw Learn Syst 2020; 31:839-853. [PMID: 31056526 DOI: 10.1109/tnnls.2019.2910418]
Abstract
Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are adept at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects to small target motion, as revealed recently, comes from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems: ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the motion information extracted by the motion pathway are integrated in the mushroom body for small target discrimination. Extensive experiments showed significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.
10
Differential Tuning to Visual Motion Allows Robust Encoding of Optic Flow in the Dragonfly. J Neurosci 2019; 39:8051-8063. [PMID: 31481434 DOI: 10.1523/jneurosci.0143-19.2019]
Abstract
Visual cues provide an important means for aerial creatures to ascertain their self-motion through the environment. In many insects, including flies, moths, and bees, wide-field motion-sensitive neurons in the third optic ganglion are thought to underlie such motion encoding; however, these neurons can only respond robustly over limited speed ranges. The task is more complicated for some species of dragonflies that switch between extended periods of hovering flight and fast-moving pursuit of prey and conspecifics, requiring motion detection over a broad range of velocities. Since little is known about motion processing in these insects, we performed intracellular recordings from hawking, emerald dragonflies (Hemicordulia spp.) and identified a diverse group of motion-sensitive neurons that we named lobula tangential cells (LTCs). Following prolonged visual stimulation with drifting gratings, we observed significant differences in both temporal and spatial tuning of LTCs. Cluster analysis of these changes confirmed several groups of LTCs with distinctive spatiotemporal tuning. These differences were associated with variation in velocity tuning in response to translated, natural scenes. LTCs with differences in velocity tuning ranges and optima may underlie how a broad range of motion velocities are encoded. In the hawking dragonfly, changes in LTC tuning over time are therefore likely to support their extensive range of behaviors, from hovering to fast-speed pursuits.
SIGNIFICANCE STATEMENT: Understanding how animals navigate the world is an inherently difficult and interesting problem. Insects are useful models for understanding neuronal mechanisms underlying these activities, with neurons that encode wide-field motion previously identified in insects, such as flies, hawkmoths, and butterflies. Like some Dipteran flies, dragonflies exhibit complex aerobatic behaviors, such as hovering, patrolling, and aerial combat. However, dragonflies lack the halteres that support such diverse behavior in flies. To understand how dragonflies might address this problem using only visual cues, we recorded from their wide-field motion-sensitive neurons. We found these differ strongly in the ways they respond to sustained motion, allowing them collectively to encode the very broad range of velocities experienced during diverse behavior.
11
Cyr A, Thériault F, Ross M, Berberian N, Chartier S. Spiking Neurons Integrating Visual Stimuli Orientation and Direction Selectivity in a Robotic Context. Front Neurorobot 2018; 12:75. [PMID: 30524261 PMCID: PMC6256284 DOI: 10.3389/fnbot.2018.00075]
Abstract
Visual motion detection is essential for the survival of many species. The phenomenon includes several spatial properties, not fully understood at the level of a neural circuit. This paper proposes a computational model of a visual motion detector that integrates direction and orientation selectivity features. A recent experiment in the Drosophila model highlights that stimulus orientation influences the neural response of direction cells. However, this interaction and the significance at the behavioral level are currently unknown. As such, another objective of this article is to study the effect of merging these two visual processes when contextualized in a neuro-robotic model and an operant conditioning procedure. In this work, the learning task was solved using an artificial spiking neural network, acting as the brain controller for virtual and physical robots, showing a behavior modulation from the integration of both visual processes.
Affiliation(s)
- André Cyr
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Frédéric Thériault
- Department of Computer Science, Cégep du Vieux Montréal, Montreal, QC, Canada
- Matthew Ross
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Nareg Berberian
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Sylvain Chartier
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
12
A Novel Algorithm to Improve Digital Chaotic Sequence Complexity through CCEMD and PE. Entropy 2018; 20:e20040295. [PMID: 33265386 PMCID: PMC7512813 DOI: 10.3390/e20040295]
Abstract
In this paper, a three-dimensional chaotic system with a hidden attractor is introduced. The complex dynamic behaviors of the system are analyzed with a Poincaré cross section, and the equilibria and initial value sensitivity are analyzed by the method of numerical simulation. Further, we designed a new algorithm based on complementary ensemble empirical mode decomposition (CEEMD) and permutation entropy (PE) that can effectively enhance digital chaotic sequence complexity. In addition, an image encryption experiment was performed with post-processing of the chaotic binary sequences by the new algorithm. The experimental results show good performance of the chaotic binary sequence.
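The permutation entropy (PE) used to quantify sequence complexity here is straightforward to implement; a minimal normalized Bandt-Pompe version (our sketch, not the paper's exact post-processing pipeline):

```python
from collections import Counter
from math import log, factorial

def permutation_entropy(series, order=3):
    # Bandt-Pompe permutation entropy, normalized to [0, 1]: slide a
    # window of length `order` over the series, map each window to the
    # ordinal pattern (argsort) of its values, and take the Shannon
    # entropy of the pattern distribution. Values near 1 indicate a
    # sequence close to white noise, i.e. high complexity.
    patterns = Counter(
        tuple(sorted(range(order), key=lambda i: series[t + i]))
        for t in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * log(c / total) for c in patterns.values())
    return h / log(factorial(order))

# A monotone ramp produces a single ordinal pattern (PE = 0), whereas a
# good chaotic binary sequence should score close to 1.
```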