1. Chen H, Fan B, Li H, Peng J. Rigid propagation of visual motion in the insect's neural system. Neural Netw 2025;181:106874. PMID: 39522416. DOI: 10.1016/j.neunet.2024.106874.
Abstract
In the pursuit of developing efficient artificial visual systems for visual motion detection, researchers draw inspiration from the visual motion-sensitive neural pathways in the insect's neural system. Although multiple proposed neural computational models exhibit performance closely aligned with responses observed in insects, the mathematical basis for how these models characterize the sensitivity of visual neurons to the corresponding motion patterns remains to be elucidated. To fill this research gap, this study proposes that the rigid propagation of visual motion is an essential mathematical property of models of the insect's visual neural system, meaning that the dynamics of the model output remain consistent with the visual motion dynamics reflected in the input. To verify this property, this study uses the small target motion detector (STMD) neural pathway, one of the visual motion-sensitive pathways in the insect's neural system, as an exemplar, rigorously demonstrating that the dynamics of translational visual motion are rigidly propagated through the encoding of retinal measurements in STMD computational models. Numerical experimental results further substantiate the proposed property of STMD models. This study offers a novel theoretical framework for exploring the nature of visual motion perception in the insect's visual neural system and brings an innovative perspective to the broader research field of insect visual motion perception and artificial visual systems.
Affiliation(s)
- Hao Chen
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China.
- Boquan Fan
- Institute of Applied Mathematics, AMSS, Chinese Academy of Sciences, Beijing 100190, China.
- Haiyang Li
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China.
- Jigen Peng
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China.
2. Qin Z, Fu Q, Peng J. A computationally efficient and robust looming perception model based on dynamic neural field. Neural Netw 2024;179:106502. PMID: 38996688. DOI: 10.1016/j.neunet.2024.106502.
Abstract
There are primarily two classes of bio-inspired looming perception visual systems. The first class employs hierarchical neural networks inspired by well-acknowledged anatomical pathways responsible for looming perception, while the second maps nonlinear relationships between physical stimulus attributes and neuronal activity. However, even with multi-layered structures, the former class is sometimes fragile in looming selectivity, i.e., the ability to discriminate approaching objects from other categories of movement, whereas the latter class leaves open questions about how to encode visual movements to indicate physical attributes such as angular velocity and size. Beyond these, we propose a novel looming perception model based on the dynamic neural field (DNF). The DNF is a brain-inspired framework that incorporates both lateral excitation and inhibition within the field through instant feedback, making it an easily built model for reproducing the looming sensitivity observed in biological visual systems. To achieve our target of computationally efficient looming perception, we introduce a single-field DNF with adaptive lateral interactions and a dynamic activation threshold. The former mechanism creates antagonism to translating motion, and the latter suppresses excitation during recession. Accordingly, the proposed model exhibits the strongest response to moving objects signalling approach over other types of external stimuli. The effectiveness of the proposed model is supported by relevant mathematical analysis and an ablation study. The computational efficiency and robustness of the model are verified through systematic experiments, including online collision-detection tasks on micro-mobile robots, achieving a success rate of 93% in comparison with state-of-the-art methods. The results demonstrate its superiority over comparable model-based methods for looming perception.
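As a rough illustration of the dynamic-neural-field idea described in this abstract, the sketch below implements a generic single-field Amari-type update with a difference-of-Gaussians lateral-interaction kernel; the kernel widths, time constant, sigmoid, and the toy expanding stimulus are illustrative assumptions, not the adaptive mechanisms or parameters of the published model.

```python
import numpy as np

def dog_kernel(size, sigma_e=2.0, sigma_i=6.0, a_e=1.0, a_i=0.6):
    """Difference-of-Gaussians lateral interaction: local excitation, broader inhibition."""
    x = np.arange(size) - size // 2
    exc = a_e * np.exp(-x**2 / (2 * sigma_e**2))
    inh = a_i * np.exp(-x**2 / (2 * sigma_i**2))
    return exc - inh

def dnf_step(u, stimulus, kernel, tau=10.0, h=0.5, dt=1.0):
    """One Euler step of an Amari-type field: tau * du/dt = -u + w*f(u) + I - h."""
    f_u = 1.0 / (1.0 + np.exp(-u))               # sigmoidal firing rate
    lateral = np.convolve(f_u, kernel, mode="same")
    du = (-u + lateral + stimulus - h) / tau
    return u + dt * du

# toy usage: a localized input whose width grows over time, a crude looming proxy
field = np.zeros(128)
kernel = dog_kernel(31)
for t in range(200):
    stim = np.zeros(128)
    width = 2 + t // 20
    stim[64 - width:64 + width] = 1.0
    field = dnf_step(field, stim, kernel)
print("peak field activity:", field.max())
```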
Affiliation(s)
- Ziyan Qin
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China.
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China.
- Jigen Peng
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China.
3. Gao G, Liu R, Wang M, Fu Q. A Computationally Efficient Neuronal Model for Collision Detection with Contrast Polarity-Specific Feed-Forward Inhibition. Biomimetics (Basel) 2024;9:650. PMID: 39590222. PMCID: PMC11592146. DOI: 10.3390/biomimetics9110650.
Abstract
Animals utilize their well-evolved dynamic vision systems to perceive and evade collision threats. Driven by biological research, bio-inspired models based on lobula giant movement detectors (LGMDs) address certain gaps in constructing artificial collision-detecting vision systems with robust selectivity, offering reliable, low-cost, and miniaturized collision sensors across various scenes. Recent progress in neuroscience has revealed the energetic advantages of dendritic arrangements presynaptic to the LGMDs, which receive contrast polarity-specific signals on separate dendritic fields. Specifically, feed-forward inhibitory inputs arise from parallel ON/OFF pathways interacting with excitation. However, no previous research has investigated a computational LGMD model with feed-forward inhibition (FFI) separated by opposite contrast polarity. This study fills this vacancy by presenting an optimized neuronal model in which FFI is divided into ON/OFF channels, each with distinct synaptic connections. To align with the energy efficiency of biological systems, we introduce an activation function associated with the neural computation of FFI and the interactions between local excitation and lateral inhibition within the ON/OFF channels, so that inactive signals are not processed. This approach significantly improves the time efficiency of the LGMD model, focusing only on substantial luminance changes in image streams. The proposed neuronal model not only accelerates visual processing in relatively stationary scenes but also maintains robust selectivity to ON/OFF-contrast looming stimuli. Additionally, it can suppress translational motion to a moderate extent. Comparative testing against state-of-the-art models based on ON/OFF channels was conducted systematically using a range of visual stimuli, including indoor structured and complex outdoor scenes. The results demonstrated significant time savings in silico while retaining the original collision selectivity. Furthermore, the optimized model was implemented in the embedded vision system of a micro-mobile robot, achieving the highest collision-avoidance success rate of 97.51% while nearly halving the processing time compared with previous models. This highlights a robust and parsimonious collision-sensing mode that effectively addresses real-world challenges.
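As a loose sketch of the polarity-specific processing summarized above, the snippet below half-wave rectifies the frame-to-frame luminance change into parallel ON/OFF channels and applies a crude same-polarity feed-forward inhibition; the uniform-filter kernel, inhibition strength, and random test frames are illustrative assumptions rather than the published model's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def split_on_off(frame_prev, frame_curr):
    """Half-wave rectify the temporal luminance change into parallel ON/OFF channels."""
    diff = frame_curr.astype(float) - frame_prev.astype(float)
    return np.maximum(diff, 0.0), np.maximum(-diff, 0.0)   # increments, decrements

def polarity_specific_ffi(channel, strength=0.5, size=5):
    """Crude polarity-specific feed-forward inhibition: subtract the locally averaged
    activity of the same channel, then rectify."""
    ffi = uniform_filter(channel, size=size)
    return np.maximum(channel - strength * ffi, 0.0)

prev = np.random.rand(64, 64)
curr = np.clip(prev + 0.1 * np.random.randn(64, 64), 0.0, 1.0)
on_c, off_c = split_on_off(prev, curr)
print(polarity_specific_ffi(on_c).sum(), polarity_specific_ffi(off_c).sum())
```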
4. Deng Y, Ruan H, He S, Yang T, Guo D. A Biomimetic Visual Detection Model: Event-Driven LGMDs Implemented With Fractional Spiking Neuron Circuits. IEEE Trans Biomed Eng 2024;71:2978-2990. PMID: 38787675. DOI: 10.1109/tbme.2024.3404976.
Abstract
OBJECTIVE: Lobula giant movement detectors (LGMDs) in locusts effectively predict collisions and trigger avoidance, with potential applications in autonomous driving and UAVs. Research on LGMD characteristics splits into two views: one focusing on the presynaptic visual pathway, the other on the postsynaptic LGMD neurons. Both perspectives have support, leading to two computational models, but both lack a biophysical description of individual LGMD neuron behavior. This paper aims to mimic and explain LGMD behavior based on fractional spiking neurons (FSNs) and to construct a biomimetic visual model for the LGMD compatible with these characteristics. METHODS: First, we implement the visual model using an event camera to simulate photoreceptors and follow the ON/OFF visual pathway, incorporating lateral inhibition to mimic the LGMD system from the bottom up. Second, because most computational models of motion perception treat the dendrites within LGMD neurons merely as an ideal pathway for linear summation, ignoring the dendritic effects that shape neuronal properties, we introduce FSN circuits in which dendritic morphological parameters are altered to simulate the multi-scale spike frequency adaptation (SFA) observed in LGMDs. Additionally, we add a further circuit of dendritic trees to the FSNs to be compatible with the postsynaptic feed-forward inhibition (FFI) in LGMD neurons, providing a novel explanatory and predictive model. RESULTS: We show that the event-driven biomimetic visual model achieves collision detection and looming selectivity in various complex scenes, especially for fast-moving objects.
5. Dai Z, Fu Q, Peng J, Li H. SLoN: a spiking looming perception network exploiting neural encoding and processing in ON/OFF channels. Front Neurosci 2024;18:1291053. PMID: 38510466. PMCID: PMC10950957. DOI: 10.3389/fnins.2024.1291053.
Abstract
Looming perception, the ability to sense approaching objects, is crucial for the survival of humans and animals. After hundreds of millions of years of evolution, biological entities have developed efficient and robust looming perception visual systems. However, current artificial vision systems fall short of such capabilities. In this study, we propose a novel spiking neural network for looming perception that mimics biological vision by communicating motion information through action potentials, or spikes, providing a more realistic approach than previous artificial neural networks based on sum-then-activate operations. The proposed spiking looming perception network (SLoN) comprises three core components. Neural encoding, known as phase coding, transforms video signals into spike trains and introduces the concept of phase delay to capture the spatio-temporal competition between phasic excitatory and inhibitory signals that shapes looming selectivity. To align with biological substrates, in which visual signals are bifurcated into parallel ON/OFF channels encoding brightness increments and decrements separately to achieve specific selectivity to ON/OFF-contrast stimuli, we implement eccentric down-sampling at the entrance of the ON/OFF channels, mimicking the foveal region of the mammalian receptive field with its higher acuity to motion; the network is computationally modeled with a leaky integrate-and-fire (LIF) neuronal network. The SLoN model is deliberately tested under various visual collision scenarios, ranging from synthetic to real-world stimuli. A notable achievement is that the SLoN selectively spikes for looming features concealed in visual streams, as opposed to other categories of movements including translating, receding, grating, and near misses, demonstrating robust selectivity in line with biological principles. Additionally, the efficacy of the ON/OFF channels, the phase coding with delay, and the eccentric visual processing is further investigated to demonstrate their effectiveness in looming perception. The cornerstone of this study is a new paradigm for looming perception that is more biologically plausible in light of biological motion perception.
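For readers unfamiliar with the leaky integrate-and-fire (LIF) unit mentioned in this abstract, the following minimal simulation shows the standard LIF dynamics (leaky integration, threshold crossing, reset); the time constant, threshold, and ramping input are arbitrary toy values and do not reproduce the SLoN network or its phase coding.

```python
import numpy as np

def lif_simulate(input_current, tau_m=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron driven by an input current
    trace. Returns the membrane trace and the spike times (indices)."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + i_in) * dt / tau_m   # leaky integration
        if v >= v_th:                              # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                            # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# toy usage: a ramping input loosely mimicking growing looming excitation
current = np.linspace(0.0, 2.0, 300)
_, spike_times = lif_simulate(current)
print("number of spikes:", len(spike_times))
```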
6. Hong J, Sun X, Peng J, Fu Q. A Bio-Inspired Probabilistic Neural Network Model for Noise-Resistant Collision Perception. Biomimetics (Basel) 2024;9:136. PMID: 38534821. DOI: 10.3390/biomimetics9030136.
Abstract
Bio-inspired models based on the lobula giant movement detector (LGMD) in the locust's visual brain have received extensive attention and application for collision perception in various scenarios. These models offer advantages such as low power consumption and high computational efficiency in visual processing. However, current LGMD-based computational models, typically organized as four-layered neural networks, often encounter challenges related to noisy signals, particularly in complex dynamic environments. Biological studies have unveiled the intrinsic stochastic nature of synaptic transmission, which can aid neural computation in mitigating noise. In alignment with these biological findings, this paper introduces a probabilistic LGMD (Prob-LGMD) model that incorporates a probability into the synaptic connections between multiple layers, thereby capturing the uncertainty in signal transmission, interaction, and integration among neurons. Comparative testing of the proposed Prob-LGMD model and two conventional LGMD models was conducted using a range of visual stimuli, including indoor structured scenes and complex outdoor scenes, all subject to artificial noise. Additionally, the model's performance was compared to standard engineering noise-filtering methods. The results clearly demonstrate that the proposed model outperforms all comparative methods, exhibiting a significant improvement in noise tolerance. This study showcases a straightforward yet effective approach to enhance collision perception in noisy environments.
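A minimal sketch of the probabilistic-synapse idea described above: each connection is gated by an independent Bernoulli draw before the weighted sum, so repeated transmissions average out uncorrelated noise. The release probability, weights, and mean-preserving rescaling are illustrative assumptions, not the Prob-LGMD formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_projection(pre_activity, weights, p_release=0.8):
    """Propagate activity through a layer whose synapses transmit stochastically:
    each connection is gated by an independent Bernoulli draw, mimicking
    probabilistic synaptic release; dividing by p_release keeps the expected output."""
    gate = rng.random(weights.shape) < p_release
    return (weights * gate) @ pre_activity / p_release

# toy usage: noisy input propagated through a stochastic layer
pre = np.abs(rng.standard_normal(100))
w = np.abs(rng.standard_normal((10, 100))) * 0.1
print(probabilistic_projection(pre, w).round(2))
```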
Affiliation(s)
- Jialan Hong
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
- Xuelong Sun
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
- Jigen Peng
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
7. Schoepe T, Janotte E, Milde MB, Bertrand OJN, Egelhaaf M, Chicca E. Finding the gap: neuromorphic motion-vision in dense environments. Nat Commun 2024;15:817. PMID: 38280859. PMCID: PMC10821932. DOI: 10.1038/s41467-024-45063-y.
Abstract
Animals have evolved mechanisms to travel safely and efficiently within different habitats. On a journey through dense terrain, animals avoid collisions and cross narrow passages while controlling an overall course. Multiple hypotheses target how animals solve the challenges faced during such travel. Here we show that a single mechanism enables safe and efficient travel. We developed a robot inspired by insects. It has remarkable capabilities to travel in dense terrain, avoiding collisions, crossing gaps and selecting safe passages. These capabilities are accomplished by a neuromorphic network steering the robot toward regions of low apparent motion. Our system leverages knowledge about vision processing and obstacle avoidance in insects. Our results demonstrate how insects might safely travel through diverse habitats. We anticipate our system to be a working hypothesis to study insects' travels in dense terrains. Furthermore, it illustrates that we can design novel hardware systems by understanding the underlying mechanisms driving behaviour.
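A rough caricature of the steering principle summarized above, namely turning toward the visual hemifield with lower apparent motion; the hemifield split, normalization, and toy flow field are assumptions for illustration and do not reflect the neuromorphic implementation.

```python
import numpy as np

def steer_from_flow(flow_magnitude):
    """Turn toward the hemifield with less apparent motion: dense obstacles produce
    strong optic flow, open gaps produce weak flow. Returns a signed steering
    command in [-1, 1] (negative = turn left, positive = turn right)."""
    mid = flow_magnitude.shape[1] // 2
    left = flow_magnitude[:, :mid].mean()
    right = flow_magnitude[:, mid:].mean()
    return (left - right) / (left + right + 1e-9)  # more flow on the left -> steer right

# toy usage: an obstacle-heavy left hemifield drives a rightward turn
flow = np.ones((32, 64)) * 0.1
flow[:, :32] += 0.5
print("steering command:", steer_from_flow(flow))
```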
Affiliation(s)
- Thorben Schoepe
- Peter Grünberg Institut 15, Forschungszentrum Jülich, Aachen, Germany.
- Faculty of Technology and Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany.
- Bio-Inspired Circuits and Systems (BICS) Lab. Zernike Institute for Advanced Materials (Zernike Inst Adv Mat), University of Groningen, Groningen, Netherlands.
- CogniGron (Groningen Cognitive Systems and Materials Center), University of Groningen, Groningen, Netherlands.
- Ella Janotte
- Event Driven Perception for Robotics, Italian Institute of Technology, iCub facility, Genoa, Italy
- Moritz B Milde
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, Australia
- Martin Egelhaaf
- Neurobiology, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Elisabetta Chicca
- Faculty of Technology and Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany
- Bio-Inspired Circuits and Systems (BICS) Lab, Zernike Institute for Advanced Materials (Zernike Inst Adv Mat), University of Groningen, Groningen, Netherlands
- CogniGron (Groningen Cognitive Systems and Materials Center), University of Groningen, Groningen, Netherlands
8. Wu H, Yue S, Hu C. Re-framing bio-plausible collision detection: identifying shared meta-properties through strategic prototyping. Front Neurorobot 2024;18:1349498. PMID: 38333372. PMCID: PMC10850265. DOI: 10.3389/fnbot.2024.1349498.
Abstract
Insects exhibit remarkable abilities in navigating complex natural environments, whether evading predators, capturing prey, or seeking out conspecifics, all of which rely on their compact yet reliable neural systems. We explore the field of bio-inspired robotic vision systems, focusing on locust-inspired Lobula Giant Movement Detector (LGMD) models. The existing LGMD models are thoroughly evaluated, identifying the common meta-properties that are essential for their functionality. This article reveals a common framework, characterized by layered structures and computational strategies, which is crucial for enhancing the capability of bio-inspired models for diverse applications. The result of this analysis is the Strategic Prototype, which embodies the identified meta-properties. It represents a modular and more flexible method for developing more responsive and adaptable robotic visual systems. This perspective highlights the potential of the Strategic Prototype, termed the LGMD-Universal Prototype (LGMD-UP), as the key to re-framing LGMD models and advancing our understanding and implementation of bio-inspired visual systems in robotics. It might open up more flexible and adaptable avenues for research and practical applications.
Affiliation(s)
- Haotian Wu
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Shigang Yue
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
- Cheng Hu
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
9. Wang H, Zhao J, Wang H, Hu C, Peng J, Yue S. Attention and Prediction-Guided Motion Detection for Low-Contrast Small Moving Targets. IEEE Trans Cybern 2023;53:6340-6352. PMID: 35533156. DOI: 10.1109/tcyb.2022.3170699.
Abstract
Small target motion detection within complex natural environments is an extremely challenging task for autonomous robots. Surprisingly, the visual systems of insects have evolved to be highly efficient in detecting mates and tracking prey, even though such targets may occupy as little as a few degrees of their visual fields. The excellent sensitivity to small target motion relies on a class of specialized neurons called small target motion detectors (STMDs). However, existing STMD-based models are heavily dependent on visual contrast and perform poorly in complex natural environments, where small targets generally exhibit extremely low contrast against neighboring backgrounds. In this article, we develop an attention-and-prediction-guided visual system to overcome this limitation. The developed visual system comprises three main subsystems: 1) an attention module; 2) an STMD-based neural network; and 3) a prediction module. The attention module searches for potential small targets in the predicted areas of the input image and enhances their contrast against the complex background. The STMD-based neural network receives the contrast-enhanced image and discriminates small moving targets from background false positives. The prediction module foresees future positions of the detected targets and generates a prediction map for the attention module. The three subsystems are connected in a recurrent architecture, allowing information to be processed sequentially to activate specific areas for small target detection. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness and superiority of the proposed visual system for detecting small, low-contrast moving targets against complex natural environments.
10. Zheng Y, Wang Y, Wu G, Li H, Peng J. Enhancing LGMD-based model for collision prediction via binocular structure. Front Neurosci 2023;17:1247227. PMID: 37732308. PMCID: PMC10507862. DOI: 10.3389/fnins.2023.1247227.
Abstract
Introduction: Lobula giant movement detector (LGMD) neurons, renowned for their distinctive response to looming stimuli, inspire the development of visual neural network models for collision prediction. However, existing LGMD-based models do not yet incorporate the invaluable feature of depth distance and still suffer from two primary drawbacks. First, they struggle to effectively distinguish the three fundamental motion patterns of approaching, receding, and translating, in contrast to the natural abilities of LGMD neurons. Second, owing to their reliance on a general determination process employing an activation function and a fixed output threshold, these models exhibit dramatic fluctuations in prediction effectiveness across different scenarios. Methods: To address these issues, we propose a novel LGMD-based model with a binocular structure (Bi-LGMD). After the moving object's contour is obtained through the basic components of the LGMD network, the depth distance of the object is extracted by calculating the binocular disparity, facilitating a clear differentiation of the motion patterns. In addition, we introduce a self-adaptive warning depth-distance, enhancing the model's robustness in various motion scenarios. Results: The effectiveness of the proposed model is verified using computer-simulated and real-world videos. Discussion: Furthermore, the experimental results demonstrate that the proposed model is robust to contrast and noise.
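For context on the binocular cue used in Bi-LGMD, the standard pinhole-stereo relation below recovers depth from disparity; this is textbook stereo geometry with focal length $f$, baseline $B$, and disparity $d$, not the paper's exact formulation:

```latex
Z = \frac{f\,B}{d}, \qquad d = x_{\mathrm{left}} - x_{\mathrm{right}}
```

Under this relation, an approaching object shows increasing disparity (decreasing $Z$) over frames, a receding object the opposite trend, and a translating object a roughly constant disparity, which is consistent with using depth to separate the three motion patterns.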
Affiliation(s)
- Yi Zheng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Yusi Wang
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Guangrong Wu
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Haiyang Li
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Jigen Peng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
11. Chang Z, Fu Q, Chen H, Li H, Peng J. A look into feedback neural computation upon collision selectivity. Neural Netw 2023;166:22-37. PMID: 37480767. DOI: 10.1016/j.neunet.2023.06.039.
Abstract
Physiological studies have shown that a group of the locust's lobula giant movement detectors (LGMDs) exhibits a diversity of collision selectivity to approaching objects that are relatively darker or brighter than their backgrounds in cluttered environments. Such diversity of collision selectivity helps locusts escape attacks from natural enemies and migrate in swarms free of collision. In computational studies, endeavours have been made to realize this diverse selectivity, which, however, remains one of the most challenging tasks, especially in complex and dynamic real-world scenarios. Existing models are mainly formulated as multi-layered neural networks with merely feed-forward information processing and do not take into account the effect of re-entrant signals in a feedback loop, an essential regulatory loop for motion perception that has never been explored in looming perception. In this paper, we introduce feedback neural computation to construct a new LGMD-based model, named F-LGMD, and examine its efficacy in implementing different collision selectivities. Accordingly, the proposed neural network model features both feed-forward processing and a feedback loop. The feedback control propagates the output signals of parallel ON/OFF channels back into their starting neurons and thus becomes part of the feed-forward neural network, i.e., the ON/OFF channels and the feedback loop form an iterative cyclic system. Moreover, the feedback control is instantaneous, which leads to the existence of a fixed point, whereby the fixed-point theorem is applied to rigorously derive a valid range of feedback coefficients. To verify the effectiveness of the proposed method, we conduct systematic experiments covering synthetic and natural collision datasets, as well as online robotic tests. The experimental results show that the F-LGMD, with a unified network, can fulfil the diverse collision selectivity revealed in physiology, which not only considerably reduces the handcrafted parameters compared to previous studies, but also offers an efficient and robust scheme for collision perception through feedback neural computation.
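A minimal sketch of how an instantaneous feedback loop can be resolved as a fixed point, in the spirit of the F-LGMD analysis above; the scalar map, the tanh nonlinearity, the gain bound, and the tolerance are illustrative assumptions rather than the paper's equations.

```python
import numpy as np

def solve_instant_feedback(forward_input, feedback_gain=0.3, tol=1e-8, max_iter=200):
    """Resolve x = forward_input + feedback_gain * tanh(x) by fixed-point iteration.
    For |feedback_gain| < 1 the map is a contraction (since |d/dx tanh(x)| <= 1), so
    the iteration converges to the unique fixed point guaranteed by the Banach
    fixed-point theorem."""
    x = np.zeros_like(forward_input)
    for _ in range(max_iter):
        x_new = forward_input + feedback_gain * np.tanh(x)
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x

# toy usage: feed-forward ON/OFF excitation modulated by instantaneous feedback
ff = np.array([0.2, 0.8, 1.5])
print(solve_instant_feedback(ff))
```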
Affiliation(s)
- Zefang Chang
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, China
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, China
- Hao Chen
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, China
- Haiyang Li
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, China
- Jigen Peng
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, China
12. Sun X, Fu Q, Peng J, Yue S. An insect-inspired model facilitating autonomous navigation by incorporating goal approaching and collision avoidance. Neural Netw 2023;165:106-118. PMID: 37285728. DOI: 10.1016/j.neunet.2023.05.033.
Abstract
As one of the most fundamental and crucial capacities of robots and animals, autonomous navigation, which consists of goal approaching and collision avoidance, enables the completion of various tasks while traversing different environments. In light of the impressive navigational abilities of insects despite their tiny brains compared to mammals, the idea of seeking solutions from insects for the two key problems of navigation, i.e., goal approaching and collision avoidance, has fascinated researchers and engineers for many years. However, previous bio-inspired studies have focused on merely one of these two problems at a time. Insect-inspired navigation algorithms that synthetically incorporate both goal approaching and collision avoidance, and studies that investigate the interactions of these two mechanisms in sensory-motor closed-loop autonomous navigation, are lacking. To fill this gap, we propose an insect-inspired autonomous navigation algorithm that integrates a goal-approaching mechanism, acting as global working memory and inspired by the sweat bee's path integration (PI) mechanism, with a collision avoidance model, acting as a local immediate cue and built upon the locust's lobula giant movement detector (LGMD) model. The presented algorithm is utilized to drive agents to complete navigation tasks in a sensory-motor closed-loop manner within bounded static or dynamic environments. Simulation results demonstrate that the synthetic algorithm is capable of guiding the agent to complete challenging navigation tasks in a robust and efficient way. This study takes a first tentative step towards integrating insect-like navigation mechanisms with different functionalities (i.e., a global goal and a local interrupt) into a coordinated control system that future research avenues could build upon.
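A toy closed-loop sketch of combining a global goal cue with a local collision interrupt, loosely in the spirit of the PI-plus-LGMD integration described above; the bearing controller, the fixed avoidance turn, the threshold, and the scalar lgmd_response input are all hypothetical simplifications, not the published algorithm.

```python
import numpy as np

def navigate_step(position, heading, goal, lgmd_response, avoid_threshold=0.7,
                  speed=0.1, turn_gain=0.5):
    """One closed-loop step: steer toward the goal (a stand-in for a path-integration
    home vector) unless the LGMD-like response signals an imminent collision, in
    which case the local interrupt takes over and the agent turns away."""
    goal_bearing = np.arctan2(goal[1] - position[1], goal[0] - position[0])
    if lgmd_response > avoid_threshold:
        heading += np.pi / 2          # local interrupt: sharp avoidance turn
    else:
        err = np.arctan2(np.sin(goal_bearing - heading), np.cos(goal_bearing - heading))
        heading += turn_gain * err    # global cue: steer toward the goal
    position = position + speed * np.array([np.cos(heading), np.sin(heading)])
    return position, heading

pos, hdg = np.array([0.0, 0.0]), 0.0
for _ in range(50):
    pos, hdg = navigate_step(pos, hdg, goal=np.array([2.0, 1.0]), lgmd_response=0.1)
print("final position:", pos.round(2))
```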
Affiliation(s)
- Xuelong Sun
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China; Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, 510006, China
- Qinbing Fu
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China; Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, 510006, China
- Jigen Peng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China; Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, 510006, China
- Shigang Yue
- Computational Intelligence Lab (CIL)/School of Computer Science, University of Lincoln, Lincoln, LN6 7TS, United Kingdom; School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, United Kingdom
13. Fu Q. Motion perception based on ON/OFF channels: A survey. Neural Netw 2023;165:1-18. PMID: 37263088. DOI: 10.1016/j.neunet.2023.05.031.
Abstract
Motion perception is an essential ability for animals and artificially intelligent systems to interact effectively and safely with surrounding objects and environments. Biological visual systems, which have naturally evolved over hundreds of millions of years, are quite efficient and robust for motion perception, whereas artificial vision systems are far from such capability. This paper argues that the gap can be significantly reduced by the formulation of ON/OFF channels in motion perception models, encoding luminance increment (ON) and decrement (OFF) responses within the receptive field separately. Such a signal-bifurcating structure has been found in the neural systems of many animal species, indicating that early motion information is split and processed in segregated pathways. However, the corresponding biological substrates and the necessity for artificial vision systems have never been elucidated together, leaving open questions about the uniqueness and advantages of ON/OFF channels for building dynamic vision systems to address real-world challenges. This paper highlights the importance of ON/OFF channels in motion perception through a survey of current progress covering both neuroscience and computational modelling works with applications. Compared to related literature, this paper for the first time provides insights into the implementation of different selectivities to the directional motion of looming, translating, and small-sized target movements based on ON/OFF channels, in keeping with the soundness and robustness of biological principles. Existing challenges and future trends of such a bio-plausible computational structure for visual perception, in connection with hotspots of machine learning and advanced vision sensors such as event-driven cameras, are finally discussed.
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China.
14. Zhao J, Wang H, Bellotto N, Hu C, Peng J, Yue S. Enhancing LGMD's Looming Selectivity for UAV With Spatial-Temporal Distributed Presynaptic Connections. IEEE Trans Neural Netw Learn Syst 2023;34:2539-2553. PMID: 34495845. DOI: 10.1109/tnnls.2021.3106946.
Abstract
Collision detection is one of the most challenging tasks for unmanned aerial vehicles (UAVs). This is especially true for small or micro-UAVs due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate a remarkable ability to navigate and avoid collisions in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the lobula giant movement detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, the LGMD is considered an ideal basis for building a UAV collision-detection system. However, existing LGMD models cannot clearly distinguish looming from other visual cues, such as the complex background movements caused by agile UAV flights. To address this issue, we propose a new model implementing distributed spatial-temporal synaptic interactions, inspired by recent findings on locusts' synaptic morphology. We first introduce locally distributed excitation to enhance the excitation caused by visual motion at preferred velocities. Then, a radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress non-preferred visual motions. Through these distributed synaptic interactions, the spatial-temporal competition between excitation and inhibition in our model is therefore tuned to the preferred image angular velocities representing looming rather than background movements. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results demonstrate that this new model considerably enhances looming selectivity in complex flying scenes and has the potential to be implemented on embedded collision detection systems for small or micro-UAVs.
15. Fu Q, Li Z, Peng J. Harmonizing motion and contrast vision for robust looming detection. Array 2023. DOI: 10.1016/j.array.2022.100272.
16. Wang H, Wang H, Zhao J, Hu C, Peng J, Yue S. A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments. IEEE Trans Neural Netw Learn Syst 2023;34:316-330. PMID: 34264832. DOI: 10.1109/tnnls.2021.3094205.
Abstract
Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro-robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this article, we propose an STMD-based neural network with feedback connection (feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop and find that it shows a preference for high-velocity objects. Extensive experiments suggest that the feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.
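A single-pixel caricature of the time-delay feedback idea above: the output is fed back after a fixed delay to suppress sustained (slow) responses, while brief (fast) responses largely escape before the feedback arrives. The delay, gain, and rectification are illustrative assumptions and do not reproduce the feedback STMD network.

```python
import numpy as np

def delayed_feedback_response(excitation, delay=3, gain=0.4):
    """Process an excitation sequence with a time-delay feedback loop: the output at
    time t is the current excitation minus a gain-scaled copy of the output from
    `delay` steps earlier. Sustained stimuli repeatedly suppress themselves through
    the delayed term, whereas transient stimuli are gone before the feedback acts."""
    out = np.zeros(len(excitation))
    for t in range(len(excitation)):
        fb = out[t - delay] if t >= delay else 0.0
        out[t] = max(excitation[t] - gain * fb, 0.0)
    return out

# toy usage: a sustained (slow) response versus a brief (fast) one at a single pixel
slow = np.ones(20)
fast = np.zeros(20); fast[5] = 1.0
print(delayed_feedback_response(slow).round(2))
print(delayed_feedback_response(fast).round(2))
```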
17. Jayachandran D, Pannone A, Das M, Schranghamer TF, Sen D, Das S. Insect-Inspired, Spike-Based, in-Sensor, and Night-Time Collision Detector Based on Atomically Thin and Light-Sensitive Memtransistors. ACS Nano 2022;17:1068-1080. PMID: 36584350. DOI: 10.1021/acsnano.2c07877.
Abstract
Detecting a potential collision at night is a challenging task owing to the lack of discernible features that can be extracted from the available visual stimuli. To alert the driver or, alternatively, the maneuvering system of an autonomous vehicle, current technologies utilize resource-draining and expensive solutions such as light detection and ranging (LiDAR) or image sensors coupled with extensive software running sophisticated algorithms. In contrast, insects perform the same task of collision detection with frugal neural resources. Even though the general architecture of separate sensing and processing modules is the same in insects and in image-sensor-based collision detectors, task-specific obstacle avoidance algorithms allow insects to reap substantial benefits in terms of size and energy. Here, we show that insect-inspired collision detection algorithms, when implemented in conjunction with in-sensor processing and enabled by innovative optoelectronic integrated circuits based on atomically thin and photosensitive memtransistor technology, can greatly simplify collision detection at night. The proposed collision detector eliminates the need for image capture and image processing yet demonstrates timely escape responses for cars on collision courses under various real-life scenarios at night. The collision detector also has a small footprint of ∼40 μm² and consumes only a few hundred picojoules of energy. We strongly believe that the proposed collision detectors can augment existing sensors necessary for ensuring autonomous vehicular safety.
Affiliation(s)
- Darsith Jayachandran
- Engineering Science and Mechanics, Penn State University, University Park, Pennsylvania 16802, United States
- Andrew Pannone
- Engineering Science and Mechanics, Penn State University, University Park, Pennsylvania 16802, United States
- Mayukh Das
- Engineering Science and Mechanics, Penn State University, University Park, Pennsylvania 16802, United States
- Thomas F Schranghamer
- Engineering Science and Mechanics, Penn State University, University Park, Pennsylvania 16802, United States
- Dipanjan Sen
- Engineering Science and Mechanics, Penn State University, University Park, Pennsylvania 16802, United States
- Saptarshi Das
- Engineering Science and Mechanics, Penn State University, University Park, Pennsylvania 16802, United States
- Electrical Engineering and Computer Science, Penn State University, University Park, Pennsylvania 16802, United States
- Materials Science and Engineering, Penn State University, University Park, Pennsylvania 16802, United States
- Materials Research Institute, Penn State University, University Park, Pennsylvania 16802, United States
18. Ling J, Wang H, Xu M, Chen H, Li H, Peng J. Mathematical study of neural feedback roles in small target motion detection. Front Neurorobot 2022;16:984430. PMID: 36203523. PMCID: PMC9530796. DOI: 10.3389/fnbot.2022.984430.
Abstract
Building an efficient and reliable small target motion detection visual system is challenging for artificial intelligence robotics because a small target occupies only a few pixels and hardly displays any visual features in images. Biological visual systems that have evolved over millions of years could be ideal templates for designing artificial visual systems. Insects benefit from a class of specialized neurons, called small target motion detectors (STMDs), which endow them with an excellent ability to detect small moving targets against cluttered dynamic environments. Some bio-inspired models featuring feed-forward information-processing architectures have been proposed to imitate the functions of the STMD neurons. However, feedback, a crucial mechanism for visual system regulation, has not been investigated deeply in STMD-based neural circuits, and its roles in small target motion detection remain unclear. In this paper, we propose a time-delay feedback STMD model for small target motion detection in complex backgrounds. The main contributions of this study are as follows. First, a feedback pathway is designed by transmitting information from output-layer neurons to lower-layer interneurons in the STMD pathway, and the role of the feedback is analyzed mathematically. Second, to estimate the feedback constant, the existence and uniqueness of solutions of the nonlinear dynamical system formed by the feedback loop are analyzed via Schauder's fixed-point theorem and the contraction mapping theorem. Finally, an iterative algorithm is designed to solve the nonlinear problem, and the performance of the proposed model is tested by experiments. Experimental results demonstrate that the feedback is able to weaken background false positives while having only a minor effect on small targets. It outperforms existing STMD-based models regarding the accuracy of fast-moving small target detection in visual clutter. The proposed feedback approach could inspire the modeling of robust motion-perception visual systems for robotics.
Affiliation(s)
- Jun Ling
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Hongxin Wang
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
- Computational Intelligence Lab (CIL), School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Mingshuo Xu
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Hao Chen
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Haiyang Li
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- *Correspondence: Haiyang Li
- Jigen Peng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
19. A novel motion direction detection mechanism based on dendritic computation of direction-selective ganglion cells. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.108205.
20. Luan H, Fu Q, Zhang Y, Hua M, Chen S, Yue S. A Looming Spatial Localization Neural Network Inspired by MLG1 Neurons in the Crab Neohelice. Front Neurosci 2022;15:787256. PMID: 35126038. PMCID: PMC8814358. DOI: 10.3389/fnins.2021.787256.
Abstract
Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey, and to select mates. It therefore needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type 1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities for encoding spatial location information. The MLG1 neuronal ensemble can not only perceive the location of a looming stimulus but is also thought to continuously influence the direction of movement, for example, when escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to typical looming-detection neurons in invertebrates, which cannot localize looming stimuli spatially. Modeling the MLG1 ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done to implement looming spatial localization analogous to the specific functionality of the MLG1 ensemble. To bridge this gap, we propose a model of the MLG1s and their presynaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field, inspired by the natural arrangement of the 16 MLG1 receptive fields, to encode and convey spatial information about looming objects with dynamically expanding edges at different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1 model works effectively and robustly to perceive and localize looming information, making it a promising candidate for intelligent machines interacting with dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.
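A rough sketch of sector-based looming localization in the spirit of the 16-sector MLG1 arrangement described above: radial (expanding) optic-flow components are pooled into 16 wedges and the strongest wedge gives the azimuth estimate. The flow-based expansion score and the toy stimulus are assumptions for illustration, not the published presynaptic network.

```python
import numpy as np

def sector_responses(flow_x, flow_y, n_sectors=16):
    """Pool expanding-edge motion into n_sectors wedges of a circular visual field and
    report the azimuth of the most strongly activated sector. Expansion is scored as
    outward radial flow relative to the field centre."""
    h, w = flow_x.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = xx - w / 2.0, yy - h / 2.0
    r = np.hypot(dx, dy) + 1e-9
    outward = (flow_x * dx + flow_y * dy) / r            # radial (expanding) component
    angles = np.arctan2(dy, dx)                          # pixel azimuth in [-pi, pi]
    sector_idx = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    resp = np.array([outward[sector_idx == k].clip(min=0).sum() for k in range(n_sectors)])
    best = int(resp.argmax())
    azimuth = -np.pi + (best + 0.5) * 2 * np.pi / n_sectors
    return resp, azimuth

# toy usage: outward flow confined to the right hemifield peaks in sectors near azimuth 0
h = w = 64
fx, fy = np.zeros((h, w)), np.zeros((h, w))
fx[:, 48:] = 1.0
resp, az = sector_responses(fx, fy)
print("responses per sector:", resp.round(1), "strongest azimuth:", round(az, 2))
```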
Affiliation(s)
- Hao Luan
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Computational Intelligence Laboratory (CIL), School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Yicheng Zhang
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Mu Hua
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Shengyong Chen
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
- Shigang Yue
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Computational Intelligence Laboratory (CIL), School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- *Correspondence: Shigang Yue
21. Fu Q, Sun X, Liu T, Hu C, Yue S. Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic. Front Robot AI 2021;8:529872. PMID: 34422912. PMCID: PMC8378452. DOI: 10.3389/frobt.2021.529872.
Abstract
Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust's LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios, where physical crashes actually happen, have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and with high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro-mobile robot, with avoidance conducted by abrupt braking. The simulated traffic resembles on-road sections including intersection and highway scenes, wherein the roadmaps are rendered by coloured artificial pheromones on a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes for timely alerting of potential crashes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios and, for the first time, demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computational power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluate online visual systems in dynamic scenes.
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, China; School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Xuelong Sun
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Tian Liu
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Cheng Hu
- Machine Life and Intelligence Research Centre, School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, China
- Shigang Yue
- Machine Life and Intelligence Research Centre, School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, China; School of Computer Science, University of Lincoln, Lincoln, United Kingdom
22. Wernitznig S, Rind FC, Zankel A, Bock E, Gütl D, Hobusch U, Nikolic M, Pargger L, Pritz E, Radulović S, Sele M, Summerauer S, Pölt P, Leitinger G. The complex synaptic pathways onto a looming-detector neuron revealed using serial block-face scanning electron microscopy. J Comp Neurol 2021;530:518-536. PMID: 34338325. DOI: 10.1002/cne.25227.
Abstract
The ability of locusts to detect looming stimuli and avoid collisions or predators depends on a neuronal circuit in the locust's optic lobe. Although comprehensively studied for over three decades, there are still major questions about the computational steps of this circuit. We used fourth instar larvae of Locusta migratoria to describe the connection between the lobula giant movement detector 1 (LGMD1) neuron in the lobula complex and the upstream neuropil, the medulla. Serial block-face scanning electron microscopy (SBEM) was used to characterize the morphology of the connecting neurons termed trans-medullary afferent (TmA) neurons and their synaptic connectivity. This enabled us to trace neurons over several hundred micrometers between the medulla and the lobula complex while identifying their synapses. We traced two different TmA neurons, each from a different individual, from their synapses with the LGMD in the lobula complex up into the medulla and describe their synaptic relationships. There is not a simple downstream transmission of the signal from a lamina neuron onto these TmA neurons; there is also a feedback loop in place with TmA neurons making outputs as well as receiving inputs. More than one type of neuron shapes the signal of the TmA neurons in the medulla. We found both columnar and trans-columnar neurons connected with the traced TmA neurons in the medulla. These findings indicate that there are computational steps in the medulla that have not been included in models of the neuronal pathway for looming detection.
Affiliation(s)
- Stefan Wernitznig
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - F Claire Rind
- Newcastle University, Biosciences Institute, Newcastle upon Tyne, UK
| | - Armin Zankel
- Institute of Electron Microscopy and Nanoanalysis, NAWI Graz, Graz University of Technology, Graz, Austria.,Centre for Electron Microscopy, Graz, Austria
| | - Elisabeth Bock
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Daniel Gütl
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Ulrich Hobusch
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Manuela Nikolic
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Lukas Pargger
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Elisabeth Pritz
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Snježana Radulović
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Mariella Sele
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Susanne Summerauer
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
| | - Peter Pölt
- Institute of Electron Microscopy and Nanoanalysis, NAWI Graz, Graz University of Technology, Graz, Austria.,Centre for Electron Microscopy, Graz, Austria
| | - Gerd Leitinger
- Research Unit Electron Microscopic Techniques, Division of Cell Biology, Histology and Embryology, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria.,BioTechMed Graz, Graz, Austria
| |
Collapse
|
23
|
Hu B, Zhang Z. Bio-inspired visual neural network on spatio-temporal depth rotation perception. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05796-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
24
|
Wang H, Fu Q, Wang H, Baxter P, Peng J, Yue S. A bioinspired angular velocity decoding neural network model for visually guided flights. Neural Netw 2021; 136:180-193. [PMID: 33494035 DOI: 10.1016/j.neunet.2020.12.008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2020] [Revised: 12/03/2020] [Accepted: 12/07/2020] [Indexed: 11/17/2022]
Abstract
Efficient and robust motion perception systems are important prerequisites for achieving visually guided flight in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as the honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we use this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flight. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform, the model is further tested in tunnel-centering and terrain-following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel and to maintain a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles that have only visual sensors.
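The tunnel-centering scheme mentioned here rests on balancing the image angular velocity seen on the two sides and steering away from the side that appears to move faster. The sketch below illustrates only that balancing rule, not the paper's EMD-plus-texture decoder; the estimate_angular_velocity stand-in, the gain, and the sign convention are assumptions for illustration.

```python
import numpy as np

def estimate_angular_velocity(frames) -> float:
    """Stand-in for an angular velocity decoder: the mean absolute temporal derivative
    of image intensity. It grows with image speed but, unlike the published model,
    it is not independent of spatial frequency or contrast."""
    stack = np.asarray(frames, dtype=float)        # shape (T, H, W)
    return float(np.abs(np.diff(stack, axis=0)).mean())

def centering_steer(left_frames, right_frames, gain: float = 0.5) -> float:
    """Tunnel-centering rule: steer away from the wall whose image moves faster.
    Positive output means steer right, i.e. away from the left wall (assumed convention)."""
    omega_left = estimate_angular_velocity(left_frames)
    omega_right = estimate_angular_velocity(right_frames)
    return gain * (omega_left - omega_right)       # balance the left/right image motion
```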
Collapse
Affiliation(s)
- Huatian Wang
- Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK; Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China
| | - Qinbing Fu
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
| | - Hongxin Wang
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
| | - Paul Baxter
- Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK
| | - Jigen Peng
- School of Mathematics and Information Science, Guangzhou University, Guangzhou, China; Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China.
| | - Shigang Yue
- Machine Life and Intelligence Research Center, Guangzhou University, Guangzhou, China; Computational Intelligence Laboratory (CIL), University of Lincoln, Lincoln, UK.
| |
Collapse
|
25
|
Fu Q, Hu C, Peng J, Rind FC, Yue S. A Robust Collision Perception Visual Neural Network With Specific Selectivity to Darker Objects. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:5074-5088. [PMID: 31804947 DOI: 10.1109/tcyb.2019.2946090] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. Biological visual neural networks, which have evolved over millions of years and work robustly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), namely the LGMD2, has been identified as a looming perception neuron that responds most strongly to approaching objects that are darker than their backgrounds, a situation that many ground vehicles and robots often face. However, little work has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network that reproduces the collision selectivity of the locust's LGMD2 neuron by modeling biased ON and OFF pathways that split visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, the model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli, including real-world scenarios. It has also been implemented in a micro-mobile robot and tested in real-time experiments. The experimental results verify the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.
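The core mechanism described here is half-wave rectification of luminance change into ON (brightening) and OFF (darkening) channels, with a stronger inhibitory bias on the ON channel so that darkening, as produced by an expanding dark object, dominates the output. A minimal sketch of that biased split follows; the bias values and the simple averaging combination are illustrative assumptions, not the published parameterisation.

```python
import numpy as np

def biased_on_off_response(prev_frame: np.ndarray, frame: np.ndarray,
                           on_bias: float = 0.8, off_bias: float = 0.2) -> float:
    """Split luminance change into parallel ON/OFF channels and inhibit ON more
    strongly, so darker looming objects (large OFF responses) drive the output."""
    diff = frame.astype(float) - prev_frame.astype(float)
    on_channel = np.maximum(diff, 0.0)        # brightening parts of the image
    off_channel = np.maximum(-diff, 0.0)      # darkening parts of the image
    # A larger bias means stronger inhibition, suppressing bright-object looming.
    response = (1.0 - on_bias) * on_channel + (1.0 - off_bias) * off_channel
    return float(response.mean())
```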
Collapse
|
26
|
Fu Q, Yue S. Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds. BIOLOGICAL CYBERNETICS 2020; 114:443-460. [PMID: 32623517 PMCID: PMC7554016 DOI: 10.1007/s00422-020-00841-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Accepted: 06/19/2020] [Indexed: 06/03/2023]
Abstract
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect moving targets in highly variable environments during flight, and they are excellent paradigms from which to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: (1) the proposed model articulates, in a feed-forward manner, the formation of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics that combine motion pre-filtering mechanisms with ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion and distractors and to improve the dynamic response. Accordingly, the direction of translating objects is decoded from the global responses of the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model and demonstrated its responsive preference for faster-moving, higher-contrast and larger targets embedded in cluttered moving backgrounds.
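The direction selectivity described here rests on ensembles of local correlators inside the ON and OFF pathways. The classic Hassenstein-Reichardt correlator pairs a delayed signal from one photoreceptor with an undelayed signal from its neighbour and subtracts the mirror-symmetric product. The sketch below shows that textbook correlator over a 1-D sampled signal, not the paper's full HS/VS model; the first-order low-pass delay and its time constant are assumptions.

```python
import numpy as np

def hr_correlator(signal: np.ndarray, tau: float = 0.7) -> np.ndarray:
    """Hassenstein-Reichardt elementary motion detector.
    signal has shape (T, N): T time steps, N adjacent photoreceptors.
    Motion from the first column toward the last yields a positive response,
    motion in the opposite (null) direction a negative one."""
    T, N = signal.shape
    delayed = np.zeros((T, N), dtype=float)
    for t in range(1, T):                      # first-order low-pass acts as the delay arm
        delayed[t] = tau * delayed[t - 1] + (1.0 - tau) * signal[t]
    left, right = signal[:, :-1], signal[:, 1:]
    d_left, d_right = delayed[:, :-1], delayed[:, 1:]
    # Correlate delayed-left with undelayed-right, then subtract the mirrored arm.
    return d_left * right - left * d_right     # shape (T, N-1); sum for a wide-field output
```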
Collapse
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China.
- Computational Intelligence Lab/Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK.
| | - Shigang Yue
- Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China.
- Computational Intelligence Lab/Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK.
| |
Collapse
|
27
|
Using Patent Technology Networks to Observe Neurocomputing Technology Hotspots and Development Trends. SUSTAINABILITY 2020. [DOI: 10.3390/su12187696] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In recent years, developments in the fields of big data and artificial intelligence have given rise to interest among scholars in neurocomputing-related applications. Neurocomputing has relatively widespread applications because it is a critical technology in numerous fields. However, most studies on neurocomputing have focused on improving related algorithms or application fields; they have failed to highlight the main technology hotspots and development trends from a comprehensive viewpoint. To fill this research gap, this study adopts a new viewpoint and takes technological fields as its main subject. Neurocomputing patents are subjected to network analysis to construct a neurocomputing technology hotspot network. The results reveal that the neurocomputing technology hotspots are algorithms, methods or devices for reading or recognizing printed or written characters or patterns, and digital storage characterized by the use of particular electric or magnetic storage elements. Furthermore, the technology hotspots are found not to be clustered around particular fields but rather to be multidisciplinary. Applications that combine neurocomputing with digital storage are currently undergoing the most extensive development. Finally, patentee analysis reveals that neurocomputing technology is mainly being developed by information technology corporations, indicating the market development potential of neurocomputing technology. This study constructs a technology hotspot network model to elucidate the trend in the development of neurocomputing technology, and the findings may serve as a reference for industries planning to promote emerging technologies.
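The hotspot analysis rests on a patent technology network in which technology classes are linked when they appear on the same patents, and central classes are read as hotspots. As a rough sketch of that idea only (the paper's exact network construction and metrics are not specified here), the code below builds a class co-occurrence graph from hypothetical patent records and ranks classes by degree centrality.

```python
from itertools import combinations
import networkx as nx

# Hypothetical patent records: patent ID -> technology classes (e.g. IPC codes).
patents = {
    "P001": {"G06N", "G06K"},            # neurocomputing + pattern recognition
    "P002": {"G06N", "G11C"},            # neurocomputing + digital storage
    "P003": {"G06K", "G11C", "G06N"},
}

def build_cooccurrence_network(patent_classes: dict) -> nx.Graph:
    """Link two technology classes whenever they appear on the same patent;
    edge weights count how many patents share the pair."""
    G = nx.Graph()
    for classes in patent_classes.values():
        for a, b in combinations(sorted(classes), 2):
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)
    return G

G = build_cooccurrence_network(patents)
hotspots = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(hotspots)   # the most central classes are candidate technology hotspots
```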
Collapse
|