1
Wang H, Zhang Z. Dragonfly visual evolutionary neural network: A novel bionic optimizer with related LSGO and engineering design optimization. iScience 2024; 27:109040. [PMID: 38375232] [PMCID: PMC10875119] [DOI: 10.1016/j.isci.2024.109040]
Abstract
Biological visual systems intrinsically include multiple kinds of motion-sensitive neurons. Some of these have been used successfully to construct neural computational models for problem-specific engineering applications such as motion detection and object tracking. Nevertheless, it remains unclear how the response mechanisms of these neurons can contribute to optimization. Here, the dragonfly's visual response mechanism is integrated with the idea of swarm evolution to develop a dragonfly visual evolutionary neural network for large-scale global optimization (LSGO) problems. A dragonfly visual neural network driven by grayscale image input outputs multiple global learning rates online, and these learning rates then guide a population-evolution-like state update strategy that seeks the global optimum. Comparative experiments show that the neural network is a competitive optimizer capable of effectively solving LSGO benchmark suites with 2000 dimensions per instance as well as the design of an operational amplifier.
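The learning-rate-guided, population-evolution-like state update described in the abstract can be pictured schematically: a global learning rate scales each candidate's move toward the best solution found so far. This is only an illustrative reading of the idea, not the authors' algorithm; all names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_step(population, fitness, learning_rate):
    """One population-evolution-like update: every candidate moves
    toward the current best solution, scaled by a global learning
    rate, plus a small random perturbation for exploration."""
    best = population[np.argmin(fitness)]
    noise = rng.normal(scale=0.01, size=population.shape)
    return population + learning_rate * (best - population) + noise

# Minimize the sphere function f(x) = sum(x^2) in 10 dimensions.
pop = rng.uniform(-5, 5, size=(20, 10))
for _ in range(200):
    fit = np.sum(pop**2, axis=1)
    pop = evolve_step(pop, fit, learning_rate=0.3)
print(np.min(np.sum(pop**2, axis=1)))  # small value near the noise floor
```

In the paper the learning rates are produced online by the visual neural network; here a fixed rate stands in for them.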
Affiliation(s)
- Heng Wang
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, P.R. China
- Tongren Polytechnic College, Tongren, Guizhou 554300, P.R. China
- Zhuhong Zhang
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, P.R. China
- Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, P.R. China
2
Wang Y, Li H, Zheng Y, Peng J. A directionally selective collision-sensing visual neural network based on fractional-order differential operator. Front Neurorobot 2023; 17:1149675. [PMID: 37152416] [PMCID: PMC10160397] [DOI: 10.3389/fnbot.2023.1149675]
Abstract
In this paper, we propose a directionally selective fractional-order lobula giant movement detector (LGMD) visual neural network. Unlike most collision-sensing network models based on LGMDs, our model can not only sense collision threats but also obtain the motion direction of the colliding object. First, the membrane potential response of neurons is simulated with a fractional-order differential operator to generate reliable collision-response spikes. Then, a new correlation mechanism is proposed to obtain the motion direction of objects: the signals extracted from two pixels are correlated, and the temporal delay between them yields their positional relationship. In this way, the response characteristics of direction-selective neurons can be characterized. Finally, ON/OFF visual channels are introduced to encode increases and decreases in brightness, respectively, thereby modeling the bipolar response of specialized neurons. Extensive experimental results show that the proposed visual neural system conforms to the response characteristics of biological LGMD and direction-selective neurons, and that its performance is stable and reliable.
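The delay-and-correlate scheme the abstract describes, correlating two pixel signals and using their temporal delay to recover direction, is in spirit a Reichardt-type elementary motion detector. A minimal sketch under that assumption (function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def reichardt_direction(signal_a, signal_b, delay=1):
    """Estimate motion direction from two adjacent pixel signals.

    Correlates each signal with a delayed copy of its neighbour;
    the sign of the difference indicates which pixel saw the
    stimulus first, i.e. the direction of motion.
    """
    a = np.asarray(signal_a, dtype=float)
    b = np.asarray(signal_b, dtype=float)
    # a(t - delay) * b(t): strong if the stimulus moved from a to b
    ab = np.sum(a[:-delay] * b[delay:])
    # b(t - delay) * a(t): strong for the opposite direction
    ba = np.sum(b[:-delay] * a[delay:])
    return ab - ba  # > 0: motion a -> b; < 0: motion b -> a
```

For a brightness pulse that hits pixel a one frame before pixel b, e.g. `reichardt_direction([0, 1, 0, 0], [0, 0, 1, 0])`, the result is positive; swapping the two signals makes it negative.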
3
Group movement of UAVs in environment with dynamic obstacles: a survey. International Journal of Intelligent Unmanned Systems 2022. [DOI: 10.1108/ijius-06-2021-0038]
Abstract
Purpose: The successful application of groups of unmanned aerial vehicles (UAVs) to the task of monitoring large areas is becoming a promising direction in modern robotics. This paper studies the tasks related to controlling a UAV group while it performs a common mission.
Design/methodology/approach: This paper discusses the main tasks solved in the process of developing an autonomous UAV group. Five key tasks of group robotics were investigated: UAV group control, path planning, reconfiguration, task assignment, and conflict resolution. Effective methods for solving each problem are presented, analyzed, and compared. Several specifics of various types of UAVs are also described.
Findings: The analysis of a number of modern and effective methods showed that decentralized methods have clear advantages over centralized ones, since decentralized methods perform the assigned mission effectively regardless of the amount of resources used. For planning the group movement of UAVs, methods that combine global and local planning algorithms are preferable: this combination eliminates collisions not only with static and dynamic obstacles but also with other agents of the group.
Originality/value: The results of scientific progress on UAV group control tasks are summarized.
4
Zhang Z, Xiao T, Qin X. Fly visual evolutionary neural network solving large-scale global optimization. Int J Intell Syst 2021. [DOI: 10.1002/int.22564]
Affiliation(s)
- Zhuhong Zhang
- Department of Big Data Science and Engineering, College of Big Data and Information Engineering Guizhou University Guiyang Guizhou China
- Tianyu Xiao
- Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computation Guizhou University Guiyang Guizhou China
- Xiuchang Qin
- Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computation Guizhou University Guiyang Guizhou China
5
Zhang Z, Li L, Lu J. Gradient-based fly immune visual recurrent neural network solving large-scale global optimization. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.002]
6
Guzman-Pando A, Chacon-Murguia MI. DeepFoveaNet: Deep Fovea Eagle-Eye Bioinspired Model to Detect Moving Objects. IEEE Trans Image Process 2021; 30:7090-7100. [PMID: 34351859] [DOI: 10.1109/tip.2021.3101398]
Abstract
Birds of prey, especially eagles and hawks, have visual acuity two to five times better than that of humans. Among the peculiarities of their biological vision is that they have two types of foveae: a shallow fovea used in binocular vision and a deep fovea for monocular vision. The deep fovea allows these birds to see objects at long distances and to identify them as possible prey. Inspired by the biological functioning of the deep fovea, this paper proposes a model called DeepFoveaNet, a convolutional neural network to detect moving objects in video sequences. DeepFoveaNet emulates the monocular vision of birds of prey through two encoder-decoder convolutional neural network modules, combining the magnification capacity of the deep fovea with the context information of peripheral vision. Unlike the algorithms ranked in the first places of the Change Detection database (CDnet14), DeepFoveaNet depends neither on previously trained neural networks nor on a huge number of training images. Moreover, its architecture allows it to learn spatiotemporal information from the video. Evaluated on the CDnet14 database, DeepFoveaNet achieved high performance and was ranked among the ten best algorithms. These results demonstrate that the model is comparable to state-of-the-art moving-object detection algorithms and, through its deep fovea model, can detect very small moving objects that other algorithms cannot.
7
Li L, Zhang Z, Lu J. Artificial fly visual joint perception neural network inspired by multiple-regional collision detection. Neural Netw 2020; 135:13-28. [PMID: 33338802] [DOI: 10.1016/j.neunet.2020.11.018]
Abstract
The biological visual system includes multiple types of motion-sensitive neurons that preferentially respond to specific perceptual regions. However, it remains an open question how to exploit such neurons to construct bio-inspired computational models for multiple-regional collision detection. To fill this gap, this work proposes a visual joint perception neural network with two subnetworks, presynaptic and postsynaptic, inspired by the preferential-perception characteristics of three horizontal and vertical motion-sensitive neurons. Based on this neural network and three hazard detection mechanisms, an artificial fly visual synthesized collision detection model is developed to monitor possible danger when one or more moving objects appear in the whole field of view. The experiments support two conclusions: (i) the acquired neural network can effectively display the characteristics of visual movement, and (ii) the collision detection model, which outperforms the compared models, can effectively perform multiple-regional collision detection at a high success rate, taking only about 0.24 s to process each virtual or actual image frame at a resolution of 110×60.
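The idea of monitoring several perceptual regions of the field of view for danger can be sketched as a toy frame-differencing detector that flags, per region, whether accumulated luminance change exceeds a threshold. This is an illustration of the multiple-regional monitoring concept only, not the paper's presynaptic/postsynaptic network; every name and threshold below is an assumption.

```python
import numpy as np

def regional_collision_alerts(prev_frame, frame, n_regions=3, threshold=0.2):
    """Split the field of view into vertical regions and flag any
    region whose mean absolute luminance change exceeds a threshold,
    mimicking multiple-regional collision monitoring."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    regions = np.array_split(diff, n_regions, axis=1)
    return [float(r.mean()) > threshold for r in regions]

# An object suddenly appearing in the right third of a 60x110 frame
# (matching the 110x60 resolution mentioned in the abstract).
prev = np.zeros((60, 110))
cur = np.zeros((60, 110))
cur[20:40, 80:105] = 1.0  # bright object on the right
print(regional_collision_alerts(prev, cur))  # → [False, False, True]
```

Only the right-hand region exceeds the threshold, so only that region raises an alert.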
Affiliation(s)
- Lun Li
- College of Big Data and Information Engineering, Guizhou University, Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, PR China
- Zhuhong Zhang
- College of Big Data and Information Engineering, Guizhou University, Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, PR China.
- Jiaxuan Lu
- College of Big Data and Information Engineering, Guizhou University, Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, PR China
8
Xu T, Fan J, Fang Q, Zhu Y, Zhao J. A new robot collision detection method: A modified nonlinear disturbance observer based on neural networks. J Intell Fuzzy Syst 2020. [DOI: 10.3233/jifs-179392]
Affiliation(s)
- Tian Xu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Jizhuang Fan
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Qianqian Fang
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Yanhe Zhu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Jie Zhao
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, Heilongjiang, China
9
Fu Q, Wang H, Hu C, Yue S. Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review. Artif Life 2019; 25:263-311. [PMID: 31397604] [DOI: 10.1162/artl_a_00297]
Abstract
Motion perception is a critical capability determining many aspects of an insect's life, including avoiding predators and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modeling of these motion detectors has not only provided effective solutions for artificial intelligence but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, shaped by millions of years of evolution, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insect visual systems. These models and neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review applications of these models to robots and vehicles. Through these modeling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multi-system integration and the hardware realization of these bio-inspired motion perception models.
Affiliation(s)
- Qinbing Fu
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
- Hongxin Wang
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
- Cheng Hu
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
- Shigang Yue
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
11
Hu B, Yue S, Zhang Z. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing. IEEE Trans Neural Netw Learn Syst 2017; 28:2803-2821. [PMID: 27831890] [DOI: 10.1109/tnnls.2016.2592969]
Abstract
All complex motion patterns can be decomposed into several elements: translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of these three motion elements. Computational models exist for translation and expansion/contraction perception; however, little has been done to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction-selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of presynaptic and postsynaptic parts. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of directional columns in the cerebral cortex, these direction-selective neurons are arranged in cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction-selective neuron is multiplied by the excitation gathered from that neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings validate the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step toward dynamic visual information processing.
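The cyclic arrangement of direction-selective outputs, with delayed excitation multiplied against neighbouring excitation, can be illustrated by a toy correlator over a ring of neurons. This is a schematic of the cyclic-multiplication idea only, not the paper's network; the function and the cw/ccw index convention are assumptions.

```python
import numpy as np

def rotation_score(directional_responses, delay=1):
    """Toy cyclic correlation for rotational motion perception.

    directional_responses: array of shape (T, N) holding the firing
    of N direction-selective neurons arranged in cyclic order.
    Delayed excitation of each neuron is multiplied with the current
    excitation of its next neighbour in the ring; a positive total
    suggests one rotation sense (here labelled cw), negative the other.
    """
    r = np.asarray(directional_responses, dtype=float)
    # past activity of neuron j times present activity of neuron j+1
    cw = np.sum(r[:-delay] * np.roll(r, -1, axis=1)[delay:])
    # past activity of neuron j times present activity of neuron j-1
    ccw = np.sum(r[:-delay] * np.roll(r, 1, axis=1)[delay:])
    return cw - ccw

# Excitation sweeping around a ring of 4 neurons, one step per frame.
T, N = 8, 4
resp = np.zeros((T, N))
resp[np.arange(T), np.arange(T) % N] = 1.0
print(rotation_score(resp) > 0, rotation_score(resp[::-1]) > 0)  # → True False
```

Reversing the frame order reverses the perceived rotation sense, mirroring the cw/ccw selectivity described in the abstract.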
Affiliation(s)
- Bin Hu
- College of Computer Science and Technology, Guizhou University, Guiyang, China
- Shigang Yue
- School of Computer Science, University of Lincoln, Lincoln, U.K
- Zhuhong Zhang
- College of Big Data and Information Engineering, Guizhou University, Guiyang, China
12
Hu C, Arvin F, Xiong C, Yue S. Bio-Inspired Embedded Vision System for Autonomous Micro-Robots: The LGMD Case. IEEE Trans Cogn Dev Syst 2017. [DOI: 10.1109/tcds.2016.2574624]
13
Zhao Y, Li W, Shi P. A real-time collision avoidance learning system for Unmanned Surface Vessels. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.12.028]