1. Hiramoto M, Cline HT. Visual neurons recognize complex image transformations. bioRxiv 2024:2024.06.10.598314. doi: 10.1101/2024.06.10.598314. PMID: 38915552; PMCID: PMC11195111.
Abstract
Natural visual scenes are dominated by sequences of transforming images. Spatial visual information is thought to be processed by detection of elemental stimulus features, which are recomposed into scenes. How image information is integrated over time is unclear. We explored visual information encoding in the optic tectum. Unbiased stimulus presentation shows that the majority of tectal neurons recognize image sequences. This is achieved by temporally dynamic response properties, which encode complex image transitions over several hundred milliseconds. Calcium imaging reveals that neurons encoding spatiotemporal image sequences fire in spike sequences consistent with a logical diagram of spatiotemporal information processing. Furthermore, the temporal scale of visual information is tuned by experience. This study indicates how neurons recognize dynamic visual scenes that transform over time.
Affiliation(s)
- Masaki Hiramoto: Department of Neuroscience, Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Hollis T Cline: Department of Neuroscience, Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
2. Wang H, Zhang Z. Dragonfly visual evolutionary neural network: A novel bionic optimizer with related LSGO and engineering design optimization. iScience 2024;27:109040. doi: 10.1016/j.isci.2024.109040. PMID: 38375232; PMCID: PMC10875119.
Abstract
Biological visual systems intrinsically include multiple kinds of motion-sensitive neurons. Some of them have been successfully used to construct neural computational models for problem-specific engineering applications such as motion detection and object tracking. Nevertheless, it remains unclear how the response mechanisms of these neurons can contribute to optimization. Here, the dragonfly's visual response mechanism is integrated with the idea of swarm evolution to develop a dragonfly visual evolutionary neural network for large-scale global optimization (LSGO) problems. A dragonfly visual neural network takes grayscale images as input and outputs multiple global learning rates online; these learning rates then guide a population-evolution-like state-update strategy to seek the global optimum. Comparative experiments show that the neural network is a competitive optimizer capable of effectively solving LSGO benchmark suites with 2000 dimensions per problem, as well as the design of an operational amplifier.
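The loop this abstract describes, a network that emits learning rates which then drive a population-style state update, can be caricatured in a few lines of Python. Here `rate_network` is a toy stand-in for the paper's image-driven dragonfly visual network, and the sphere function stands in for an LSGO benchmark; none of this reproduces the authors' actual model.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, global optimum 0 at the origin."""
    return sum(v * v for v in x)

def rate_network(fitnesses):
    """Stand-in for the paper's visual network: maps the population's current
    fitness values to one learning rate per individual (worse moves faster)."""
    worst = max(fitnesses) or 1.0
    return [0.1 + 0.4 * f / worst for f in fitnesses]

def evolve(dim=20, pop=30, steps=200, seed=0):
    """Population-evolution-like state update guided by the emitted rates."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(steps):
        fit = [sphere(x) for x in X]
        best = X[fit.index(min(fit))]
        rates = rate_network(fit)
        # Each individual drifts toward the current best, plus small noise.
        X = [[xi + r * (bi - xi) + rng.gauss(0, 0.01) for xi, bi in zip(x, best)]
             for x, r in zip(X, rates)]
    return min(sphere(x) for x in X)
```

With this toy rate function the population contracts onto the incumbent best; the paper's network instead adapts the rates from visual input, which is what makes it an optimizer rather than a fixed heuristic.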
Affiliation(s)
- Heng Wang: College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, P.R. China; Tongren Polytechnic College, Tongren, Guizhou 554300, P.R. China
- Zhuhong Zhang: College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, P.R. China; Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, P.R. China
3. Xia L, Meng F. Integrated prediction and control of mobility changes of young talents in the field of science and technology based on convolutional neural network. Heliyon 2024;10:e25950. doi: 10.1016/j.heliyon.2024.e25950. PMID: 38434033; PMCID: PMC10906157.
Abstract
As scientific and technological levels continue to rise, the dynamics of young talent within these fields become increasingly significant. Currently, there is a lack of comprehensive models for predicting the movement of young professionals in science and technology. To address this gap, this study introduces an integrated approach to forecasting and managing the flow of these talents, leveraging convolutional neural networks (CNNs). Performance tests show that the prediction accuracy of the proposed method is 76.98%, which is superior to the two comparison methods. In addition, the average error of the model was 0.0285 lower than that of the model based on the recurrent prediction error (RPE) learning algorithm, and the average runtime was 41.6 s shorter than that of the model based on the backpropagation (BP) learning algorithm. To predict the flow of young talent, the study uses flow characteristics including personal, occupational, organizational, and network characteristics. These results show that a convolutional neural network can effectively use such features to predict the flow of young talent, and that the model is superior to other commonly used models in processing speed and accuracy. The model can therefore provide organizations and government agencies with useful information about flow trends among young talent and help them formulate better talent-management strategies.
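As a rough illustration of the kind of pipeline the abstract describes, the sketch below runs a one-dimensional convolution, ReLU, and global max pooling over a concatenated feature vector (personal, occupational, organizational, and network characteristics) and squashes the result through a logistic unit. The kernel, bias, and feature values are invented placeholders, not the paper's trained model.

```python
import math

def conv1d(x, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in most CNN libraries)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) for i in range(len(x) - k + 1)]

def relu(xs):
    return [max(0.0, v) for v in xs]

def predict_flow(features, kernel=(0.5, -0.25, 0.5), bias=-0.2):
    """Map a concatenated talent-feature vector to a probability-like
    mobility score. Kernel and bias are illustrative, not trained weights."""
    hidden = relu(conv1d(features, kernel))   # convolution layer + ReLU
    pooled = max(hidden) if hidden else 0.0   # global max pooling
    return 1.0 / (1.0 + math.exp(-(pooled + bias)))  # logistic output unit
```

In the study the equivalent layers are learned from data; the point of the sketch is only the shape of the computation: local feature interactions via convolution, a pooled summary, and a bounded score.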
Affiliation(s)
- Lianfeng Xia: Henan Polytechnic, Zhengzhou 450046, China; Mongolian University of Life Sciences, Ulaanbaatar 17024, Mongolia
- Fanshuai Meng: Henan Polytechnic, Zhengzhou 450046, China; Mongolian University of Life Sciences, Ulaanbaatar 17024, Mongolia
4. Lei F, Peng Z, Liu M, Peng J, Cutsuridis V, Yue S. A Robust Visual System for Looming Cue Detection Against Translating Motion. IEEE Trans Neural Netw Learn Syst 2023;34:8362-8376. doi: 10.1109/TNNLS.2022.3149832. PMID: 35188895.
Abstract
Collision detection is critical for autonomous vehicles and robots to serve human society safely. Detecting looming objects robustly and in a timely manner plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective for looming objects on a direct collision course. However, existing LGMD1 models cannot distinguish a looming object from a near, fast translating object, because the latter can evoke a large amount of excitation that leads to false LGMD1 spikes. This article presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to the monotonous ON/OFF responses produced by a looming object but not to the paired ON-OFF responses produced by a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we conducted systematic comparative experiments on synthetic and real datasets. The results show that our method discriminates more accurately between looming and translational events: looming motion is correctly detected. They also demonstrate that the proposed model is more robust than the comparison models.
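A minimal sketch of the ON/OFF competition idea, assuming one-dimensional luminance frames: brightening and darkening are half-wave rectified into separate channels, and matched ON/OFF energy (characteristic of a translating edge) vetoes the summed excitation, while one-sided looming responses pass. This is a caricature of the mechanism, not the authors' LGMD1 model.

```python
def on_off_split(prev, curr):
    """Half-wave rectify the per-pixel luminance change into an ON channel
    (brightening) and an OFF channel (darkening)."""
    diff = [c - p for p, c in zip(prev, curr)]
    on = [max(0, d) for d in diff]
    off = [max(0, -d) for d in diff]
    return on, off

def lgmd_response(prev, curr):
    """Competition between pathways: a translating object produces matched
    ON and OFF energy, which cancels the excitation; a looming dark (or
    bright) object produces one-sided energy, which survives."""
    on, off = on_off_split(prev, curr)
    excitation = sum(on) + sum(off)
    inhibition = 2.0 * min(sum(on), sum(off))
    return max(0.0, excitation - inhibition)
```

An expanding dark object darkens at both edges (pure OFF), so nothing cancels; a dark bar shifting sideways darkens at its leading edge and brightens at its trailing edge, so ON and OFF energies match and the response is shut off.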
5. Wang H, Wang H, Zhao J, Hu C, Peng J, Yue S. A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments. IEEE Trans Neural Netw Learn Syst 2023;34:316-330. doi: 10.1109/TNNLS.2021.3094205. PMID: 34264832.
Abstract
Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro-robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this article, we propose an STMD-based neural network with feedback connection (feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop and find that it shows a preference for high-velocity objects. Extensive experiments suggest that the feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.
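The time-delay feedback idea can be sketched as a short recurrence: the output, delayed by a few steps, inhibits the current input, so sustained slow signals are progressively suppressed while brief fast transients pass before the feedback catches up. The delay and gain values here are arbitrary illustration, not the paper's tuned parameters.

```python
def feedback_filter(signal, delay=2, gain=0.7):
    """Toy time-delay feedback loop over a scalar response trace: the output,
    delayed by `delay` steps and scaled by `gain`, is subtracted from the
    current input before rectification."""
    out = []
    for t, x in enumerate(signal):
        fb = gain * out[t - delay] if t >= delay else 0.0
        out.append(max(0.0, x - fb))
    return out
```

A slowly drifting background produces a long, sustained response that the delayed feedback keeps beating down, while a small target crossing a receptive field in one or two frames is gone before its own feedback arrives, which is one way to read the model's preference for high-velocity objects.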
6. Guzman-Pando A, Chacon-Murguia MI. DeepFoveaNet: Deep Fovea Eagle-Eye Bioinspired Model to Detect Moving Objects. IEEE Trans Image Process 2021;30:7090-7100. doi: 10.1109/TIP.2021.3101398. PMID: 34351859.
Abstract
Birds of prey, especially eagles and hawks, have visual acuity two to five times better than humans. Among the peculiar characteristics of their biological vision is that they have two types of foveae: a shallow fovea used in binocular vision, and a deep fovea for monocular vision. The deep fovea allows these birds to see objects at long distances and to identify them as possible prey. Inspired by the biological functioning of the deep fovea, a model called DeepFoveaNet is proposed in this paper. DeepFoveaNet is a convolutional neural network model to detect moving objects in video sequences. It emulates the monocular vision of birds of prey through two encoder-decoder convolutional neural network modules, combining the magnification capacity of the deep fovea with the context information of peripheral vision. Unlike the top-ranked moving-object detection algorithms in the Change Detection database (CDnet14), DeepFoveaNet depends neither on previously trained neural networks nor on a huge number of training images. Besides, its architecture allows it to learn spatiotemporal information from the video. DeepFoveaNet was evaluated on the CDnet14 database, achieved high performance, and was ranked among the ten best algorithms. These results demonstrate that the model is comparable to state-of-the-art moving-object detection algorithms and, through its deep fovea module, can detect very small moving objects that other algorithms cannot.
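A toy version of the two-stream idea, assuming small integer-valued grayscale frames: a coarse peripheral pass gates a fine-grained foveal difference around a chosen fixation point. In the paper both streams are encoder-decoder networks; plain image operations stand in for them here.

```python
def fovea_crop(frame, cx, cy, r):
    """Deep-fovea stream: a small high-resolution window around (cx, cy)."""
    return [row[cx - r:cx + r + 1] for row in frame[cy - r:cy + r + 1]]

def downsample(frame, step=2):
    """Peripheral stream: coarse context from the full frame."""
    return [row[::step] for row in frame[::step]]

def detect_motion(prev, curr, cx, cy, r=1, thresh=0):
    """Gate a fine foveal frame difference with coarse peripheral change,
    echoing the fovea/periphery split (real networks in the paper)."""
    per_change = sum(abs(a - b)
                     for ra, rb in zip(downsample(prev), downsample(curr))
                     for a, b in zip(ra, rb))
    if per_change <= thresh:
        return 0  # periphery sees nothing moving; skip the expensive fovea
    fa, fb = fovea_crop(prev, cx, cy, r), fovea_crop(curr, cx, cy, r)
    return sum(abs(a - b) for ra, rb in zip(fa, fb) for a, b in zip(ra, rb))
```

The design point carried over from the model is the division of labor: the cheap peripheral stream supplies context and triggers attention, while the magnified foveal stream resolves objects too small for the coarse view.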
7. Hu B, Zhang Z. Bio-inspired visual neural network on spatio-temporal depth rotation perception. Neural Comput Appl 2021. doi: 10.1007/s00521-021-05796-z.
8. Wang H, Peng J, Zheng X, Yue S. A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds. IEEE Trans Neural Netw Learn Syst 2020;31:839-853. doi: 10.1109/TNNLS.2019.2910418. PMID: 31056526.
Abstract
Monitoring small objects against cluttered moving backgrounds is a huge challenge for future robotic vision systems. As a source of inspiration, insects are adept at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. As revealed recently, insects' exquisite sensitivity to small target motion comes from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models use only motion information for small target detection and cannot discriminate small targets from small-target-like background features (termed fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, composed of four subsystems: ommatidia, a motion pathway, a contrast pathway, and a mushroom body. Compared with existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the motion information extracted by the motion pathway are integrated in the mushroom body for small target discrimination. Extensive experiments showed significant and consistent improvements of the proposed visual system model over existing STMD-based models against fake features.
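The contrast-pathway intuition can be sketched with plain neighborhood arithmetic: a genuine small target stands out from the background in every direction, while a fake feature on a larger structure matches the background on at least one side. The threshold and the four-neighbor scheme are illustrative choices, not the STMD+ implementation.

```python
def directional_contrast(frame, x, y):
    """Contrast of pixel (x, y) against its four neighbours,
    one value per direction (left, right, up, down)."""
    c = frame[y][x]
    return [abs(c - frame[y][x - 1]), abs(c - frame[y][x + 1]),
            abs(c - frame[y - 1][x]), abs(c - frame[y + 1][x])]

def is_small_target(frame, x, y, thresh=2):
    """Toy stand-in for the mushroom-body integration step: accept a
    candidate only if it contrasts with its surround in every direction,
    rejecting edges of larger background structures (fake features)."""
    return all(d >= thresh for d in directional_contrast(frame, x, y))
```

In the full model this directional-contrast cue is combined with the motion pathway's output before the decision; here the contrast test alone illustrates why an isolated speckle passes while a point on a long edge does not.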
9. Fu Q, Wang H, Hu C, Yue S. Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review. Artif Life 2019;25:263-311. doi: 10.1162/artl_a_00297. PMID: 31397604.
Abstract
Motion perception is a critical capability determining many aspects of insect life, including predator avoidance and foraging. A number of motion detectors have been identified in insect visual pathways. Computational modeling of these motion detectors has not only provided effective solutions for artificial intelligence but has also benefited the understanding of complicated biological visual systems. These mechanisms, shaped over millions of years of evolution, provide solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insect visual systems. These models and neural networks include the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small target motion detectors (STMDs) in dragonflies and hoverflies. We also review applications of these models to robots and vehicles. Through these modeling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and hardware realization of these bio-inspired motion perception models.
Affiliation(s)
- Qinbing Fu: Guangzhou University, School of Mechanical and Electrical Engineering, Machine Life and Intelligence Research Centre; University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
- Hongxin Wang: University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
- Cheng Hu: Guangzhou University, School of Mechanical and Electrical Engineering, Machine Life and Intelligence Research Centre; University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems
- Shigang Yue: Guangzhou University, School of Mechanical and Electrical Engineering, Machine Life and Intelligence Research Centre; University of Lincoln, Computational Intelligence Lab, School of Computer Science, Lincoln Centre for Autonomous Systems