1
Yu F, Wu Y, Ma S, Xu M, Li H, Qu H, Song C, Wang T, Zhao R, Shi L. Brain-inspired multimodal hybrid neural network for robot place recognition. Sci Robot 2023; 8:eabm6996. [PMID: 37163608] [DOI: 10.1126/scirobotics.abm6996]
Abstract
Place recognition is an essential spatial intelligence capability for robots to understand and navigate the world. However, recognizing places in natural environments remains a challenging task for robots because of resource limitations and changing environments. In contrast, humans and animals can robustly and efficiently recognize hundreds of thousands of places in different conditions. Here, we report a brain-inspired general place recognition system, dubbed NeuroGPR, that enables robots to recognize places by mimicking the neural mechanism of multimodal sensing, encoding, and computing through a continuum of space and time. Our system consists of a multimodal hybrid neural network (MHNN) that encodes and integrates multimodal cues from both conventional and neuromorphic sensors. Specifically, to encode different sensory cues, we built various neural networks of spatial view cells, place cells, head direction cells, and time cells. To integrate these cues, we designed a multiscale liquid state machine that can process and fuse multimodal information effectively and asynchronously using diverse neuronal dynamics and bioinspired inhibitory circuits. We deployed the MHNN on Tianjic, a hybrid neuromorphic chip, and integrated it into a quadruped robot. Our results show that NeuroGPR achieves better performance compared with conventional and existing biologically inspired approaches, exhibiting robustness to diverse environmental uncertainty, including perceptual aliasing, motion blur, light, or weather changes. Running NeuroGPR as an overall multi-neural network workload on Tianjic showcases its advantages with 10.5 times lower latency and 43.6% lower power consumption than the commonly used mobile robot processor Jetson Xavier NX.
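As a rough illustration of the kind of multiscale liquid state machine the abstract describes, the sketch below drives a small leaky integrate-and-fire reservoir with two synthetic spike streams (standing in for visual and motion cues) at several membrane time constants and concatenates the traces into a place descriptor. All names, sizes, and dynamics are simplified assumptions for illustration; this is not the authors' MHNN implementation or the Tianjic deployment.

```python
# Minimal sketch (assumed, not the paper's code): multiscale spiking reservoir
# fusing two input streams into one place descriptor.
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_res = 200, 32, 100          # time steps, inputs per modality, reservoir size
taus = [2.0, 8.0, 32.0]                # three "scales": fast to slow membrane time constants

# Random input and sparse recurrent weights (mixed excitatory/inhibitory).
W_in = rng.normal(0, 0.5, (n_res, 2 * n_in))
W_rec = rng.normal(0, 0.1, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)

def run_reservoir(spikes, tau, threshold=1.0):
    """Leaky integrate-and-fire reservoir; returns the binary spike trace."""
    v = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        v = (1 - 1 / tau) * v + W_in @ spikes[t] + W_rec @ states[max(t - 1, 0)]
        fired = v >= threshold
        v[fired] = 0.0                 # reset membrane after a spike
        states[t] = fired
    return states

# Two synthetic Poisson spike streams standing in for visual and inertial cues.
vision = (rng.random((T, n_in)) < 0.05).astype(float)
motion = (rng.random((T, n_in)) < 0.05).astype(float)
fused_input = np.concatenate([vision, motion], axis=1)

# Concatenate time-averaged reservoir activity across scales into one descriptor.
descriptor = np.concatenate([run_reservoir(fused_input, tau).mean(axis=0) for tau in taus])
print(descriptor.shape)               # (300,) -> fed to a readout/classifier in practice
```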
Affiliation(s)
- Fangwen Yu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yujie Wu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Songchen Ma
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Mingkun Xu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Hongyi Li
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Huanyu Qu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Chenhang Song
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Taoyi Wang
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Rong Zhao
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Luping Shi
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- THU-CET HIK Joint Research Center for Brain-Inspired Computing, Tsinghua University, Beijing 100084, China
2
Sports Action Recognition Based on GB-BP Neural Network and Big Data Analysis. Computational Intelligence and Neuroscience 2021; 2021:1678123. [PMID: 34394333] [PMCID: PMC8355973] [DOI: 10.1155/2021/1678123]
Abstract
In recent years, the gradient boosting-back propagation (GB-BP) neural network algorithm has delivered substantial benefits across many industries, and combining it with sports applications has become an active research topic. Building on this, this paper applies the GB-BP neural network algorithm to wrestling and designs an action recognition and classification model for athletes. It first reviews the state of research on wrestling action recognition and then addresses shortcomings in existing action recognition and big data analysis techniques. The GB-BP neural network algorithm accurately recognizes and classifies wrestlers' training actions and, combined with big data mining over the recognized actions, achieves accurate classification. Experimental results show that the model performs well in wrestling scenarios and effectively improves the efficiency of wrestlers' training.
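One plausible reading of a GB-BP pipeline, sketched below under stated assumptions, is gradient-boosted trees whose class probabilities are appended to the input of a small backpropagation (MLP) network. The dataset, feature layout, and hyperparameters are placeholders, not those of the paper.

```python
# Hedged sketch of a GB-BP-style classifier: boosted-tree scores feed a BP network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in for pose/inertial features extracted from wrestling training footage.
X, y = make_classification(n_samples=2000, n_features=60, n_classes=4,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Stage 1: gradient boosting produces per-class probability scores.
gb = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
gb.fit(X_tr, y_tr)

# Stage 2: a backpropagation (MLP) network refines raw features + boosted scores.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(np.hstack([X_tr, gb.predict_proba(X_tr)]), y_tr)

acc = mlp.score(np.hstack([X_te, gb.predict_proba(X_te)]), y_te)
print(f"held-out accuracy: {acc:.3f}")
```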
4
Hausler S, Chen Z, Hasselmo ME, Milford M. Bio-inspired multi-scale fusion. Biological Cybernetics 2020; 114:209-229. [PMID: 32322978] [DOI: 10.1007/s00422-020-00831-z]
Abstract
We reveal how implementing the homogeneous, multi-scale mapping frameworks observed in the mammalian brain's mapping systems radically improves the performance of a range of current robotic localization techniques. Roboticists have developed a range of predominantly single- or dual-scale heterogeneous mapping approaches (typically locally metric and globally topological) that starkly contrast with neural encoding of space in mammalian brains: a multi-scale map underpinned by spatially responsive cells like the grid cells found in the rodent entorhinal cortex. Yet the full benefits of a homogeneous multi-scale mapping framework remain unknown in both robotics and biology: in robotics because of the focus on single- or two-scale systems and limits in the scalability and open-field nature of current test environments and benchmark datasets; in biology because of technical limitations when recording from rodents during movement over large areas. New global spatial databases with visual information varying over several orders of magnitude in scale enable us to investigate this question for the first time in real-world environments. In particular, we investigate and answer the following questions: why have multi-scale representations, how many scales should there be, what should the size ratio between consecutive scales be and how does the absolute scale size affect performance? We answer these questions by developing and evaluating a homogeneous, multi-scale mapping framework mimicking aspects of the rodent multi-scale map, but using current robotic place recognition techniques at each scale. Results in large-scale real-world environments demonstrate multi-faceted and significant benefits for mapping and localization performance and identify the key factors that determine performance.
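A small sketch of homogeneous multi-scale fusion in this spirit: single-frame descriptors are pooled over windows of increasing size to form coarser scales, matched at each scale by cosine similarity, and the per-scale similarity maps are summed before selecting the best reference place. The descriptor source, window sizes, and the scale ratio of 4 are illustrative assumptions, not the paper's configuration.

```python
# Illustrative multi-scale place-recognition fusion (assumed parameters throughout).
import numpy as np

rng = np.random.default_rng(1)
n_ref, n_query, d = 500, 100, 128
ref = rng.normal(size=(n_ref, d))                             # reference-traverse descriptors
query = ref[200:300] + 0.3 * rng.normal(size=(n_query, d))    # noisy revisit of places 200-299

def pool(desc, window):
    """Average descriptors over a sliding window to form a coarser spatial scale."""
    kernel = np.ones(window) / window
    return np.stack([np.convolve(desc[:, j], kernel, mode="same")
                     for j in range(desc.shape[1])], axis=1)

scales = [1, 4, 16]                    # ratio of 4 between consecutive scales (an assumption)
similarity = np.zeros((n_query, n_ref))
for w in scales:
    q, r = pool(query, w), pool(ref, w)
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    r /= np.linalg.norm(r, axis=1, keepdims=True)
    similarity += q @ r.T              # cosine similarity at this scale

matches = similarity.argmax(axis=1)    # best reference place for each query frame
print(np.mean(np.abs(matches - np.arange(200, 300)) <= 2))    # fraction within 2 frames
```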