1. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. Information 2023. DOI: 10.3390/info14020082.
Abstract
Two universal functional principles of Grossberg’s Adaptive Resonance Theory decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level, long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from Aplysia to primates. They are revisited in this concept paper on the basis of examples drawn from the original code and from some of the most recent related empirical findings on contextual modulation in the brain, highlighting the potential of Grossberg’s pioneering insights and groundbreaking theoretical work for intelligent solutions in the domain of developmental and cognitive robotics.
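As a rough, schematic illustration of the bottom-up activation / top-down matching cycle this abstract alludes to, the sketch below implements a simplified ART-1-style category search with a vigilance test; the binary coding, vigilance value, and learning rate are illustrative assumptions, not the model discussed in the paper.

```python
import numpy as np

def art_match(bottom_up, categories, vigilance=0.75, lr=0.5):
    """Simplified ART-1-style matching cycle (illustrative sketch only).

    bottom_up : binary input vector (low-level stimulus representation).
    categories: list of learned top-down templates (long-term traces),
                updated in place.
    Returns the index of the resonating category (a new one if none matches).
    """
    x = np.asarray(bottom_up, dtype=float)
    # Consider existing categories in order of bottom-up activation strength.
    order = sorted(range(len(categories)),
                   key=lambda j: -float(np.minimum(x, categories[j]).sum()))
    for j in order:
        template = categories[j]
        # Top-down matching rule: fraction of the input the template accounts for.
        match = np.minimum(x, template).sum() / max(x.sum(), 1e-9)
        if match >= vigilance:
            # Resonance: refine the long-term trace towards the matched input.
            categories[j] = lr * np.minimum(x, template) + (1.0 - lr) * template
            return j
    # Mismatch with all stored traces: recruit a new category for this input.
    categories.append(x.copy())
    return len(categories) - 1

# Hypothetical usage with toy binary patterns.
cats = []
for pattern in ([1, 1, 0, 0, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1]):
    print(art_match(pattern, cats))
```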
2. Spatiotemporal Modeling of Grip Forces Captures Proficiency in Manual Robot Control. Bioengineering (Basel, Switzerland) 2023; 10:59. PMID: 36671631; PMCID: PMC9854605; DOI: 10.3390/bioengineering10010059.
Abstract
New technologies for monitoring grip forces during hand and finger movements in non-standard task contexts have provided unprecedented functional insights into somatosensory cognition. Somatosensory cognition is the basis of our ability to manipulate and transform objects of the physical world and to grasp them with the right amount of force. In previous work, the wireless tracking of grip-force signals recorded from biosensors in the palm of the human hand has permitted us to unravel some of the functional synergies that underlie perceptual and motor learning under conditions of non-standard and essentially unreliable sensory input. This paper builds on that previous work and discusses further functionally motivated analyses of individual grip-force data in manual robot control. Grip forces were recorded with wearable wireless sensor technology from various loci in the dominant and non-dominant hands of individuals. Statistical analyses bring to the fore skill-specific temporal variations in thousands of grip forces of a complete novice and a highly proficient expert in manual robot control. A brain-inspired neural network model that uses the output metric of a self-organizing map with unsupervised winner-take-all learning was run on the sensor output from both hands of each user. The neural network metric expresses the difference between an input representation and its model representation at any given moment in time and reliably captures the differences between novice and expert performance in terms of grip-force variability. Functionally motivated spatiotemporal analysis of individual average grip forces, computed for time windows of constant size in the output of a restricted set of task-relevant sensors in the dominant (preferred) hand, reveals finger-specific synergies that reflect robotic task skill. The analyses lead the way towards real-time grip-force monitoring, which will permit tracking the evolution of task skill in trainees or identifying individual proficiency levels in human-robot interaction, a setting that presents unprecedented challenges for perceptual and motor adaptation in environmental contexts of high sensory uncertainty. Cross-disciplinary insights from systems neuroscience and cognitive behavioral science, and the predictive modeling of operator skills using parsimonious Artificial Intelligence (AI), will contribute towards improving the outcome of new types of surgery, in particular single-port approaches such as NOTES (Natural Orifice Transluminal Endoscopic Surgery) and SILS (Single-Incision Laparoscopic Surgery).
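As a generic illustration of the kind of self-organizing-map metric described above (not the authors' implementation; the sensor count, window size, map size, and learning rate are assumptions), a winner-take-all SOM and its quantization-error output could be sketched as follows:

```python
import numpy as np

def train_som(samples, n_units=16, lr=0.1, epochs=20, seed=0):
    """Fit a small self-organizing map with hard winner-take-all updates.

    samples: (n_samples, n_sensors) grip-force vectors (assumed layout).
    Returns the learned prototype vectors, shape (n_units, n_sensors).
    """
    rng = np.random.default_rng(seed)
    # Initialize prototypes from randomly chosen training samples.
    protos = samples[rng.choice(len(samples), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in samples:
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))
            # Winner-take-all: only the best-matching unit moves toward the input.
            protos[winner] += lr * (x - protos[winner])
    return protos

def quantization_error(samples, protos):
    """Distance between each input and its model representation (best-matching unit)."""
    dists = np.linalg.norm(samples[:, None, :] - protos[None, :, :], axis=2)
    return dists.min(axis=1)

# Hypothetical usage: 5 grip-force sensors, placeholder data, 100-sample windows.
forces = np.random.default_rng(1).random((1000, 5))
som = train_som(forces)
qe = quantization_error(forces, som)
window_means = qe.reshape(-1, 100).mean(axis=1)   # per-window average metric
print(window_means)
```

The per-window means of this error play the role of the difference between an input representation and its model representation mentioned in the abstract: larger and more erratic values would be expected for a novice, smaller and more stable ones for an expert.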
3. Kong Y, Zhu F, Sun H, Lin Z, Wang Q. A Generic View Planning System Based on Formal Expression of Perception Tasks. Entropy (Basel, Switzerland) 2022; 24:578. PMID: 35626463; PMCID: PMC9141229; DOI: 10.3390/e24050578.
Abstract
View planning (VP) is a technique that guides the adjustment of sensor poses in multi-view perception tasks. It turns passive perception into active perception, which improves the intelligence of the robot and reduces its resource consumption. We propose a generic VP system for multiple kinds of visual perception tasks. The system is built on a formal description of the visual task, from which it computes the next best view. When dealing with a given visual task, we simply update its description as the input of the VP system and obtain the defined best view in real time. The formal description of a perception task comprises the task status, a prior-information library of the objects, the visual representation status, and the optimization goal. The task status and the visual representation status are updated whenever data are received at a new view. If the task status has not reached its goal, candidate views are ranked on the basis of the updated visual representation status, and the next best view, i.e., the one that minimizes the entropy of the model space, is returned as the output of the VP system. Experiments on view planning for 3D recognition and reconstruction tasks show that the algorithm performs well on different tasks.
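The entropy-minimizing selection step can be pictured with a minimal, generic next-best-view sketch; the scoring interface and the example posteriors below are assumptions for illustration only, not the authors' system:

```python
import numpy as np

def next_best_view(candidate_views, predict_posteriors):
    """Pick the candidate view whose predicted observation minimizes the
    entropy of the model space (generic next-best-view criterion).

    candidate_views   : iterable of view identifiers (e.g. sensor poses).
    predict_posteriors: view -> predicted probability distribution over
                        model hypotheses after observing from that view.
    """
    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    return min(candidate_views, key=lambda v: entropy(predict_posteriors(v)))

# Hypothetical usage: three candidate poses, predicted posteriors over 4 hypotheses.
views = ["pose_A", "pose_B", "pose_C"]
predicted = {"pose_A": [0.4, 0.3, 0.2, 0.1],
             "pose_B": [0.7, 0.1, 0.1, 0.1],
             "pose_C": [0.25, 0.25, 0.25, 0.25]}
print(next_best_view(views, predicted.get))   # -> "pose_B" (lowest predicted entropy)
```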
Affiliation(s)
- Yanzi Kong: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Feng Zhu: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Haibo Sun: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110819, China
- Zhiyuan Lin: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qun Wang: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China