1. Navarro-Guerrero N, Toprak S, Josifovski J, Jamone L. Visuo-haptic object perception for robots: an overview. Auton Robots 2023. [DOI: 10.1007/s10514-023-10091-y]
Abstract
The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
2. Li A, Ma X. Scalable Cognitive Developmental Network: a strategy for integrating new perception online using relation evolution SOINN. Cogn Syst Res 2023. [DOI: 10.1016/j.cogsys.2023.02.001]
3. Mugruza-Vassallo CA, Potter DD, Tsiora S, Macfarlane JA, Maxwell A. Prior context influences motor brain areas in an auditory oddball task and prefrontal cortex multitasking modelling. Brain Inform 2021; 8:5. [PMID: 33745089 PMCID: PMC7982371 DOI: 10.1186/s40708-021-00124-6]
Abstract
In this study, the relationship of orienting of attention, motor control and the Stimulus-Driven (SDN) and Goal-Driven Networks (GDN) was explored through an innovative method for fMRI analysis considering all voxels in four experimental conditions: standard target (Goal; G), novel (N), neutral (Z) and noisy target (NG). First, average reaction times (RTs) for each condition were calculated. In the second-level analysis, 'distracted' participants, as indicated by slower RTs, evoked brain activations and differences in both hemispheres' neural networks for selective attention, while the participants, as a whole, demonstrated mainly left cortical and subcortical activations. A context analysis was run in the behaviourally distracted participant group contrasting the trials immediately prior to the G trials, namely one of the Z, N or NG conditions, i.e. Z.G, N.G, NG.G. Results showed different prefrontal activations dependent on prior context in the auditory modality, recruiting between 1 and 10 prefrontal areas. The higher the motor response and influence of the previous novel stimulus, the more prefrontal areas were engaged, which extends the findings of hierarchical studies of prefrontal control of attention and better explains how auditory processing interferes with movement. The current study also addressed how subcortical loops and models of previous motor response affected the signal processing of the novel stimulus when this was presented laterally or simultaneously with the target. This multitasking model could enhance our understanding of how an auditory stimulus affects motor responses in a self-induced way, by taking into account prior context, as demonstrated in the standard condition and as supported by Pulvinar activations complementing visual findings. Moreover, current BCI work addresses some multimodal stimulus-driven systems.
Affiliation(s)
- Carlos A Mugruza-Vassallo
- Grupo de Investigación de Computación Y Neurociencia Cognitiva, Facultad de Ingeniería Y Gestión, Universidad Nacional Tecnológica de Lima Sur - UNTELS, Lima, Perú.
- Douglas D Potter
- Neuroscience and Development Group, Arts and Science, University of Dundee, Dundee, UK
- Stamatina Tsiora
- School of Psychology, University of Lincoln, Lincoln, United Kingdom
- Adele Maxwell
- Neuroscience and Development Group, Arts and Science, University of Dundee, Dundee, UK
4. Sun F, Liu H, Yang C, Fang B. Multimodal Continual Learning Using Online Dictionary Updating. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2020.2973280]
5. Wan C, Cai P, Guo X, Wang M, Matsuhisa N, Yang L, Lv Z, Luo Y, Loh XJ, Chen X. An artificial sensory neuron with visual-haptic fusion. Nat Commun 2020; 11:4602. [PMID: 32929071 PMCID: PMC7490423 DOI: 10.1038/s41467-020-18375-y]
Abstract
Human behaviors are extremely sophisticated, relying on the adaptive, plastic and event-driven network of sensory neurons. Such a neuronal system analyzes multiple sensory cues efficiently to establish an accurate depiction of the environment. Here, we develop a bimodal artificial sensory neuron to implement the sensory fusion processes. Such a bimodal artificial sensory neuron collects optic and pressure information from the photodetector and pressure sensors respectively, transmits the bimodal information through an ionic cable, and integrates them into post-synaptic currents by a synaptic transistor. The sensory neuron can be excited in multiple levels by synchronizing the two sensory cues, which enables the manipulation of skeletal myotubes and a robotic hand. Furthermore, the enhanced recognition capability achieved on fused visual/haptic cues is confirmed by simulation of a multi-transparency pattern recognition task. Our biomimetic design has the potential to advance technologies in cyborg and neuromorphic systems by endowing them with supramodal perceptual capabilities.
Affiliation(s)
- Changjin Wan
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Pingqiang Cai
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Xintong Guo
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Ming Wang
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Naoji Matsuhisa
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Le Yang
- Institute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), 138634, Singapore, Singapore
- Zhisheng Lv
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Yifei Luo
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore
- Institute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), 138634, Singapore, Singapore
- Xian Jun Loh
- Institute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), 138634, Singapore, Singapore
- Xiaodong Chen
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 639798, Singapore, Singapore.
6. Brain-Inspired Active Learning Architecture for Procedural Knowledge Understanding Based on Human-Robot Interaction. Cognit Comput 2020. [DOI: 10.1007/s12559-020-09753-1]
7. Solvi C, Gutierrez Al-Khudhairy S, Chittka L. Bumble bees display cross-modal object recognition between visual and tactile senses. Science 2020; 367:910-912. [PMID: 32079771 DOI: 10.1126/science.aay8064]
Abstract
Many animals can associate object shapes with incentives. However, such behavior is possible without storing images of shapes in memory that are accessible to more than one sensory modality. One way to explore whether there are modality-independent internal representations of object shapes is to investigate cross-modal recognition: experiencing an object in one sensory modality and later recognizing it in another. We show that bumble bees trained to discriminate two differently shaped objects (cubes and spheres) using only touch (in darkness) or vision (in light, but barred from touching the objects) could subsequently discriminate those same objects using only the other sensory information. Our experiments demonstrate that bumble bees possess the ability to integrate sensory information in a way that requires modality-independent internal representations.
Affiliation(s)
- Cwyn Solvi
- School of Biological and Chemical Sciences, Queen Mary University of London, London E1 4NS, UK. Department of Biological Sciences, Macquarie University, North Ryde, NSW 2109, Australia
- Lars Chittka
- School of Biological and Chemical Sciences, Queen Mary University of London, London E1 4NS, UK
8. Ferreira CD, Gadelha MJN, Fonsêca ÉKGD, Silva JSCD, Torro N, Fernández-Calvo B. Long-term memory of haptic and visual information in older adults. Aging Neuropsychol Cogn 2020; 28:65-77. [PMID: 31891286 DOI: 10.1080/13825585.2019.1710450]
Abstract
The present study examined haptic and visual memory capacity for familiar objects through the application of an intentional free-recall task with three time intervals in a sample of 78 healthy older adults without cognitive impairment. A wooden box and a turntable were used for the presentation of haptic and visual stimuli, respectively. The procedure consisted of two phases: a study phase in which the stimuli were presented, and a test phase (free-recall task) performed after one hour, one day or one week. The analysis of covariance (ANCOVA) indicated that there was a main effect only for the time intervals (F(2,71) = 12.511, p = .001, η² = 0.261), with a lower recall index for the interval of one week compared to the other intervals. We concluded that the memory capacity between the systems (haptic and visual) is similar for long retrieval intervals (hours to days).
Affiliation(s)
- Cyntia Diógenes Ferreira
- Laboratory of Cognitive Science and Perception, Department of Psychology, Federal University of Paraiba, João Pessoa, Brazil
- Nelson Torro
- Laboratory of Cognitive Science and Perception, Department of Psychology, Federal University of Paraiba, João Pessoa, Brazil
- Bernardino Fernández-Calvo
- Laboratory of Aging and Neuropsychological Disorder, Department of Psychology, Federal University of Paraiba, João Pessoa, Brazil
9. A Multiscale Hierarchical Threshold-Based Completed Local Entropy Binary Pattern for Texture Classification. Cognit Comput 2019. [DOI: 10.1007/s12559-019-09673-9]