1
Krieger K, Egger J, Kleesiek J, Gunzer M, Chen J. Multisensory Extended Reality Applications Offer Benefits for Volumetric Biomedical Image Analysis in Research and Medicine. J Imaging Inform Med 2024. PMID: 38862851; DOI: 10.1007/s10278-024-01094-x.
Abstract
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine. While the rapid development of AI enhances imaging and analysis, commonly used visualization methods lag far behind. Recent research used extended reality (XR) to perceive 3D images with visual depth perception and touch, but relied on restrictive haptic devices. Although unrestricted touch benefits volumetric data examination, implementing natural haptic interaction in XR is challenging. The research question is whether a multisensory XR application with intuitive haptic interaction adds value and should be pursued. In a study, 24 experts in biomedical imaging from research and medicine explored 3D medical shapes with three applications: a multisensory virtual reality (VR) prototype using haptic gloves, a simple VR prototype using controllers, and a standard PC application. Standardized questionnaires showed no significant differences among the application types regarding usability, and no significant difference between the two VR applications regarding presence. Participants agreed that VR visualizations provide better depth information, that using the hands instead of controllers simplifies data exploration, that the multisensory VR prototype allows intuitive data exploration, and that it is beneficial over traditional data examination methods. While most participants named manual interaction as the best aspect, they also found it the most in need of improvement. We conclude that a multisensory XR application with improved manual interaction adds value for volumetric biomedical data examination. We will proceed with our open-source research project ISH3DE (Intuitive Stereoptic Haptic 3D Data Exploration) to serve medical education, therapeutic decisions, surgical preparation, and research data analysis.
Affiliation(s)
- Kathrin Krieger
- Biospectroscopy, Leibniz-Institut for Analytical Science-ISAS-e.V., Bunsen-Kirchhoff-Str. 11, Dortmund, 44139, NRW, Germany
- Neuroinformatics Group, Faculty of Technology, Bielefeld University, Inspiration 1, Bielefeld, 33619, NRW, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, University of Duisburg-Essen, Girardetstr. 2, Essen, 45131, NRW, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, Essen, 45147, NRW, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, University of Duisburg-Essen, Girardetstr. 2, Essen, 45131, NRW, Germany
- Matthias Gunzer
- Biospectroscopy, Leibniz-Institut for Analytical Science-ISAS-e.V., Bunsen-Kirchhoff-Str. 11, Dortmund, 44139, NRW, Germany
- Institute for Experimental Immunology and Imaging, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, Essen, 45147, NRW, Germany
- Jianxu Chen
- Biospectroscopy, Leibniz-Institut for Analytical Science-ISAS-e.V., Bunsen-Kirchhoff-Str. 11, Dortmund, 44139, NRW, Germany
2
Augière T, Simoneau M, Mercier C. Visuotactile integration in individuals with fibromyalgia. Front Hum Neurosci 2024; 18:1390609. PMID: 38826615; PMCID: PMC11140151; DOI: 10.3389/fnhum.2024.1390609.
Abstract
Our brain constantly integrates afferent information, such as visual and tactile input, to perceive the world around us. According to the maximum-likelihood estimation (MLE) model, imprecise information is weighted less than precise information, making the multisensory percept as precise as possible. Individuals with fibromyalgia (FM), a chronic pain syndrome, show alterations in the integration of tactile information. This could decrease its weight in a multisensory percept or generally disrupt multisensory integration, making it less beneficial. To assess multisensory integration, 15 participants with FM and 18 pain-free controls performed a temporal-order judgment task in which they received pairs of sequential visual, tactile (unisensory conditions), or visuotactile (multisensory condition) stimulations on the index finger and thumb of the non-dominant hand and had to determine which finger was stimulated first. The task enabled us to measure the precision and accuracy of the percept in each condition. Results indicate an increase in precision in the visuotactile condition compared with the unisensory conditions in controls only, although we found no intergroup differences. The observed visuotactile precision correlated with the precision predicted by the MLE model in both groups, suggesting optimal integration. Finally, the weights of the sensory information did not differ between the groups; however, in the group with FM, higher pain intensity was associated with smaller tactile weight. This study shows no alteration of visuotactile integration in individuals with FM, though pain may influence tactile weight in these participants.
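For readers unfamiliar with the model, the MLE prediction tested in this study can be sketched in a few lines. This is a minimal illustration; the function names and the example precision values are ours, not the study's:

```python
import numpy as np

def mle_combined_sd(sd_visual, sd_tactile):
    """Predicted SD of the visuotactile percept under the MLE model:
    combined variance is the product of the single-cue variances
    divided by their sum, so it never exceeds the better single cue."""
    var_v, var_t = sd_visual**2, sd_tactile**2
    return np.sqrt(var_v * var_t / (var_v + var_t))

def mle_weights(sd_visual, sd_tactile):
    """Reliability-based (inverse-variance) weights; they sum to 1,
    so the less precise cue receives the smaller weight."""
    rel_v, rel_t = 1 / sd_visual**2, 1 / sd_tactile**2
    w_v = rel_v / (rel_v + rel_t)
    return w_v, 1 - w_v
```

For example, with hypothetical temporal-order thresholds of 30 ms (visual) and 40 ms (tactile), the model predicts a combined threshold of 24 ms and a visual weight of 0.64, which is the kind of prediction the study compares against observed visuotactile precision.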
Affiliation(s)
- Tania Augière
- Center for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), CIUSSS de la Capitale-Nationale, Quebec, QC, Canada
- School of Rehabilitation Sciences, Faculty of Medicine, Laval University, Quebec, QC, Canada
- Martin Simoneau
- Center for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), CIUSSS de la Capitale-Nationale, Quebec, QC, Canada
- Department of Kinesiology, Faculty of Medicine, Laval University, Quebec, QC, Canada
- Catherine Mercier
- Center for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), CIUSSS de la Capitale-Nationale, Quebec, QC, Canada
- School of Rehabilitation Sciences, Faculty of Medicine, Laval University, Quebec, QC, Canada
3
Chen Q, Dong Y, Gai Y. Tactile Location Perception Encoded by Gamma-Band Power. Bioengineering (Basel) 2024; 11:377. PMID: 38671798; PMCID: PMC11048554; DOI: 10.3390/bioengineering11040377.
Abstract
BACKGROUND The perception of tactile-stimulation locations is an important function of the human somatosensory system during body movements and interactions with the surroundings. Previous psychophysical and neurophysiological studies have focused on spatial location perception of the upper body. In this study, we recorded single-trial electroencephalography (EEG) responses evoked by four vibrotactile stimulators placed on the buttocks and thighs while the subject sat in a chair with a cushion. METHODS Briefly, 14 human subjects were instructed to sit in a chair for a duration of 1 h or 1 h and 45 min. Two types of cushions were tested with each subject: a foam cushion and an air-cell-based cushion designed for wheelchair users to alleviate tissue stress. Vibrotactile stimulations were applied to the sitting interface at the beginning and end of the sitting period. Somatosensory evoked potentials were obtained using a 32-channel EEG. An artificial neural net was used to predict the tactile locations based on the evoked EEG power. RESULTS We found that single-trial beta (13-30 Hz) and gamma (30-50 Hz) waves best predicted the tactor locations, with an accuracy of up to 65%. Female subjects showed the highest performance, while males' sensitivity tended to degrade after the sitting period. A three-way ANOVA indicated that the air-cell cushion maintained location sensitivity better than the foam cushion. CONCLUSION Our findings show that tactile location information is encoded in EEG responses, providing insight into the fundamental mechanisms of the tactile system as well as applications in brain-computer interfaces that rely on tactile stimulation.
Affiliation(s)
- Yan Gai
- Biomedical Engineering, School of Science and Engineering, Saint Louis University, 3507 Lindell Blvd, St. Louis, MO 63103, USA; (Q.C.); (Y.D.)
4
Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024; 56:2842-2858. PMID: 37730934; PMCID: PMC11133123; DOI: 10.3758/s13428-023-02227-w.
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains, the classic hallmark of cue combination, is important for characterising perceptual systems, their development, and their functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast or used an analysis approach that unknowingly inflated false positives. Using simulations, we demonstrate that this approach yields significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate careful comparator selection and task design to ensure that cue combination is tested with maximum power while reducing the inflation of false positives.
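The comparator issue the authors raise can be made concrete with a small simulation. This is an illustrative sketch with hypothetical observer SDs; the names (`precision_gain`, the cue labels) are ours, not the paper's:

```python
import numpy as np

def precision_gain(sd_a, sd_b, sd_combined):
    """Per-observer precision gain: each observer's combined-cue SD is
    compared against that SAME observer's best (smallest) single-cue SD.
    A gain > 1 indicates better-than-best-single-cue precision."""
    return np.minimum(sd_a, sd_b) / sd_combined

# Simulated observers who do NOT combine cues: each simply relies on
# whichever cue happens to be more reliable for them (cue switching).
rng = np.random.default_rng(0)
n = 20
sd_a = rng.uniform(1.0, 3.0, n)
sd_b = rng.uniform(1.0, 3.0, n)
sd_combined = np.minimum(sd_a, sd_b)   # no integration at all

# Correct, per-observer comparator: gain is exactly 1 for everyone,
# so no spurious combination effect is detected.
correct_gain = precision_gain(sd_a, sd_b, sd_combined)

# Wrong, group-level comparator (say cue A is nominally "best" for the
# group): every observer whose better cue was B now shows a fake "gain".
wrong_gain = sd_a / sd_combined
```

Under the wrong comparator, `wrong_gain` exceeds 1 for roughly half the simulated observers despite zero integration, which is the false-positive inflation the paper demonstrates at scale.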
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
5
Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. PMID: 38270853; DOI: 10.1007/978-981-99-7611-9_4.
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with that of various other populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
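The causal-inference computation this chapter reviews can be sketched directly. Below is a minimal implementation of the standard normative model (in the form introduced by Körding et al., 2007) for deciding whether a visual and an auditory signal share a common cause, assuming a zero-centered Gaussian spatial prior; the parameter names are ours:

```python
import numpy as np

def posterior_common_cause(x_v, x_a, sd_v, sd_a, sd_prior, p_common):
    """Posterior probability that two noisy signals x_v and x_a arose
    from one common source, given sensory noise SDs, a zero-centered
    Gaussian prior over source location, and a prior p(C = 1)."""
    var_v, var_a, var_p = sd_v**2, sd_a**2, sd_prior**2
    # Marginal likelihood of the signal pair under a common cause (C = 1):
    det1 = var_v * var_a + var_v * var_p + var_a * var_p
    like1 = np.exp(-((x_v - x_a)**2 * var_p + x_v**2 * var_a + x_a**2 * var_v)
                   / (2 * det1)) / (2 * np.pi * np.sqrt(det1))
    # Marginal likelihood under independent causes (C = 2):
    var2_v, var2_a = var_v + var_p, var_a + var_p
    like2 = np.exp(-x_v**2 / (2 * var2_v) - x_a**2 / (2 * var2_a)) \
            / (2 * np.pi * np.sqrt(var2_v * var2_a))
    # Bayes' rule over the two causal structures:
    return p_common * like1 / (p_common * like1 + (1 - p_common) * like2)
```

Coincident signals push the posterior toward a common cause (and hence toward fusion), while widely discrepant signals push it toward independent causes (segregation); changing `sd_v`, `sd_a`, or `p_common` models the parameter changes the chapter discusses for atypical populations.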
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6
Soballa P, Frings C, Schmalbrock P, Merz S. Multisensory integration reduces landmark distortions for tactile but not visual targets. J Neurophysiol 2023; 130:1403-1413. PMID: 37910559; DOI: 10.1152/jn.00282.2023.
Abstract
Target localization is influenced by additionally presented nontargets, termed landmarks. In both the visual and the tactile modality, these landmarks lead to systematic distortions of target localization, often a shift toward the landmark. This shift has been attributed to averaging of the spatial memory of both stimuli. Crucially, everyday experiences often rely on multiple modalities, and multisensory research suggests that inputs from different senses are optimally integrated, not averaged, for accurate perception, resulting in more reliable perception of cross-modal than unimodal stimuli. As this could also reduce the influence of the landmark, we tested whether landmark distortions are reduced when the landmark is presented in a different modality or whether they are unaffected by the modalities involved. In two experiments (each n = 30), tactile or visual targets were paired with tactile or visual landmarks. Experiment 1 showed that targets were shifted less toward landmarks from a different modality than from the same modality, an effect more pronounced for tactile than for visual targets. Experiment 2 aimed to replicate this pattern with increased visual uncertainty, to rule out that smaller localization shifts of visual targets due to low uncertainty had driven the results. Still, landmark modality influenced localization shifts for tactile but not visual targets. The data pattern for tactile targets is not in line with memory averaging but seems to reflect multisensory integration, whereas visual targets were less prone to landmark distortions and do not appear to benefit from multisensory integration. NEW & NOTEWORTHY In the present study, we directly tested the predictions of two accounts, spatial memory averaging and multisensory integration, concerning the degree of landmark distortions of targets across modalities. We showed that landmark distortions were reduced across modalities compared with distortions within modalities, in line with multisensory integration. Crucially, this pattern was more pronounced for tactile than for visual targets.
Affiliation(s)
- Paula Soballa
- Department of Psychology, University of Trier, Germany
- Simon Merz
- Department of Psychology, University of Trier, Germany
7
Brizzi G, Sansoni M, Di Lernia D, Frisone F, Tuena C, Riva G. The multisensory mind: a systematic review of multisensory integration processing in Anorexia and Bulimia Nervosa. J Eat Disord 2023; 11:204. PMID: 37974266; PMCID: PMC10655389; DOI: 10.1186/s40337-023-00930-9.
Abstract
Individuals with Anorexia Nervosa and Bulimia Nervosa present alterations in the way they experience their bodies. Body experience results from a multisensory integration process in which information from different sensory domains and spatial reference frames is combined into a coherent percept. Given the critical role of the body in the onset and maintenance of both Anorexia Nervosa and Bulimia Nervosa, we conducted a systematic review to examine the multisensory integration abilities of individuals affected by these two conditions and to investigate whether they exhibit impairments in crossmodal integration. We searched for studies evaluating crossmodal integration in individuals with a current diagnosis of Anorexia Nervosa or Bulimia Nervosa, as compared to healthy individuals, from both behavioral and neurobiological perspectives. A search of the PubMed, PsycINFO, and Web of Science databases was performed to extract relevant articles. Of the 2348 studies retrieved, 911 were unique articles. After screening, 13 articles were included. Studies revealed multisensory integration abnormalities in patients affected by Anorexia Nervosa; only one included individuals with Bulimia Nervosa, observing less severe impairments compared to healthy controls. Overall, the results seem to support the presence of multisensory deficits in Anorexia Nervosa, especially when integrating interoceptive and exteroceptive information. We propose the Predictive Coding framework for understanding our findings and suggest future lines of investigation.
Affiliation(s)
- Giulia Brizzi
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Maria Sansoni
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Daniele Di Lernia
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Fabio Frisone
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Cosimo Tuena
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Giuseppe Riva
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
8
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304; PMCID: PMC10404931; DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception: multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information, and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
9
Navarro-Guerrero N, Toprak S, Josifovski J, Jamone L. Visuo-haptic object perception for robots: an overview. Auton Robots 2023. DOI: 10.1007/s10514-023-10091-y.
Abstract
The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
10
Abstract
SUMMARY STATEMENT Simulation-based training using virtual reality head-mounted displays (VR-HMD) is increasingly used within the field of medical education. This article systematically reviews and appraises the quality of the literature on the use of VR-HMDs in medical education. A search of the databases PubMed/MEDLINE, Embase, ERIC, Scopus, Web of Science, Cochrane Library, and PsycINFO was carried out. Studies were screened according to predefined exclusion criteria, and quality was assessed using the Medical Education Research Study Quality Instrument. In total, 41 articles were included and thematically divided into 5 groups: anatomy, procedural skills, surgical procedures, communication skills, and clinical decision making. Participants highly appreciated using VR-HMD and rated it better than most other training methods. VR-HMD outperformed traditional methods of learning surgical procedures. Although VR-HMD showed promising results for learning anatomy, it was not considered better than other available study materials. No conclusive findings could be synthesized for the remaining 3 groups.
11
Adriano A, Rinaldi L, Girelli L. Spatial frequency equalization does not prevent spatial-numerical associations. Psychon Bull Rev 2022; 29:1492-1502. PMID: 35132580; PMCID: PMC8821778; DOI: 10.3758/s13423-022-02060-w.
Abstract
There is an intense debate surrounding the origin of spatial-numerical associations (SNAs), according to which small numbers are mapped onto the left side of space and large numbers onto the right. Despite evidence suggesting that SNAs emerge as an innate predisposition to map numerical information onto a left-to-right spatially oriented mental representation, alternative accounts have challenged these proposals, maintaining that such a mapping would be the result of mere spatial frequency (SF) coding of any visual image. That is, any smaller or larger array of objects would naturally contain more low or high SF information and, accordingly, each hemisphere would be preferentially tuned to only one SF range (e.g., the right hemisphere tuned to low SF and the left hemisphere tuned to high SF). This would produce the typical SNA (e.g., faster RTs for small numerical arrays with the left hand and for large numerical arrays with the right hand). To directly probe the role of SF coding in SNAs, we tested participants in a typical dot-array comparison task with two numerical sets: one in which SFs were confounded with numerosity (Experiment 1) and one in which the full SF power spectrum was equalized across all stimuli, keeping this cue uninformative about numerosity (Experiment 2). We found that SNAs emerged in both experiments, independently of whether SF was confounded with numerosity. Taken together, these findings suggest that SNAs cannot originate from the SF power spectrum alone, and thus they rule out the brain's asymmetric SF tuning as a primary cause of the effect.
Affiliation(s)
- Andrea Adriano
- Dipartimento di Psicologia, Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Edificio U6, 20126, Milano, Italy
- Luca Rinaldi
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Cognitive Psychology Unit, IRCCS Mondino Foundation, Pavia, Italy
- Luisa Girelli
- Dipartimento di Psicologia, Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Edificio U6, 20126, Milano, Italy
- NeuroMI, Milan Center for Neuroscience, Milano, Italy
12
Camponogara I, Volcic R. Visual uncertainty unveils the distinct role of haptic cues in multisensory grasping. eNeuro 2022; 9:ENEURO.0079-22.2022. PMID: 35641223; PMCID: PMC9215692; DOI: 10.1523/ENEURO.0079-22.2022.
Abstract
Human multisensory grasping movements (i.e., seeing and feeling a handheld object while grasping it with the contralateral hand) are superior to movements guided by each separate modality. This multisensory advantage might be driven by the integration of vision with either the haptic position cue only or with both position and size cues. To contrast these two hypotheses, we manipulated visual uncertainty (central vs. peripheral vision) and the availability of haptic cues during multisensory grasping. We showed a multisensory benefit irrespective of the degree of visual uncertainty, suggesting that the integration process involved in multisensory grasping can be flexibly modulated by the contribution of each modality. Increasing visual uncertainty revealed the role of the distinct haptic cues. The haptic position cue was sufficient to promote multisensory benefits, evidenced by faster actions with smaller grip apertures, whereas the haptic size cue was fundamental in fine-tuning grip aperture scaling. These results support the hypothesis that, in multisensory grasping, vision is integrated with all haptic cues, with the haptic position cue playing the key part. Our findings highlight the important role of non-visual sensory inputs in sensorimotor control and hint at the potential contributions of the haptic modality in developing and maintaining visuomotor functions. Significance statement: The longstanding view of vision as the primary sense guiding grasping movements relegates the equally important haptic inputs, such as touch and proprioception, to a secondary role. Here we show that, as visual uncertainty increases during visuo-haptic grasping, the central nervous system exploits distinct haptic inputs about object position and size to optimize grasping performance. Specifically, we demonstrate that haptic inputs about object position are fundamental in supporting vision to enhance grasping performance, whereas haptic size inputs can further refine hand shaping. Our results provide strong evidence that non-visual inputs serve an important, previously under-appreciated, functional role in grasping.
Affiliation(s)
- Ivan Camponogara
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Robert Volcic
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
13
Kossowsky H, Farajian M, Nisky I. The Effect of Kinesthetic and Artificial Tactile Noise and Variability on Stiffness Perception. IEEE Trans Haptics 2022; 15:351-362. PMID: 35271449; DOI: 10.1109/TOH.2022.3158386.
Abstract
Robot-assisted minimally invasive surgeries (RAMIS) have many benefits. A disadvantage, however, is the lack of haptic feedback. Haptic feedback comprises kinesthetic and tactile information, and we use both to form stiffness perception. Applying both kinesthetic and tactile feedback can enable more precise feedback than kinesthetic feedback alone. However, during remote surgeries, haptic noise and variations can be present. Therefore, toward designing haptic feedback for RAMIS, it is important to understand the effect of haptic manipulations on stiffness perception. We assessed the effect of two manipulations using stiffness discrimination tasks in which participants received force feedback and artificial skin stretch. In Experiment 1, we added sinusoidal noise to the artificial tactile signal and found that the noise did not affect participants' stiffness perception or uncertainty. In Experiment 2, we varied either the kinesthetic or the artificial tactile information between consecutive interactions with an object. We found that neither form of variability affected stiffness perception, but kinesthetic variability increased participants' uncertainty. We show that haptic feedback comprising force feedback and artificial skin stretch provides robust haptic information even in the presence of noise and variability, and hence can be both beneficial and viable in RAMIS.
14. Muukkonen I, Kilpeläinen M, Turkkila R, Saarela T, Salmela V. Obligatory integration of face features in expression discrimination. Visual Cognition 2022. [DOI: 10.1080/13506285.2022.2046222]
Affiliation(s)
- I. Muukkonen, M. Kilpeläinen, R. Turkkila, T. Saarela, V. Salmela: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
15. Scarfe P. Experimentally disambiguating models of sensory cue integration. J Vis 2022;22:5. [PMID: 35019955] [PMCID: PMC8762719] [DOI: 10.1167/jov.22.1.5]
Abstract
Sensory cue integration is one of the primary areas in which a normative mathematical framework has been used to define the “optimal” way in which to make decisions based upon ambiguous sensory information and compare these predictions to behavior. The conclusion from such studies is that sensory cues are integrated in a statistically optimal fashion. However, numerous alternative computational frameworks exist by which sensory cues could be integrated, many of which could be described as “optimal” based on different criteria. Existing studies rarely assess the evidence relative to different candidate models, resulting in an inability to conclude that sensory cues are integrated according to the experimenter's preferred framework. The aims of the present paper are to summarize and highlight the implicit assumptions rarely acknowledged in testing models of sensory cue integration, as well as to introduce an unbiased and principled method by which to determine, for a given experimental design, the probability with which a population of observers behaving in accordance with one model of sensory integration can be distinguished from the predictions of a set of alternative models.
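The "statistically optimal" benchmark these studies test behavior against is usually minimum-variance, reliability-weighted cue fusion. The sketch below is an illustrative textbook rendering of that standard model only, not code or parameter values from the paper:

```python
import numpy as np

def fuse_cues(mean_a, sigma_a, mean_b, sigma_b):
    """Minimum-variance (maximum-likelihood) fusion of two Gaussian cues.

    Each cue is weighted by its reliability (inverse variance), so the
    fused estimate is less variable than either cue alone.
    """
    w_a = sigma_b**2 / (sigma_a**2 + sigma_b**2)  # weight on cue A
    fused_mean = w_a * mean_a + (1.0 - w_a) * mean_b
    fused_sigma = np.sqrt((sigma_a**2 * sigma_b**2)
                          / (sigma_a**2 + sigma_b**2))
    return fused_mean, fused_sigma
```

For two equally reliable cues the fused estimate is their average and its standard deviation shrinks by a factor of sqrt(2). The alternative frameworks the paper discusses (e.g., switching between cues or vetoing one of them) predict different means and variances, which is why the proposed model-comparison method is needed to distinguish them.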
Affiliation(s)
- Peter Scarfe: Vision and Haptics Laboratory, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK

16. Using Immersive Virtual Reality to Examine How Visual and Tactile Cues Drive the Material-Weight Illusion. Atten Percept Psychophys 2021;84:509-518. [PMID: 34862589] [PMCID: PMC8641965] [DOI: 10.3758/s13414-021-02414-x]
Abstract
The material-weight illusion (MWI) demonstrates how our past experience with material and weight can create expectations that influence the perceived heaviness of an object. Here we used mixed-reality to place touch and vision in conflict, to investigate whether the modality through which materials are presented to a lifter could influence the top-down perceptual processes driving the MWI. University students lifted equally-weighted polystyrene, cork and granite cubes whilst viewing computer-generated images of the cubes in virtual reality (VR). This allowed the visual and tactile material cues to be altered, whilst all other object properties were kept constant. Representation of the objects’ material in VR was manipulated to create four sensory conditions: visual-tactile matched, visual-tactile mismatched, visual differences only and tactile differences only. A robust MWI was induced across all sensory conditions, whereby the polystyrene object felt heavier than the granite object. The strength of the MWI differed across conditions, with tactile material cues having a stronger influence on perceived heaviness than visual material cues. We discuss how these results suggest a mechanism whereby multisensory integration directly impacts how top-down processes shape perception.
17. Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021;17:e1008877. [PMID: 34780469] [PMCID: PMC8629398] [DOI: 10.1371/journal.pcbi.1008877]
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli. Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this end, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model can also replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
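The first two candidate models are simple to state computationally. The sketch below illustrates only those two baselines (the causal-inference model additionally requires inferring whether the cues share a common source); all function names and numbers are ours, not the paper's:

```python
def reliability_based(discrepancy, var_aud, var_vis):
    """Reliability-based recalibration: the less reliable modality
    (larger variance) absorbs more of the audiovisual discrepancy."""
    total = var_aud + var_vis
    return (var_aud / total * discrepancy,   # auditory shift
            var_vis / total * discrepancy)   # visual shift

def fixed_ratio(discrepancy, aud_fraction=0.9):
    """Fixed-ratio recalibration: each modality shifts by a fixed
    fraction of the discrepancy, regardless of cue reliability."""
    return aud_fraction * discrepancy, (1.0 - aud_fraction) * discrepancy
```

Under the reliability-based model, degrading visual reliability must monotonically reduce auditory recalibration; under the fixed-ratio model it has no effect. The non-monotonic patterns observed in some participants are what rule both baselines out.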
Affiliation(s)
- Fangfang Hong: Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde: Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy: Department of Psychology and Center for Neural Science, New York University, New York City, New York, United States of America
18. Carboni G, Nanayakkara T, Takagi A, Burdet E. Adapting the visuo-haptic perception through muscle coactivation. Sci Rep 2021;11:21986. [PMID: 34753996] [PMCID: PMC8578662] [DOI: 10.1038/s41598-021-01344-w]
Abstract
While the nervous system can coordinate muscles' activation to shape the mechanical interaction with the environment, it is unclear if and how the arm's coactivation influences visuo-haptic perception and motion planning. Here we show that the nervous system can voluntarily coactivate muscles to improve the quality of the haptic percept. Subjects tracked a randomly moving visual target they were physically coupled to through a virtual elastic band, where the stiffness of the coupling increased with wrist coactivation. Subjects initially relied on vision alone to track the target, but with practice they learned to combine the visual and haptic percepts in a Bayesian manner to improve their tracking performance. This improvement cannot be explained by the stronger mechanical guidance from the elastic band. These results suggest that with practice the nervous system can learn to integrate a novel haptic percept with vision in an optimal fashion.
Affiliation(s)
- Gerolamo Carboni: Imperial College of Science, Technology and Medicine, SW7 2AZ, London, UK
- Atsushi Takagi: NTT Communication Science Laboratories, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Etienne Burdet: Imperial College of Science, Technology and Medicine, SW7 2AZ, London, UK
19. Farajian M, Leib R, Kossowsky H, Nisky I. Visual Feedback Weakens the Augmentation of Perceived Stiffness by Artificial Skin Stretch. IEEE Transactions on Haptics 2021;14:686-691. [PMID: 33465030] [DOI: 10.1109/toh.2021.3052912]
Abstract
Tactile stimulation devices are gaining popularity in haptic science and technology: they are lightweight and low-cost, can be wearable, and do not suffer from instability during closed-loop interactions with users. Applying tactile stimulation by stretching the fingerpad skin concurrently with kinesthetic force feedback has been shown to augment the perceived stiffness during interactions with elastic objects. However, to date, the perceptual augmentation due to artificial skin stretch has been studied only in the absence of visual feedback. In this article, we tested whether this perceptual augmentation is robust when the stretch is applied in combination with visual displacement feedback. We used a forced-choice stiffness discrimination task with four conditions: force feedback, force feedback with skin stretch, force and visual feedback, and force and visual feedback with skin stretch. We found that the visual feedback weakens, but does not eliminate, the skin-stretch-induced perceptual effect. Additionally, we found no effect of visual feedback on discrimination precision.
20. Ogawa N, Narumi T, Hirose M. Effect of Avatar Appearance on Detection Thresholds for Remapped Hand Movements. IEEE Transactions on Visualization and Computer Graphics 2021;27:3182-3197. [PMID: 31940540] [DOI: 10.1109/tvcg.2020.2964758]
Abstract
Hand interaction techniques in virtual reality often exploit visual dominance over proprioception to remap physical hand movements onto different virtual movements. However, when the offset between virtual and physical hands increases, the remapped virtual hand movements are hardly self-attributed, and the users become aware of the remapping. Interestingly, the sense of self-attribution of a body is called the sense of body ownership (SoBO) in the field of psychology, and the more realistic the avatar, the stronger the SoBO. Hence, we hypothesized that realistic avatars (i.e., human hands) can foster self-attribution of the remapped movements better than abstract avatars (i.e., spherical pointers), thus making the remapping less noticeable. In this article, we present an experiment in which participants repeatedly executed reaching movements with their right hand while different amounts of horizontal shift were applied. We measured the remapping detection thresholds for each combination of shift direction (left or right) and avatar appearance (realistic or abstract). The results show that realistic avatars increased the detection threshold (i.e., lowered sensitivity) by 31.3 percent compared with abstract avatars when the leftward shift was applied (i.e., when the hand moved in the direction away from the body midline). In addition, the proprioceptive drift (i.e., the displacement of self-localization toward an avatar) was larger with realistic avatars for leftward shifts, indicating that visual information was given greater preference during visuo-proprioceptive integration with realistic avatars. Our findings quantifiably show that realistic avatars can make remapping less noticeable for larger mismatches between virtual and physical movements and can potentially improve a wide variety of hand-remapping techniques without changing the mapping itself.
21. de Farias C, Marturi N, Stolkin R, Bekiroglu Y. Simultaneous Tactile Exploration and Grasp Refinement for Unknown Objects. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3063074]
22. Adriano A, Girelli L, Rinaldi L. The ratio effect in visual numerosity comparisons is preserved despite spatial frequency equalisation. Vision Res 2021;183:41-52. [PMID: 33676137] [DOI: 10.1016/j.visres.2021.01.011]
Abstract
How non-symbolic numerosity is visually extracted remains a matter of intense debate. Most evidence suggests that numerosity is directly extracted on individual objects following Weber's law, at least for a moderate numerical range. Alternative accounts propose that, whatever the range, numerosity is indirectly derived from summary texture-statistics of the raw image such as spatial frequency (SF). Here, to disentangle these accounts, we tested whether the well-known behavioural signature of numerosity encoding (ratio effect) is preserved despite the equalisation of the SF content. In Experiment 1, participants had to select the numerically larger of two briefly presented moderate-range numerical sets (i.e., 8-18 dots) carefully matched for SF; the ratio between numerosities was manipulated by levels of increasing difficulty (e.g., 0.66, 0.75, 0.8). In Experiment 2, participants performed the same task, but they were presented with both the original and SF equalised stimuli. In both experiments, the results clearly showed a ratio-dependence of the performance: numerosity discrimination became harder and slower as the ratio between numerosities increased. Moreover, this effect was found to be independent of the stimulus type, although the overall performance was better with the original rather than the SF equalised stimuli (Experiment 2). Taken together, these findings indicate that the power spectrum per se cannot explain the main behavioural signature of Weber-like encoding of numerosities (the ratio effect), at least over the tested numerical range, partially challenging alternative indirect accounts of numerosity processing.
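Under the Weber-like (log-Gaussian) encoding account that the ratio effect signals, discriminability depends only on the ratio of the two numerosities, not on their absolute values. A minimal sketch of that behavioural signature (the Weber fraction here is an arbitrary illustrative value, not an estimate from the study):

```python
import math

def dprime(n1, n2, weber_fraction=0.2):
    """Discriminability of two numerosities under a log-Gaussian
    Weber model: d' depends only on the ratio n2/n1."""
    return abs(math.log(n2 / n1)) / weber_fraction
```

Ratios closer to 1 yield lower d', reproducing the harder-and-slower pattern the authors report, while doubling both numerosities leaves d' unchanged; a pure spatial-frequency account would not predict this ratio invariance once the power spectrum is equalised.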
Affiliation(s)
- Andrea Adriano: Department of Psychology, University of Milano-Bicocca, Italy
- Luisa Girelli: Department of Psychology, University of Milano-Bicocca, Italy; NeuroMI, Milan Center for Neuroscience, Milano, Italy
- Luca Rinaldi: Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy

23. A Systematic Comparison of Perceptual Performance in Softness Discrimination with Different Fingers. Atten Percept Psychophys 2020;82:3696-3709. [PMID: 32686066] [PMCID: PMC7536162] [DOI: 10.3758/s13414-020-02100-4]
Abstract
In studies investigating haptic softness perception, participants are typically instructed to explore soft objects by indenting them with their index finger. In contrast, performance with other fingers has rarely been investigated. We wondered which fingers are used in spontaneous exploration and whether performance differences between fingers can explain spontaneous usage. In Experiment 1, participants discriminated the softness of two rubber stimuli with hardly any constraints on finger movements. Results indicate that humans use successive phases of different fingers and finger combinations during an exploration, preferring the index, middle, and (to a lesser extent) ring finger. In Experiment 2, we compared discrimination thresholds between conditions in which participants used one of the four fingers of the dominant hand. Participants compared the softness of rubber stimuli in a two-interval forced-choice discrimination task. Performance with the index and middle fingers was better than with the ring and little fingers; the little finger was the worst. In Experiment 3, we again compared discrimination thresholds, but participants were told to use constant peak force. Performance with the little finger was worst, whereas performance with the other fingers did not differ. We conclude that the preference in spontaneous exploration for combinations of the index, middle, and partly ring finger seems well chosen, as indicated by improved performance with the spontaneously used fingers. Better performance seems to be based on both different motor abilities to produce force, mainly linked to using the index and middle finger, and different sensory sensitivities, mainly linked to avoiding the little finger.
24. Risso G, Martoni RM, Erzegovesi S, Bellodi L, Baud-Bovy G. Visuo-tactile shape perception in women with Anorexia Nervosa and healthy women with and without body concerns. Neuropsychologia 2020;149:107635. [PMID: 33058922] [DOI: 10.1016/j.neuropsychologia.2020.107635]
Abstract
A key feature of Anorexia Nervosa is body image disturbance. Research on this disturbance has focused mainly on visual and attitudinal aspects, has not always studied homogeneous groups of patients, and/or has not evaluated the body shape concerns of the control group. In this study, we used psychophysical methods to investigate the visual, tactile and bimodal perception of elliptical shapes in a group of patients with Anorexia Nervosa (AN), restricting type, and two groups of healthy participants, which differed from each other by the presence of concerns about their own bodies. We used an experimental paradigm designed to test the hypothesis that the perceptual deficits in AN reflect an impairment in multisensory integration. The results showed that the discrimination thresholds of AN patients are larger than those of the two control groups. While all participants overestimated the width of the ellipses, this distortion was more pronounced in AN patients and, to a lesser extent, in healthy women concerned about their bodies. All groups integrated visual and tactile information similarly in the bimodal conditions, which does not support the multisensory integration impairment hypothesis. We interpret these results within an integrated model of the perceptual deficits of Anorexia Nervosa based on a model of somatosensation that posits a link between tactile object perception and Mental Body Representations. Finally, we found that the participants' perceptual abilities were correlated with their clinical scores. This result should encourage further studies evaluating the potential of perceptual indexes as tools to support clinical practice.
Affiliation(s)
- G Risso: DIBRIS, University of Genova, Genova, Italy; RBCS, Istituto Italiano di Tecnologia, Genova, Italy
- L Bellodi: Ospedale San Raffaele, Milan, Italy; Faculty of Psychology, Università San Raffaele Vita Salute, Milan, Italy
- G Baud-Bovy: RBCS, Istituto Italiano di Tecnologia, Genova, Italy; Faculty of Psychology, Università San Raffaele Vita Salute, Milan, Italy

25. Lafleur A, Soulières I, Forgeot d'Arc B. Sense of agency: Sensorimotor signals and social context are differentially weighed at implicit and explicit levels. Conscious Cogn 2020;84:103004. [PMID: 32818928] [DOI: 10.1016/j.concog.2020.103004]
Abstract
Sense of agency (SoA) describes the experience of being the author of an action. Cue integration approaches divide SoA into an implicit level, mostly relying on prospective sensorimotor signals, and an explicit level, resulting from an integration of sensorimotor and contextual cues based on their reliability. Integration mechanisms at each level and the contribution of implicit to explicit SoA remain underspecified. In a task of movements with visual outcomes, we tested the effect of social context (contextual cue) and sensory prediction congruency (retrospective sensorimotor cue) over implicit (intentional binding) and explicit (verbal judgments) SoA. Our results suggest that prospective sensorimotor cues determine implicit SoA. At the explicit level, retrospective sensorimotor cues and contextual cues are partly integrated in an additive way, but contextual cues can also act as a heuristic if sensorimotor cues are highly unreliable. We also found no significant association between implicit and explicit SoA.
Affiliation(s)
- Alexis Lafleur: Département de Psychologie, Université du Québec à Montréal, Montréal, QC H2X 3P2, Canada
- Isabelle Soulières: Département de Psychologie, Université du Québec à Montréal, Montréal, QC H2X 3P2, Canada

26. Tsushima Y, Okada S, Kawai Y, Sumita A, Ando H, Miki M. Effect of illumination on perceived temperature. PLoS One 2020;15:e0236321. [PMID: 32776987] [PMCID: PMC7416916] [DOI: 10.1371/journal.pone.0236321]
Abstract
The widely known hue-heat effect, a multisensory phenomenon between vision and thermal sensing, is the hypothesis that light and colors affect perceived temperature. However, this effect has not seen widespread application in daily life. To work towards more practical use of the hue-heat effect, we conducted a series of psychophysical experiments investigating the relationship between perceived temperature and illumination in a well-controlled experimental environment. The results showed that illumination had three types of effects on our sense of coolness/warmness: creating, eliminating, and exchanging effects. Furthermore, we confirmed the existence of two distinctive time courses for the three effects: the creating effect starts immediately, whereas the eliminating effect takes time. These findings provide a better understanding of the hue-heat effect and enable its application in everyday life. Paired with new technologies, it can also help with energy conservation.
Affiliation(s)
- Yoshiaki Tsushima: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Kyoto, Japan
- Sho Okada: Faculty of Science and Engineering, Doshisha University, Kyoto, Japan
- Yuka Kawai: Faculty of Science and Engineering, Doshisha University, Kyoto, Japan
- Hiroshi Ando: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Kyoto, Japan
- Mitsunori Miki: Faculty of Science and Engineering, Doshisha University, Kyoto, Japan

27. Jasmin K, Dick F, Holt LL, Tierney A. Tailored perception: Individuals' speech and music perception strategies fit their perceptual abilities. J Exp Psychol Gen 2020;149:914-934. [PMID: 31589067] [PMCID: PMC7133494] [DOI: 10.1037/xge0000688]
Abstract
Perception involves integration of multiple dimensions that often serve overlapping, redundant functions, for example, pitch, duration, and amplitude in speech. Individuals tend to prioritize these dimensions differently (stable, individualized perceptual strategies), but the reason for this has remained unclear. Here we show that perceptual strategies relate to perceptual abilities. In a speech cue weighting experiment (trial N = 990), we first demonstrate that individuals with a severe deficit for pitch perception (congenital amusics; N = 11) categorize linguistic stimuli similarly to controls (N = 11) when the main distinguishing cue is duration, which they perceive normally. In contrast, in a prosodic task where pitch cues are the main distinguishing factor, we show that amusics place less importance on pitch and instead rely more on duration cues, even when pitch differences in the stimuli are large enough for amusics to discern. In a second experiment testing musical and prosodic phrase interpretation (N = 16 amusics; 15 controls), we found that relying on duration allowed amusics to overcome their pitch deficits to perceive speech and music successfully. We conclude that auditory signals, because of their redundant nature, are robust to impairments for specific dimensions, and that optimal speech and music perception strategies depend not only on invariant acoustic dimensions (the physical signal), but on perceptual dimensions whose precision varies across individuals. Computational models of speech perception (indeed, all types of perception involving redundant cues, e.g., vision and touch) should therefore aim to account for the precision of perceptual dimensions and characterize individuals as well as groups.
Affiliation(s)
- Fred Dick: Department of Psychological Sciences

28. Schneider TR, Buckingham G, Hermsdörfer J. Visual cues, expectations, and sensorimotor memories in the prediction and perception of object dynamics during manipulation. Exp Brain Res 2020;238:395-409. [PMID: 31932867] [PMCID: PMC7007906] [DOI: 10.1007/s00221-019-05711-y]
Abstract
When we grasp and lift novel objects, we rely on visual cues and sensorimotor memories to predictively scale our finger forces and exert compensatory torques according to object properties. Recently, it was shown that object appearance, previous force scaling errors, and previous torque compensation errors strongly impact our percept. However, the influence of visual geometric cues on the perception of object torques and weights in a grasp to lift task is poorly understood. Moreover, little is known about how visual cues, prior expectations, sensory feedback, and sensorimotor memories are integrated for anticipatory torque control and object perception. Here, 12 young and 12 elderly participants repeatedly grasped and lifted an object while trying to prevent object tilt. Before each trial, we randomly repositioned both the object handle, providing a geometric cue on the upcoming torque, as well as a hidden weight, adding an unforeseeable torque variation. Before lifting, subjects indicated their torque expectations, as well as reporting their experience of torque and weight after each lift. Mixed-effect multiple regression models showed that visual shape cues governed anticipatory torque compensation, whereas sensorimotor memories played less of a role. In contrast, the external torque and committed compensation errors at lift-off mainly determined how object torques and weight were perceived. The modest effect of handle position differed for torque and weight perception. Explicit torque expectations were also correlated with anticipatory torque compensation and torque perception. Our main findings generalized across both age groups. Our results suggest distinct weighting of inputs for action and perception according to reliability.
Affiliation(s)
- Thomas Rudolf Schneider: Chair of Human Movement Science, Department of Sport and Health Sciences, Technical University of Munich, Georg-Brauchle-Ring 60/62, 80992, Munich, Germany
- Gavin Buckingham: Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Heavitree Road, Exeter, EX1 2LU, UK
- Joachim Hermsdörfer: Chair of Human Movement Science, Department of Sport and Health Sciences, Technical University of Munich, Georg-Brauchle-Ring 60/62, 80992, Munich, Germany

29. Artacho MÁ, Alcántara E, Martínez N. Multisensory Analysis of Consumer-Product Interaction During Ceramic Tile Shopping Experiences. Multisens Res 2020;33:213-249. [PMID: 31648188] [DOI: 10.1163/22134808-20191391]
Abstract
The need to design products that engage several senses has been increasingly recognised by design and marketing professionals. Many works analyse the impact of sensory stimuli on the hedonic, cognitive, and emotional responses of consumers, as well as on their satisfaction and intention to purchase. However, there is much less information about the utilitarian dimension related to a sensory, non-reflective analysis of the tangible elements of the experience, the sequential role played by different senses, and their relative importance. This work analyses the sensory dimension of consumer interactions in shops. Consumers were filmed in two ceramic tile shops and their behaviour was analysed according to a previously validated checklist. The sequence of actions, their frequency of occurrence, and the duration of inspections were recorded, and consumers were classified according to their sensory exploration strategies. Results show that inspection patterns are intentional but shift throughout the interaction. Over the whole sequence, vision is the dominant sense, followed by touch. However, sensory dominance varies throughout the sequence; dominance differences appear both between senses and within the senses of vision, touch, and audition. Cluster analysis classified consumers into two groups: those who were more interactive and those who were visual, passive evaluators. These results are important for understanding consumer interaction patterns, which senses are involved (including their importance and hierarchy), and which sensory properties of tiles are evaluated during the shopping experience. Moreover, this information is crucial for setting design guidelines to improve sensory interactions and bridge sensory demands with product features.
Affiliation(s)
- Miguel Ángel Artacho
- Department of Engineering Projects, Universitat Politècnica de València, Building 5J, Camino de Vera s/n, 46022 Valencia, Spain
- Enrique Alcántara
- Institute of Biomechanics of Valencia, Universitat Politècnica de València, Building 9C, Camino de Vera s/n, 46022 Valencia, Spain
30
Mueller S, de Haas B, Metzger A, Drewing K, Fiehler K. Neural correlates of top-down modulation of haptic shape versus roughness perception. Hum Brain Mapp 2019;40:5172-5184. [PMID: 31430005] [PMCID: PMC6864886] [DOI: 10.1002/hbm.24764]
Abstract
Exploring an object's shape by touch also renders information about its surface roughness. It has been suggested that shape and roughness are processed distinctly in the brain, a result based on comparing brain activation when exploring objects that differed in one of these features. To investigate the neural mechanisms of top-down control on haptic perception of shape and roughness, we presented the same multidimensional objects but varied the relevance of each feature. Specifically, participants explored two objects that varied in shape (oblongness of cuboids) and surface roughness. They either had to compare the shape or the roughness in an alternative-forced-choice task. Moreover, we examined whether the activation strength of the identified brain regions as measured by functional magnetic resonance imaging (fMRI) can predict the behavioral performance in the haptic discrimination task. We observed a widespread network of activation for shape and roughness perception comprising bilateral precentral and postcentral gyrus, cerebellum, and insula. Task-relevance of the object's shape increased activation in the right supramarginal gyrus (SMG/BA 40) and the right precentral gyrus (PreCG/BA 44), suggesting that activation in these areas does not merely reflect stimulus-driven processes, such as exploring shape, but also entails top-down controlled processes driven by task-relevance. Moreover, the strength of the SMG/PreCG activation predicted individual performance in the shape but not in the roughness discrimination task. No activation was found for the reversed contrast (roughness > shape). We conclude that macrogeometric properties, such as shape, can be modulated by top-down mechanisms, whereas roughness, a microgeometric feature, seems to be processed automatically.
Affiliation(s)
- Stefanie Mueller
- Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Leibniz Institute of Psychology Information (ZPID), Trier, Germany
- Benjamin de Haas
- Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Anna Metzger
- Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Knut Drewing
- Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), Marburg University and Justus Liebig University, Giessen, Germany
31
Risso G, Valle G, Iberite F, Strauss I, Stieglitz T, Controzzi M, Clemente F, Granata G, Rossini PM, Micera S, Baud-Bovy G. Optimal integration of intraneural somatosensory feedback with visual information: a single-case study. Sci Rep 2019;9:7916. [PMID: 31133637] [PMCID: PMC6536542] [DOI: 10.1038/s41598-019-43815-1]
Abstract
Providing somatosensory feedback to amputees is a long-standing objective in prosthesis research. Recently, implantable neural interfaces have yielded promising results in this direction. There is now considerable evidence that the nervous system integrates redundant signals optimally, weighting each signal according to its reliability. One question of interest is whether artificial sensory feedback is combined with other sensory information in a natural manner. In this single-case study, we show that an amputee with a bidirectional prosthesis integrated artificial somatosensory feedback and blurred visual information in a statistically optimal fashion when estimating the size of a hand-held object. The patient controlled the opening and closing of the prosthetic hand through surface electromyography and received intraneural stimulation in the ulnar nerve, proportional to the object's size, when closing the robotic hand on the object. The intraneural stimulation elicited a vibration sensation in the phantom hand that substituted for the missing haptic feedback. This result indicates that sensory substitution based on intraneural feedback can be integrated with visual feedback, paving the way for a promising method to investigate multimodal integration processes.
Affiliation(s)
- G Risso
- Robotics, Brain and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- DIBRIS, Università degli studi di Genova, Genoa, Italy
- G Valle
- Bertarelli Foundation Chair in Translational Neuroengineering, Centre for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- The Biorobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- F Iberite
- The Biorobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- I Strauss
- Bertarelli Foundation Chair in Translational Neuroengineering, Centre for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- The Biorobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- T Stieglitz
- Laboratory for Biomedical Microtechnology, Department of Microsystems Engineering-IMTEK & Bernstein Center, University of Freiburg, Freiburg, D-79110, Germany
- M Controzzi
- The Biorobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- F Clemente
- The Biorobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- G Granata
- Institute of Neurology, Catholic University of The Sacred Heart, Policlinic A. Gemelli Foundation, Roma, Italy
- P M Rossini
- Institute of Neurology, Catholic University of The Sacred Heart, Policlinic A. Gemelli Foundation, Roma, Italy
- S Micera
- Bertarelli Foundation Chair in Translational Neuroengineering, Centre for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- The Biorobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- G Baud-Bovy
- Robotics, Brain and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Vita-Salute San Raffaele University & Unit of Experimental Psychology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
32
Arnold DH, Petrie K, Murray C, Johnston A. Suboptimal human multisensory cue combination. Sci Rep 2019;9:5155. [PMID: 30914673] [PMCID: PMC6435731] [DOI: 10.1038/s41598-018-37888-7]
Abstract
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While our data are suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision-weighted summation process. These data accord with other published observations and suggest that precision-weighted combination is not a general property of human cross-modal perception.
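The precision-weighted summation benchmark tested here reduces to a few lines of arithmetic: each cue is weighted by its inverse variance, and the predicted bimodal noise is never worse than the best single cue. A minimal sketch (illustrative cue values and noise levels, not data from this study):

```python
import numpy as np

def mle_combine(x1, sigma1, x2, sigma2):
    """Precision-weighted (maximum-likelihood) combination of two
    independent sensory estimates. Illustrative sketch only."""
    w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
    w2 = 1 - w1
    x_hat = w1 * x1 + w2 * x2                      # combined estimate
    sigma_hat = np.sqrt((sigma1**2 * sigma2**2) /
                        (sigma1**2 + sigma2**2))   # predicted bimodal noise
    return x_hat, sigma_hat

# e.g., a visual and an auditory location cue (hypothetical values)
x_hat, s_hat = mle_combine(10.0, 1.0, 12.0, 2.0)
print(x_hat, s_hat)  # combined estimate 10.4; noise ~0.894, below either cue alone
```

Observed bimodal thresholds at or above the best unimodal threshold, as reported here, indicate that this optimal benchmark was not met.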
Affiliation(s)
- Derek H Arnold
- School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia
- Kirstie Petrie
- School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia
- Cailem Murray
- School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia
- Alan Johnston
- Experimental Psychology, University of Nottingham, Nottingham, UK
33
van Polanen V, Tibold R, Nuruki A, Davare M. Visual delay affects force scaling and weight perception during object lifting in virtual reality. J Neurophysiol 2019;121:1398-1409. [PMID: 30673365] [PMCID: PMC6485735] [DOI: 10.1152/jn.00396.2018]
Abstract
Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants' movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the online organization of multisensory signals for controlling action and perception. NEW & NOTEWORTHY Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and determine their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
Affiliation(s)
- Vonne van Polanen
- Department of Movement Sciences and Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Robert Tibold
- Sobell Department of Motor Neuroscience and Movement Disorders, Institute of Neurology, University College London, London, United Kingdom
- Atsuo Nuruki
- Sobell Department of Motor Neuroscience and Movement Disorders, Institute of Neurology, University College London, London, United Kingdom
- Center for General Education, Kagoshima University, Kagoshima, Japan
- Marco Davare
- Department of Movement Sciences and Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Sobell Department of Motor Neuroscience and Movement Disorders, Institute of Neurology, University College London, London, United Kingdom
34
Top-down modulation of shape and roughness discrimination in active touch by covert attention. Atten Percept Psychophys 2018;81:462-475. [PMID: 30506325] [DOI: 10.3758/s13414-018-1625-5]
Abstract
Due to limitations in perceptual processing, information relevant to momentary task goals is selected from the vast amount of available sensory information by top-down mechanisms (e.g., attention) that can increase perceptual performance. We investigated how covert attention affects the perception of 3D objects in active touch. In our experiment, participants simultaneously explored the shape and roughness of each of two objects presented in sequence and were told afterwards to compare the two objects with regard to one of the two features. To direct the focus of covert attention to the different features, we manipulated the expectation of a shape or roughness judgment by varying the frequency of trials for each task (20%, 50%, 80%) and then measured discrimination thresholds. We found higher discrimination thresholds for both shape and roughness perception when the task was unexpected, compared to the conditions in which the task was expected (or both tasks were expected equally). Our results suggest that active touch perception is modulated by expectations about the task. This implies that, despite fundamental differences, active and passive touch are affected by feature-selective covert attention in a similar way.
35
Raveh E, Portnoy S, Friedman J. Myoelectric Prosthesis Users Improve Performance Time and Accuracy Using Vibrotactile Feedback When Visual Feedback Is Disturbed. Arch Phys Med Rehabil 2018;99:2263-2270. [DOI: 10.1016/j.apmr.2018.05.019]
36
Intra-auditory integration between pitch and loudness in humans: Evidence of super-optimal integration at moderate uncertainty in auditory signals. Sci Rep 2018;8:13708. [PMID: 30209342] [PMCID: PMC6135783] [DOI: 10.1038/s41598-018-31792-w]
Abstract
When a person plays a musical instrument, sound is produced, and its integrated frequency and intensity are perceived aurally. The central nervous system (CNS) receives imperfect afferent signals from the auditory system and delivers imperfect efferent signals to the motor system due to the noise in both systems. However, little is known about the auditory-motor interactions required for successful performance. Here, we investigated auditory-motor interactions as a system with multisensory input and multi-finger motor output. Subjects performed a constant force production task using four fingers under three different auditory feedback conditions, in which either the frequency (F), the intensity (I), or both the frequency and intensity (FI) of an auditory tone changed with the sum of the finger forces. Four levels of uncertainty (high, moderate-high, moderate-low, and low) were created by manipulating the feedback gain of the produced force. We observed performance enhancement under the FI condition compared to either F or I alone at moderate-high uncertainty. Interestingly, the performance enhancement was greater than the prediction of the Bayesian model, suggesting super-optimality. We also observed deteriorated synergistic multi-finger interactions as the level of uncertainty increased, suggesting that the CNS responded to increased uncertainty by changing its control strategy for multi-finger actions.
37
Tang R, Ren S, Enns JT, Whitwell RL. The left hand disrupts subsequent right hand grasping when their actions overlap. Acta Psychol (Amst) 2018;188:131-138. [PMID: 29933175] [DOI: 10.1016/j.actpsy.2018.04.017]
Abstract
Adaptive motor control is premised on the principle of movement minimization, which in turn is premised on a form of sensorimotor memory. But what is the nature of this memory, and under what conditions does it operate? Here, we test the limits of sensorimotor memory in an intermanual context by testing the effect that the action performed by the left hand has on subsequent right hand grasps. Target feature-overlap predicts that sensorimotor memory is engaged when task-relevant sensory features of the target are similar across actions; partial effector-overlap predicts that sensorimotor memory is engaged when there is similarity in the task-relevant effectors used to perform an action; and the action-goal conjunction hypothesis predicts that sensorimotor memories are engaged when the action goal and the action type overlap. In three experiments, participants used their left hand to reach out and pick up an object, manually estimate its size, pinch it, look at it, or merely rest the left hand before reaching out to pick up a second object with their right hand. The in-flight anticipatory grip aperture of right-hand grasps was influenced only when they were preceded by grasps performed by the left hand. Overlap in object size, partial overlap in the effectors used, and the availability of haptic feedback had no influence on this metric. These results support the hypothesis that the intermanual transfer of sensorimotor memory in grasp execution depends on a conjunction of action type and goal.
38
Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018;14:e1006110. [PMID: 30052625] [PMCID: PMC6063401] [DOI: 10.1371/journal.pcbi.1006110]
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause-attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, the findings were robust across a number of variants of the models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
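The causal inference strategy under comparison can be illustrated with the standard two-hypothesis Bayesian model: the observer weighs the likelihood that both cues share one cause against the likelihood of two independent causes. A minimal sketch with hypothetical parameter values, not the authors' fitted model:

```python
import numpy as np

def posterior_common_cause(x_v, x_a, sig_v, sig_a, sig_p, p_common):
    """Posterior probability that visual and vestibular heading cues
    x_v and x_a share a single cause (C = 1), assuming a zero-mean
    Gaussian prior over headings with width sig_p. Illustrative only."""
    vv, va, vp = sig_v**2, sig_a**2, sig_p**2
    # Likelihood of the cue pair under one common source ...
    var1 = vv * va + vv * vp + va * vp
    like1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + x_v**2 * va + x_a**2 * vv)
                   / var1) / (2 * np.pi * np.sqrt(var1))
    # ... and under two independent sources
    like2 = np.exp(-0.5 * (x_v**2 / (vv + vp) + x_a**2 / (va + vp))) \
            / (2 * np.pi * np.sqrt((vv + vp) * (va + vp)))
    return p_common * like1 / (p_common * like1 + (1 - p_common) * like2)

# Identical cues favour a common cause; discrepant cues favour two causes
print(posterior_common_cause(0.0, 0.0, 1.0, 1.0, 10.0, 0.5))   # > 0.5
print(posterior_common_cause(0.0, 10.0, 1.0, 1.0, 10.0, 0.5))  # < 0.5
```

A forced-fusion observer, by contrast, behaves as if this posterior were always 1, integrating regardless of disparity.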
Affiliation(s)
- Luigi Acerbi
- Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, United States of America
- Department of Psychology, New York University, New York, NY, United States of America
39
Dibavar MR. Infants' intermodal numerical knowledge. Infant Behav Dev 2018;52:32-44. [PMID: 29807236] [DOI: 10.1016/j.infbeh.2018.04.006]
Abstract
Two-system theory, the dominant approach in the field of infant numerical representation, is characterized by three features: precise representation of small sets of objects, approximate representation of large magnitudes, and failure to compare small and large sets. Comparison of single- and multimodal numerical abilities suggests that infants' performance in multimodal conditions is consistent with these three features. Nevertheless, multimodal stimulation influences infants' numerical representation in two ways: it prevents the formation of perceptual overlaps across different sensory modalities, which can lead to an understanding of the numerical values of small sets, and it creates a conceptual overlap about numbers that increases infants' accuracy in discriminating quantities when numerical information is presented bimodally and synchronously. Such multisensory benefits provide numerical capabilities beyond what is depicted by the two-system view.
40
Zheng B, Wang X, Zheng Y, Feng J. 3D-printed model improves clinical assessment of surgeons on anatomy. J Robot Surg 2018;13:61-67. [PMID: 29693206] [DOI: 10.1007/s11701-018-0809-2]
Abstract
Performing surgical procedures often requires a surgeon to develop the skill of building a 3-dimensional (3D) mental model of a patient's anatomy. The question remains whether touching a 3D-printed model facilitates learning of patient anatomy better than viewing a rendered virtual on-screen model. The printed and the virtual 3D models were developed from CT films of a 4-year-old girl with dysplasia of the left hip. Eleven subjects were asked to report measurements of six key anatomical features of the hips. Reporting time and accuracy were compared between the two models, along with the subjects' gaze characteristics while inspecting them. The variables were analysed using a 2 × 2 within-subject ANOVA to examine the difference between the models (on-screen vs. printed) and the side of the hip (right vs. left). Interacting with the printed 3D model required shorter times and yielded more accurate visual judgments than viewing the virtual model for most of the anatomical features. Subjects made fewer fixations, but with a longer mean fixation duration, when interacting with the printed model than when inspecting the virtual on-screen 3D model. The results confirm the value of the printed 3D model for improving clinical judgment of patient anatomy. Confidence in collecting information from the physical world and cross-modal sensory integration may explain why participants performed better with the printed model than with the virtual model.
Affiliation(s)
- Bin Zheng
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, 162 Heritage Medical Research Centre, 8440 112 St. NW, Edmonton, AB, T6G 2E1, Canada
- Xiaolin Wang
- Department of Paediatric Surgery, Tongji Hospital, Huazhong University of Science & Technology, Wuhan, China
- Yixiong Zheng
- Department of Surgery, The 2nd Affiliated Hospital of Zhejiang University, Hangzhou, China
- Jiexiong Feng
- Department of Paediatric Surgery, Tongji Hospital, Huazhong University of Science & Technology, Wuhan, China
41
Does hearing aid use affect audiovisual integration in mild hearing impairment? Exp Brain Res 2018;236:1161-1179. [PMID: 29453491] [DOI: 10.1007/s00221-018-5206-6]
Abstract
There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals, and in those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known about the effects of hearing aid use on audiovisual integration in mild hearing loss, although this is one of the most prevalent conditions in the elderly and yet often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and non-users with the same degree of mild hearing loss by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration, together with a wider window of integration, in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that changes in perceptual sensitivity as well as in bias may underlie the observed effects. Our results, and comparisons with other studies in normal-hearing older adults, suggest that both mild hearing impairment and hearing aid use affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
42
Billino J, Drewing K. Age Effects on Visuo-Haptic Length Discrimination: Evidence for Optimal Integration of Senses in Senior Adults. Multisens Res 2018;31:273-300. [PMID: 31264626] [DOI: 10.1163/22134808-00002601]
Abstract
Demographic changes in most developed societies have fostered research on functional aging. While cognitive changes have been characterized elaborately, the understanding of perceptual aging lags behind. We investigated age effects on the mechanisms by which multiple sources of sensory information are merged into a common percept. We studied visuo-haptic integration in a length discrimination task. A total of 24 young (20-25 years) and 27 senior (69-77 years) adults compared standard stimuli to appropriate sets of comparison stimuli. Standard stimuli were explored under visual, haptic, or visuo-haptic conditions. The task procedure allowed an intersensory conflict to be introduced by anamorphic lenses. Comparison stimuli were explored exclusively haptically. We derived psychometric functions for each condition, determining points of subjective equality and discrimination thresholds. Notably, we evaluated visuo-haptic perception against different models of multisensory processing, i.e., the Maximum-Likelihood-Estimate (MLE) model of optimal cue integration, a suboptimal integration model, and a cue-switching model. Our results support robust visuo-haptic integration across the adult lifespan. We found suboptimal weighted averaging of sensory sources in young adults; senior adults, however, exploited differential sensory reliabilities more efficiently to optimize thresholds. Indeed, evaluation of the MLE model indicates that young adults underweighted visual cues by more than 30%; in contrast, the visual weights of senior adults deviated by only about 3% from predictions. We suggest that close-to-optimal multisensory integration might contribute to successful compensation for age-related sensory losses and provides a critical resource. Differentiation between multisensory integration during healthy aging and age-related pathological challenges to the sensory systems awaits further exploration.
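The MLE predictions referred to here follow directly from the unimodal discrimination thresholds: the cue with the smaller threshold (less sensory noise) should receive the larger weight, and the predicted bimodal threshold should fall below both unimodal ones. A minimal sketch with hypothetical threshold values, not the study's data:

```python
def mle_predictions(threshold_v, threshold_h):
    """MLE-predicted visual weight and bimodal threshold from unimodal
    discrimination thresholds, taking the threshold as proportional to
    the sensory noise sigma. Illustrative sketch only."""
    var_v, var_h = threshold_v**2, threshold_h**2
    w_v = var_h / (var_v + var_h)                     # more reliable cue gets more weight
    t_vh = (var_v * var_h / (var_v + var_h)) ** 0.5   # predicted bimodal threshold
    return w_v, t_vh

w_v, t_vh = mle_predictions(2.0, 4.0)  # vision here twice as precise as haptics
print(w_v, t_vh)  # visual weight 0.8; bimodal threshold ~1.79, below both unimodal thresholds
```

Comparing empirically measured weights (from the conflict conditions) against `w_v` is what reveals the under- or near-optimal weighting reported for the two age groups.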
Affiliation(s)
- Jutta Billino, Knut Drewing: Department of Psychology, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
43
Toprak S, Navarro-Guerrero N, Wermter S. Evaluating Integration Strategies for Visuo-Haptic Object Recognition. Cognit Comput 2017;10:408-425. [PMID: 29881470] [PMCID: PMC5971043] [DOI: 10.1007/s12559-017-9536-7]
Abstract
In computational systems for visuo-haptic object recognition, vision and haptics are often modeled as separate processes. But this is far from what really happens in the human brain, where cross- as well as multimodal interactions take place between the two sensory modalities. Generally, three main principles can be identified as underlying the processing of visual and haptic object-related stimuli in the brain: (1) hierarchical processing, (2) the divergence of the processing onto substreams for object shape and material perception, and (3) the experience-driven self-organization of the integrative neural circuits. The question arises whether an object recognition system can benefit in terms of performance from adopting these brain-inspired processing principles for the integration of visual and haptic inputs. To address this, we compare the integration strategy that incorporates all three principles to the two integration strategies commonly used in the literature. We collected data with a NAO robot enhanced with inexpensive contact microphones as tactile sensors. The results of our experiments involving everyday objects indicate that (1) contact microphones are a good alternative for capturing tactile information and that (2) organizing the processing of the visual and haptic inputs hierarchically and in two pre-processing streams is helpful performance-wise. Nevertheless, further research is needed to effectively quantify the role of each identified principle by itself as well as in combination with others.
Affiliation(s)
- Sibel Toprak, Nicolás Navarro-Guerrero, Stefan Wermter: Knowledge Technology, Department of Informatics, Universität Hamburg, Vogt-Kölln-Str. 30, 22527 Hamburg, Germany
44
Adaptation to proprioceptive targets following visuomotor adaptation. Exp Brain Res 2017;236:419-432. [PMID: 29209829] [DOI: 10.1007/s00221-017-5141-y]
Abstract
In the following study, we asked if reaches to proprioceptive targets are updated following reach training with a gradually introduced visuomotor perturbation. Subjects trained to reach with distorted hand-cursor feedback, such that they saw a cursor that was rotated or translated relative to their actual hand movement. Following reach training trials with the cursor, subjects reached to Visual (V), Proprioceptive (P) and Visual + Proprioceptive (VP) targets with no visual feedback of their hand. Comparison of reach endpoints revealed that reaches to VP targets followed similar trends as reaches to P targets, regardless of the training distortion introduced. After reaching with a rotated cursor, subjects adapted their reaches to all target types in a similar manner. However, after reaching with a translated cursor, subjects adapted their reach to V targets only. Taken together, these results show that following training with a visuomotor distortion, subjects primarily rely on proprioceptive information when reaching to VP targets. Furthermore, results indicate that reach adaptation to P targets depends on the distortion presented. Training with a rotation distortion leads to changes in reaches to both V and P targets, while a translation distortion, which introduces a constant discrepancy between visual and proprioceptive estimates of hand position throughout the reach, affects changes to V but not P targets.
45
Boyle SC, Kayser SJ, Kayser C. Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration. Eur J Neurosci 2017;46:2565-2577. [PMID: 28940728] [PMCID: PMC5725738] [DOI: 10.1111/ejn.13724]
Abstract
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG-based neuroimaging and single-trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task-relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio-visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
Affiliation(s)
- Stephanie C Boyle, Stephanie J Kayser, Christoph Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow G12 8QB, UK
46
Fengler I, Nava E, Villwock AK, Büchner A, Lenarz T, Röder B. Multisensory emotion perception in congenitally, early, and late deaf CI users. PLoS One 2017;12:e0185821. [PMID: 29023525] [PMCID: PMC5638301] [DOI: 10.1371/journal.pone.0185821]
Abstract
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI groups differed in deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed less efficiently overall than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces, and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody, they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.
Affiliation(s)
- Ineke Fengler, Elena Nava, Agnes K. Villwock, Brigitte Röder: Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Andreas Büchner, Thomas Lenarz: German Hearing Centre, Department of Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
47
Affiliation(s)
- Miguel P. Eckstein: Department of Psychological and Brain Sciences, University of California, Santa Barbara, California 93106-9660
48
Igarashi Y, Omori K, Arai T, Aizawa Y. Illusory visual-depth reversal can modulate sensations of contact surface. Exp Brain Res 2017;235:3013-3022. [PMID: 28721518] [DOI: 10.1007/s00221-017-5034-0]
Abstract
To perceive the external world stably, humans must integrate and manage continuous streams of information from various sensory modalities, in addition to drawing on past experiences and knowledge. In this study, we introduce a novel visuo-tactile illusion elicited by a visual-depth-reversal stimulus. The stimulus (a model of a building) was constructed so as to produce the same retinal image as an opaque cuboid, although it actually consisted of only three PVC boards forming a three-dimensional corner with the hollow inside facing the observer. Participants holding the model in their palm therefore observed, with both eyes or one eye, a building model that could be interpreted as either a concave or a convex cuboid. That is, tactile information from the contact surface contradicted the visual interpretation of a convex cuboid. Questionnaire and experimental results, however, showed that the building model was stably viewed as a standing cuboid, particularly under monocular observation. Participants also reported feeling a stable touch of the shrinking base of the apparently standing building model, thus ignoring the veridical contact surface. Given that the visual-tactile information was unchanged with or without the illusion and that the experimental task was tactile estimation, it is remarkable that participants' perception of the actual touch was overridden by the object's appearance. Results indicate the complexity and flexibility of visual-tactile integration processes. We also discuss the possibility that object knowledge influences visual-tactile integration.
Affiliation(s)
- Yuka Igarashi, Tetsuya Arai: Department of Human Science, Faculty of Human Sciences, Kanagawa University, 3-27-1 Rokkakubashi, Kanagawa-ku, Yokohama, Kanagawa 221-8686, Japan
- Tetsuya Arai: Faculty of Human Sciences, Bunkyo University, Koshigaya, Japan
- Keiko Omori, Yasunori Aizawa: College of Humanities and Sciences, Nihon University, Tokyo, Japan
49
Trust in haptic assistance: weighting visual and haptic cues based on error history. Exp Brain Res 2017;235:2533-2546. [PMID: 28534068] [PMCID: PMC5502061] [DOI: 10.1007/s00221-017-4986-4]
Abstract
To effectively interpret and interact with the world, humans weight redundant estimates from different sensory cues to form one coherent, integrated estimate. Recent advancements in physical assistance systems, where guiding forces are computed by an intelligent agent, enable the presentation of augmented cues. It is unknown, however, if cue weighting can be extended to augmented cues. Previous research has shown that cue weighting is determined by the reliability (inversely related to uncertainty) of cues within a trial, yet augmented cues may also be affected by errors that vary over trials. In this study, we investigate whether people can learn to appropriately weight a haptic cue from an intelligent assistance system based on its error history. Subjects held a haptic device and reached to a hidden target using a visual (Gaussian distributed dots) and haptic (force channel) cue. The error of the augmented haptic cue varied from trial to trial based on a Gaussian distribution. Subjects learned to estimate the target location by weighting the visual and augmented haptic cues based on their perceptual uncertainty and experienced errors. With both cues available, subjects were able to find the target with an improved or equal performance compared to what was possible with one cue alone. Our results show that the brain can learn to reweight augmented cues from intelligent agents, akin to previous observations of the reweighting of naturally occurring cues. In addition, these results suggest that the weighting of a cue is not only affected by its within-trial reliability but also the history of errors.
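The central idea of this abstract, learning how much to trust an augmented cue from its error history, can be sketched with inverse-variance weighting where the cue's variance is estimated from past trial errors. This is only an illustrative reading of the mechanism, not the study's actual model; all parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_vis = 1.0           # assumed visual cue noise (illustrative)
true_haptic_sigma = 0.5   # the assistance system's actual error scale (assumed)

# Observe the augmented haptic cue's error over many trials...
haptic_errors = rng.normal(0.0, true_haptic_sigma, size=2000)

# ...estimate its reliability from that history...
sigma_hap_est = haptic_errors.std()

# ...and convert both cues' reliabilities into a weight for the haptic cue.
r_hap = 1.0 / sigma_hap_est**2
r_vis = 1.0 / sigma_vis**2
w_haptic = r_hap / (r_hap + r_vis)
# With the haptic cue four times more reliable (variance 0.25 vs 1.0),
# the learned haptic weight converges toward 0.8.
```

If the assistance system's error history worsens over time, `sigma_hap_est` grows and the haptic weight falls, matching the reweighting behaviour the study reports.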
50
Chen X, McNamara TP, Kelly JW, Wolbers T. Cue combination in human spatial navigation. Cogn Psychol 2017;95:105-144. [PMID: 28478330] [DOI: 10.1016/j.cogpsych.2017.04.003]
Abstract
This project investigated the ways in which visual cues and bodily cues from self-motion are combined in spatial navigation. Participants completed a homing task in an immersive virtual environment. In Experiments 1A and 1B, the reliability of visual cues and self-motion cues was manipulated independently and within-participants. Results showed that participants weighted visual cues and self-motion cues based on their relative reliability and integrated these two cue types optimally or near-optimally according to Bayesian principles under most conditions. In Experiment 2, the stability of visual cues was manipulated across trials. Results indicated that cue instability affected cue weights indirectly by influencing cue reliability. Experiment 3 was designed to mislead participants about cue reliability by providing distorted feedback on the accuracy of their performance. Participants received feedback that their performance with visual cues was better and that their performance with self-motion cues was worse than it actually was, or received the inverse feedback. Positive feedback on the accuracy of performance with a given cue improved the relative precision of performance with that cue. Bayesian principles still held for the most part. Experiment 4 examined the relations among the variability of performance, rated confidence in performance, cue weights, and spatial abilities. Participants took part in the homing task over two days and rated confidence in their performance after every trial. Cue relative confidence and cue relative reliability had unique contributions to observed cue weights. The variability of performance was less stable than rated confidence over time. Participants with higher mental rotation scores performed relatively better with self-motion cues than visual cues. Across all four experiments, consistent correlations were found between observed weights assigned to cues and relative reliability of cues, demonstrating that the cue-weighting process followed Bayesian principles. Results also pointed to the important role of subjective evaluation of performance in the cue-weighting process and led to a new conceptualization of cue reliability in human spatial navigation.
Affiliation(s)
- Xiaoli Chen, Thomas Wolbers: German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany