1
Héroux ME, Fisher G, Axelson LH, Butler AA, Gandevia SC. How we perceive the width of grasped objects: insights into the central processes that govern proprioceptive judgements. J Physiol 2024; 602:2899-2916. PMID: 38734987. DOI: 10.1113/jp286322.
Abstract
Low-level proprioceptive judgements involve a single frame of reference, whereas high-level proprioceptive judgements are made across different frames of reference. The present study systematically compared low-level (grasp → grasp) and high-level (vision → grasp, grasp → vision) proprioceptive tasks, and quantified the consistency of grasp → vision judgements and the possibly reciprocal nature of related high-level proprioceptive tasks. Experiment 1 (n = 30) compared performance across a vision → grasp, a grasp → vision and a grasp → grasp task. Experiment 2 (n = 30) compared performance on the grasp → vision task between hands and over time. Participants were accurate (mean absolute error 0.27 cm [95% CI 0.20 to 0.34]) and precise (R² = 0.95 [0.93 to 0.96]) for grasp → grasp judgements, with a strong correlation between the two outcomes (r = -0.85 [-0.93 to -0.70]). Accuracy and precision decreased in the two high-level tasks (R² = 0.86 and 0.89; mean absolute error = 1.34 and 1.41 cm), with most participants overestimating perceived width in the vision → grasp task and underestimating it in the grasp → vision task. There was minimal correlation between accuracy and precision for these two tasks. Converging evidence indicated that performance was largely reciprocal (inverse) between the vision → grasp and grasp → vision tasks. Performance on the grasp → vision task was consistent between the dominant and non-dominant hands, and across repeated sessions a day or a week apart. Overall, there are fundamental differences between low- and high-level proprioceptive judgements that reflect differences in the cortical processes underpinning these perceptions. Moreover, the central transformations that govern high-level proprioceptive judgements of grasp are personalised, stable and reciprocal for reciprocal tasks.

KEY POINTS: Low-level proprioceptive judgements involve a single frame of reference (e.g. indicating the width of a grasped object by selecting from a series of objects of different widths), whereas high-level proprioceptive judgements are made across different frames of reference (e.g. indicating the width of a grasped object by selecting from a series of visible lines of different lengths). We highlight fundamental differences in the precision and accuracy of low- and high-level proprioceptive judgements. We provide converging evidence that the neural transformations between frames of reference that govern high-level proprioceptive judgements of grasp are personalised, stable and reciprocal for reciprocal tasks. This stability is likely key to precise judgements and accurate predictions in high-level proprioception.
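The accuracy and precision measures quoted in this abstract (mean absolute error and the R² of a linear fit of reported on actual width) are standard summary statistics. A minimal sketch of how they are typically computed is shown below; the data and function name are hypothetical, and this is a generic re-implementation of the two metrics, not the authors' analysis code.

```python
import numpy as np

def judgement_metrics(actual_widths, reported_widths):
    """Accuracy as mean absolute error (cm), and precision as R^2 of a
    least-squares line fit of reported width on actual width.
    Hypothetical helper; not taken from the study's code."""
    actual = np.asarray(actual_widths, dtype=float)
    reported = np.asarray(reported_widths, dtype=float)
    # Accuracy: average unsigned error between report and true width
    mae = np.mean(np.abs(reported - actual))
    # Precision: R^2 of the linear fit reported ~ actual
    slope, intercept = np.polyfit(actual, reported, 1)
    predicted = slope * actual + intercept
    ss_res = np.sum((reported - predicted) ** 2)
    ss_tot = np.sum((reported - reported.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return mae, r_squared

# Illustrative (fabricated) data: one participant judging five widths
actual = [4.0, 5.0, 6.0, 7.0, 8.0]
reported = [4.2, 5.1, 5.9, 7.3, 8.1]
mae, r2 = judgement_metrics(actual, reported)
```

Note that, as in the abstract, a participant can in principle be precise (high R², reports track actual widths tightly) yet inaccurate (large mean absolute error, e.g. a constant over- or underestimate), which is why the two outcomes are reported separately.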
Affiliation(s)
- Martin E Héroux
- Neuroscience Research Australia, Randwick, Australia
- University of New South Wales, Sydney, Australia
- Georgia Fisher
- Neuroscience Research Australia, Randwick, Australia
- Australian Institute of Health Innovation, Macquarie University, Macquarie Park, Australia
- Annie A Butler
- Neuroscience Research Australia, Randwick, Australia
- University of New South Wales, Sydney, Australia
- Simon C Gandevia
- Neuroscience Research Australia, Randwick, Australia
- University of New South Wales, Sydney, Australia
2
Zhang Z, Gao C, Zhao S, Wu J, Fukuyama H, Murai T. Salient properties in bimanual haptic volume perception: influence of object shape, finger pair, and schizotypal personality traits. IEEE Trans Haptics 2021; 14:816-824. PMID: 33961565. DOI: 10.1109/toh.2021.3077882.
Abstract
Bimanual haptic volume perception refers to somatosensory access to volume information through both hands; the characteristics that influence this perception remain unclear. This article investigated the influence of target object shape and finger pair on bimanual haptic perception, as well as associations between bimanual haptic impairment and schizotypal features in nonpsychotic individuals. Twenty blindfolded participants bimanually discriminated volume variations in regular solid objects under different shape (tetrahedron, cube or sphere) and finger pair (high- or low-sensitivity pairs) conditions using a newly developed bimanual haptic volume presentation device. Discrimination thresholds were then related to schizotypal traits using the Chinese version of the Schizotypal Personality Questionnaire. Target object shape and finger pair significantly influenced bimanual haptic volume perception. Volume discrimination thresholds were significantly higher with the tetrahedral stimuli than with the cubic or spherical stimuli in the high-sensitivity pair conditions, but no significant differences among shapes were found in the low-sensitivity pair conditions. Moreover, volume discrimination thresholds with high-sensitivity pairs were correlated with the paranoid score of the questionnaire. The findings provide initial evidence toward understanding the nature of bimanual haptic volume perception, including the properties of objects, of individuals, and of object-individual interfaces.
3
Memeo M, Jacono M, Sandini G, Brayda L. Enabling visually impaired people to learn three-dimensional tactile graphics with a 3DOF haptic mouse. J Neuroeng Rehabil 2021; 18:146. PMID: 34563218. PMCID: PMC8467032. DOI: 10.1186/s12984-021-00935-y.
Abstract
Background: In this work, we present a novel sensory substitution system that enables users to learn three-dimensional digital information via touch when vision is unavailable. The system is based on a mouse-shaped device, designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. The device hosts a tactile actuator with three degrees of freedom: elevation, roll and pitch. The actuator approximates the tactile interaction with a plane tangential to the contact point between the finger and the field. Spatial information can therefore be mentally constructed by integrating local and global tactile cues: the actuator provides the local cues, whereas proprioception associated with the mouse motion provides the global cues.
Methods: The efficacy of the system was measured with a virtual/real object-matching task. Twenty-four gender- and age-matched participants (one blind and one blindfolded sighted group) matched a tactile dictionary of virtual objects with their 3D-printed solid versions. Exploration of the virtual objects took place under three conditions, i.e. with isolated or combined height and inclination cues. We investigated the performance and the mental cost of approximating virtual objects under these tactile conditions.
Results: In both groups, elevation and inclination cues were each sufficient to recognize the tactile dictionary, but their combination worked best. The presence of elevation cues decreased a subjective estimate of mental effort. Interestingly, only visually impaired participants were aware of their performance and were able to predict it.
Conclusions: The proposed technology could facilitate the learning of science, engineering and mathematics in the absence of vision, and is also a low-cost industrial solution for making graphical user interfaces accessible to people with vision loss.
Affiliation(s)
- Mariacarla Memeo
- Robotics, Brain and Cognitive Sciences Department (now with Cognition, Motion and Cognitive Science (CMON) Unit), Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy; University of Genoa, Genoa, Italy
- Marco Jacono
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy
- Luca Brayda
- Acoesis srl, Via Enrico Melen 83, Genoa, Italy
4
Haladjian HH, Anstis S, Wexler M, Cavanagh P. The tactile quartet: comparing ambiguous apparent motion in tactile and visual stimuli. Perception 2019; 49:61-80. PMID: 31707914. DOI: 10.1177/0301006619886237.
Abstract
In the visual quartet, alternating diagonal pairs of dots produce apparent motion horizontally or vertically, depending on proximity. Here, we studied a tactile quartet in which vibrating tactors were attached to the thumbs and index fingers of both hands. Apparent motion was felt either within hands (from index finger to thumb) or between hands. Participants adjusted the distance between their hands to find the point where the motion changed direction. Surprisingly, switchovers occurred when between-hand distances were as much as twice the within-hand distances, a general bias that was also found for tactile judgments of static distances. This expansion of within-hand felt distances was again seen when lights, rather than vibrating tactors, were placed on the hands. Importantly, switchover points were similar when the hands were placed at different depths, indicating that the representations governing tactile motion were in perceptual three-dimensional space, not retinal two-dimensional space. This was true whether the quartets were visual stimuli on the hands or purely visual stimuli on a monitor, suggesting that proximity is generally determined in three-dimensional coordinates for motion perception. Finally, the similarity of the visual and tactile results suggests a common computation for apparent motion, albeit with different built-in distance biases for the separate modalities.
Affiliation(s)
- Harry H Haladjian
- Laboratoire Psychologie de la Perception, CNRS UMR 8424, Université Paris Descartes, France
- Stuart Anstis
- Department of Psychology, University of California, San Diego, CA, USA
- Mark Wexler
- Laboratoire Psychologie de la Perception, CNRS UMR 8424, Université Paris Descartes, France
- Patrick Cavanagh
- Laboratoire Psychologie de la Perception, CNRS UMR 8424, Université Paris Descartes, France; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA; Department of Psychology, York University, Glendon College, North York, ON, Canada
5
Butler AA, Héroux ME, van Eijk T, Gandevia SC. Stability of perception of the hand's aperture in a grasp. J Physiol 2019; 597:5973-5984. PMID: 31671476. DOI: 10.1113/jp278630.
Abstract
KEY POINTS: How we judge the location of our body parts can be affected by a range of factors that change how our brain interprets proprioceptive signals. We examined the effect of several such factors on how we perceive an object's width and the spacing between our thumb and fingers when grasping. Grasp-related perceptions were slightly wider when using all digits, in line with our tendency to grasp larger objects with the entire hand. Surprisingly, these perceptions were not affected by the frame of reference for judgements (object width versus grasp aperture), whether the object was grasped actively or passively, or the strength of the grasp. These results show that the brain maintains a largely stable representation of the hand when grasping stationary objects. This stability may underpin our dexterity when grasping a vast array of objects.

ABSTRACT: Various factors can alter how the brain interprets proprioceptive signals, leading to errors in how we perceive our body and execute motor tasks. This study determined the effect of several critical factors on hand-based perceptions. In Experiment 1, 20 participants grasped, without lifting, an unseen 6.5 cm-wide object with two grasp configurations: thumb and all fingers, and thumb and index finger. Participants reported perceived grasp aperture (body reference frame) or perceived object width (external reference frame) using visual charts. In Experiment 2, 20 participants grasped the object with three grasp intensities (1, 5 and 15% of maximal grasp force), actively or passively, and reported perceived grasp aperture. A follow-up experiment addressed whether the results of Experiment 2 were influenced by the external force applied during passive grasp. Overall, there was a mean difference of 0.38 cm (95% confidence interval (CI), 0.12 to 0.63) between the two grasp configurations (all digits compared with thumb and index finger). Perceived object width differed from perceived grasp aperture by only -0.04 cm (95% CI, -0.30 to 0.21). There was no real effect on perceived grasp aperture of grasp intensity (-0.01 cm; 95% CI, -0.03 to 0.01) or grasp type (active versus passive; 0.18 cm; 95% CI, -0.19 to 0.55). Overall, grasp-related perceptions are slightly wider when using all digits, in line with our tendency to grasp larger objects with the entire hand. The other factors (frame of reference, grasp intensity and grasp type) had no meaningful effect on these perceptions. These results provide evidence that the brain maintains a largely stable representation of the hand.
Affiliation(s)
- Annie A Butler
- Neuroscience Research Australia, Randwick, New South Wales, 2031, Australia; University of New South Wales, Kensington, New South Wales, 2032, Australia
- Martin E Héroux
- Neuroscience Research Australia, Randwick, New South Wales, 2031, Australia; University of New South Wales, Kensington, New South Wales, 2032, Australia
- Tess van Eijk
- Neuroscience Research Australia, Randwick, New South Wales, 2031, Australia; Radboud University Medical Center, Nijmegen, The Netherlands
- Simon C Gandevia
- Neuroscience Research Australia, Randwick, New South Wales, 2031, Australia; University of New South Wales, Kensington, New South Wales, 2032, Australia
6
Abstract
Humans exhibit a remarkable ability to discriminate variations in object volume based on natural haptic perception. The discrimination thresholds for haptic volume perception with the whole hand are well known, but the thresholds for individual fingers and phalanges are still unknown. In the present study, two psychophysical experiments were performed to investigate haptic volume perception with various fingers and phalanges; the configuration of both experiments ensured that judgements depended entirely on haptic volume perception from the fingers and phalanges. Participants discriminated, without vision, volume variations of regular solid objects presented in random order, using the distal, medial and proximal phalanges of their index, middle, ring and little fingers. The discrimination threshold of haptic volume perception gradually decreased from the little finger to the index finger, as well as from the proximal phalanx to the distal phalanx. Overall, both the shape of the target and the part of the finger in contact with the target significantly influenced the precision of haptic volume perception. This substantial data set provides a detailed and compelling perspective on the haptic system, including the discrimination of the spatial size of objects and more general perceptual processes.
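The discrimination thresholds reported in this and the other psychophysical abstracts above are typically estimated from a psychometric function (proportion correct as a function of stimulus increment) and often normalised as a Weber fraction. The sketch below is a generic, hypothetical illustration of that standard procedure using linear interpolation to a 75% criterion; it is not the analysis used in any of these studies, and all numbers are fabricated for illustration.

```python
import numpy as np

def threshold_from_psychometric(delta_volumes, proportion_correct, criterion=0.75):
    """Estimate the discrimination threshold as the stimulus increment at
    which interpolated performance reaches the criterion level.
    Assumes proportion_correct increases monotonically with the increment."""
    dv = np.asarray(delta_volumes, dtype=float)
    pc = np.asarray(proportion_correct, dtype=float)
    return float(np.interp(criterion, pc, dv))

def weber_fraction(reference_volume, discrimination_threshold):
    """Weber fraction dV/V: the just-noticeable change relative to the
    reference magnitude (a standard psychophysical summary)."""
    return discrimination_threshold / reference_volume

# Illustrative data: performance rises with the size of the volume increment
deltas = [1.0, 2.0, 3.0, 4.0, 5.0]          # volume increments (cm^3)
p_correct = [0.55, 0.62, 0.71, 0.82, 0.93]  # proportion of correct judgements
threshold = threshold_from_psychometric(deltas, p_correct)
wf = weber_fraction(50.0, threshold)        # relative to a 50 cm^3 reference
```

A lower threshold (and Weber fraction) indicates finer discrimination, which is how claims such as "the threshold decreases from the little finger to the index finger" are quantified.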
7
Santello M, Bianchi M, Gabiccini M, Ricciardi E, Salvietti G, Prattichizzo D, Ernst M, Moscatelli A, Jörntell H, Kappers AML, Kyriakopoulos K, Albu-Schäffer A, Castellini C, Bicchi A. Hand synergies: integration of robotics and neuroscience for understanding the control of biological and artificial hands. Phys Life Rev 2016; 17:1-23. PMID: 26923030. DOI: 10.1016/j.plrev.2016.02.001.
Abstract
The term 'synergy', from the Greek synergia, means 'working together'. The concept of multiple elements working together towards a common goal has been used extensively in neuroscience to develop theoretical frameworks, experimental approaches and analytical techniques for understanding the neural control of movement, and for applications in neurorehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e. robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches from robotics and neuroscience to study the properties and applications of the concept of synergies has generated a number of multidisciplinary cooperative projects, among them the recently completed 4-year European project "The Hand Embodied" (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of the neuroscientific bases of hand synergies and describe how robotics has leveraged insights from neuroscience for innovative hardware and controller design in biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach in robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies.
Affiliation(s)
- Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
- Matteo Bianchi
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia (IIT), Genova, Italy
- Marco Gabiccini
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia (IIT), Genova, Italy; Department of Civil and Industrial Engineering, University of Pisa, Pisa, Italy
- Emiliano Ricciardi
- Molecular Mind Laboratory, Dept. Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
- Gionata Salvietti
- Department of Information Engineering and Mathematics, University of Siena, Siena, Italy
- Domenico Prattichizzo
- Department of Information Engineering and Mathematics, University of Siena, Siena, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia (IIT), Genova, Italy
- Marc Ernst
- Department of Cognitive Neuroscience and CITEC, Bielefeld University, Bielefeld, Germany
- Alessandro Moscatelli
- Department of Cognitive Neuroscience and CITEC, Bielefeld University, Bielefeld, Germany; Department of Systems Medicine and Centre of Space Bio-Medicine, Università di Roma "Tor Vergata", 00173, Rome, Italy
- Henrik Jörntell
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Lund University, Lund, Sweden
- Kostas Kyriakopoulos
- School of Mechanical Engineering, National Technical University of Athens, Greece
- Alin Albu-Schäffer
- DLR - German Aerospace Center, Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Claudio Castellini
- DLR - German Aerospace Center, Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Antonio Bicchi
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia (IIT), Genova, Italy