1. Cesanek E, Shivkumar S, Ingram JN, Wolpert DM. Ouvrai opens access to remote virtual reality studies of human behavioural neuroscience. Nat Hum Behav 2024. PMID: 38671286; DOI: 10.1038/s41562-024-01834-7.
Abstract
Modern virtual reality (VR) devices record six-degree-of-freedom kinematic data with high spatial and temporal resolution and display high-resolution stereoscopic three-dimensional graphics. These capabilities make VR a powerful tool for many types of behavioural research, including studies of sensorimotor, perceptual and cognitive functions. Here we introduce Ouvrai, an open-source solution that facilitates the design and execution of remote VR studies, capitalizing on the surge in VR headset ownership. This tool allows researchers to develop sophisticated experiments using cutting-edge web technologies such as WebXR to enable browser-based VR, without compromising on experimental design. Ouvrai's features include easy installation, intuitive JavaScript templates, a component library managing front- and backend processes and a streamlined workflow. It integrates with Firebase, Prolific and Amazon Mechanical Turk and provides data processing utilities for analysis. Unlike other tools, Ouvrai remains free, with researchers managing their web hosting and cloud database via personal Firebase accounts. Ouvrai is not limited to VR studies; researchers can also develop and run desktop or touchscreen studies using the same streamlined workflow. Through three distinct motor learning experiments, we confirm Ouvrai's efficiency and viability for conducting remote VR studies.
Affiliation(s)
- Evan Cesanek: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Sabyasachi Shivkumar: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- James N Ingram: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Daniel M Wolpert: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
2. Morgenstern Y, Storrs KR, Schmidt F, Hartmann F, Tiedemann H, Wagemans J, Fleming RW. High-level aftereffects reveal the role of statistical features in visual shape encoding. Curr Biol 2024; 34:1098-1106.e5. PMID: 38218184; PMCID: PMC10931819; DOI: 10.1016/j.cub.2023.12.039.
Abstract
Visual shape perception is central to many everyday tasks, from object recognition to grasping and handling tools [1-10]. Yet how shape is encoded in the visual system remains poorly understood. Here, we probed shape representations using visual aftereffects: perceptual distortions that occur following extended exposure to a stimulus [11-17]. Such effects are thought to be caused by adaptation in neural populations that encode both simple, low-level stimulus characteristics [17-20] and more abstract, high-level object features [21-23]. To tease these two contributions apart, we used machine-learning methods to synthesize novel shapes in a multidimensional shape space derived from a large database of natural shapes [24]. Stimuli were carefully selected such that low-level and high-level adaptation models made distinct predictions about the shapes observers would perceive following adaptation. We found that adaptation along vector trajectories in the high-level shape space predicted shape aftereffects better than simple low-level processes did. Our findings reveal the central role of high-level statistical features in the visual representation of shape. They also hint that human vision is attuned to the distribution of shapes experienced in the natural environment.
Affiliation(s)
- Yaniv Morgenstern: Erasmus University Rotterdam, Department of Psychology, Burgemeester Oudlaan 50, 3062PA Rotterdam, the Netherlands; University of Leuven (KU Leuven), Brain and Cognition, Tiensestraat 102, 3000 Leuven, Belgium
- Katherine R Storrs: Justus Liebig University Giessen, Department of Psychology, Otto-Behaghel-Str. 10, 35394 Giessen, Germany; University of Auckland, School of Psychology, 23 Symonds Street, Auckland 1010, New Zealand
- Filipp Schmidt: Justus Liebig University Giessen, Department of Psychology, Otto-Behaghel-Str. 10, 35394 Giessen, Germany; University of Marburg and Justus Liebig University Giessen, Center for Mind, Brain and Behavior (CMBB), Hans-Meerwein-Str. 6, 35032 Marburg, Germany
- Frieder Hartmann: Justus Liebig University Giessen, Department of Psychology, Otto-Behaghel-Str. 10, 35394 Giessen, Germany
- Henning Tiedemann: Justus Liebig University Giessen, Department of Psychology, Otto-Behaghel-Str. 10, 35394 Giessen, Germany
- Johan Wagemans: University of Leuven (KU Leuven), Brain and Cognition, Tiensestraat 102, 3000 Leuven, Belgium
- Roland W Fleming: Justus Liebig University Giessen, Department of Psychology, Otto-Behaghel-Str. 10, 35394 Giessen, Germany; University of Marburg and Justus Liebig University Giessen, Center for Mind, Brain and Behavior (CMBB), Hans-Meerwein-Str. 6, 35032 Marburg, Germany
3. Klein LK, Maiello G, Stubbs K, Proklova D, Chen J, Paulun VC, Culham JC, Fleming RW. Distinct Neural Components of Visually Guided Grasping during Planning and Execution. J Neurosci 2023; 43:8504-8514. PMID: 37848285; PMCID: PMC10711727; DOI: 10.1523/jneurosci.0335-23.2023.
Abstract
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the roles of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was encoded first in ventral-stream areas during grasp planning and then in premotor regions during grasp execution. Object mass was encoded in ventral-stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.

Significance Statement: Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning and, surprisingly, increasing ventral-stream engagement during execution. We propose that, during planning, visuomotor representations initially encode grasp axis and size; perceptual representations of object material properties instead become more relevant as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
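The abstract above hinges on Representational Similarity Analysis (RSA). As a minimal, illustrative sketch only (toy data and pure-Python helpers, not the authors' analysis pipeline), RSA compares the pairwise dissimilarity structure of measured activity patterns against the structure predicted by a model:

```python
import math
from itertools import combinations

def pearson(a, b):
    # Pearson correlation between two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rdm(patterns):
    # Representational dissimilarity matrix (flattened upper triangle):
    # 1 - correlation for every pair of condition patterns.
    return [1.0 - pearson(patterns[i], patterns[j])
            for i, j in combinations(range(len(patterns)), 2)]

def ranks(v):
    # Simple ranking (ties broken by sort order; adequate for this sketch).
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks.
    return pearson(ranks(a), ranks(b))

# Toy activity patterns for 4 conditions x 4 "voxels" (hypothetical data):
# conditions 1-2 and 3-4 form two clusters.
neural = [[1.0, 0.9, 0.1, 0.2],
          [0.9, 1.0, 0.2, 0.1],
          [0.1, 0.2, 1.0, 0.9],
          [0.2, 0.1, 0.9, 1.0]]
# A model RDM source predicting exactly that two-cluster structure.
model = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# A high value means the model captures the neural similarity structure.
print(spearman(rdm(neural), rdm(model)))
```

In the actual study, the patterns would be voxel responses per condition and region, and the model RDMs would encode grasp axis, grasp size, or object mass; the data above are invented purely for illustration.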
Affiliation(s)
- Lina K Klein: Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Guido Maiello: School of Psychology, University of Southampton, Southampton SO17 1PS, United Kingdom
- Kevin Stubbs: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Daria Proklova: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Juan Chen: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou 510631, China; Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Guangzhou 510631, China
- Vivian C Paulun: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Jody C Culham: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Roland W Fleming: Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany; Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, 35390 Giessen, Germany
4. Mastinu E, Coletti A, Mohammad SHA, van den Berg J, Cipriani C. HANDdata - first-person dataset including proximity and kinematics measurements from reach-to-grasp actions. Sci Data 2023; 10:405. PMID: 37355716; PMCID: PMC10290694; DOI: 10.1038/s41597-023-02313-w.
Abstract
HANDdata is a dataset designed to provide hand kinematics and proximity vision data during reach-to-grasp actions on non-virtual objects, specifically tailored for autonomous grasping with a robotic hand and with particular attention to the reaching phase. We therefore sought to capture target object characteristics from radar and time-of-flight proximity sensors, as well as details of the reach-to-grasp action, by recording wrist and finger kinematics and the main events of the hand-object interaction. We structured the data collection as a sequence of static and grasping tasks, organized by increasing levels of complexity. HANDdata is a first-person, reach-to-grasp dataset that includes almost 6,000 human-object interactions from 29 healthy adults, with 10 standardized objects of 5 different shapes and 2 kinds of materials. We believe that this data collection can be of value for researchers interested in autonomous grasping robots for healthcare and industrial applications, as well as for those interested in radar-based computer vision and in basic aspects of sensorimotor control and manipulation.
Affiliation(s)
- Enzo Mastinu: BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Anna Coletti: BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
5. Derzsi Z, Volcic R. Not only perception but also grasping actions can obey Weber's law. Cognition 2023; 237:105465. PMID: 37150154; DOI: 10.1016/j.cognition.2023.105465.
Abstract
Weber's law, the principle that the uncertainty of perceptual estimates increases proportionally with object size, is regularly violated when considering the uncertainty of the grip aperture during grasping movements. The origins of this perception-action dissociation are debated and have been attributed to various causes, including different coding of visual size information for perception and action, biomechanical factors, the use of positional information to guide grasping, or sensorimotor calibration. Here, we contrasted these accounts and compared perceptual and grasping uncertainties by asking people to indicate the visually perceived center of differently sized objects (Perception condition) or to grasp and lift the same objects with the requirement to achieve a balanced lift (Action condition). We found that the variability (uncertainty) of contact positions increased as a function of object size in both perception and action. The adherence of the Action condition to Weber's law, and the consequent absence of a perception-action dissociation, contradicts the predictions based on different coding of visual size information and on sensorimotor calibration. These findings provide clear evidence that the human perceptual and visuomotor systems rely on the same visual information and suggest that the previously reported violations of Weber's law in grasping movements should be attributed to other factors.
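As a quick reminder (a textbook formulation, not taken from the cited paper), Weber's law states that the just-noticeable difference in a stimulus, and hence the standard deviation of an estimate of it, grows in proportion to its magnitude:

```latex
\Delta I = k\,I \qquad\Longleftrightarrow\qquad \sigma(\hat{s}) = k\,s
```

where $I$ (or the size $s$) is the stimulus magnitude and $k$ is the Weber fraction. The debate addressed in this study concerns whether grip aperture variability follows this proportionality.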
Affiliation(s)
- Zoltan Derzsi: Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Robert Volcic: Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Center for Brain and Health, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
6. Having several options does not increase the time it takes to make a movement to an adequate end point. Exp Brain Res 2022; 240:1849-1871. PMID: 35551429; PMCID: PMC9142465; DOI: 10.1007/s00221-022-06376-w.
Abstract
Throughout the day, people constantly make choices such as where to direct their gaze or place their foot. When making such movement choices, there are usually multiple acceptable options, although some are more advantageous than others. How much time does it take to make such choices, and to what extent is the most advantageous option chosen from the available alternatives? To find out, we asked participants to collect points by tapping on any of several targets with their index finger. It did not take participants more time to direct their movements to an advantageous target when there were more options. Participants chose targets that were advantageous because they were easier to reach. Targets could be easier to reach because the finger was already moving in their direction when they appeared, or because they were larger or oriented along the movement direction, so that the finger could move faster towards them without missing them. When a target's colour indicated that it was worth more points, participants were slightly slower to choose it, presumably because it generally takes longer to respond to colour than to attributes such as size. They also chose it less often than they probably should have, presumably because the advantage of choosing it was established arbitrarily. We conclude that having many options does not increase the time it takes to move to an adequate target.
7. Preißler L, Jovanovic B, Munzert J, Schmidt F, Fleming RW, Schwarzer G. Effects of visual and visual-haptic perception of material rigidity on reaching and grasping in the course of development. Acta Psychol (Amst) 2021; 221:103457. PMID: 34883348; DOI: 10.1016/j.actpsy.2021.103457.
Abstract
The development of material property perception for grasping objects is not well explored in early childhood. We therefore investigated the unimanual grasping behavior and reaching kinematics of infants, 3-year-old children, and adults for objects of different rigidity, using a 3D motion capture system. In Experiment 1, 11-month-old infants (and, for comparison, adults) and, in Experiment 2, 3-year-old children were encouraged to lift relatively heavy objects by one of two handles differing in rigidity, after visual (Condition 1) or visual-haptic exploration (Condition 2). Experiment 1 revealed that 11-month-olds, after visual object exploration, showed no significant material preference and thus did not take the material into account to facilitate grasping. After visual-haptic object exploration, and when grasping the contralateral handles, infants showed an unexpected preference for the soft handles, which made it harder to lift the object. In contrast, adults generally grasped the rigid handle in both conditions, exploiting their knowledge about efficient and functional grasping. Reaching kinematics were barely affected by rigidity, but rather by condition and age. Experiment 2 revealed that 3-year-olds no longer exhibited a preference for grasping soft handles, but still showed no adult-like preference for rigid handles in either condition. This suggests that material rigidity plays a minor role in infants' grasping behavior when only visual material information is available. Three-year-olds, moreover, seem to be at an intermediate stage in the development from (1) preferring the pleasant sensation of a soft fabric to (2) preferring the efficient rigid handle.
Affiliation(s)
- Lucie Preißler: Department of Developmental Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F1, 35394 Giessen, Germany
- Bianca Jovanovic: Department of Developmental Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F1, 35394 Giessen, Germany
- Jörn Munzert: Department of Sports Science, Justus-Liebig-University Giessen, Kugelberg 62, 35394 Giessen, Germany
- Filipp Schmidt: Department of General Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F2, 35394 Giessen, Germany
- Roland W Fleming: Department of General Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F2, 35394 Giessen, Germany
- Gudrun Schwarzer: Department of Developmental Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F1, 35394 Giessen, Germany
8. Morgenstern Y, Hartmann F, Schmidt F, Tiedemann H, Prokott E, Maiello G, Fleming RW. An image-computable model of human visual shape similarity. PLoS Comput Biol 2021; 17:e1008981. PMID: 34061825; PMCID: PMC8195351; DOI: 10.1371/journal.pcbi.1008981.
Abstract
Shape is a defining feature of objects, and human observers can effortlessly compare shapes to determine how similar they are. Yet, to date, no image-computable model can predict how visually similar or different shapes appear. Such a model would be an invaluable tool for neuroscientists and could provide insights into the computations underlying human shape perception. To address this need, we developed a model ('ShapeComp') based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp accurately predicts human shape similarity judgments between pairs of shapes without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that incorporating multiple ShapeComp dimensions facilitates the prediction of human shape similarity across a small number of shapes, and also captures much of the variance in the multiple arrangements of many shapes. ShapeComp outperforms both conventional pixel-based metrics and state-of-the-art convolutional neural networks, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.

The ability to describe and compare shapes is crucial in many scientific domains, from visual object recognition to computational morphology and computer graphics. Across disciplines, considerable effort has been devoted to the study of shape and its influence on object recognition, yet an important stumbling block is the quantitative characterization of shape similarity. Here we develop a psychophysically validated model that takes as input an object's shape boundary and provides a high-dimensional output that can be used to predict visual shape similarity. With this precise control of shape similarity, the model's description of shape is a powerful tool that can be used across the neurosciences and artificial intelligence to test the role of shape in perception and the brain.
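To make the "shape features" concrete, here is a small self-contained sketch (illustrative only; not the authors' ShapeComp code) computing two of the descriptors named in the abstract, area and compactness, for a polygonal silhouette boundary:

```python
import math

def polygon_area(pts):
    # Shoelace formula for a simple closed polygon given as (x, y) vertices.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def polygon_perimeter(pts):
    # Sum of edge lengths, closing the polygon back to the first vertex.
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def compactness(pts):
    # 4*pi*A / P^2: equals 1 for a circle, smaller for elongated shapes.
    a, p = polygon_area(pts), polygon_perimeter(pts)
    return 4.0 * math.pi * a / (p * p)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
circle = [(math.cos(2 * math.pi * k / 360), math.sin(2 * math.pi * k / 360))
          for k in range(360)]
print(round(compactness(square), 3))  # ~ 0.785 (pi/4)
print(round(compactness(circle), 3))  # ~ 1.0
```

A full descriptor set would add many more features (Fourier descriptors, curvature statistics, etc.); the point here is only that each feature maps a boundary to a number, so a shape becomes a point in a multidimensional feature space.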
Affiliation(s)
- Yaniv Morgenstern: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Frieder Hartmann: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Filipp Schmidt: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Henning Tiedemann: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Eugen Prokott: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Guido Maiello: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Roland W. Fleming: Department of Experimental Psychology, Justus-Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
9. Mroczkowski CA, Niechwiej-Szwedo E. Stereopsis contributes to the predictive control of grip forces during prehension. Exp Brain Res 2021; 239:1345-1358. PMID: 33661370; DOI: 10.1007/s00221-021-06052-5.
Abstract
Binocular viewing is associated with superior prehensile performance, which is particularly evident in the latter part of the reach as the hand approaches and makes contact with the target object. However, the visuomotor mechanisms through which binocular vision serves prehension are not fully understood. This study assessed the role of stereopsis in the predictive control of grasping by measuring grip force. Twenty participants performed a precision reach-to-grasp task in four viewing conditions: binocular, monocular, and with reduced stereoacuity (200 arc sec, > 400 arc sec). Monocular viewing, compared with binocular viewing, was associated with a fourfold increase in grasp errors, a 56% increase in grasp duration, a 22% decrease in grip force at 50 ms after grasp initiation, and a time of peak force occurring 40% later after grasp initiation (all p < 0.05). Grasp performance was also disrupted when viewing with reduced stereoacuity. Notably, grip force at the time of object lift-off was comparable across all viewing conditions. These results demonstrate that binocular stereopsis contributes to the efficient programming of grip forces. Specifically, stereopsis may provide sensory information that enables the central nervous system to engage in predictive control of grasping.
Affiliation(s)
- Corey A Mroczkowski: Department of Kinesiology, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 5G1, Canada
- Ewa Niechwiej-Szwedo: Department of Kinesiology, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 5G1, Canada
10. Xu C, Wang Y, Gerling GJ. An elasticity-curvature illusion decouples cutaneous and proprioceptive cues in active exploration of soft objects. PLoS Comput Biol 2021; 17:e1008848. PMID: 33750948; PMCID: PMC8016306; DOI: 10.1371/journal.pcbi.1008848.
Abstract
Our sense of touch helps us encounter the richness of our natural world. Across a myriad of contexts and repetitions, we have learned to deploy certain exploratory movements in order to elicit perceptual cues that are salient and efficient. The task of identifying the optimal exploration strategies and somatosensory cues that underlie softness perception remains relevant and incomplete. Leveraging psychophysical evaluations combined with computational finite element modeling of skin contact mechanics, we investigate an illusion in the exploration of softness, in which small-compliant and large-stiff spheres are indiscriminable. By modulating contact interactions at the finger pad, we find that this elasticity-curvature illusion is observable in passive touch, when the finger is constrained to be stationary and only cutaneous responses from mechanosensitive afferents are perceptible. However, these spheres become readily discriminable when explored volitionally with musculoskeletal proprioception available. We subsequently exploit this phenomenon to dissociate the relative contributions of cutaneous and proprioceptive signals to the encoding of material softness. Our findings shed light on how we volitionally explore soft objects, i.e., by controlling surface contact force to optimally elicit and integrate proprioceptive inputs amidst indiscriminable cutaneous contact cues. Moreover, in passive touch, e.g., for touch-enabled displays grounded to the finger, we find that those spheres become discriminable when the rates of change in cutaneous contact are varied between the stimuli to supplant proprioceptive feedback.
Affiliation(s)
- Chang Xu: School of Engineering and Applied Science, University of Virginia, Charlottesville, Virginia, United States of America
- Yuxiang Wang: School of Engineering and Applied Science, University of Virginia, Charlottesville, Virginia, United States of America
- Gregory J. Gerling: School of Engineering and Applied Science, University of Virginia, Charlottesville, Virginia, United States of America
11. Klein LK, Maiello G, Fleming RW, Voudouris D. Friction is preferred over grasp configuration in precision grip grasping. J Neurophysiol 2021; 125:1330-1338. PMID: 33596725; DOI: 10.1152/jn.00021.2021.
Abstract
How humans visually select where to grasp an object depends on many factors, including grasp stability and preferred grasp configuration. We examined how endpoints are selected when these two factors are brought into conflict: Do people favor stable grasps, or do they prefer their natural grasp configurations? Participants reached to grasp one of three cuboids oriented so that its two corners were either aligned with, or rotated away from, each individual's natural grasp axis (NGA). All objects were made of brass (mass: 420 g), but their side surfaces were manipulated to alter friction: 1) all brass; 2) two opposing sides covered with wood, the other two remaining brass; or 3) two opposing sides covered with sandpaper, and the two remaining brass sides smeared with Vaseline. Grasps were evaluated as either clockwise (thumb to the left of the finger in the frontal plane) or counterclockwise of the NGA. Grasp endpoints depended on both object orientation and surface material. For the all-brass object, grasps were bimodally distributed in the NGA-aligned condition but predominantly clockwise in the NGA-unaligned condition. These data reflected participants' natural grasp configuration independently of surface material. When grasping objects with different surface materials, endpoint selection changed: participants sacrificed their usual grasp configuration to choose the more stable object sides. A model in which surface material shifts participants' preferred grip angle proportionally to the perceived friction of the surfaces accounts for our results. Our findings demonstrate that a stable grasp is more important than a biomechanically comfortable grasp configuration.

New & Noteworthy: When grasping an object, humans can place their fingers at several positions on its surface. The selection of these endpoints depends on many factors, two of the most important being grasp stability and grasp configuration. We put these two factors in conflict and examined which is considered more important. Our results highlight that humans are not reluctant to adopt unusual grasp configurations to satisfy grasp stability.
Affiliation(s)
- Lina K Klein: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Guido Maiello: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Roland W Fleming: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, Germany
- Dimitris Voudouris: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
12. Maiello G, Schepko M, Klein LK, Paulun VC, Fleming RW. Humans Can Visually Judge Grasp Quality and Refine Their Judgments Through Visual and Haptic Feedback. Front Neurosci 2021; 14:591898. PMID: 33510608; PMCID: PMC7835720; DOI: 10.3389/fnins.2020.591898.
Abstract
How humans visually select where to grasp objects is determined by the physical object properties (e.g., size, shape, weight), the degrees of freedom of the arm and hand, and the task to be performed. We recently demonstrated that human grasps are near-optimal with respect to a weighted combination of different cost functions that make grasps uncomfortable, unstable, or impossible, e.g., due to unnatural grasp apertures or large torques. Here, we ask whether humans can consciously access these rules. We test whether humans can explicitly judge grasp quality derived from rules regarding grasp size, orientation, torque, and visibility. More specifically, we test whether grasp quality can be inferred (i) by using visual cues and motor imagery alone, (ii) from watching grasps executed by others, and (iii) through performing grasps, i.e., receiving visual, proprioceptive, and haptic feedback. Stimuli were novel objects made of 10 cubes of brass and wood (side length 2.5 cm) in various configurations. On each object, one near-optimal and one sub-optimal grasp were selected based on one cost function (e.g., torque), while the other constraints (grasp size, orientation, and visibility) were kept approximately constant or counterbalanced. Participants were visually cued to the location of the selected grasps on each object and verbally reported which of the two grasps was better. Across three experiments, participants were required to either (i) passively view the static objects and imagine executing the two competing grasps, (ii) passively view videos of other participants grasping the objects, or (iii) actively grasp the objects themselves. Our results show that, for a majority of tested objects, participants could already judge grasp optimality from simply viewing the objects and imagining grasping them, but were significantly better in the video and grasping sessions. These findings suggest that humans can determine grasp quality even without performing the grasp, perhaps through motor imagery, and can further refine their understanding of how to correctly grasp an object through sensorimotor feedback, but also by passively viewing others grasp objects.
Affiliation(s)
- Guido Maiello: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Marcel Schepko: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Lina K. Klein: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Vivian C. Paulun: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Roland W. Fleming: Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior, Justus Liebig University Giessen, Giessen, Germany