1.
Gerharz L, Brenner E, Billino J, Voudouris D. Age effects on predictive eye movements for action. J Vis 2024; 24:8. [PMID: 38856982 PMCID: PMC11166221 DOI: 10.1167/jov.24.6.8] [Received: 01/10/2024] [Accepted: 04/22/2024] Open access.
Abstract
When interacting with the environment, humans typically shift their gaze to where useful information for the upcoming action can be found. With increasing age, people become slower both in processing sensory information and in performing their movements. One way to compensate for this slowing could be to rely more on predictive strategies. To examine whether we could find evidence for this, we asked younger (19-29 years) and older (55-72 years) healthy adults to perform a reaching task in which they hit a visual target that appeared at one of two possible locations. In separate blocks of trials, the target could appear always at the same location (predictable), mainly at one of the locations (biased), or at either location randomly (unpredictable). As one might expect, saccades toward predictable targets had shorter latencies than those toward less predictable targets, irrespective of age. Older adults took longer than younger adults to initiate saccades toward the target location, even when the likely target location could be deduced, so we found no evidence of them relying more on predictive gaze. Moreover, both younger and older participants performed more saccades when the target location was less predictable, and again no age-related differences were found. Thus, we found no tendency for older adults to rely more on prediction.
Affiliation(s)
- Leonard Gerharz
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- https://orcid.org/0009-0006-0487-2609
- Eli Brenner
- Department of Human Movement Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Jutta Billino
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Dimitris Voudouris
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
2.
Cheema N, Yielder P, Sanmugananthan P, Ambalavanar U, Murphy B. Impact of subclinical neck pain on eye and hand movements in goal-directed upper limb aiming movements. Hum Mov Sci 2024; 96:103238. [PMID: 38824805 DOI: 10.1016/j.humov.2024.103238] [Received: 10/26/2023] [Revised: 04/13/2024] [Accepted: 05/20/2024]
Abstract
Individuals with untreated, mild-to-moderate recurrent neck pain or stiffness (subclinical neck pain, SCNP) have been shown to have impairments in upper limb proprioception and altered cerebellar processing. Aiming trajectories are likely to be affected because individuals with SCNP cannot rely on accurate proprioceptive feedback or feedforward processing (body schema) for movement planning and execution, owing to altered afferent input from the neck. SCNP participants may thus rely more on visual feedback to compensate for impaired cerebellar processing. This quasi-experimental study sought to determine whether upper limb kinematics and oculomotor processes were affected in those with SCNP. Twenty-five SCNP and 25 control participants, all right-hand dominant, performed bidirectional aiming movements using two differently weighted styli (light or heavy) while wearing an eye-tracking device. Those with SCNP had a greater time to and after peak velocity, which corresponded with longer upper limb movement and reaction times, seen as greater constant error: less undershoot in the upwards direction and greater undershoot in the downwards direction compared to controls. SCNP participants also showed a trend toward quicker ocular reaction and movement times compared to controls, while movement distance was fairly similar between groups. This study indicates that SCNP alters aiming performance, with greater reliance on visual feedback, likely due to altered proprioceptive input leading to altered cerebellar processing.
Affiliation(s)
- Navika Cheema
- Faculty of Health Sciences, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
- Paul Yielder
- Faculty of Health Sciences, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
- Ushani Ambalavanar
- Faculty of Health Sciences, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
- Bernadette Murphy
- Faculty of Health Sciences, Ontario Tech University, Oshawa, ON L1G 0C5, Canada.
3.
Wang G, Zheng C, Wu X, Deng Z, Sperandio I, Goodale MA, Chen J. The contribution of semantic distance knowledge to size constancy in perception and grasping when visual cues are limited. Neuropsychologia 2024; 196:108838. [PMID: 38401629 DOI: 10.1016/j.neuropsychologia.2024.108838] [Received: 07/31/2023] [Revised: 01/04/2024] [Accepted: 02/21/2024]
Abstract
To achieve a stable perception of object size in spite of variations in viewing distance, our visual system needs to combine retinal image information and distance cues. Previous research has shown that not only retinal cues but also extraretinal sensory signals can provide reliable information about depth, and that different neural networks (perception versus action) can exhibit preferences in the use of these different sources of information during size-distance computations. Semantic knowledge of distance, a purely cognitive signal, can also provide distance information. Do the perception and action systems differ in their ability to use this information in calculating object size and distance? To address this question, we presented 'glow-in-the-dark' objects of different physical sizes at different real distances in a completely dark room. Participants viewed the objects monocularly through a 1-mm pinhole. They either estimated the size and distance of the objects or attempted to grasp them. Semantic knowledge was manipulated by providing an auditory cue about the actual distance of the object: "20 cm", "30 cm", or "40 cm". We found that semantic knowledge of distance contributed to some extent to size constancy operations during perceptual estimation and grasping, but size constancy was never fully restored. Importantly, the contribution of distance knowledge to size constancy was equivalent between perception and action. Overall, our study reveals similarities and differences between the perception and action systems in the use of semantic distance knowledge and suggests that this cognitive signal is useful, but not fully reliable, as a depth cue for size constancy under restricted viewing conditions.
Affiliation(s)
- Gexiu Wang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China
- Chao Zheng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China
- Xiaoqian Wu
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China
- Zhiqing Deng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China
- Irene Sperandio
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, 38068, Italy
- Melvyn A Goodale
- Western Institute for Neuroscience and the Department of Psychology, The University of Western Ontario, London, ON, N6A 5C2, Canada
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, Guangdong Province, 510631, China
4.
Derzsi Z, Volcic R. Not only perception but also grasping actions can obey Weber's law. Cognition 2023; 237:105465. [PMID: 37150154 DOI: 10.1016/j.cognition.2023.105465] [Received: 06/23/2022] [Revised: 04/07/2023] [Accepted: 04/20/2023]
Abstract
Weber's law, the principle that the uncertainty of perceptual estimates increases proportionally with object size, is regularly violated when considering the uncertainty of the grip aperture during grasping movements. The origins of this perception-action dissociation are debated and are attributed to various reasons, including different coding of visual size information for perception and action, biomechanical factors, the use of positional information to guide grasping, or sensorimotor calibration. Here, we contrasted these accounts and compared perceptual and grasping uncertainties by asking people to indicate the visually perceived center of differently sized objects (Perception condition) or to grasp and lift the same objects with the requirement to achieve a balanced lift (Action condition). We found that the variability (uncertainty) of contact positions increased as a function of object size in both perception and action. The adherence of the Action condition to Weber's law and the consequent absence of a perception-action dissociation contradict the predictions based on different coding of visual size information and sensorimotor calibration. These findings provide clear evidence that human perceptual and visuomotor systems rely on the same visual information and suggest that the previously reported violations of Weber's law in grasping movements should be attributed to other factors.
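Not part of the article itself, but the principle under test can be sketched numerically: Weber's law implies that the standard deviation of size estimates grows in proportion to object size (sigma = k * S for a constant Weber fraction k). A toy simulation, with all sizes and the Weber fraction chosen purely for illustration:

```python
import random
import statistics

def simulate_estimates(size, weber_fraction=0.08, n=5000, rng=None):
    """Simulate noisy size estimates whose spread follows Weber's law:
    the standard deviation of estimates is proportional to object size."""
    rng = rng or random.Random(0)
    sigma = weber_fraction * size  # Weber's law: uncertainty scales with size
    return [rng.gauss(size, sigma) for _ in range(n)]

# Variability (uncertainty) should double when object size doubles.
small = statistics.stdev(simulate_estimates(30))   # e.g., a 30 mm object
large = statistics.stdev(simulate_estimates(60))   # e.g., a 60 mm object
print(round(large / small, 1))  # prints 2.0
```

The paper's point is that this proportional scaling held not only for perceptual judgments but also for the contact positions of grasping movements.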
Affiliation(s)
- Zoltan Derzsi
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates.
- Robert Volcic
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Center for Brain and Health, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
5.
Camponogara I, Volcic R. Visual uncertainty unveils the distinct role of haptic cues in multisensory grasping. eNeuro 2022; 9:ENEURO.0079-22.2022. [PMID: 35641223 PMCID: PMC9215692 DOI: 10.1523/eneuro.0079-22.2022] [Received: 02/21/2022] [Revised: 04/26/2022] [Accepted: 05/19/2022] Open access.
Abstract
Human multisensory grasping movements (i.e., seeing and feeling a handheld object while grasping it with the contralateral hand) are superior to movements guided by each separate modality. This multisensory advantage might be driven by the integration of vision with either the haptic position cue only or with both position and size cues. To contrast these two hypotheses, we manipulated visual uncertainty (central vs. peripheral vision) and the availability of haptic cues during multisensory grasping. We found a multisensory benefit irrespective of the degree of visual uncertainty, suggesting that the integration process involved in multisensory grasping can be flexibly modulated by the contribution of each modality. Increasing visual uncertainty revealed the roles of the distinct haptic cues: the haptic position cue was sufficient to promote multisensory benefits, evidenced by faster actions with smaller grip apertures, whereas the haptic size cue was fundamental in fine-tuning grip aperture scaling. These results support the hypothesis that, in multisensory grasping, vision is integrated with all haptic cues, with the haptic position cue playing the key part. Our findings highlight the important role of non-visual sensory inputs in sensorimotor control and hint at the potential contributions of the haptic modality in developing and maintaining visuomotor functions.
Significance statement: The longstanding view of vision as the primary sense guiding grasping movements relegates the equally important haptic inputs, such as touch and proprioception, to a secondary role. Here we show that, as visual uncertainty increases during visuo-haptic grasping, the central nervous system exploits distinct haptic inputs about object position and size to optimize grasping performance. Specifically, we demonstrate that haptic inputs about object position are fundamental to support vision in enhancing grasping performance, whereas haptic size inputs can further refine hand shaping. Our results provide strong evidence that non-visual inputs serve an important, previously under-appreciated, functional role in grasping.
Affiliation(s)
- Ivan Camponogara
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Robert Volcic
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
6.
Beyvers MC, Fraser LE, Fiehler K. Linking Signal Relevancy and Intensity in Predictive Tactile Suppression. Front Hum Neurosci 2022; 16:795886. [PMID: 35280202 PMCID: PMC8908965 DOI: 10.3389/fnhum.2022.795886] [Received: 10/15/2021] [Accepted: 01/31/2022] Open access.
Abstract
Predictable somatosensory feedback leads to a reduction in tactile sensitivity. This phenomenon, called tactile suppression, relies on a mechanism that uses an efference copy of motor commands to help select relevant aspects of incoming sensory signals. We investigated whether tactile suppression is modulated by (a) the task-relevancy of the predicted consequences of movement and (b) the intensity of related somatosensory feedback signals. Participants reached to a target region in the air in front of a screen; visual or tactile feedback indicated the reach was successful. Furthermore, tactile feedback intensity (strong vs. weak) varied across two groups of participants. We measured tactile suppression by comparing detection thresholds for a probing vibration applied to the finger either early or late during reach and at rest. As expected, we found an overall decrease in late-reach suppression, as no touch was involved at the end of the reach. We observed an increase in the degree of tactile suppression when strong tactile feedback was given at the end of the reach, compared to when weak tactile feedback or visual feedback was given. Our results suggest that the extent of tactile suppression can be adapted to different demands of somatosensory processing. Downregulation of this mechanism is invoked only when the consequences of missing a weak movement sequence are severe for the task. The decisive factor for the presence of tactile suppression seems not to be the predicted action effect as such, but the need to detect and process anticipated feedback signals occurring during movement.
Affiliation(s)
- Marie C. Beyvers
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Lindsey E. Fraser
- Center for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Katja Fiehler
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Correspondence: Katja Fiehler
7.
A brief glimpse at a haptic target is sufficient for multisensory integration in reaching movements. Vision Res 2021; 185:50-57. [PMID: 33895647 DOI: 10.1016/j.visres.2021.03.012] [Received: 11/02/2020] [Revised: 01/26/2021] [Accepted: 03/31/2021]
Abstract
Goal-directed aiming movements toward visuo-haptic targets (i.e., seen and handheld targets) are generally more precise than those toward visual-only or haptic-only targets. This multisensory advantage stems from a continuous inflow of haptic and visual target information during the movement planning and execution phases. However, in everyday life, multisensory movements often occur without the support of continuous visual information. Here we investigated whether, and to what extent, limiting visual information to the initial stage of the action still leads to a multisensory advantage. Participants were asked to reach a handheld target while vision was briefly provided during the movement planning phase (50 ms, 100 ms, or 200 ms of vision before movement onset), during the planning and early execution phases (400 ms of vision), or during the entire movement. Additional conditions were performed in which only haptic target information was provided, or only vision was provided, either briefly (50 ms, 100 ms, 200 ms, 400 ms) or throughout the entire movement. Results showed that 50 ms of vision before movement onset was sufficient to trigger a direction-specific visuo-haptic integration process that increased endpoint precision. We conclude that, when continuous support from vision is not available, endpoint precision is determined by the less recent, but most reliable, multisensory information rather than by the latest unisensory (haptic) inputs.
8.
Klein LK, Maiello G, Fleming RW, Voudouris D. Friction is preferred over grasp configuration in precision grip grasping. J Neurophysiol 2021; 125:1330-1338. [PMID: 33596725 DOI: 10.1152/jn.00021.2021] Open access.
Abstract
How humans visually select where to grasp an object depends on many factors, including grasp stability and preferred grasp configuration. We examined how endpoints are selected when these two factors are brought into conflict: do people favor stable grasps, or do they prefer their natural grasp configurations? Participants reached to grasp one of three cuboids oriented so that its two corners were either aligned with, or rotated away from, each individual's natural grasp axis (NGA). All objects were made of brass (mass: 420 g), but their side surfaces were manipulated to alter friction: 1) all brass; 2) two opposing sides covered with wood, the other two remaining brass; or 3) two opposing sides covered with sandpaper, the two remaining brass sides smeared with Vaseline. Grasps were evaluated as either clockwise (thumb to the left of the finger in the frontal plane) or counterclockwise of the NGA. Grasp endpoints depended on both object orientation and surface material. For the all-brass object, grasps were bimodally distributed in the NGA-aligned condition but predominantly clockwise in the NGA-unaligned condition. These data reflected participants' natural grasp configuration independently of surface material. When grasping objects with different surface materials, endpoint selection changed: participants sacrificed their usual grasp configuration to choose the more stable object sides. A model in which surface material shifts participants' preferred grip angle proportionally to the perceived friction of the surfaces accounts for our results. Our findings demonstrate that a stable grasp is more important than a biomechanically comfortable grasp configuration.
New & Noteworthy: When grasping an object, humans can place their fingers at several positions on its surface. The selection of these endpoints depends on many factors, two of the most important being grasp stability and grasp configuration. We put these two factors in conflict and examine which is considered more important. Our results highlight that humans are not reluctant to adopt unusual grasp configurations to satisfy grasp stability.
Affiliation(s)
- Lina K Klein
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Guido Maiello
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Roland W Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, Germany
- Dimitris Voudouris
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
9.
Camponogara I, Volcic R. Integration of haptics and vision in human multisensory grasping. Cortex 2020; 135:173-185. [PMID: 33383479 DOI: 10.1016/j.cortex.2020.11.012] [Received: 08/17/2020] [Revised: 10/21/2020] [Accepted: 11/17/2020]
Abstract
Grasping actions are directed not only toward objects we see but also toward objects we both see and touch (multisensory grasping). In this latter case, the integration of visual and haptic inputs improves movement performance compared to each sense alone. This performance advantage could be due to the integration of all the redundant positional and size cues or to the integration of only a subset of these cues. Here we selectively provided specific cues to tease apart how these different sensory sources contribute to visuo-haptic multisensory grasping. We demonstrate that the availability of the haptic positional cue together with the visual cues is sufficient to achieve the same grasping performance as when all cues are available. These findings provide strong evidence that the human sensorimotor system relies on non-visual sensory inputs and open new perspectives on their role in supporting vision during both development and adulthood.
Affiliation(s)
- Ivan Camponogara
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates.
- Robert Volcic
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates.
10.
Klein LK, Maiello G, Paulun VC, Fleming RW. Predicting precision grip grasp locations on three-dimensional objects. PLoS Comput Biol 2020; 16:e1008081. [PMID: 32750070 PMCID: PMC7428291 DOI: 10.1371/journal.pcbi.1008081] [Received: 05/23/2019] [Revised: 08/14/2020] [Accepted: 06/22/2020] Open access.
Abstract
We rarely experience difficulty picking up objects, yet of all potential contact points on the surface, only a small proportion yield effective grasps. Here, we present extensive behavioral data alongside a normative model that correctly predicts human precision grasping of unfamiliar 3D objects. We tracked participants' forefinger and thumb as they picked up objects composed of 10 wood and brass cubes, configured to tease apart the effects of shape, weight, orientation, and mass distribution. Grasps were highly systematic and consistent across repetitions and participants. We employed these data to construct a model that combines five cost functions related to force closure, torque, natural grasp axis, grasp aperture, and visibility. Even without free parameters, the model predicts individual grasps almost as well as different individuals predict one another's, but fitting weights reveals the relative importance of the different constraints. The model also accurately predicts human grasps on novel 3D-printed objects with more naturalistic geometries and is robust to perturbations in its key parameters. Together, the findings provide a unified account of how we successfully grasp objects of different 3D shape, orientation, mass, and mass distribution.
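The general structure described in the abstract, scoring candidate grasps by a weighted combination of cost functions, can be sketched generically. All names, cost values, and weights below are hypothetical placeholders for illustration, not the authors' published model or parameters:

```python
def combined_cost(costs, weights):
    """Weighted sum of per-constraint costs; lower means a better grasp."""
    return sum(weights[name] * value for name, value in costs.items())

def best_grasp(candidates, weights):
    """Pick the candidate grasp with the lowest combined cost."""
    return min(candidates, key=lambda c: combined_cost(c["costs"], weights))

# Two toy candidate grasps, each scored on the five constraints named in
# the abstract. Every number here is made up for the sketch.
weights = {"force_closure": 1.0, "torque": 1.0, "natural_axis": 0.5,
           "aperture": 0.5, "visibility": 0.25}
candidates = [
    {"name": "corner_grasp",
     "costs": {"force_closure": 0.2, "torque": 0.8, "natural_axis": 0.1,
               "aperture": 0.3, "visibility": 0.4}},
    {"name": "center_grasp",
     "costs": {"force_closure": 0.3, "torque": 0.1, "natural_axis": 0.4,
               "aperture": 0.2, "visibility": 0.2}},
]
print(best_grasp(candidates, weights)["name"])  # prints center_grasp
```

Fitting the weights to behavioral data, as the abstract describes, would then indicate how strongly each constraint shapes where people actually place their fingers.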
Affiliation(s)
- Lina K. Klein
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Guido Maiello
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Vivian C. Paulun
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Roland W Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, Justus Liebig University Giessen, Giessen, Germany