1
Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024; 132:147-161. PMID: 38836297. DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45°, and -45°, with gaze fixated 10° to the left or right. The hand and template either remained illuminated (closed loop) or visual feedback was removed (open loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the metrics of the preceding saccade also influenced placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory-delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template under different levels of visual feedback. Like reaching, placement overestimated goal location relative to gaze and was influenced by the metrics of the preceding saccade. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent: after a memory delay, its effect on orientation was no longer significant.
Affiliation(s)
- Gaelle N Luabeya
  - Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
  - Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
  - Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
  - Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
  - Department of Biology, York University, Toronto, Ontario, Canada
  - Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
  - Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
  - Department of Biology, York University, Toronto, Ontario, Canada
  - Department of Psychology, York University, Toronto, Ontario, Canada
  - Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
  - Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada

2
Mahon BZ, Almeida J. Reciprocal interactions among parietal and occipito-temporal representations support everyday object-directed actions. Neuropsychologia 2024; 198:108841. PMID: 38430962. DOI: 10.1016/j.neuropsychologia.2024.108841.
Abstract
Everyday interactions with common manipulable objects require the integration of conceptual knowledge about objects and actions with real-time sensory information about the position, orientation and volumetric structure of the grasp target. The ability to successfully interact with everyday objects involves analysis of visual form and shape, surface texture, material properties, conceptual attributes such as identity, function and typical context, and visuomotor processing supporting hand transport, grasp form, and object manipulation. Functionally separable brain regions across the dorsal and ventral visual pathways support the processing of these different object properties and, in concert, are necessary for functional object use. Object-directed grasps display end-state-comfort: they anticipate in form and force the shape and material properties of the grasp target, and how the object will be manipulated after it is grasped. End-state-comfort is the default for everyday interactions with manipulable objects and implies integration of information across the ventral and dorsal visual pathways. We propose a model of how visuomotor and action representations in parietal cortex interact with object representations in ventral and lateral occipito-temporal cortex. One pathway, from the supramarginal gyrus to the middle and inferior temporal gyrus, supports the integration of action-related information, including hand and limb position (supramarginal gyrus), with conceptual attributes and an appreciation of the action goal (middle temporal gyrus). A second pathway, from the posterior intraparietal sulcus (IPS) to the fusiform gyrus and collateral sulcus, supports the integration of grasp parameters (IPS) with the surface texture and material properties (e.g., weight distribution) of the grasp target. Reciprocal interactions among these regions are part of a broader network of regions that support everyday functional object interactions.
Affiliation(s)
- Bradford Z Mahon
  - Department of Psychology, Carnegie Mellon University, USA
  - Neuroscience Institute, Carnegie Mellon University, USA
  - Department of Neurosurgery, University of Rochester Medical Center, USA
- Jorge Almeida
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal

3
Mudrik L, Hirschhorn R, Korisky U. Taking consciousness for real: Increasing the ecological validity of the study of conscious vs. unconscious processes. Neuron 2024; 112:1642-1656. PMID: 38653247. PMCID: PMC11100345. DOI: 10.1016/j.neuron.2024.03.031.
Abstract
The study of consciousness has developed well-controlled, rigorous methods for manipulating and measuring consciousness. Yet, in the process, experimental paradigms grew farther away from everyday conscious and unconscious processes, which raises the concern of ecological validity. In this review, we suggest that the field can benefit from adopting a more ecological approach, akin to other fields of cognitive science. There, this approach challenged some existing hypotheses, yielded stronger effects, and enabled new research questions. We argue that such a move is critical for studying consciousness, where experimental paradigms tend to be artificial and small effect sizes are relatively prevalent. We identify three paths for doing so (changing the stimuli and experimental settings, changing the measures, and changing the research questions themselves) and review works that have already started implementing such approaches. While acknowledging the inherent challenges, we call for increasing ecological validity in consciousness studies.
Affiliation(s)
- Liad Mudrik
  - School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
  - Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Rony Hirschhorn
  - Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Uri Korisky
  - School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel

4
Lavoie E, Hebert JS, Chapman CS. Comparing eye-hand coordination between controller-mediated virtual reality, and a real-world object interaction task. J Vis 2024; 24:9. PMID: 38393742. PMCID: PMC10905649. DOI: 10.1167/jov.24.2.9.
Abstract
Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye-hand coordination. This study compares eye-hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants' gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye-hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye-hand coordination.
Affiliation(s)
- Ewen Lavoie
  - Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert
  - Division of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada
  - Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
- Craig S Chapman
  - Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada

5
Whitwell RL, Hasan HA, MacNeil RR, Enns JT. Coming to grips with reality: Real grasps, but not pantomimed grasps, resist a simultaneous tilt illusion. Neuropsychologia 2023; 191:108726. PMID: 37931746. DOI: 10.1016/j.neuropsychologia.2023.108726.
Abstract
Investigations of grasping real, 3D objects subjected to illusory effects from a pictorial background often choose in-flight grasp aperture as the primary variable to test the hypothesis that the visuomotor system resists the illusion. Here we test an equally important feature of grasps that has received less attention: in-flight grasp orientation. The current study tested a variant of the simultaneous tilt illusion using a mirror-apparatus to manipulate the availability of haptic feedback. Participants performed grasps with haptic feedback (real grasps) and without it (pantomime grasps), reaching for the reflection of a real, 3D bar atop a background grating that induced a 1.1° bias in the perceived orientation of the bar in a separate sample of participants. Analysis of the hand's in-flight grasp orientation at early, late, and end stages of the reach showed that at no point were the real grasps biased by the illusion. In contrast, pantomimed grasps were affected by the illusion at the late and end stages of the reach. At each stage, the effect on the real grasps was significantly weaker than the effect of the illusion as measured by the mean point of subjective equality (PSE) in a two-alternative forced-choice task. In contrast, the effect on the pantomime grasps was statistically indistinguishable from the mean PSE at all three stages of the reach. These findings reinforce the idea that in-flight grasp orientation, like grasp aperture to pictorial illusions of target size, is refractory to pictorial backgrounds that bias perceived orientation.
Affiliation(s)
- R L Whitwell
  - Department of Physiology & Pharmacology, The University of Western Ontario, Canada
  - Department of Psychology, The University of Western Ontario, Canada
- H A Hasan
  - Department of Psychology, The University of British Columbia, Canada
- R R MacNeil
  - Department of Psychology, The University of British Columbia, Canada
- J T Enns
  - Department of Psychology, The University of British Columbia, Canada

6
Przybylski L, Kroliczak G. The functional organization of skilled actions in the adextral and atypical brain. Neuropsychologia 2023; 191:108735. PMID: 37984793. DOI: 10.1016/j.neuropsychologia.2023.108735.
Abstract
When planning functional grasps of tools, right-handed individuals (dextrals) show mostly left-lateralized neural activity in the praxis representation network (PRN), regardless of the used hand. Here we studied whether similar cerebral asymmetries are evident in non-righthanded individuals (adextrals). Sixty-two participants, 28 righthanders and 34 non-righthanders (21 lefthanders, 13 mixedhanders), planned functional grasps of tools vs. grasps of control objects, and subsequently performed their pantomimed executions, in an event-related functional magnetic resonance imaging (fMRI) project. Both hands were tested in separate sessions, counterbalanced across participants. After accounting for non-functional components of the prospective grasp, planning functional grasps of tools was associated with greater engagement of the same, left-hemisphere occipito-temporal, parietal and frontal areas of PRN, regardless of hand and handedness. Only when the analyses involved signal changes referenced to resting baseline intervals did differences between adextrals and dextrals emerge. Whereas in the left hemisphere the neural activity was equivalent in both groups (except for the occipito-temporo-parietal junction), its increases in the right occipito-temporal cortex, medial intraparietal sulcus (area MIP), the supramarginal gyrus (area PFt/PF), and middle frontal gyrus (area p9-46v) were significantly greater in adextrals. The inverse contrast was empty. Notably, when individuals with atypical and typical hemispheric phenotypes were directly compared, planning functional (vs. control) grasps invoked, instead, significant clusters located nearly exclusively in the left hemisphere of the typical phenotype. Previous studies interpret similar right-sided vs. left-sided increases in neural activity for skilled actions as handedness dependent, i.e., located in the hemisphere dominant for manual skills. Yet, none of the effects observed here can be purely handedness dependent, because there were mixed-handed individuals among adextrals, and numerous mixed-handed and left-handed individuals possess the typical phenotype. Thus, our results clearly show that hand dominance has limited power in driving the cerebral organization of motor cognitive functions.
Affiliation(s)
- Lukasz Przybylski
  - Action & Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Poznan, Poland
- Gregory Kroliczak
  - Action & Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Poznan, Poland
  - Cognitive Neuroscience Center, Adam Mickiewicz University, Poznan, Poland

7
Milner AD. Melvyn A. Goodale: A visual neuroscientist in action. Neuropsychologia 2023; 188:108637. PMID: 37402417. DOI: 10.1016/j.neuropsychologia.2023.108637.
Abstract
Mel Goodale has had a multi-faceted career in cognitive neuroscience, principally in the areas of perception, visually-guided action, and visual consciousness. This short article presents a personal reflection on his career from the point of view of a long-time colleague and friend.
Affiliation(s)
- A David Milner
  - Department of Psychology, University of Durham, Science Laboratories, South Road, Durham, DH1 3LE, UK

8
Robles CM, Anderson B, Dukelow SP, Striemer CL. Assessment and recovery of visually guided reaching deficits following cerebellar stroke. Neuropsychologia 2023; 188:108662. PMID: 37598808. DOI: 10.1016/j.neuropsychologia.2023.108662.
Abstract
The cerebellum is known to play an important role in the coordination and timing of limb movements. The present study focused on how reach kinematics are affected by cerebellar lesions to quantify both the presence of motor impairment and the recovery of motor function over time. In the current study, 12 patients with isolated cerebellar stroke completed clinical measures of cognitive and motor function, as well as a visually guided reaching (VGR) task using the Kinarm exoskeleton, at baseline (∼2 weeks) as well as at 6, 12, and 24 weeks post-stroke. During the VGR task, patients made unassisted reaches with visual feedback from a central 'start' position to one of eight targets arranged in a circle. At baseline, 6/12 patients were impaired across several parameters of the VGR task compared to a Kinarm normative sample (n = 307), revealing deficits in both feed-forward and feedback control. The only clinical measures that consistently demonstrated impairment were the Purdue Pegboard Task (PPT; 9/12 patients) and the Montreal Cognitive Assessment (6/11 patients). Overall, patients who were impaired at baseline showed significant recovery by the 24-week follow-up for both VGR and the PPT. A lesion overlap analysis indicated that the regions most commonly damaged in 5/12 patients (42% overlap) were lobule IX and Crus II of the right cerebellum. A lesion subtraction analysis comparing patients who were impaired (n = 6) vs. unimpaired (n = 6) on the VGR task at baseline showed that the region most commonly damaged in impaired patients was lobule VIII of the right cerebellum (40% overlap). Our results lend further support to the notion that the cerebellum is involved in both feedforward and feedback control during reaching, and that cerebellar patients tend to recover relatively quickly overall. In addition, we argue that future research should study the effects of cerebellar damage on visuomotor control from a perception-action theoretical framework to better understand how the cerebellum works with the dorsal stream to control visually guided action.
Affiliation(s)
- Chella M Robles
  - Department of Psychology, MacEwan University, Edmonton, Alberta, Canada
- Britt Anderson
  - Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Sean P Dukelow
  - Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada
- Christopher L Striemer
  - Department of Psychology, MacEwan University, Edmonton, Alberta, Canada
  - Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada

9
Brock K, Vine SJ, Ross JM, Trevarthen M, Harris DJ. Movement kinematic and postural control differences when performing a visuomotor skill in real and virtual environments. Exp Brain Res 2023. PMID: 37222777. DOI: 10.1007/s00221-023-06639-0.
Abstract
Immersive technologies, like virtual and mixed reality, pose a novel challenge for our sensorimotor systems as they deliver simulated sensory inputs that may not match those of the natural environment. These include reduced fields of view, missing or inaccurate haptic information, and distortions of 3D space; differences that may impact the control of motor actions. For instance, reach-to-grasp movements without end-point haptic feedback are characterised by slower and more exaggerated movements. A general uncertainty about sensory input may also induce a more conscious form of movement control. We tested whether a more complex skill like golf putting was also characterised by more consciously controlled movement. In a repeated-measures design, kinematics of the putter swing and postural control were compared between (i) real-world putting, (ii) VR putting, and (iii) VR putting with haptic feedback from a real ball (i.e., mixed reality). Differences in putter swing were observed both between the real world and VR, and between VR conditions with and without haptic information. Further, clear differences in postural control emerged between real and virtual putting, with both VR conditions characterised by larger postural movements, which were more regular and less complex, suggesting a more conscious form of balance control. Conversely, participants actually reported less conscious awareness of their movements in VR. These findings highlight how fundamental movement differences may exist between virtual and natural environments, which may pose challenges for the transfer of learning in applications such as motor rehabilitation and sport.
Affiliation(s)
- K Brock, S J Vine, J M Ross, M Trevarthen, D J Harris
  - School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK

10
Lausberg H, Dvoretska D, Ptito A. Production of co-speech gestures in the right hemisphere: Evidence from individuals with complete or anterior callosotomy. Neuropsychologia 2023; 180:108484. PMID: 36638861. DOI: 10.1016/j.neuropsychologia.2023.108484.
Abstract
INTRODUCTION A right-hand preference for co-speech gestures in right-handed neurotypical individuals, as well as the co-occurrence of speech and gesture, has induced neuropsychological research to primarily target the left hemisphere when investigating co-speech gesture production. However, the substantial number of spontaneous left-hand gestures in right-handed individuals has, thus far, been unexplained. Recent studies in individuals with complete callosotomy and exclusive left hemisphere speech production show a reliable left-hand preference for co-speech gestures, indicating a right hemispheric generation. However, the findings raise the issue of whether the separated right hemisphere is also able to generate representational gestures. The present study challenges the proposition of a specific right hemispheric contribution to gesture production by differentiating gesture types, including representational ones, in individuals with complete callosotomy, and by including individuals with anterior callosotomy, in whom neural reorganization is less extensive.
METHODS Three right-handed individuals with complete commissurotomy (A.A., N.G., G.C.) and three right-handed individuals with anterior callosotomy (C.E., S.R., L.D.), all with left hemisphere language dominance, and a matched right-handed neurotypical control group (n = 10) were examined in an experimental setting, including re-narration of a nonverbal animated cartoon and responding to intelligence questions. The participants' videotaped hand movement behavior was analyzed by two independent certified raters with the NEUROGES-ELAN system for nonverbal behavior and gesture. Unimanual right-hand and left-hand gestures were classified into eight gesture types.
RESULTS The individuals with complete and anterior callosotomy performed unimanual co-speech gestures with the left as well as the right hand, with no significant preference of one hand for gestures overall. Concerning the specific gesture types, the group with complete callosotomy showed a significant right-hand preference for pantomime gestures, which also applied to the callosotomy total group. The group with anterior callosotomy displayed a significant left-hand preference for form presentation gestures. As a trend, the callosotomy total group differed from the neurotypical group in that they performed more left-hand egocentric deictic and left-hand form presentation gestures.
DISCUSSION The present study replicates the finding of a substantial left-hand use for unimanual co-speech gestures in individuals with complete callosotomy. The proposition of a right hemispheric contribution to gesture production independent from left hemispheric language production is corroborated by the finding that individuals with anterior callosotomy show a similar pattern of hand use for gestures. Representational gestures were displayed with either hand, suggesting that in particular right hemispheric spatial cognition can be directly expressed in gesture. The significant right-hand preference for pantomime gesture was outstanding and compatible with the established left hemispheric specialization for tool use praxis. The findings shed new light on the left-hand gestures in neurotypical individuals, suggesting that these can be generated in the right hemisphere.
Affiliation(s)
- Hedda Lausberg
  - Department of Neurology, Psychosomatic Medicine, and Psychiatry, German Sport University, Cologne, Germany
- Daniela Dvoretska
  - Department of Neurology, Psychosomatic Medicine, and Psychiatry, German Sport University, Cologne, Germany
- Alain Ptito
  - Montreal Neurological Institute, McGill University and McGill University Health Centre Research Institute, Montreal, Quebec, Canada

11
Rzepka AM, Hussey KJ, Maltz MV, Babin K, Wilcox LM, Culham JC. Familiar size affects perception differently in virtual reality and the real world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210464. PMID: 36511414. PMCID: PMC9745877. DOI: 10.1098/rstb.2021.0464.
Abstract
The promise of virtual reality (VR) as a tool for perceptual and cognitive research rests on the assumption that perception in virtual environments generalizes to the real world. Here, we conducted two experiments to compare size and distance perception between VR and physical reality (Maltz et al. 2021 J. Vis. 21, 1-18). In experiment 1, we used VR to present dice and Rubik's cubes at their typical sizes or reversed sizes at distances that maintained a constant visual angle. After viewing the stimuli binocularly (to provide vergence and disparity information) or monocularly, participants manually estimated perceived size and distance. Unlike physical reality, where participants relied less on familiar size and more on presented size during binocular versus monocular viewing, in VR participants relied heavily on familiar size regardless of the availability of binocular cues. In experiment 2, we demonstrated that the effects in VR generalized to other stimuli and to a higher quality VR headset. These results suggest that the use of binocular cues and familiar size differs substantially between virtual and physical reality. A deeper understanding of perceptual differences is necessary before assuming that research outcomes from VR will generalize to the real world. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Anna M. Rzepka
  - Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Kieran J. Hussey
  - Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Margaret V. Maltz
  - Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Karsten Babin
  - Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Laurie M. Wilcox
  - Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Jody C. Culham
  - Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
  - Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7

12
Sartin S, Ranzini M, Scarpazza C, Monaco S. Cortical areas involved in grasping and reaching actions with and without visual information: An ALE meta-analysis of neuroimaging studies. Curr Res Neurobiol 2022; 4:100070. PMID: 36632448. PMCID: PMC9826890. DOI: 10.1016/j.crneur.2022.100070.
Abstract
The functional specialization of the ventral stream in Perception and the dorsal stream in Action is the cornerstone of the leading model proposed by Goodale and Milner in 1992. This model is based on neuropsychological evidence and has been debated for almost three decades, during which the dual-visual-stream hypothesis has received much attention, both support and criticism. The advent of functional magnetic resonance imaging (fMRI) has made it possible to investigate the brain areas involved in Perception and Action and has provided useful data on the functional specialization of the two streams. Research on this topic has been quite prolific, yet no meta-analysis has so far explored the spatial convergence of the two streams' involvement in Action. The present meta-analysis (N = 53 fMRI and PET studies) was designed to reveal the specific neural activations associated with Action (i.e., grasping and reaching movements) and the extent to which visual information affects the involvement of the two streams during motor control. Our results provide a comprehensive view of the consistent and spatially convergent neural correlates of Action based on neuroimaging studies conducted over the past two decades. In particular, occipito-temporal areas showed a higher activation likelihood in the Vision condition than in the No Vision condition, but no difference between reach and grasp actions. Fronto-parietal areas were consistently involved in both reach and grasp actions regardless of visual availability. We discuss these results in light of the well-established dual-visual-stream model and frame them in the context of recent findings obtained with advanced fMRI methods, such as multivoxel pattern analysis.
Affiliation(s)
- Samantha Sartin
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padua, Italy; IRCCS San Camillo Hospital, Venice, Italy
- Simona Monaco
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Italy; Corresponding author: CIMeC - Center for Mind/Brain Sciences, University of Trento, Via delle Regole 101, 38123 Trento, Italy
13
Bhatia K, Löwenkamp C, Franz VH. Grasping follows Weber's law: How to use response variability as a proxy for JND. J Vis 2022; 22:13. [DOI: 10.1167/jov.22.12.13] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Affiliation(s)
- Kriti Bhatia
- Experimental Cognitive Science, University of Tübingen, Tübingen, Germany
- Volker H. Franz
- Experimental Cognitive Science, University of Tübingen, Tübingen, Germany
14
Landwehr K. Bimanual thumb-index finger indications of noncorresponding extents. Atten Percept Psychophys 2022; 84:289-299. [PMID: 34341939 PMCID: PMC8795064 DOI: 10.3758/s13414-021-02360-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/18/2021] [Indexed: 11/15/2022]
Abstract
Two experiments tested a prediction derived from the recent finding that the Oppel-Kundt illusion (the overestimation of a filled extent relative to an empty one) was much attenuated when the empty part of a bipartite row of dots was vertical and the filled part horizontal, suggesting that the horizontal-vertical illusion (the overestimation of vertical extents relative to horizontal ones) acted only on the empty part of an Oppel-Kundt figure. Observers had to bimanually indicate the sizes of the two parts of an Oppel-Kundt figure, which were arranged one above the other, with one part vertical and the other part tilted -45°, 0°, or 45°. Results conformed to the prediction, but response bias was greater when observers had been instructed to point to the extents' endpoints than when instructed to estimate the extents' lengths, suggesting that different concepts and motor programs had been activated.
Affiliation(s)
- Klaus Landwehr
- Psychologisches Institut, Johannes Gutenberg-Universität Mainz, 55099, Mainz, Germany.
15
The contributions of the ventral and the dorsal visual streams to the automatic processing of action relations of familiar and unfamiliar object pairs. Neuroimage 2021. [DOI: 10.1016/j.neuroimage.2021.118629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
16
Chen PW, Klaesner J, Zwir I, Morgan KA. Detecting clinical practice guideline-recommended wheelchair propulsion patterns with wearable devices following a wheelchair propulsion intervention. Assist Technol 2021; 35:193-201. [PMID: 34814806 DOI: 10.1080/10400435.2021.2010146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Abstract
Wheelchair propulsion interventions typically teach manual wheelchair users to perform wheelchair propulsion biomechanics as recommended by the Clinical Practice Guidelines (CPG). Outcome measures for these interventions are primarily laboratory based, and discrepancies remain between manual wheelchair propulsion (MWP) in laboratory-based examinations and propulsion in the real world. Current developments in machine learning (ML) allow for monitoring of MWP in the real world. In this study, we collected data from participants enrolled in two wheelchair propulsion interventions and built an ML algorithm to distinguish CPG-recommended MWP patterns from non-CPG-recommended patterns. Eight primary manual wheelchair users did not initially follow CPG recommendations but learned and performed CPG propulsion after the interventions. Each participant wore two inertial measurement units while propelling their wheelchair on a roller system, indoors overground, and outdoors. ML models were trained to classify propulsion patterns as following or not following the CPG, with video recordings used as the reference. For indoor detection, a subject-independent model achieved 85% accuracy; for outdoor detection, the subject-independent model achieved 75.4% accuracy. These results provide further evidence that CPG- and non-CPG-recommended MWP patterns can be detected with wearable sensors using an ML algorithm.
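The "subject-independent" evaluation described in this abstract can be illustrated with a leave-one-subject-out scheme: train on all wheelchair users but one, test on the held-out user, and average accuracy across folds. The sketch below is a minimal, hypothetical illustration using synthetic IMU-like features and a nearest-centroid classifier; the features, classifier, and data layout are assumptions for demonstration, not the study's actual pipeline (which used video-labeled recordings from two wearable IMUs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-stroke IMU features (e.g., mean angular
# velocity, stroke duration, push-arc length -- illustrative names only).
# 8 subjects, 40 strokes each, 2 classes: CPG-recommended pattern (1) vs.
# non-recommended pattern (0), separated by a mean shift in feature space.
def make_subject(shift=2.0, n_per_class=20, n_feat=3):
    X0 = rng.normal(0.0, 1.0, (n_per_class, n_feat))    # non-CPG strokes
    X1 = rng.normal(shift, 1.0, (n_per_class, n_feat))  # CPG strokes
    return np.vstack([X0, X1]), np.array([0] * n_per_class + [1] * n_per_class)

subjects = [make_subject() for _ in range(8)]

def fit_centroids(X, y):
    # One centroid per class: a deliberately simple classifier.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Leave-one-subject-out: train on 7 users, test on the held-out user, so
# accuracy reflects generalization to an unseen wheelchair user rather
# than within-subject fit.
accs = []
for i in range(len(subjects)):
    X_te, y_te = subjects[i]
    X_tr = np.vstack([s[0] for j, s in enumerate(subjects) if j != i])
    y_tr = np.concatenate([s[1] for j, s in enumerate(subjects) if j != i])
    model = fit_centroids(X_tr, y_tr)
    accs.append(float((predict(model, X_te) == y_te).mean()))

print(f"subject-independent accuracy: {np.mean(accs):.2f}")
```

The accuracies reported in the abstract (85% indoors, 75.4% outdoors) describe this kind of held-out-subject generalization, which is the relevant figure for deploying a trained model on a new wheelchair user.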
Affiliation(s)
- Pin-Wei Chen
- Program in Occupational Therapy, Washington University School of Medicine, St. Louis, USA
- Joe Klaesner
- Program in Physical Therapy, Washington University School of Medicine, St. Louis, USA
- Igor Zwir
- Department of Psychiatry, Washington University School of Medicine, St. Louis, USA
- Kerri A Morgan
- Program in Occupational Therapy, Washington University School of Medicine, St. Louis, USA
17
Eye-hand coordination: memory-guided grasping during obstacle avoidance. Exp Brain Res 2021; 240:453-466. [PMID: 34787684 DOI: 10.1007/s00221-021-06271-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 11/08/2021] [Indexed: 10/19/2022]
Abstract
When reaching to grasp previously seen but now out-of-view objects, we rely on stored perceptual representations to guide our actions, likely encoded by the ventral visual stream. Such memory-guided actions are numerous in daily life, for instance, when we reach to grasp a coffee cup hidden behind our morning newspaper. Little research has examined obstacle avoidance during memory-guided grasping, though it is possible that obstacles with increased perceptual salience provoke exaggerated avoidance maneuvers, such as larger deviations in eye and hand position away from obtrusive obstacles. We examined the obstacle avoidance strategies adopted as subjects reached to grasp a 3D target object under visually guided (closed loop, or open loop with full vision prior to movement onset) and memory-guided (short- or long-delay) conditions. On any given trial, subjects reached between a pair of flanker obstacles to grasp a target object. The positions and widths of the obstacles were manipulated, though their inner edges remained a constant distance apart. Reach and grasp behavior was consistent with the obstacle avoidance literature, in that reach, grasp, and gaze positions were biased away from the obstacles most obtrusive to the reaching hand; nevertheless, our results reveal that the avoidance strategies adopted depend on the availability of visual feedback. Contrary to expectation, we found that subjects reaching to grasp after a long delay in the absence of visual feedback failed to modify their final fixation and grasp positions to accommodate the different obstacle positions, demonstrating a moderate, rather than exaggerated, obstacle avoidance strategy.
18
Taniguchi S, Higashi Y, Kataoka H, Nakajima H, Shimokawa T. Functional Connectivity and Networks Underlying Complex Tool-Use Movement in Assembly Workers: An fMRI Study. Front Hum Neurosci 2021; 15:707502. [PMID: 34776900 PMCID: PMC8581229 DOI: 10.3389/fnhum.2021.707502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Accepted: 09/07/2021] [Indexed: 11/29/2022] Open
Abstract
The aim of this study was to identify the functional connectivity and networks utilized during tool-use in real assembly workers. These brain networks have not been elucidated because the use of tools in real-life settings is more complex than that in experimental environments. We evaluated task-related functional magnetic resonance imaging in 13 assembly workers (trained workers, TW) and 27 age-matched volunteers (untrained workers, UTW) during a tool-use pantomiming task, and resting-state functional connectivity was also analyzed. Two-way repeated-measures analysis of covariance was conducted with the group as a between-subject factor (TW > UTW) and condition (task > resting) as a repeated measure, controlling for assembly time and accuracy as covariates. We identified two patterns of functional connectivity in the whole brain within three networks that distinguished TW from UTW. TW had higher connectivity than UTW between the left middle temporal gyrus and right cerebellum Crus II (false discovery rate corrected p-value, p-FDR = 0.002) as well as between the left supplementary motor area and the pars triangularis of the right inferior frontal gyrus (p-FDR = 0.010). These network integrities may allow for TW to perform rapid tool-use. In contrast, UTW showed a stronger integrity compared to TW between the left paracentral lobule and right angular gyrus (p-FDR = 0.004), which may reflect a greater reliance on sensorimotor input to acquire complex tool-use ability than that of TW. Additionally, the fronto-parietal network was identified as a common network between groups. These findings support our hypothesis that assembly workers have stronger connectivity in tool-specific motor regions and the cerebellum, whereas UTW have greater involvement of sensorimotor networks during a tool-use task.
Affiliation(s)
- Seira Taniguchi
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Japan
- Tetsuya Shimokawa
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Japan
19
Langdon A, Botvinick M, Nakahara H, Tanaka K, Matsumoto M, Kanai R. Meta-learning, social cognition and consciousness in brains and machines. Neural Netw 2021; 145:80-89. [PMID: 34735893 DOI: 10.1016/j.neunet.2021.10.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 09/20/2021] [Accepted: 10/01/2021] [Indexed: 12/11/2022]
Abstract
The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and to use existing models and knowledge to efficiently solve new tasks. This capability is important for making existing AI systems more adaptive and flexible in solving new tasks. Since this is one of the areas where there is a gap between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we discuss the similarities and differences between social cognition and meta-learning, and we conclude with speculations on the potential links between intelligence, as endowed by model-based RL, and consciousness. For future work, we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
Affiliation(s)
- Angela Langdon
- Princeton Neuroscience Institute, Princeton University, USA
- Matthew Botvinick
- DeepMind, London, UK; Gatsby Computational Neuroscience Unit, University College London, London, UK
- Keiji Tanaka
- RIKEN Center for Brain Science, Wako, Saitama, Japan
- Masayuki Matsumoto
- Division of Biomedical Science, Faculty of Medicine, University of Tsukuba, Ibaraki, Japan; Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan; Transborder Medical Research Center, University of Tsukuba, Ibaraki, Japan
20
van Polanen V. Grasp aperture corrections in reach-to-grasp movements do not reliably alter size perception. PLoS One 2021; 16:e0248084. [PMID: 34520478 PMCID: PMC8439486 DOI: 10.1371/journal.pone.0248084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 08/17/2021] [Indexed: 11/18/2022] Open
Abstract
When grasping an object, the opening between the fingertips (grip aperture) scales with the size of the object. If an object changes in size, the grip aperture has to be corrected. In this study, it was investigated whether such corrections would influence the perceived size of objects. The grasping plan was manipulated with a preview of the object, after which participants initiated their reaching movement without vision. In a minority of the grasps, the object changed in size after the preview and participants had to adjust their grasping movement. Visual feedback was manipulated in two experiments. In experiment 1, vision was restored during reach and both visual and haptic information was available to correct the grasp and lift the object. In experiment 2, no visual information was provided during the movement and grasps could only be corrected using haptic information. Participants made reach-to-grasp movements towards two objects and compared these in size. Results showed that participants adjusted their grasp to a change in object size from preview to grasped object in both experiments. However, a change in object size did not bias the perception of object size or alter discrimination performance. In experiment 2, a small perceptual bias was found when objects changed from large to small. However, this bias was much smaller than the difference that could be discriminated and could not be considered meaningful. Therefore, it can be concluded that the planning and execution of reach-to-grasp movements do not reliably affect the perception of object size.
Affiliation(s)
- Vonne van Polanen
- Movement Control and Neuroplasticity Research Group, Department of Movement Sciences, Biomedical Sciences group, KU Leuven, Leuven, Belgium
- Leuven Brain Institute, KU Leuven, Leuven, Belgium
21
Brown AR, Pouw W, Brentari D, Goldin-Meadow S. People Are Less Susceptible to Illusion When They Use Their Hands to Communicate Rather Than Estimate. Psychol Sci 2021; 32:1227-1237. [PMID: 34240647 DOI: 10.1177/0956797621991552] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.
Affiliation(s)
- Amanda R Brown
- Department of Comparative Human Development, The University of Chicago; School of Social Welfare, The University of Kansas
- Wim Pouw
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Susan Goldin-Meadow
- Department of Comparative Human Development, The University of Chicago; Department of Psychology, The University of Chicago
22
Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. [PMID: 33775583 PMCID: PMC10149139 DOI: 10.1016/j.tics.2021.02.008] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/22/2021] [Indexed: 10/21/2022]
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada.
23
Kryklywy JH, Roach VA, Todd RM. Assessing the efficacy of tablet-based simulations for learning pseudo-surgical instrumentation. PLoS One 2021; 16:e0245330. [PMID: 33444407 PMCID: PMC7808648 DOI: 10.1371/journal.pone.0245330] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Accepted: 12/29/2020] [Indexed: 11/18/2022] Open
Abstract
Nurses and surgeons must identify and handle specialized instruments with high temporal and spatial precision, so it is crucial that they are trained effectively. Traditional training methods include supervised practice, which may expose patients to undue risk during practice procedures, and text-based study, which lacks motor/haptic training. Tablet-based simulations have been proposed to mitigate some of these limitations. We implemented a learning task that simulates the surgical instrumentation nomenclature encountered by novice perioperative nurses. Learning was assessed following training in three distinct conditions: tablet-based simulation, text-based study, and real-world practice. Immediately following a 30-minute training period, instrument identification was performed with comparable accuracy and response times after tablet-based versus text-based training, with both being inferior to real-world practice. Following a week without practice, response times were equivalent between real-world and tablet-based practice. While tablet-based training does not achieve instrument-identification accuracy equivalent to real-world practice, more practice repetitions in simulated environments may help reduce performance decline. This project has established a technological framework to assess how simulated educational environments can be implemented in a maximally beneficial manner.
Affiliation(s)
- James H. Kryklywy
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Victoria A. Roach
- Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Rochester, Michigan, United States of America
- Department of Surgery, Oakland University William Beaumont School of Medicine, Rochester, Michigan, United States of America
- Rebecca M. Todd
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Dajvad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
24
Sivakumar P, Quinlan DJ, Stubbs KM, Culham JC. Grasping performance depends upon the richness of hand feedback. Exp Brain Res 2021; 239:835-846. [PMID: 33403432 DOI: 10.1007/s00221-020-06025-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 12/19/2020] [Indexed: 11/28/2022]
Abstract
Although visual feedback of the hand allows fast and accurate grasping actions, little is known about whether the nature of feedback of the hand affects performance. We investigated kinematics during precision grasping (with the index finger and thumb) when participants received different levels of hand feedback, with or without visual feedback of the target. Specifically, we compared performance when participants saw (1) no hand feedback; (2) only the two critical points on the index finger and thumb tips; (3) 21 points on all digit tips and hand joints; (4) 21 points connected by a "skeleton", or (5) full feedback of the hand wearing a glove. When less hand feedback was available, participants took longer to execute the movement because they allowed more time to slow the reach and close the hand. When target feedback was unavailable, participants took longer to plan the movement and reached with higher velocity. We were particularly interested in investigating maximum grip aperture (MGA), which can reflect the margin of error that participants allow to compensate for uncertainty. A trend suggested that MGA was smallest when ample feedback was available (skeleton and full hand feedback, regardless of target feedback) and when only essential information about hand and target was provided (2-point hand feedback + target feedback) but increased when non-essential points were included (21-point feedback). These results suggest that visual feedback of the hand affects grasping performance and that, while more feedback is usually beneficial, this is not necessarily always the case.
Affiliation(s)
- Prajith Sivakumar
- Department of Biology, University of Western Ontario, London, Canada; Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada
- Derek J Quinlan
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada; BrainsCAN, University of Western Ontario, London, ON, Canada; Department of Psychology, Huron University College, London, ON, Canada
- Kevin M Stubbs
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada; BrainsCAN, University of Western Ontario, London, ON, Canada; Department of Psychology, University of Western Ontario, London, ON, Canada
- Jody C Culham
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada; Department of Psychology, University of Western Ontario, London, ON, Canada
25
Whitwell RL, Katz NJ, Goodale MA, Enns JT. The Role of Haptic Expectations in Reaching to Grasp: From Pantomime to Natural Grasps and Back Again. Front Psychol 2020; 11:588428. [PMID: 33391110 PMCID: PMC7773727 DOI: 10.3389/fpsyg.2020.588428] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 11/17/2020] [Indexed: 11/13/2022] Open
Abstract
When we reach to pick up an object, our actions are effortlessly informed by the object's spatial information, the position of our limbs, stored knowledge of the object's material properties, and what we want to do with the object. A substantial body of evidence suggests that grasps are under the control of "automatic, unconscious" sensorimotor modules housed in the "dorsal stream" of the posterior parietal cortex. Visual online feedback has a strong effect on the hand's in-flight grasp aperture. Previous work of ours exploited this effect to show that grasps are refractory to cued expectations for visual feedback. Nonetheless, when we reach out to pretend to grasp an object (pantomime grasp), our actions are performed with greater cognitive effort and engage structures outside of the dorsal stream, including the ventral stream. Here we ask whether our previous finding extends to cued expectations for haptic feedback. Our method involved a mirror apparatus that allowed participants to see a "virtual" target cylinder as a reflection in the mirror at the start of all trials. On "haptic feedback" trials, participants reached behind the mirror to grasp a size-matched cylinder, spatially coincident with the virtual one. On "no-haptic-feedback" trials, participants reached behind the mirror and grasped "thin air" because no cylinder was present. To manipulate haptic expectation, we organized the haptic conditions into blocked, alternating, and randomized schedules, with and without verbal cues about the availability of haptic feedback. Replicating earlier work, we found the strongest haptic effects with the blocked schedules and the weakest effects in the randomized uncued schedule. Crucially, the haptic effects in the cued randomized schedule were intermediate. An analysis of the influence of the upcoming and immediately preceding haptic feedback condition in the cued and uncued random schedules showed that cuing the upcoming haptic condition shifted the haptic influence on grip aperture from the immediately preceding trial to the upcoming trial. These findings indicate that, unlike cues to the availability of visual feedback, participants take advantage of cues to the availability of haptic feedback, flexibly engaging pantomime and natural modes of grasping to optimize the movement.
Affiliation(s)
- Robert L Whitwell
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
- Nathan J Katz
- Department of Psychology, Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- Melvyn A Goodale
- Department of Psychology, Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- James T Enns
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
26
Ilardi CR, Iavarone A, Villano I, Rapuano M, Ruggiero G, Iachini T, Chieffi S. Egocentric and allocentric spatial representations in a patient with Bálint-like syndrome: A single-case study. Cortex 2020; 135:10-16. [PMID: 33341593 DOI: 10.1016/j.cortex.2020.11.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2020] [Revised: 07/28/2020] [Accepted: 11/17/2020] [Indexed: 10/22/2022]
Abstract
Previous studies suggested that egocentric and allocentric spatial representations are supported by neural networks in the occipito-parietal (dorsal) and occipito-temporal (ventral) streams, respectively. The present study aimed to explore the integrity of ego- and allo-centric spatial representations in a patient (GP) who presented bilateral occipito-parietal damage consistent with the picture of a Bálint-like syndrome. GP and healthy controls were asked to provide memory-based spatial judgments on triads of objects after a short (1.5 s) or long (5 s) delay. The results showed that GP's performance was selectively impaired in the Ego/1.5 s delay condition. As a whole, our findings suggest that GP's spared ventral stream could generate short- and long-term allocentric representations. Furthermore, the stored perceptual representation processed within the ventral stream might have been used to generate long-term egocentric representation. Conversely, the generation of short-term egocentric representation appeared to be selectively undermined by the damage of the dorsal stream.
Affiliation(s)
- Ciro Rosario Ilardi
- Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy; Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
- Ines Villano
- Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
- Mariachiara Rapuano
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Sergio Chieffi
- Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
27
Uccelli S, Palumbo L, Harrison NR, Bruno N. Asymmetric effects of graspable distractor disks on motor preparation of successive grasps: A behavioural and event-related potential (ERP) study. Int J Psychophysiol 2020; 158:318-330. [PMID: 33164874 DOI: 10.1016/j.ijpsycho.2020.10.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Revised: 07/29/2020] [Accepted: 10/20/2020] [Indexed: 10/23/2022]
Abstract
There is evidence that seeing a graspable object automatically elicits a preparatory motor process. However, it is unclear whether this implicit visuomotor process might influence the preparation of a successive grasp for a different object. We addressed the issue by implementing a combined behavioural and electrophysiological paradigm. Participants performed pantomimed grasps directed to small or large disks with either a two-finger (pincer) or a five-finger (pentapod) grip, after the presentation of congruent (same size) or incongruent (different size) distractor disks. Preview reaction times (PRTs) and response-locked lateralized readiness potentials (R-LRPs) were recorded as online indices of motor preparation. Results revealed asymmetric effects of the distractors on PRTs and R-LRPs. For pincer grip disks, incongruent distractors were associated with longer PRTs and a delayed R-LRP peak. For pentapod grip disks, conversely, incongruent distractors were associated with shorter PRTs and a delayed R-LRP onset. Supporting an interpretation of these effects as tapping into motor preparation, we did not observe modulations of stimulus-locked LRPs (sensitive to sensory processing), or of the P300 component (related to reallocating attentional resources). These results challenge models (i.e., the "dorsal amnesia" hypothesis) which assume that visuomotor information presented before a grasp will not affect how we later perform that grasp.
Affiliation(s)
- Letizia Palumbo
- Liverpool Hope University, United Kingdom of Great Britain and Northern Ireland
- Neil R Harrison
- Liverpool Hope University, United Kingdom of Great Britain and Northern Ireland
28
Carther-Krone TA, Senanayake SA, Marotta JJ. The influence of the Sander parallelogram illusion and early, middle and late vision on goal-directed reaching and grasping. Exp Brain Res 2020; 238:2993-3003. [PMID: 33095294 DOI: 10.1007/s00221-020-05960-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Accepted: 10/13/2020] [Indexed: 10/23/2022]
Abstract
Vision is one of the most robust sensory inputs used for the execution of goal-directed actions. Despite a history of extensive visuomotor research, how individuals process visual context for the execution of movements continues to be debated. This experiment examines how early, middle and late visuomotor control is impacted by illusory characteristics in a reaching and grasping task. Participants either manually estimated or reached out and picked up a three-dimensional target bar resting on a two-dimensional picture of the Sander parallelogram illusion. Participants performed their grasps within a predefined movement-time window based on their own average grasp time, allowing for the manipulation of visual feedback. On some trials, vision was only available before the response cue (an auditory tone), while on others vision was occluded until the response cue, becoming available for either the full, early, middle or late portions of the movement. While results showed that the effect of the illusion was stronger on manual estimations than on grasping, maximum grip apertures in the occluded-vision and early-vision grasping conditions were also consistent with the illusion, albeit to a lesser extent. The late-vision condition showed longer movement time, wrist deceleration period and time to maximum grip aperture, and lower maximum velocity. These findings indicate that visual context affects visuomotor control differently depending on when vision is available, and support the notion that human vision comprises two functionally and anatomically distinct systems.
Affiliation(s)
- Tiffany A Carther-Krone
- Perception and Action Lab, Department of Psychology, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Shannon A Senanayake
- Perception and Action Lab, Department of Psychology, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Jonathan J Marotta
- Perception and Action Lab, Department of Psychology, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
29
Ozana A, Ganel T. A double dissociation between action and perception in bimanual grasping: evidence from the Ponzo and the Wundt-Jastrow illusions. Sci Rep 2020; 10:14665. [PMID: 32887921 PMCID: PMC7473850 DOI: 10.1038/s41598-020-71734-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 07/24/2020] [Indexed: 11/11/2022] Open
Abstract
Research on visuomotor control suggests that visually guided actions toward objects rely on functionally distinct computations with respect to perception. For example, a double dissociation between grasping and perceptual estimates was reported in previous experiments that pitted real against illusory object-size differences in the context of the Ponzo illusion. While most previous research on the relation between action and perception focused on one-handed grasping, everyday visuomotor interactions also entail the simultaneous use of both hands to grasp larger objects. Here, we examined whether this double dissociation extends to bimanual movement control. In Experiment 1, participants were presented with different-sized objects embedded in the Ponzo illusion. In Experiment 2, we tested whether the dissociation between perception and action extends to a different illusion, the Wundt-Jastrow illusion, which had not previously been used in grasping experiments. In both experiments, bimanual grasping trajectories reflected the differences in physical size between the objects; at the same time, perceptual estimates reflected the differences in illusory size between the objects. These results suggest that the double dissociation between action and perception generalizes to bimanual movement control. Unlike conscious perception, bimanual grasping movements are tuned to real-world metrics and can potentially resist irrelevant information about relative size and depth.
Affiliation(s)
- Aviad Ozana
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
30
Mazurek KA, Schieber MH. Injecting Information into the Mammalian Cortex: Progress, Challenges, and Promise. Neuroscientist 2020; 27:129-142. [PMID: 32648527 DOI: 10.1177/1073858420936253] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
For 150 years, artificial stimulation has been used to study the function of the nervous system. Such stimulation, whether electrical or optogenetic, eventually may be used in neuroprosthetic devices to replace lost sensory inputs and to otherwise introduce information into the nervous system. Efforts toward this goal can be classified broadly as either biomimetic or arbitrary. Biomimetic stimulation aims to mimic patterns of natural neural activity, so that the subject immediately experiences the artificial stimulation as if it were natural sensation. Arbitrary stimulation, in contrast, makes no attempt to mimic natural patterns of neural activity. Instead, different stimuli (at different locations and/or in different patterns) are assigned different meanings randomly. The subject's time and effort then are required to learn to interpret different stimuli, a process that engages the brain's inherent plasticity. Here we will examine progress in using artificial stimulation to inject information into the cerebral cortex and discuss the challenges for and the promise of future development.
Affiliation(s)
- Kevin A Mazurek
- Department of Neuroscience, University of Rochester, Rochester, NY, USA; Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Marc H Schieber
- Department of Neuroscience, University of Rochester, Rochester, NY, USA; Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA; Department of Neurology, University of Rochester, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
31
Real and Imagined Grasping Movements Differently Activate the Human Dorsomedial Parietal Cortex. Neuroscience 2020; 434:22-34. [DOI: 10.1016/j.neuroscience.2020.03.019] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Revised: 03/09/2020] [Accepted: 03/10/2020] [Indexed: 11/24/2022]
32
Hand-use norms for Dutch and English manual action verbs: Implicit measures from a pantomime task. Behav Res Methods 2020; 52:1744-1767. [PMID: 32185639 DOI: 10.3758/s13428-020-01347-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Many studies use manual action verbs to test whether people use neural systems for controlling manual actions to understand language about those actions. Yet, few of these studies empirically establish how people use their hands to perform the actions described by those verbs, relying instead on explicit self-report measures. Here, participants pantomimed the manual actions described by a large set of Dutch (N = 251) and English (N = 250) verbs, allowing us to approximate the extent to which people use each of their hands to perform these actions. After the pantomime task, participants also provided explicit ratings of each of these actions. The results from the pantomime task showed that most manual actions cannot be described accurately as either "unimanual" or "bimanual." With a few exceptions, unimanual action verbs do not describe actions that are performed with only one hand, and bimanual verbs do not describe actions that are performed by using both hands equally. Instead, individual actions vary continuously in the extent to which people use their non-dominant hand to perform them, and in the extent to which people consistently prefer one hand or the other to perform them. Finally, by comparing participants' implicit behavior to their explicit ratings, we found that participants' self-report showed only limited correspondence with their observed motor behavior. We provide all of our measures in both raw and summary format, offering researchers a precision tool for constructing stimulus sets for experiments on embodied cognition.
33
Two types of memory-based (pantomime) reaches distinguished by gaze anchoring in reach-to-grasp tasks. Behav Brain Res 2020; 381:112438. [PMID: 31857149 DOI: 10.1016/j.bbr.2019.112438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 12/13/2019] [Accepted: 12/14/2019] [Indexed: 11/24/2022]
Abstract
Comparisons of target-based reaching vs memory-based (pantomime) reaching have been used to obtain insight into the visuomotor control of reaching. The present study examined the contribution of gaze anchoring, reaching to a target that is under continuous gaze, to both target-based and memory-based reaching. Participants made target-based reaches for discs located on a table or food items located on a pedestal or they replaced the objects. They then made memory-based reaches in which they pantomimed their target-based reaches. Participants were fitted with hand sensors for kinematic tracking and an eye tracker to monitor gaze. When making target-based reaches, participants directed gaze to the target location from reach onset to offset without interrupting saccades. Similar gaze anchoring was present for memory-based reaches when the surface upon which the target had been placed remained. When the target and its surface were both removed there was no systematic relationship between gaze and the reach. Gaze anchoring was also present when participants replaced a target on a surface, a movement featuring a reach but little grasp. That memory-based reaches can be either gaze anchor-associated or gaze anchor-independent is discussed in relation to contemporary views of the neural control of reaching.
34
Kanai R, Chang A, Yu Y, Magrans de Abril I, Biehl M, Guttenberg N. Information generation as a functional basis of consciousness. Neurosci Conscious 2019; 2019:niz016. [PMID: 31798969 PMCID: PMC6884095 DOI: 10.1093/nc/niz016] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2019] [Revised: 10/14/2019] [Accepted: 10/22/2019] [Indexed: 01/27/2023] Open
Abstract
What is the biological advantage of having consciousness? Functions of consciousness have been elusive because of the subjective nature of consciousness and ample empirical evidence showing the presence of many nonconscious cognitive performances in the human brain. Drawing on the empirical literature, we propose here that a core function of consciousness is the ability to internally generate representations of events possibly detached from the current sensory input. Such representations are constructed by generative models learned through sensory-motor interactions with the environment. We argue that the ability to generate information underlies a variety of cognitive functions associated with consciousness, such as intention, imagination, planning, short-term memory, attention, curiosity, and creativity, all of which contribute to non-reflexive behavior. According to this view, consciousness emerged in evolution when organisms gained the ability to perform internal simulations using internal models, which endowed them with flexible intelligent behavior. To illustrate the notion of information generation, we take variational autoencoders (VAEs) as an analogy and show that information generation corresponds to the decoding (or decompression) part of VAEs. In biological brains, we propose that information generation corresponds to top-down predictions in the predictive coding framework. This is compatible with empirical observations that recurrent feedback activations are linked with consciousness, whereas feedforward processing alone seems to occur without evoking conscious experience. Taken together, the information generation hypothesis captures many aspects of existing ideas about potential functions of consciousness and provides new perspectives on the functional roles of consciousness.
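The VAE analogy in this abstract can be made concrete with a toy sketch (illustrative only: an untrained decoder with randomly initialized weights and arbitrary dimensions, not the authors' model). The point it shows is the direction of information flow: a latent code sampled from a prior is decoded into a generated "observation" with no sensory input involved, which is the sense in which the decoding half of a VAE generates information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
LATENT_DIM, HIDDEN_DIM, DATA_DIM = 2, 16, 8

# Randomly initialized decoder weights (an untrained toy model).
W1 = rng.normal(scale=0.5, size=(LATENT_DIM, HIDDEN_DIM))
W2 = rng.normal(scale=0.5, size=(HIDDEN_DIM, DATA_DIM))

def decode(z):
    """Map a latent code z to a generated observation.

    This is the generative half of a VAE: an observation is produced
    from an internal code, detached from any current sensory input.
    """
    h = np.tanh(z @ W1)                  # nonlinear hidden layer
    return 1 / (1 + np.exp(-(h @ W2)))   # sigmoid output in (0, 1)

# "Imagining" an observation: sample a latent code from the prior
# and decode it.
z = rng.normal(size=(1, LATENT_DIM))
x_generated = decode(z)
print(x_generated.shape)  # (1, 8)
```

In a trained VAE the same decoder, fed codes sampled from the prior, produces plausible data; the sketch above only demonstrates the architecture of generation, not learned content.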
Affiliation(s)
- Ryota Kanai
- Basic Research Group, Araya, Inc., P.O. Box 577 ARK Mori Building 24 F, 1-12-32 Akasaka, Minato-ku, Tokyo, 107-6024, Japan
- Acer Chang
- Basic Research Group, Araya, Inc., P.O. Box 577 ARK Mori Building 24 F, 1-12-32 Akasaka, Minato-ku, Tokyo, 107-6024, Japan
- Yen Yu
- Basic Research Group, Araya, Inc., P.O. Box 577 ARK Mori Building 24 F, 1-12-32 Akasaka, Minato-ku, Tokyo, 107-6024, Japan
- Ildefons Magrans de Abril
- Basic Research Group, Araya, Inc., P.O. Box 577 ARK Mori Building 24 F, 1-12-32 Akasaka, Minato-ku, Tokyo, 107-6024, Japan
- Martin Biehl
- Basic Research Group, Araya, Inc., P.O. Box 577 ARK Mori Building 24 F, 1-12-32 Akasaka, Minato-ku, Tokyo, 107-6024, Japan
- Nicholas Guttenberg
- Basic Research Group, Araya, Inc., P.O. Box 577 ARK Mori Building 24 F, 1-12-32 Akasaka, Minato-ku, Tokyo, 107-6024, Japan
35
Blouin J, Saradjian AH, Pialasse JP, Manson GA, Mouchnino L, Simoneau M. Two Neural Circuits to Point Towards Home Position After Passive Body Displacements. Front Neural Circuits 2019; 13:70. [PMID: 31736717 PMCID: PMC6831616 DOI: 10.3389/fncir.2019.00070] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 10/15/2019] [Indexed: 12/02/2022] Open
Abstract
A challenge in motor control research is to understand the mechanisms underlying the transformation of sensory information into arm motor commands. Here, we investigated these transformation mechanisms for movements whose targets were defined by information derived from body rotations in the dark (i.e., idiothetic information). Immediately after being rotated, participants reproduced the amplitude of their perceived rotation using their arm (Experiment 1). The cortical activation during movement planning was analyzed using electroencephalography and source analyses. Task-related activities were found in regions of interest (ROIs) located in the prefrontal cortex (PFC), dorsal premotor cortex, dorsal region of the anterior cingulate cortex (ACC) and the sensorimotor cortex. Importantly, critical regions for the cognitive encoding of space did not show significant task-related activities. These results suggest that arm movements were planned using a sensorimotor type of spatial representation. However, when an 8 s delay was introduced between body rotation and the arm movement (Experiment 2), we found that areas involved in the cognitive encoding of space [e.g., ventral premotor cortex (vPM), rostral ACC, inferior and superior posterior parietal cortex (PPC)] showed task-related activities. Overall, our results suggest that the use of a cognitive type of representation for planning arm movement after body motion is necessary when relevant spatial information must be stored before triggering the movement.
Affiliation(s)
- Jean Blouin
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Anahid H Saradjian
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Gerome A Manson
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France; Centre for Motor Control, University of Toronto, Toronto, ON, Canada
- Laurence Mouchnino
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Martin Simoneau
- Faculté de Médecine, Département de Kinésiologie, Université Laval, Québec, QC, Canada; Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
36
Singh S, Mandziak A, Barr K, Blackwell AA, Mohajerani MH, Wallace DG, Whishaw IQ. Human string-pulling with and without a string: movement, sensory control, and memory. Exp Brain Res 2019; 237:3431-3447. [DOI: 10.1007/s00221-019-05684-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2019] [Accepted: 11/07/2019] [Indexed: 01/04/2023]
37
Antipointing Reaches Do Not Adhere to Width-Based Manipulations of Fitts' (1954) Equation. Motor Control 2019; 24:222-237. [PMID: 31693993 DOI: 10.1123/mc.2019-0010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 09/03/2019] [Accepted: 09/24/2019] [Indexed: 11/18/2022]
Abstract
Reaches with overlapping stimulus-response spatial relations (propointing) adhere to speed-accuracy relations as defined by Paul Fitts' index of difficulty equation (IDFitts, in bits of information). This movement principle is attributed to response mediation via the "fast" visuomotor networks of the dorsal visual pathway. It is, however, unclear whether the executive demands of dissociating stimulus-response spatial relations by reaching mirror-symmetrically to a target (antipointing) elicit similar adherence to Fitts' equation. Here, pro- and antipointing responses were directed to a constant target amplitude with varying target widths to provide IDFitts values of 3.0, 3.5, 4.3, and 6.3 bits. Propointing movement times linearly increased with IDFitts, a result attributed to visually based trajectory corrections. In contrast, antipointing movement times, deceleration times, and endpoint precision did not adhere to Fitts' equation. These results indicate that antipointing reflects a "slow" and offline mode of control mediated by the visuoperceptual networks of the ventral visual pathway.
38
Harris DJ, Buckingham G, Wilson MR, Vine SJ. Virtually the same? How impaired sensory information in virtual reality may disrupt vision for action. Exp Brain Res 2019; 237:2761-2766. [PMID: 31485708 PMCID: PMC6794235 DOI: 10.1007/s00221-019-05642-8] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Accepted: 08/30/2019] [Indexed: 12/25/2022]
Abstract
Virtual reality (VR) is a promising tool for expanding the possibilities of psychological experimentation and implementing immersive training applications. Despite a recent surge in interest, there remains an inadequate understanding of how VR impacts basic cognitive processes. Due to the artificial presentation of egocentric distance cues in virtual environments, a number of cues to depth in the optic array are impaired or placed in conflict with each other. Moreover, realistic haptic information is all but absent from current VR systems. The resulting conflicts could not only impact the execution of motor skills in VR but also raise deeper concerns about basic visual processing, and about the extent to which virtual objects elicit neural and behavioural responses representative of real objects. In this brief review, we outline how the novel perceptual environment of VR may affect vision for action by shifting users away from a dorsal mode of control. Fewer binocular cues to depth, conflicting depth information and limited haptic feedback may all impair the specialised, efficient, online control of action characteristic of the dorsal stream. A shift from dorsal to ventral control of action may create a fundamental disparity between virtual and real-world skills that has important consequences for how we understand perception and action in the virtual world.
Affiliation(s)
- David J. Harris
- School of Sport and Health Sciences, University of Exeter, St Luke’s Campus, Exeter, EX1 2LU, UK
- Gavin Buckingham
- School of Sport and Health Sciences, University of Exeter, St Luke’s Campus, Exeter, EX1 2LU, UK
- Mark R. Wilson
- School of Sport and Health Sciences, University of Exeter, St Luke’s Campus, Exeter, EX1 2LU, UK
- Samuel J. Vine
- School of Sport and Health Sciences, University of Exeter, St Luke’s Campus, Exeter, EX1 2LU, UK
39
Smeets JBJ, van der Kooij K, Brenner E. A review of grasping as the movements of digits in space. J Neurophysiol 2019; 122:1578-1597. [DOI: 10.1152/jn.00123.2019] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
It is tempting to describe human reach-to-grasp movements in terms of two, more or less independent visuomotor channels, one relating hand transport to the object’s location and the other relating grip aperture to the object’s size. Our review of experimental work questions this framework for reasons that go beyond noting the dependence between the two channels. Both the lack of effect of size illusions on grip aperture and the finding that the variability in grip aperture does not depend on the object’s size indicate that size information is not used to control grip aperture. An alternative is to describe grip formation as emerging from controlling the movements of the digits in space. Each digit’s trajectory when grasping an object is remarkably similar to its trajectory when moving to tap the same position on its own. The similarity is also evident in the fast responses when the object is displaced. This review develops a new description of the speed-accuracy trade-off for multiple effectors that is applied to grasping. The most direct support for the digit-in-space framework is that prism-induced adaptation of each digit’s tapping movements transfers to that digit’s movements when grasping, leading to changes in grip aperture for adaptation in opposite directions for the two digits. We conclude that although grip aperture and hand transport are convenient variables to describe grasping, treating grasping as movements of the digits in space is a more suitable basis for understanding the neural control of grasping.
Affiliation(s)
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Katinka van der Kooij
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
40
Mazurek KA, Richardson D, Abraham N, Foxe JJ, Freedman EG. Utilizing High-Density Electroencephalography and Motion Capture Technology to Characterize Sensorimotor Integration While Performing Complex Actions. IEEE Trans Neural Syst Rehabil Eng 2019; 28:287-296. [PMID: 31567095 DOI: 10.1109/tnsre.2019.2941574] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Studies of sensorimotor integration often use sensory stimuli that require a simple motor response, such as a reach or a grasp. Recent advances in neural recording techniques, motion capture technologies, and time-synchronization methods enable studying sensorimotor integration using more complex sensory stimuli and performed actions. Here, we demonstrate that prehensile actions that require using complex sensory instructions for manipulating different objects can be characterized using high-density electroencephalography and motion capture systems. In 20 participants, we presented stimuli in different sensory modalities (visual, auditory) containing different contextual information about the object with which to interact. Neural signals recorded near motor cortex and posterior parietal cortex discharged based on both the instruction delivered and object manipulated. Additionally, kinematics of the wrist movements could be discriminated between participants. These findings demonstrate a proof-of-concept behavioral paradigm for studying sensorimotor integration of multidimensional sensory stimuli to perform complex movements. The designed framework will prove vital for studying neural control of movements in clinical populations in which sensorimotor integration is impaired due to information no longer being communicated correctly between brain regions (e.g. stroke). Such a framework is the first step towards developing a neural rehabilitative system for restoring function more effectively.
41
Scharoun Benson SM, Bryden PJ, Roy EA. Age-group differences in beginning-state comfort reveal an increase in motor planning capabilities. INTERNATIONAL JOURNAL OF BEHAVIORAL DEVELOPMENT 2019. [DOI: 10.1177/0165025419865620] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Objects can be grasped in different ways to ensure a movement plan is aligned with the intended action. The current study assessed grasp posture in joint-action object manipulation in children (ages 6–11, n = 68), young adults (n = 21), and older adults (n = 23). Participants performed two actions (pickup and pass; pickup and pass for use) within two movement contexts (using a dowel as if it were the actual object; actual object use), using two objects (glass and hammer) that differed in use-dependent experience. Beginning-state comfort (prioritizing a comfortable initial hand posture for an object recipient) was assessed. Taken together, findings support the notion that the ability to anticipate the intended action, and thus consider an action partner in one’s action plan, increases with age. With age and use-dependent experience, it can be argued that there is a shift from stimulus-driven, familiar responses to considering affordances and task demands. Together, findings add to our understanding of changes in motor planning capabilities across the life span.
42
Garcea FE, Almeida J, Sims MH, Nunno A, Meyers SP, Li YM, Walter K, Pilcher WH, Mahon BZ. Domain-Specific Diaschisis: Lesions to Parietal Action Areas Modulate Neural Responses to Tools in the Ventral Stream. Cereb Cortex 2019; 29:3168-3181. [PMID: 30169596 PMCID: PMC6933536 DOI: 10.1093/cercor/bhy183] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2018] [Revised: 07/04/2018] [Indexed: 12/31/2022] Open
Abstract
Neural responses to small manipulable objects ("tools") in high-level visual areas in ventral temporal cortex (VTC) provide an opportunity to test how anatomically remote regions modulate ventral stream processing in a domain-specific manner. Prior patient studies indicate that grasp-relevant information can be computed about objects by dorsal stream structures independently of processing in VTC. Prior functional neuroimaging studies indicate privileged functional connectivity between regions of VTC exhibiting tool preferences and regions of parietal cortex supporting object-directed action. Here we test whether lesions to parietal cortex modulate tool preferences within ventral and lateral temporal cortex. We found that lesions to the left anterior intraparietal sulcus, a region that supports hand-shaping during object grasping and manipulation, modulate tool preferences in left VTC and in the left posterior middle temporal gyrus. Control analyses demonstrated that neural responses to "place" stimuli in left VTC were unaffected by lesions to parietal cortex, indicating domain-specific consequences for ventral stream neural responses in the setting of parietal lesions. These findings provide causal evidence that neural specificity for "tools" in ventral and lateral temporal lobe areas may arise, in part, from online inputs to VTC from parietal areas that receive inputs via the dorsal visual pathway.
Affiliation(s)
- Frank E Garcea
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Language Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Visual Science, 274 Meliora Hall, Rochester, NY, USA
- Moss Rehabilitation Research Institute, 50 Township Line Road, Elkins Park, PA, USA
- Jorge Almeida
- University of Coimbra, Faculty of Psychology and Educational Sciences, Rua do Colégio Novo, Coimbra, Portugal
- University of Coimbra, Proaction Laboratory, Faculty of Psychology and Educational Sciences, Rua do Colégio Novo, Coimbra, Portugal
- Maxwell H Sims
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- Andrew Nunno
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- Steven P Meyers
- University of Rochester Medical Center, Department of Imaging Sciences, 601 Elmwood Avenue, Rochester, NY, USA
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Yan Michael Li
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Kevin Walter
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Webster H Pilcher
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Bradford Z Mahon
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Language Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Visual Science, 274 Meliora Hall, Rochester, NY, USA
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Department of Neurology, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, USA
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, USA
43
Shea N, Frith CD. The Global Workspace Needs Metacognition. Trends Cogn Sci 2019; 23:560-571. [DOI: 10.1016/j.tics.2019.04.007] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2018] [Revised: 02/12/2019] [Accepted: 04/22/2019] [Indexed: 12/20/2022]
44
A pantomiming priming study on the grasp and functional use actions of tools. Exp Brain Res 2019; 237:2155-2165. [PMID: 31203403 DOI: 10.1007/s00221-019-05581-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2018] [Accepted: 06/11/2019] [Indexed: 12/31/2022]
Abstract
It has previously been demonstrated that tool recognition is facilitated by the repeated visual presentation of object features affording actions, such as those related to grasping and functional use. It is unclear, however, whether this also facilitates pantomiming. Participants were presented with an image of a prime followed by a target tool and were required to pantomime the appropriate action for each one. The grasp and functional use attributes of the target tool were either the same as or different from those of the prime. Contrary to expectations, participants were slower at pantomiming the target tool relative to the prime regardless of whether the grasp and function of the tool were the same or different, except when the prime and target consisted of identical images of the same exemplar. We also found a decrease in the accuracy of functional use actions for the target tool relative to the prime when the two differed in functional use but not grasp. We reconcile the differences between our findings and those of priming studies of tool recognition by appealing to differences in task demands and to known differences in how the brain recognises tools and performs actions with them.
45
Active visuomotor interactions with virtual objects on touchscreens adhere to Weber's law. Psychological Research 2019; 84:2144-2156. [PMID: 31203455 DOI: 10.1007/s00426-019-01210-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2019] [Accepted: 06/05/2019] [Indexed: 10/26/2022]
Abstract
Recent findings suggest that the functional separation between vision-for-action and vision-for-perception does not generalize to situations in which two-dimensional (2D) virtual objects are used as targets. For example, unlike grasping movements directed at real, three-dimensional (3D) objects, the trajectories of grasping movements directed at 2D objects adhere to the psychophysical principle of Weber's law, indicating relative and less efficient processing of their size. Such inefficiency could be attributed to the fact that everyday interactions with touchscreens do not usually entail grasping movements. It is possible, therefore, that more typical interactions with virtual objects, which involve active manipulation of their size or location on a touchscreen, could be performed efficiently and in an absolute manner, and would violate Weber's law. We examined this hypothesis in three experiments in which participants performed active interactions with virtual objects. In Experiment 1, participants made swiping gestures to move virtual objects across the touchscreen. In Experiment 2, participants touched the edges of virtual objects to enlarge their size. In Experiment 3, participants freely enlarged the size of virtual objects, without being required to touch their edges. In all experiments, the resolution of grip aperture decreased with the size of the target object, adhering to Weber's law. These results suggest that active interactions with 2D objects on touchscreens are not performed in the natural, absolute manner that characterizes visuomotor control of real objects.
46
Ozana A, Ganel T. Obeying the law: speed-precision tradeoffs and the adherence to Weber's law in 2D grasping. Exp Brain Res 2019; 237:2011-2021. [PMID: 31161415 DOI: 10.1007/s00221-019-05572-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Accepted: 05/29/2019] [Indexed: 11/30/2022]
Abstract
Visually guided actions toward two-dimensional (2D) and three-dimensional (3D) objects show different patterns of adherence to Weber's law. In 3D grasping, just noticeable differences (JNDs) do not scale with object size, violating Weber's law. Conversely, JNDs in 2D grasping increase with size, showing a pattern of scalar variability between aperture and JND, as predicted by Weber's law. In the current study, we tested whether such scalar variability in 2D grasping reflects genuine adherence to Weber's law. Alternatively, it could potentially be accounted for by a speed-precision tradeoff due to an increase in aperture velocity with size. In two experiments, we modified the relation between aperture velocity and size in 2D grasping and tested whether movement trajectories still adhered to Weber's law. In Experiment 1, we aimed to equate aperture velocities between different-sized objects by pre-adjusting the initial finger aperture to match the target's size. In Experiment 2, we reversed the relation between size and velocity by asking participants to hold their fingers wide open prior to the grasp, resulting in faster velocities for smaller rather than larger objects. The results of the two experiments showed that although aperture velocities did not increase with size, adherence to Weber's law was still maintained. These results indicate that the adherence to Weber's law during 2D grasping cannot be accounted for by a speed-precision tradeoff, but rather represents genuine reliance on relative, perceptually based computations in visuomotor interactions with 2D objects.
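The scalar-variability test at the heart of this entry can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' analysis code: it assumes the JND is estimated as the within-condition standard deviation of maximum grip aperture, so that Weber's law predicts JND = k × size with a roughly constant Weber fraction k across sizes.

```python
# Hypothetical sketch of a Weber's-law check on grip-aperture data.
# Assumption: the JND is proxied by the within-condition standard
# deviation of maximum grip aperture; Weber's law predicts JND = k * size.
from statistics import stdev

def weber_fraction(apertures_by_size):
    """Estimate the Weber fraction k from JND = k * size.

    apertures_by_size: dict mapping object size (mm) to a list of
    maximum grip apertures (mm) recorded across repeated grasps.
    Returns the mean of JND / size across sizes.
    """
    fractions = []
    for size, apertures in apertures_by_size.items():
        jnd = stdev(apertures)  # aperture variability serves as the JND proxy
        fractions.append(jnd / size)
    return sum(fractions) / len(fractions)

# Toy data in which variability grows proportionally with size
# (Weber-like scaling, as reported for 2D grasping).
data = {
    20: [19.0, 20.5, 21.0, 19.5, 20.0],
    40: [38.0, 41.0, 42.0, 39.0, 40.0],
    60: [57.0, 61.5, 63.0, 58.5, 60.0],
}
k = weber_fraction(data)
print(k)
```

With the toy data above, the JND grows in proportion to object size, so the estimated fraction is nearly identical across sizes, the signature of adherence to Weber's law; size-invariant JNDs (as in 3D grasping) would instead yield fractions that shrink as size increases.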
Affiliation(s)
- Aviad Ozana
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel.
47
Ganel T, Goodale MA. Still holding after all these years: An action-perception dissociation in patient DF. Neuropsychologia 2019; 128:249-254. [DOI: 10.1016/j.neuropsychologia.2017.09.016] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Accepted: 09/17/2017] [Indexed: 10/18/2022]
48
Mazurek KA, Schieber MH. How is electrical stimulation of the brain experienced, and how can we tell? Selected considerations on sensorimotor function and speech. Cogn Neuropsychol 2019; 36:103-116. [PMID: 31076014 PMCID: PMC6744321 DOI: 10.1080/02643294.2019.1609918] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 04/14/2019] [Accepted: 04/15/2019] [Indexed: 01/05/2023]
Abstract
Electrical stimulation of the nervous system is a powerful tool for localizing and examining the function of numerous brain regions. Delivered to certain regions of the cerebral cortex, electrical stimulation can evoke a variety of first-order effects, including observable movements or an urge to move, or somatosensory, visual, or auditory percepts. In still other regions the subject may be oblivious to the stimulation. Often overlooked, however, is whether the subject is aware of the stimulation, and if so, how the stimulation is experienced by the subject. In this review of how electrical stimulation has been used to study selected aspects of sensorimotor and language function, we raise questions that future studies might address concerning the subjects' second-order experiences of intention and agency regarding evoked movements, of the naturalness of evoked sensory percepts, and of other qualia that might be evoked in the absence of an overt first-order experience.
Affiliation(s)
- Kevin A. Mazurek
- Department of Neurology, University of Rochester, Rochester, NY
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY
- Marc H. Schieber
- Department of Neurology, University of Rochester, Rochester, NY
- Department of Neuroscience, University of Rochester, Rochester, NY
- Department of Biomedical Engineering, University of Rochester, Rochester, NY
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY
49
Real-world size coding of solid objects, but not 2-D or 3-D images, in visual agnosia patients with bilateral ventral lesions. Cortex 2019; 119:555-568. [PMID: 30987739 DOI: 10.1016/j.cortex.2019.02.030] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 01/29/2019] [Accepted: 02/12/2019] [Indexed: 12/21/2022]
Abstract
Patients with visual agnosia show severe deficits in recognizing two-dimensional (2-D) images of objects, despite the fact that early visual processes such as figure-ground segmentation and stereopsis are largely intact. Strikingly, however, these patients can nevertheless show preserved recognition of real-world objects, a phenomenon known as the 'real-object advantage' (ROA) in agnosia. To uncover the mechanisms that support the ROA, patients were asked to identify objects whose size was congruent or incongruent with typical real-world size, presented in different display formats (real objects, 2-D and 3-D images). While recognition of images was extremely poor, real object recognition was surprisingly preserved, but only when physical size matched real-world size. Analogous display format and size manipulations did not influence the recognition of common geometric shapes that lacked real-world size associations. These neuropsychological data provide evidence for a surprising preservation of size coding of real-world-sized tangible objects in patients for whom ventral contributions to image processing are severely disrupted. We propose that object size information is largely mediated by dorsal visual cortex and that this information, together with the detailed representation of object shape that is also subserved by dorsal cortex, serves as the basis of the ROA.
50
Grant S, Conway ML. Some binocular advantages for planning reach, but not grasp, components of prehension. Exp Brain Res 2019; 237:1239-1255. [PMID: 30850853 PMCID: PMC6557882 DOI: 10.1007/s00221-019-05503-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Accepted: 02/25/2019] [Indexed: 11/04/2022]
Abstract
Proficient (fast, accurate, precise) hand actions for reaching to grasp 3D objects are known to benefit significantly from the use of binocular vision compared with one eye alone. We examined whether these binocular advantages derive from increased reliability in encoding the goal object's properties for feedforward planning of prehension movements or from enhanced feedback mediating their online control. Adult participants reached for, precision grasped and lifted cylindrical table-top objects (two sizes, two distances) using binocular vision, only their dominant/sighting eye, or their non-dominant eye, either to program and fully execute their movements or to plan their reach-to-grasp during a 1 s preview, with vision occluded just before movement onset. Various kinematic measures of reaching and grasping proficiency, including corrective error rates, were quantified and compared by view, feedback and object type. Some significant benefits of binocular over monocular vision available only for pre-movement planning were retained for the reach regardless of target distance, including higher peak velocities, straighter paths and shorter low-velocity approach times, although the latter were contaminated by more velocity corrections and by poorer coordination with object contact. By contrast, virtually all binocular advantages for grasping, including improvements in peak grip aperture scaling, the accuracy and precision of digit placements at object contact, and shorter grip application times preceding the lift, were eliminated with no feedback available, outcomes that were influenced by the object's size. We argue that vergence cues can improve the reliability of binocular internal representations of object distance for the feedforward programming of hand transport, whereas the major benefits of binocular vision for enhancing grasping performance derive exclusively from its continuous presence online.
Affiliation(s)
- Simon Grant
- Applied Vision Research Centre, City, University of London, Northampton Square, London, EC1V 0HB, UK.
- Miriam L Conway
- Applied Vision Research Centre, City, University of London, Northampton Square, London, EC1V 0HB, UK