1.
Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024;132:147-161. PMID: 38836297; DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
2.
Uccelli S, Bruno N. The effect of the Uznadze illusion is temporally dynamic in closed-loop but temporally constant in open-loop grasping. Q J Exp Psychol (Hove) 2024;77:1238-1249. PMID: 37784227; DOI: 10.1177/17470218231206907.
Abstract
Although it is known that the availability of visual feedback modulates grasping kinematics, it is unclear whether this extends to both the early and late stages of the movement. We tackled this issue by exposing participants to the Uznadze illusion (a medium stimulus appears larger or smaller after exposure to smaller or larger inducers). After seeing smaller or larger discs, participants grasped a medium disc with (closed-loop [CL]) or without (open-loop [OL]) visual feedback. Our main aim was to assess whether the time course of the illusion from movement onset up to the grasp differed between OL and CL. Moreover, we compared OL and CL illusory effects on maximum grip aperture (MGA) and tested whether preparation time, movement time, and time to MGA predicted illusion magnitude. Results revealed that CL illusory effects decreased over movement time, whereas OL ones remained constant. At the time of MGA, however, OL and CL effects were of similar size. Although OL grasps took longer to prepare and showed earlier and larger MGAs, such differences had little impact on modulating the illusion. These results suggest that the early stage of grasping is sensitive to the Uznadze illusion under both CL and OL conditions, whereas the late phase is sensitive to it only under OL conditions. We discuss these findings within the framework of theoretical models on the functional properties of the dorsal stream for visually guided actions.
Affiliation(s)
- Stefano Uccelli
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- University of Parma, Parma, Italy
3.
Threethipthikoon T, Li Z, Shigemasu H. Orientation representation in human visual cortices: contributions of non-visual information and action-related process. Front Psychol 2023;14:1231109. PMID: 38106392; PMCID: PMC10722153; DOI: 10.3389/fpsyg.2023.1231109.
Abstract
Orientation processing in the human brain plays a crucial role in guiding grasping actions toward an object. Remarkably, despite the absence of visual input, the human visual cortex can still process orientation information. Instead of visual input, non-visual information, including tactile and proprioceptive sensory input from the hand and arm, as well as feedback from action-related processes, may contribute to orientation processing. However, the precise mechanisms by which the visual cortices process orientation information in the context of non-visual sensory input and action-related processes remain to be elucidated. Thus, our study examined the orientation representation within the visual cortices by analyzing blood-oxygenation-level-dependent (BOLD) signals under four action conditions: direct grasp (DG), air grasp (AG), non-grasp (NG), and uninformed grasp (UG). Images of the cylindrical object were shown at +45° or -45° orientations, corresponding to those of the real object to be grasped with a whole-hand gesture. Participants judged the object's orientation under all conditions. Grasping was performed without online visual feedback of the hand and object. The purpose of this design was to investigate the visual areas under conditions involving tactile feedback, proprioception, and action-related processes. To address this, a multivariate pattern analysis was used to examine differences among the cortical patterns of the four action conditions in orientation representation by classification. Overall, significant decoding accuracy above chance level was found for the DG condition; during AG, however, only the early visual areas showed significant accuracy, suggesting that the object's tactile feedback influences orientation processing in higher visual areas. The NG condition showed no statistical significance in any area, indicating that without the grasping action, visual input does not contribute to cortical pattern representation. Interestingly, only the dorsal and ventral divisions of the third visual area (V3d and V3v) showed significant decoding accuracy during the UG condition despite the absence of visual instructions, suggesting that the orientation representation was derived from action-related processes in V3d and from visual recognition of the visualized object in V3v. Thus, the processing of orientation information during non-visually guided grasping relies on non-visual sources and is divided according to the purpose of the process: action or recognition.
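The decoding logic described in this abstract can be sketched in miniature. The snippet below runs a leave-one-trial-out classification of orientation (+45° vs. -45°) from voxel patterns, using a nearest-centroid classifier as a simple stand-in for whatever classifier the authors actually used; the two-voxel patterns and labels are invented for illustration.

```python
def decode_orientation(patterns, labels):
    """Leave-one-trial-out decoding: classify each held-out voxel
    pattern by the nearest class centroid of the remaining trials,
    and return the overall decoding accuracy."""
    correct = 0
    for i in range(len(patterns)):
        centroids = {}
        for lab in set(labels):
            train = [p for j, (p, l) in enumerate(zip(patterns, labels))
                     if j != i and l == lab]
            centroids[lab] = [sum(col) / len(train) for col in zip(*train)]

        def sq_dist(p, c):
            return sum((a - b) ** 2 for a, b in zip(p, c))

        pred = min(centroids, key=lambda lab: sq_dist(patterns[i], centroids[lab]))
        correct += (pred == labels[i])
    return correct / len(patterns)

# invented two-voxel "patterns" for +45 and -45 trials
patterns = [[1.0, 0.0], [1.1, 0.1], [0.9, -0.1],
            [0.0, 1.0], [0.1, 1.1], [-0.1, 0.9]]
labels = [45, 45, 45, -45, -45, -45]
accuracy = decode_orientation(patterns, labels)  # well above the 0.5 chance level here
```

In the study itself, accuracy would be computed per region of interest and tested against chance across participants; this sketch only shows the cross-validated classification step.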
Affiliation(s)
- Zhen Li
- Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, Shenzhen, China
- Department of Engineering, Shenzhen MSU-BIT University, Shenzhen, China
4.
Manual action re-planning interferes with the maintenance process of working memory: an ERP investigation. Psychological Research 2022. PMID: 36434433; PMCID: PMC10366281; DOI: 10.1007/s00426-022-01741-4.
Abstract
The current study investigated the re-planning of grasping movements, its functional interactions with working memory (WM), and the underlying neurophysiological activity. Specifically, it examined movement re-planning interference with WM domains (verbal, visuospatial) and processes (maintenance, retrieval). We combined a cognitive-motor dual-task paradigm with an EEG setting. Thirty-six participants completed the verbal and visuospatial versions of a WM task concurrently with a manual task which required performing a grasp-and-place movement by keeping the initial movement plan (prepared movement condition) or changing it to reverse the movement direction (re-planned movement condition). ERPs were extracted for the prepared and re-planned conditions in the verbal and visuospatial tasks separately during the maintenance and retrieval processes. ERP analyses showed that during the maintenance process of both the verbal and visuospatial tasks, the re-planned movements, compared to the prepared movements, generated a larger positive slow wave with a centroparietal maximum between 200 and 700 ms. We interpreted this ERP effect as a P300 component for the re-planned movements. There was no ERP difference between the planned and re-planned movements during the retrieval process. Accordingly, we suggest that re-planning the grasp-and-place movement interfered at least with the maintenance of the verbal and visuospatial domains, resulting in re-planning costs. More generally, the current study provides an initial neurophysiological investigation of movement re-planning–WM interactions during grasping movements and contributes to a better understanding of the neurocognitive mechanisms underlying manual action flexibility.
5.
Alipour A, Beggs JM, Brown JW, James TW. A computational examination of the two-streams hypothesis: which pathway needs a longer memory? Cogn Neurodyn 2022;16:149-165. PMID: 35126775; PMCID: PMC8807798; DOI: 10.1007/s11571-021-09703-z.
Abstract
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without. Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the longer memory requirement. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESN), however, did not replicate the results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization.

Supplementary information: the online version contains supplementary material available at 10.1007/s11571-021-09703-z.
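The memory mechanism at issue can be illustrated with a toy, single-unit LSTM cell (pure Python; this is not the authors' network, and the scalar weights are made up). In a standard LSTM the gates also depend on the hidden state, but even this simplified version shows the key point: the forget gate controls how long the cell state retains a past input.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ToyLSTMCell:
    """Single-unit LSTM cell with scalar weights (illustrative only;
    gates depend on the current input alone). A large forget-gate bias
    keeps the forget gate near 1, so the cell state c decays slowly,
    i.e., the unit has a long memory."""

    def __init__(self, forget_bias, w_in=2.0, w_forget=1.0, w_cand=2.0):
        self.bf = forget_bias
        self.wi, self.wf, self.wc = w_in, w_forget, w_cand

    def step(self, x, c):
        i = sigmoid(self.wi * x)            # input gate
        f = sigmoid(self.wf * x + self.bf)  # forget gate
        return f * c + i * math.tanh(self.wc * x)

def final_cell_state(cell, xs):
    c = 0.0
    for x in xs:
        c = cell.step(x, c)
    return c

# a single cue followed by 20 empty time steps
xs = [1.0] + [0.0] * 20
c_long = final_cell_state(ToyLSTMCell(forget_bias=4.0), xs)    # cue largely retained
c_short = final_cell_state(ToyLSTMCell(forget_bias=-4.0), xs)  # cue forgotten
```

With the forget gate near 1 the cue survives the delay almost intact; with it near 0 the cue vanishes within a few steps, which is the "short memory" regime the paper associates with dorsal-stream tasks.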
Affiliation(s)
- Abolfazl Alipour
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- John M Beggs
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Department of Physics, Indiana University, Bloomington, IN, USA
- Joshua W Brown
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
6.
Hesse C, Harrison RE, Giesel M, Schenk T. Bimanual Grasping Adheres to Weber's Law. i-Perception 2021;12:20416695211054534. PMID: 34868538; PMCID: PMC8641124; DOI: 10.1177/20416695211054534.
Abstract
Weber's law states that the smallest detectable change in a stimulus attribute grows linearly with its magnitude. This principle holds true for many attributes across sensory modalities but appears to be violated in grasping. One explanation for the failure to observe Weber's law in grasping is that its effect is masked by biomechanical constraints of the hand. We tested this hypothesis using a bimanual task that eliminates biomechanical constraints. Participants either grasped differently sized boxes that were comfortably within their arm span (action task) or estimated their width (perceptual task). Within each task, there were two conditions: one where the hands' start positions remained fixed for all object sizes (meaning the distance between the initial and final hand positions varied with object size), and one in which the hands' start positions adapted with object size (such that the distance between the initial and final hand positions remained constant). We observed adherence to Weber's law in bimanual estimation and grasping across both conditions. Our results conflict with a previous study that reported the absence of Weber's law in bimanual grasping. We discuss potential explanations for these divergent findings and encourage further research on whether Weber's law persists when biomechanical constraints are reduced.
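Adherence to Weber's law means the just-noticeable difference (JND) grows in proportion to object size, ΔI = k·I, so the Weber fraction ΔI/I stays constant across sizes. A minimal numeric sketch (the 10% fraction and the sizes are illustrative values, not the study's data):

```python
def jnd(size, weber_fraction=0.1):
    """Just-noticeable difference under Weber's law: delta = k * size."""
    return weber_fraction * size

sizes_mm = [20.0, 40.0, 80.0]            # illustrative object widths
thresholds = [jnd(s) for s in sizes_mm]  # grow linearly with size
fractions = [t / s for t, s in zip(thresholds, sizes_mm)]  # stay constant
```

In the study, the diagnostic quantity is the analogous slope of response variability against object size: a positive linear slope indicates Weber-law scaling, a flat slope indicates its absence.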
Affiliation(s)
- Martin Giesel
- School of Psychology, University of Aberdeen, Aberdeen, UK
- Thomas Schenk
- Department of Neuropsychology, Ludwig-Maximilians University, Munich, Germany
7.
Dissociating the Influence of Perceptual Biases and Contextual Artifacts Within Target Configurations During the Planning and Control of Visually Guided Action. Motor Control 2021;25:349-368. PMID: 33811190; DOI: 10.1123/mc.2020-0054.
Abstract
The failure of perceptual illusions to elicit corresponding biases within movement supports the view of two visual pathways separately contributing to perception and action. However, several alternative findings may contest this overarching framework. The present study aimed to examine the influence of perceptual illusions on the planning and control of aiming. To achieve this, we manipulated and measured the planning and control phases by, respectively, perturbing the target illusion (relative size-contrast illusion; Ebbinghaus/Titchener circles) following movement onset and measuring the spatiotemporal characteristics of the movement trajectory. The perceptual bias indicated by the perceived target size estimates failed to correspondingly manifest within the effective target size. While movement time (specifically, time after peak velocity) was affected by the target configuration, this outcome was not consistent with the direction of the perceptual illusions. These findings suggest an influence of the surrounding contextual information (e.g., annuli) on movement control that is independent of the direction predicted by the illusion.
8.
Uccelli S, Palumbo L, Harrison NR, Bruno N. Asymmetric effects of graspable distractor disks on motor preparation of successive grasps: A behavioural and event-related potential (ERP) study. Int J Psychophysiol 2020;158:318-330. PMID: 33164874; DOI: 10.1016/j.ijpsycho.2020.10.007.
Abstract
There is evidence that seeing a graspable object automatically elicits a preparatory motor process. However, it is unclear whether this implicit visuomotor process might influence the preparation of a successive grasp for a different object. We addressed the issue by implementing a combined behavioural and electrophysiological paradigm. Participants performed pantomimed grasps directed to small or large disks with either a two-finger (pincer) or a five-finger (pentapod) grip, after the presentation of congruent (same size) or incongruent (different size) distractor disks. Preview reaction times (PRTs) and response-locked lateralized readiness potentials (R-LRPs) were recorded as online indices of motor preparation. Results revealed asymmetric effects of the distractors on PRTs and R-LRPs. For pincer-grip disks, incongruent distractors were associated with longer PRTs and a delayed R-LRP peak. For pentapod-grip disks, conversely, incongruent distractors were associated with shorter PRTs and a delayed R-LRP onset. Supporting an interpretation of these effects as tapping into motor preparation, we did not observe modulations of stimulus-locked LRPs (sensitive to sensory processing) or of the P300 component (related to reallocating attentional resources). These results challenge models (e.g., the "dorsal amnesia" hypothesis) which assume that visuomotor information presented before a grasp will not affect how we later perform that grasp.
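The lateralized readiness potential used here is conventionally derived by a double subtraction over motor electrodes C3/C4: take activity contralateral minus ipsilateral to the responding hand, then average across left- and right-hand responses so that non-motor hemispheric asymmetries cancel. A sketch of that computation on synthetic per-trial time series (illustrative data, not the authors'):

```python
def lrp_timecourse(trials):
    """Double-subtraction LRP: for each trial take contralateral minus
    ipsilateral activity (C3 - C4 for right-hand responses, C4 - C3 for
    left-hand responses), then average the left- and right-hand means."""
    by_hand = {"left": [], "right": []}
    for c3, c4, hand in trials:
        if hand == "right":
            diff = [a - b for a, b in zip(c3, c4)]  # C3 is contralateral
        else:
            diff = [b - a for a, b in zip(c3, c4)]  # C4 is contralateral
        by_hand[hand].append(diff)

    def mean(series):
        return [sum(vals) / len(series) for vals in zip(*series)]

    return [(l + r) / 2 for l, r in zip(mean(by_hand["left"]),
                                        mean(by_hand["right"]))]

# synthetic trials: motor preparation adds -1.0 to the contralateral
# electrode; a non-motor asymmetry adds +0.5 to C3 on every trial
right_trial = ([0.5 - 1.0] * 3, [0.0] * 3, "right")
left_trial = ([0.5] * 3, [-1.0] * 3, "left")
lrp = lrp_timecourse([right_trial, left_trial])
```

In this toy example the recovered time course is the pure motor component (-1.0 throughout), while the +0.5 asymmetry common to both hands cancels out, which is exactly why the double subtraction isolates effector-specific preparation.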
Affiliation(s)
- Letizia Palumbo
- Liverpool Hope University, United Kingdom
- Neil R Harrison
- Liverpool Hope University, United Kingdom
9.
Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020;238:1813-1826. PMID: 32500297; PMCID: PMC7438369; DOI: 10.1007/s00221-020-05839-2.
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
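In paradigms like this, reliance on allocentric information is typically quantified by how far placement endpoints follow the shifted landmark (here, the thrower or midfield line). A minimal sketch of that ratio (the numbers are illustrative and this is not the authors' exact analysis):

```python
def allocentric_weight(endpoint_shifts_cm, landmark_shift_cm):
    """Mean endpoint displacement expressed as a fraction of the
    landmark displacement: 0 = purely egocentric coding of the target,
    1 = endpoints follow the landmark completely (fully allocentric)."""
    mean_shift = sum(endpoint_shifts_cm) / len(endpoint_shifts_cm)
    return mean_shift / landmark_shift_cm

# e.g. endpoints move 1-3 cm in response to a 4 cm landmark shift
weight = allocentric_weight([1.0, 2.0, 3.0], 4.0)  # -> 0.5
```

A larger weight in the interception condition than in the viewing condition would correspond to the paper's finding that the encoding task enhances task-relevant allocentric information.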
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Johannes Kurz
- NemoLab-Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
10.
Pisu V, Uccelli S, Riggio L, Bruno N. Action preparation in grasping reveals generalization of precision between implicit and explicit motor processes. Neuropsychologia 2020;141:107406. DOI: 10.1016/j.neuropsychologia.2020.107406.
11.
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020;125:203-214. PMID: 32006875; DOI: 10.1016/j.cortex.2019.12.010.
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification a brief air-puff was applied to the participant's right eye inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, that functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
12.
Smeets JBJ, van der Kooij K, Brenner E. A review of grasping as the movements of digits in space. J Neurophysiol 2019;122:1578-1597. DOI: 10.1152/jn.00123.2019.
Abstract
It is tempting to describe human reach-to-grasp movements in terms of two, more or less independent visuomotor channels, one relating hand transport to the object’s location and the other relating grip aperture to the object’s size. Our review of experimental work questions this framework for reasons that go beyond noting the dependence between the two channels. Both the lack of effect of size illusions on grip aperture and the finding that the variability in grip aperture does not depend on the object’s size indicate that size information is not used to control grip aperture. An alternative is to describe grip formation as emerging from controlling the movements of the digits in space. Each digit’s trajectory when grasping an object is remarkably similar to its trajectory when moving to tap the same position on its own. The similarity is also evident in the fast responses when the object is displaced. This review develops a new description of the speed-accuracy trade-off for multiple effectors that is applied to grasping. The most direct support for the digit-in-space framework is that prism-induced adaptation of each digit’s tapping movements transfers to that digit’s movements when grasping, leading to changes in grip aperture for adaptation in opposite directions for the two digits. We conclude that although grip aperture and hand transport are convenient variables to describe grasping, treating grasping as movements of the digits in space is a more suitable basis for understanding the neural control of grasping.
Affiliation(s)
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Katinka van der Kooij
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
13.
Löhr-Limpens M, Göhringer F, Schenk T, Hesse C. Grasping and perception are both affected by irrelevant information and secondary tasks: new evidence from the Garner paradigm. Psychological Research 2019;84:1269-1283. PMID: 30778763; DOI: 10.1007/s00426-019-01151-z.
Abstract
In their Perception-Action Model (PAM), Goodale and Milner (1992) proposed functionally independent and encapsulated processing of visual information for action and perception. In this context, they postulated that visual input for action is processed in an automatized and analytic manner, which renders visuomotor behaviour immune to perceptual interference or multitasking costs due to sharing of cognitive resources. Here, we investigate the well-known Garner Interference effect under dual- and single-task conditions in its classic perceptual form as well as in grasping. Garner Interference arises when stimuli are classified along a relevant dimension (e.g., their length), while another irrelevant dimension (e.g., their width) has to be ignored. In the present study, participants were presented with differently sized rectangular objects and either grasped them or classified them as long or short via button presses. We found classic Garner Interference effects in perception, as expressed in prolonged reaction times when variations occurred also in the irrelevant object dimension. While reaction times during grasping were not susceptible to Garner Interference, effects were observed in a number of measures that reflect grasping accuracy (i.e., poorer adjustment of grip aperture to object size, prolonged adjustment times, and increased variability of the maximum hand opening when irrelevant object dimensions were varied). In addition, multitasking costs occurred in both perception and action tasks. Thus, our findings challenge the assumption of automaticity in visuomotor behaviour as proposed by the PAM.
Affiliation(s)
- Miriam Löhr-Limpens
- Lehrstuhl für Klinische Neuropsychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 Munich, Germany
- Frederic Göhringer
- Lehrstuhl für Klinische Neuropsychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 Munich, Germany
- Thomas Schenk
- Lehrstuhl für Klinische Neuropsychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 Munich, Germany
- Constanze Hesse
- School of Psychology, University of Aberdeen, King's College, William Guild Building, Aberdeen AB24 3FX, UK
14.
Two visual pathways – Where have they taken us and where will they lead in future? Cortex 2018;98:283-292. DOI: 10.1016/j.cortex.2017.12.002.
15.
de Haan EH, Jackson SR, Schenk T. Where are we now with ‘What’ and ‘How’? Cortex 2018;98:1-7. DOI: 10.1016/j.cortex.2017.12.001.