1. Aguado B, López-Moliner J. Gravity and Known Size Calibrate Visual Information to Time Parabolic Trajectories. Front Hum Neurosci 2021; 15:642025. PMID: 34497497; PMCID: PMC8420811; DOI: 10.3389/fnhum.2021.642025.
Abstract
Catching a ball in parabolic flight is a complex task in which the time and area of interception are strongly coupled, making interception possible only for a short period. Although this makes the estimation of time-to-contact (TTC) from visual information in parabolic trajectories very useful, previous attempts to explain our precision in interceptive tasks circumvent the need to estimate TTC to guide action. Obtaining TTC from optical variables alone in parabolic trajectories would require very complex transformations from 2D retinal images to a 3D layout. Building on previous work, we propose, and show using simulations, that exploiting prior distributions of gravity and known physical size makes these transformations much simpler, enabling predictive capacities from minimal early visual information. Because optical information is inherently ambiguous, prior distributions are needed to interpret and calibrate visual information into meaningful predictions of the remaining TTC. The objectives of this work are: (1) to describe the primary sources of information available to the observer in parabolic trajectories; (2) to unveil how prior information can be used to disambiguate the sources of visual information within a Bayesian encoding-decoding framework; (3) to show that such predictions might be robust against complex dynamic environments; and (4) to indicate future lines of research scrutinizing the role of prior knowledge in calibrating visual information and prediction for action control.
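The gravity-plus-known-size idea summarized in this abstract can be illustrated numerically. The following is a minimal sketch, not the authors' model: the ball diameter (`BALL_SIZE`, roughly a baseball), the two-sample estimation scheme, and an eye-level catch point are all assumptions made for the example.

```python
import numpy as np

G = 9.81           # gravity prior (m/s^2)
BALL_SIZE = 0.074  # known-size prior: assumed physical diameter (m)

def observe(pos, observer):
    """Map a ball position (x horizontal, z vertical, in m) to the optical
    variables available to the observer: angular size and elevation angle."""
    dx = pos[0] - observer[0]
    dz = pos[1] - observer[1]
    dist = np.hypot(dx, dz)
    theta = 2 * np.arctan(BALL_SIZE / (2 * dist))  # angular size (rad)
    gamma = np.arctan2(dz, dx)                     # elevation angle (rad)
    return theta, gamma

def predict_ttc(theta0, gamma0, theta1, gamma1, dt):
    """Recover remaining TTC from two early optical samples taken dt apart,
    using the known-size prior to invert angular size into distance and the
    gravity prior to extrapolate the vertical motion to eye level."""
    # Known size disambiguates distance from angular size.
    d0 = BALL_SIZE / (2 * np.tan(theta0 / 2))
    d1 = BALL_SIZE / (2 * np.tan(theta1 / 2))
    z0 = d0 * np.sin(gamma0)                # height above eye level
    z1 = d1 * np.sin(gamma1)
    vz = (z1 - z0) / dt                     # vertical velocity at the midpoint
    z_mid = (z0 + z1) / 2 + G * dt**2 / 8   # parabolic midpoint correction
    # Gravity prior: solve z_mid + vz*t - G*t^2/2 = 0 for the fall time.
    t_from_mid = (vz + np.sqrt(vz**2 + 2 * G * z_mid)) / G
    return t_from_mid - dt / 2              # remaining TTC from the 2nd sample
```

With noiseless optics and a truly parabolic flight, two samples suffice to recover the remaining flight time exactly; the point of the sketch is that neither step requires a full 2D-to-3D reconstruction once gravity and size are assumed known.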
Affiliation(s)
- Borja Aguado
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
2. Numasawa K, Miyamoto T, Kizuka T, Ono S. The relationship between the implicit visuomotor control and the motor planning accuracy. Exp Brain Res 2021; 239:2151-2158. PMID: 33977362; DOI: 10.1007/s00221-021-06120-w.
Abstract
It has been well established that an implicit motor response can be elicited by a target perturbation or by visual background motion during a reaching movement. Computational studies have suggested that this response is based on the error signal between the efference copy and the actual sensory feedback. If the implicit motor response is based on the efference copy, then the accuracy of the motor command should affect the magnitude of the response modulation. Therefore, the purpose of the current study was to investigate the relationship between the implicit motor response and motor planning accuracy. We used a memory-guided reaching task and the manual following response (MFR), which is induced by visual grating motion. Participants performed reaching movements toward a memorized target location with a beep cue presented 0 or 3 s after the target disappeared (0-s delay and 3-s delay conditions). Leftward or rightward visual grating motion was applied 400 ms after the cue. In addition, event-related potentials (ERPs), which reflect motor command accuracy, were recorded during the reaching task. Our results showed that the N170 ERP amplitude at parietal electrodes and the MFR amplitude were significantly larger in the 3-s delay condition than in the 0-s delay condition. These results suggest that motor planning accuracy affects the magnitude of the implicit visuomotor response. Furthermore, there was a significant within-subjects correlation between the MFR and N170 amplitudes, corroborating the relationship between the implicit motor response and motor planning accuracy.
Affiliation(s)
- Kosuke Numasawa
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki, 305-8574, Japan
- Takeshi Miyamoto
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Tomohiro Kizuka
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Seiji Ono
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
3. Peters CM, Glazebrook CM. Rhythmic auditory stimuli heard before and during a reaching movement elicit performance improvements in both temporal and spatial movement parameters. Acta Psychol (Amst) 2020; 207:103086. PMID: 32422419; DOI: 10.1016/j.actpsy.2020.103086.
Abstract
Rhythmic auditory stimuli (RAS) have been proposed to improve motor performance in populations with and without sensorimotor impairments. However, the reasons for the reported benefits are poorly understood. One idea is that RAS may supplement intrinsic feedback when other sensory input is diminished. The current experiment tested this idea by removing vision during a goal-directed reaching task. We hypothesized that any improvements in movement performance due to RAS would be greater when vision was removed. Twenty-two typically developing adults performed reaching movements to one of two targets with RAS presented before movement initiation, after movement initiation, both before and after movement initiation, or not at all, each with and without vision. Dependent variables were analyzed using a 2 (vision) × 2 (sound-before) × 2 (sound-during) repeated measures ANOVA. Conditions in which the metronome was heard before movement initiation yielded shorter and less variable reaction times compared with conditions in which there was no sound before the movement. RAS heard before and during the movement independently affected spatial aspects of the movement: sound before movement initiation resulted in smaller endpoint error, primarily along the anterior-posterior axis, whereas sound during the movement resulted in smaller endpoint error, primarily along the mediolateral axis. In no-vision blocks, inclusion of RAS improved endpoint performance, indicating that RAS supplemented the motor system. The present results strengthen our understanding of the sensory integration underlying reaching performance by demonstrating that sound heard before and during a reaching movement can improve motor performance by supplementing the motor system when vision is unavailable.
4. Engagement of the motor system in position monitoring: reduced distractor suppression and effects of internal representation quality on motor kinematics. Exp Brain Res 2018; 236:1445-1460. PMID: 29546652; PMCID: PMC5937884; DOI: 10.1007/s00221-018-5234-2.
Abstract
The position monitoring task is a measure of divided spatial attention in which participants track the changing positions of one or more objects, attempting to represent those positions with as much precision as possible. Typically, precision of representations declines with each target object added to participants' attention load. Since the motor system requires precise representations of changing target positions, we investigated whether position monitoring would be facilitated by increasing engagement of the motor system. Using motion capture, we recorded the positions of participants' index finger during pointing responses. Participants attempted to monitor the changing positions of between one and four target discs as they moved randomly around a large projected display. After a period of disc motion, all discs disappeared and participants were prompted to report the final position of one of the targets, either by mouse click or by pointing to the final perceived position on the screen. For mouse-click responses, precision declined with attentional load. For pointing responses, precision declined only up to three targets and remained at the same level for four targets, suggesting obligatory attention to all four objects for loads above two targets. Kinematic profiles of pointing responses at the highest and lowest loads showed greater motor adjustments during the pointing movement, demonstrating that, like external environmental task demands, the quality of internal representations affects motor kinematics. Specifically, these adjustments reflect the difficulty both of pointing to very precisely represented locations and of keeping representations distinct from one another.
5. Schenk T, Hesse C. Do we have distinct systems for immediate and delayed actions? A selective review on the role of visual memory in action. Cortex 2018; 98:228-248. DOI: 10.1016/j.cortex.2017.05.014.
6. Knol H, Huys R, Sarrazin JC, Spiegler A, Jirsa VK. Ebbinghaus figures that deceive the eye do not necessarily deceive the hand. Sci Rep 2017; 7:3111. PMID: 28596601; PMCID: PMC5465067; DOI: 10.1038/s41598-017-02925-4.
Abstract
In support of the visual stream dissociation hypothesis, which states that distinct visual streams serve vision-for-perception and vision-for-action, visual size illusions were reported over 20 years ago to ‘deceive the eye but not the hand’. Ever since, inconclusive results and contradictory interpretations have accumulated. Therefore, we investigated the effects of the Ebbinghaus figure on repetitive aiming movements with distinct dynamics. Participants performed a Fitts’ task in which Ebbinghaus figures served as targets. We systematically varied the three parameters which have been shown to influence the perceived size of the Ebbinghaus figure’s target circle, namely the size of the target, its distance to the context circles and the size of the context circles. This paper shows that movement is significantly affected by the context size, but, in contrast to perception, not by the other two parameters. This is especially prominent in the approach phase of the movement towards the target, regardless of the dynamics. To reconcile the findings, we argue that different informational variables are used for size perception and the visual control of movements irrespective of whether certain variables induce (perceptual) illusions.
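In the Fitts' task used here, task difficulty is conventionally quantified by an index of difficulty combining target distance and width. A minimal sketch of the standard (Shannon) formulation; the regression constants `a` and `b` below are illustrative placeholders, not values from this study:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law: movement time grows linearly with the index of
    difficulty. a (s) and b (s/bit) are fitted per participant;
    the defaults here are only illustrative."""
    return a + b * index_of_difficulty(distance, width)
```

Varying the Ebbinghaus context circles manipulates perceived target size without changing `width` itself, which is what lets the study dissociate perceptual effects from the movement dynamics that Fitts' law captures.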
Affiliation(s)
- Hester Knol
- Aix Marseille Université, CNRS, Institut des Sciences du Mouvement, UMR 7287, Marseille, France.
- Raoul Huys
- Centre de Recherche Cerveau & Cognition, Université Paul Sabatier, Université de Toulouse, Toulouse, France; CerCo, CNRS UMR 5549, Toulouse, France
- Andreas Spiegler
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
- Viktor K Jirsa
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
7. Gunduz Can R, Schack T, Koester D. Movement Interferes with Visuospatial Working Memory during the Encoding: An ERP Study. Front Psychol 2017; 8:871. PMID: 28611714; PMCID: PMC5447076; DOI: 10.3389/fpsyg.2017.00871.
Abstract
The present study focuses on the functional interactions of cognition and manual action control. Particularly, we investigated the neurophysiological correlates of the dual-task costs of a manual-motor task (requiring grasping an object, holding it, and subsequently placing it on a target) for working memory (WM) domains (verbal and visuospatial) and processes (encoding and retrieval). Thirty participants were tested in a cognitive-motor dual-task paradigm, in which a single block (a verbal or visuospatial WM task) was compared with a dual block (concurrent performance of a WM task and a motor task). Event-related potentials (ERPs) were analyzed separately for the encoding and retrieval processes of verbal and visuospatial WM domains both in single and dual blocks. The behavioral analyses show that the motor task interfered with WM and decreased the memory performance. The performance decrease was larger for the visuospatial task compared with the verbal task, i.e., domain-specific memory costs were obtained. The ERP analyses show the domain-specific interference also at the neurophysiological level, which is further process-specific to encoding. That is, comparing the patterns of WM-related ERPs in the single block and dual block, we showed that visuospatial ERPs changed only for the encoding process when a motor task was performed at the same time. Generally, the present study provides evidence for domain- and process-specific interactions of a prepared manual-motor movement with WM (visuospatial domain during the encoding process). This study, therefore, provides an initial neurophysiological characterization of functional interactions of WM and manual actions in a cognitive-motor dual-task setting, and contributes to a better understanding of the neuro-cognitive mechanisms of motor action control.
Affiliation(s)
- Rumeysa Gunduz Can
- Neurocognition and Action – Biomechanics Research Group, Faculty of Psychology and Sport Science, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
- Thomas Schack
- Neurocognition and Action – Biomechanics Research Group, Faculty of Psychology and Sport Science, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics, Bielefeld University, Bielefeld, Germany
- Dirk Koester
- Neurocognition and Action – Biomechanics Research Group, Faculty of Psychology and Sport Science, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
8. Hesse C, Miller L, Buckingham G. Visual information about object size and object position are retained differently in the visual brain: Evidence from grasping studies. Neuropsychologia 2016; 91:531-543. PMID: 27663865; DOI: 10.1016/j.neuropsychologia.2016.09.016.
Abstract
Many experiments have examined how the visual information used for action control is represented in our brain, and whether or not visually-guided and memory-guided hand movements rely on dissociable visual representations that are processed in different brain areas (dorsal vs. ventral). However, little is known about how these representations decay over longer time periods and whether or not different visual properties are retained in a similar fashion. In three experiments we investigated how information about object size and object position affect grasping as visual memory demands increase. We found that position information decayed rapidly with increasing delays between viewing the object and initiating subsequent actions - impacting both the accuracy of the transport component (lower end-point accuracy) and the grasp component (larger grip apertures) of the movement. In contrast, grip apertures and fingertip forces remained well-adjusted to target size in conditions in which positional information was either irrelevant or provided, regardless of delay, indicating that object size is encoded in a more stable manner than object position. The findings provide evidence that different grasp-relevant properties are encoded differently by the visual system. Furthermore, we argue that caution is required when making inferences about object size representations based on alterations in the grip component as these variations are confounded with the accuracy with which object position is represented. Instead fingertip forces seem to provide a reliable and confound-free measure to assess internal size estimations in conditions of increased visual uncertainty.
Affiliation(s)
- Louisa Miller
- Department of Psychiatry, University of Cambridge, UK
- Gavin Buckingham
- Department of Sport and Health Sciences, University of Exeter, UK
9. Pulling out all the stops to make the distance: Effects of effort and optical information in distance perception responses made by rope pulling. Atten Percept Psychophys 2015; 78:685-699. DOI: 10.3758/s13414-015-1035-x.
10. Zhao H, Warren WH. On-line and model-based approaches to the visual control of action. Vision Res 2014; 110:190-202. PMID: 25454700; DOI: 10.1016/j.visres.2014.10.008.
Abstract
Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the control of locomotion and manual actions, which indicates that (a) an internal world model is neither sufficient nor necessary to control action at normal levels of performance; (b) current visual information is necessary and sufficient to control action at normal levels; and (c) under certain conditions (e.g., occlusion), action is controlled by less accurate, simple strategies such as heuristics, visual-motor mappings, or spatial memory. We conclude that the strong model-based hypothesis is not sustainable. Action is normally controlled on-line when current information is available, consistent with the strong on-line control hypothesis. In exceptional circumstances, action is controlled by weak, context-specific, off-line strategies. This hybrid solution is comprehensive, parsimonious, and able to account for a variety of tasks under a range of visual conditions.
Affiliation(s)
- Huaiyong Zhao
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
- William H Warren
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
11. Heijnen MJH, Romine NL, Stumpf DM, Rietdyk S. Memory-guided obstacle crossing: more failures were observed for the trail limb versus lead limb. Exp Brain Res 2014; 232:2131-2142. DOI: 10.1007/s00221-014-3903-3.
12. Going for distance and going for speed: Effort and optical variables shape information for distance perception from observation to response. Atten Percept Psychophys 2014; 76:1015-1035. DOI: 10.3758/s13414-014-0629-z.
13. Effects of spatial-memory decay and dual-task interference on perturbation-evoked reach-to-grasp reactions in the absence of online visual feedback. Hum Mov Sci 2013; 32:328-342. DOI: 10.1016/j.humov.2012.11.001.
14. Poon C, Chin-Cottongim LG, Coombes SA, Corcos DM, Vaillancourt DE. Spatiotemporal dynamics of brain activity during the transition from visually guided to memory-guided force control. J Neurophysiol 2012; 108:1335-1348. PMID: 22696535; DOI: 10.1152/jn.00972.2011.
Abstract
It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control.
Affiliation(s)
- Cynthia Poon
- Department of Kinesiology and Nutrition, University of Illinois at Chicago, Chicago, IL, USA
15. Loucks TMJ, Ofori E, Sosnoff JJ. Force Control under Auditory Feedback: Effector Differences and Audiomotor Memory. Percept Mot Skills 2012; 114:915-935. DOI: 10.2466/24.25.27.pms.114.3.915-935.
Abstract
Audiomotor integration is a basic form of sensorimotor control for regulating vocal pitch and loudness, but its contribution to general motor control has been studied only minimally. In this paper, auditory feedback for prolonged force control was investigated by comparing manual and oral force generation and by testing short-term audiomotor memory for these effectors. Ten healthy volunteers between the ages of 20 and 30 were recruited. The participants produced continuous force for 30 sec. with the lip or finger to match auditory targets. In the feedback condition, in which auditory feedback was provided for the full 30 sec., lip force was more variable than finger force. In the memory condition, the force output of both effectors remained stable for approximately 4 sec. after feedback removal, followed by significant decay. A longer short-term memory capacity could facilitate the encoding of motor memories for tasks with acoustic goals. The results demonstrate that audiomotor integration is effective for sustaining forces, and that audiomotor force memory is comparable to reports of visuomotor force memory.
Affiliation(s)
- Torrey M. J. Loucks
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign
- Edward Ofori
- Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign
- Jacob J. Sosnoff
- Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign
16. Cheng DT, De Grosbois J, Smirl J, Heath M, Binsted G. Preceding movement effects on sequential aiming. Exp Brain Res 2011; 215:1-11. PMID: 21947132; DOI: 10.1007/s00221-011-2862-1.
Abstract
In this study, two experiments were devised to examine the control strategy used by individuals when performing sequential aiming movements. Of particular interest was the aiming behavior displayed when task difficulty was changed midway through a sequence of movements. In Experiment 1, target size was manipulated: the targets were made either larger or smaller between the 8th and 12th movement of the sequence. In Experiment 2, the amplitude between the two targets was similarly changed while target size remained constant. Results revealed that in Experiment 1, individuals took two movements following the perturbation of target size to re-tune their movement times to the new task difficulty. Conversely, in Experiment 2, movement time changed immediately and in correspondence with the new target amplitude. These findings demonstrate that participants can use information from the preceding movement to prepare and guide subsequent movements, but only when target size is changed. When response amplitude changes mid-sequence, individuals seem to rely more on immediate, target-derived information. Therefore, counter to some current accounts of visual movement control, it appears that memory representations of the preceding movement can guide subsequent movements; however, this information appears to be selectively accessed in a context-dependent fashion.
Affiliation(s)
- Darian T Cheng
- University of British Columbia, FIN323-3333 University Way, Kelowna, BC V1V 1V7, Canada
17. Skewes JC, Roepstorff A, Frith CD. How do illusions constrain goal-directed movement: perceptual and visuomotor influences on speed/accuracy trade-off. Exp Brain Res 2011; 209:247-255. PMID: 21267551; DOI: 10.1007/s00221-011-2542-1.
Abstract
Recent research shows that visual processing influences the speed/accuracy trade-off people use when performing goal-directed movement. This raises the question of how this influence is produced in visual cognition. Visual influences on speed/accuracy trade-off could be produced in conscious visual perception, in non-conscious visuomotor transformation, or by some interaction of conscious perceptual and non-conscious visuomotor processes. There is independent evidence showing that both perceptual and visuomotor processes are involved in trading off speed and accuracy; however, the interaction between these processes has yet to be investigated. We present an experiment in which we show that a change in visual consciousness induced by a perceptual illusion affects the speed and accuracy of goal-directed movements, suggesting that perceptual and visuomotor processes do interact in speed/accuracy trade-off. We discuss the consequences of these results for theories of visual function more generally.
Affiliation(s)
- Joshua C Skewes
- Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Nørrebrogade 44, Bygning 10G, 5. Sal, 8000 Århus C, Denmark.
18. Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011; 49:49-60. DOI: 10.1016/j.neuropsychologia.2010.10.031.
19. Hesse C, Franz VH. Grasping remembered objects: Exponential decay of the visual memory. Vision Res 2010; 50:2642-2650. DOI: 10.1016/j.visres.2010.07.026.
20. Hesse C, Franz VH. Memory mechanisms in grasping. Neuropsychologia 2009; 47:1532-1545. DOI: 10.1016/j.neuropsychologia.2008.08.012.
21. Updating the programming of a precision grip is a function of recent history of available feedback. Exp Brain Res 2009; 194:619-629. DOI: 10.1007/s00221-009-1737-1.
22. Brouwer AM, Knill DC. Humans use visual and remembered information about object location to plan pointing movements. J Vis 2009; 9:24.1-19. PMID: 19271894; DOI: 10.1167/9.1.24.
Abstract
We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to a target. In some trials, the target shifted unnoticed while the finger made the first movement. We regressed subjects' movement trajectories against the initial and shifted target locations to infer the weights that subjects gave to remembered and visual locations. We measured the reliability of vision and memory by adding conditions in which the target only appeared after subjects made the first movement (vision only) and in which the target was initially present but disappeared during the first movement (memory only). When both visual and remembered information were available, movement trajectories were biased to the remembered target location. The different weights that subjects gave to memory and visual information on average matched the weights predicted by the variance associated with the use of vision and memory alone. This suggests that humans integrate remembered information about object locations with peripheral visual information by taking into account the relative reliability of the two sources of information.
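The integration rule described in this abstract is the standard reliability-weighted (maximum-likelihood) cue-combination scheme: each cue is weighted in inverse proportion to its variance. A minimal sketch of that rule, not the paper's own regression analysis; the example values are illustrative:

```python
import numpy as np

def reliability_weights(sigma_vis, sigma_mem):
    """Variance-minimizing weights for combining a visual and a remembered
    estimate of target location; reliability is the inverse variance."""
    r_vis, r_mem = 1 / sigma_vis**2, 1 / sigma_mem**2
    w_vis = r_vis / (r_vis + r_mem)
    return w_vis, 1 - w_vis

def combined_estimate(x_vis, sigma_vis, x_mem, sigma_mem):
    """Reliability-weighted average of the two cues, plus the variance of
    the combined estimate (always below either single-cue variance)."""
    w_vis, w_mem = reliability_weights(sigma_vis, sigma_mem)
    x_hat = w_vis * x_vis + w_mem * x_mem
    sigma_hat = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_mem**2))
    return x_hat, sigma_hat
```

Measuring sigma in vision-only and memory-only conditions, as the authors did, is what allows the predicted weights to be compared against the weights inferred from the regression of movement trajectories.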
23. Rolheiser TM, Binsted G, Brownell KJ. Visuomotor representation decay: influence on motor systems. Exp Brain Res 2006; 173:698-707. PMID: 16676170; DOI: 10.1007/s00221-006-0453-3.
Abstract
The contribution of ventral stream information to the variability of movement has been the focus of much attention and has provided researchers with conflicting results. These results have been obtained through the use of discrete pointing movements and, as such, do not offer any explanation of how ventral stream information contributes to movement variability over time. The present study examined the contribution of ventral stream information to movement variability in three tasks: hand-only movement, eye-only movement, and an eye-hand coordinated task. Participants performed a continuous reciprocal tapping task to two point-of-light targets for 10 s. The targets were visible for the first 5 s, at which point vision of the targets was removed. Movement variability was similar in all conditions for the initial 5-s interval. The no-vision condition (final 5 s) can be summarized as follows: ventral stream information contributed to an initial significant increase in variability across motor systems, though the different motor systems preserved the integrity of ventral information to different degrees. These results can be attributed to the behavioral and cortical networks that underlie the saccadic and manual motor systems.
Affiliation(s)
- Tyler M Rolheiser
- College of Kinesiology, University of Saskatchewan, Saskatoon, Canada.