1
Musa L, Yan X, Crawford JD. Instruction alters the influence of allocentric landmarks in a reach task. J Vis 2024;24(7):17. [PMID: 39073800; PMCID: PMC11290568; DOI: 10.1167/jov.24.7.17]
Abstract
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay, the landmark reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate, more precise, and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, with overall performance worse in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.
Affiliation(s)
- Lina Musa
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada
2
Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024;132:147-161. [PMID: 38836297; DOI: 10.1152/jn.00362.2023]
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
3
Ghasemi F, Harris LR, Jörges B. Simulated eye height impacts size perception differently depending on real-world posture. Sci Rep 2023;13:20075. [PMID: 37974023; PMCID: PMC10654384; DOI: 10.1038/s41598-023-47364-6]
Abstract
Changes in perceived eye height influence visually perceived object size in both the real world and in virtual reality. In virtual reality, conflicts can arise between the eye height in the real world and the eye height simulated in a VR application. We hypothesized that participants would be influenced more by variation in simulated eye height when they had a clear expectation about their eye height in the real world, such as when sitting or standing, and less so when they did not have a clear estimate of the distance between their eyes and the real-life ground plane, e.g., when lying supine. Using virtual reality, 40 participants compared the height of a red square simulated at three different distances (6, 12, and 18 m) against the length of a physical stick (38.1 cm) held in their hands. They completed this task in all combinations of four real-life postures (supine, sitting, standing, standing on a table) and three simulated eye heights that corresponded to each participant's real-world eye height (on average, 123 cm sitting, 161 cm standing, and 201 cm standing on the table). Confirming previous results, the square's perceived size varied inversely with simulated eye height. Variations in simulated eye height affected participants' perception of size significantly more when sitting than in the other postures (supine, standing, standing on a table). This shows that real-life posture can influence the perception of size in VR. However, since simulated eye height did not affect size estimates less in the supine than in the standing position, our hypothesis that humans would be more influenced by variations in eye height when they had a reliable estimate of the distance between their eyes and the ground plane in the real world was not fully confirmed.
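The inverse relationship reported here follows from standard ground-plane geometry, sketched below for orientation (this is the textbook eye-height account, not the authors' own model). If the base of an object at distance d lies at angular declination α below the horizon, an observer who scales that declination by an assumed eye height recovers distance, and hence size:

$$
\tan\alpha = \frac{h_{\mathrm{sim}}}{d}, \qquad
\hat{d} = \frac{h_{\mathrm{assumed}}}{\tan\alpha} = d\,\frac{h_{\mathrm{assumed}}}{h_{\mathrm{sim}}}, \qquad
\hat{s} \approx \hat{d}\,\tan\beta = s\,\frac{h_{\mathrm{assumed}}}{h_{\mathrm{sim}}},
$$

where β is the object's angular size. As long as the assumed eye height stays anchored to real-world posture, perceived size scales inversely with simulated eye height, as observed.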
Affiliation(s)
- Fatemeh Ghasemi
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Laurence R Harris
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada.
- Björn Jörges
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
4
Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023;6:938. [PMID: 37704829; PMCID: PMC10499799; DOI: 10.1038/s42003-023-05291-2]
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields were characterized by recording neural responses to various target-landmark combinations and then testing them against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada.
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada.
5
The Effects of Different Kinds of Smooth Pursuit Exercises on Center of Pressure and Muscle Activities during One Leg Standing. Healthcare (Basel) 2022;10(12):2498. [PMID: 36554022; PMCID: PMC9777704; DOI: 10.3390/healthcare10122498]
Abstract
This study examined the effects of gaze fixation and different kinds of smooth-pursuit eye movements on trunk and lower-extremity muscle activity and the center of pressure (COP). METHODS: Twenty-four subjects were selected for the study. The activity of trunk and lower-limb muscles (tibialis anterior, lateral gastrocnemius, medial gastrocnemius, vastus medialis obliquus, vastus lateralis, biceps femoris, rectus abdominis, and erector spinae) and COP measures (ellipse surface area, length, and average speed) were recorded to observe the effects of gaze fixation and different kinds of smooth-pursuit eye movements during one-leg standing. Before the experiment, a Gazepoint GP3 HD eye tracker (Gazepoint, Vancouver, BC, Canada) was used to train eye movements so that the subjects were familiar with smooth-pursuit eye movement. Each exercise was repeated three times, and the order of exercises was randomized to avoid sequence effects caused by fatigue. RESULTS: The center of pressure and muscle activities increased significantly during smooth-pursuit eye movement with one-leg standing compared with gaze fixation with one-leg standing. Among the smooth-pursuit conditions, the center of pressure and muscle activities increased significantly when pursuit involved both eye and head movement; when the head and eyes moved in opposite directions, the center of pressure and muscle activities increased more than in any other exercise. CONCLUSION: Smooth-pursuit eye movement during one-leg standing affects balance. In particular, balance demands were highest when the eyes and head moved in opposite directions. This exercise can therefore be recommended to people who need to enhance their balance ability.
6
Browne CJ, Fahey P, Sheeba SR, Sharpe MH, Rosner M, Feinberg D, Mucci V. Visual disorders and mal de debarquement syndrome: a potential comorbidity questionnaire-based study. Future Sci OA 2022;8:FSO813. [PMID: 36248065; PMCID: PMC9540399; DOI: 10.2144/fsoa-2022-0022]
Abstract
Aim: Mal de debarquement syndrome (MdDS) is a neurological condition characterized by a constant sensation of self-motion; onset may be motion-triggered (MT) or non-motion-triggered/spontaneous (NMT/SO). People with MdDS experience similar symptoms to those with vertical heterophoria, a subset of binocular visual dysfunction. Hence, we aimed to explore potential visual symptom overlaps. Methods: MdDS patients (n = 196) and controls (n = 197) completed a visual health questionnaire. Results: Compared with controls, the MdDS group demonstrated higher visual disorder scores and more visual complaints. NMT/SO participants reported unique visual symptoms and a higher prevalence of mild traumatic brain injury. Conclusion: Our findings suggest visual disorders may coexist with MdDS, particularly the NMT/SO subtype. The difference in visual dysfunction frequency and medical histories between subtypes warrants further investigation into differing pathophysiological mechanisms.
Affiliation(s)
- Cherylea J Browne
- School of Science, Western Sydney University, Sydney, NSW 2560, Australia
- Translational Neuroscience Facility (TNF), School of Medical Sciences, UNSW Sydney, NSW, 2033, Australia
- Brain Stimulation and Rehabilitation (BrainStAR) Lab, School of Health Sciences, Western Sydney University, Sydney, NSW, 2560, Australia
- Paul Fahey
- School of Health Sciences, Western Sydney University, Sydney, NSW, 2560, Australia
- Stella R Sheeba
- School of Science, Western Sydney University, Sydney, NSW 2560, Australia
- Brain Stimulation and Rehabilitation (BrainStAR) Lab, School of Health Sciences, Western Sydney University, Sydney, NSW, 2560, Australia
- Margie H Sharpe
- Dizziness & Balance Disorders Center, Adelaide, SA, 5000, Australia
- Mark Rosner
- NeuroVisual Medicine Institute, Bloomfield Hills, MI 48302, USA
- Debby Feinberg
- NeuroVisual Medicine Institute, Bloomfield Hills, MI 48302, USA
- Viviana Mucci
- School of Science, Western Sydney University, Sydney, NSW 2560, Australia
7
Backward and forward neck tilt affects perceptual bias when interpreting ambiguous figures. Sci Rep 2022;12:7276. [PMID: 35508496; PMCID: PMC9068752; DOI: 10.1038/s41598-022-10985-4]
Abstract
The relationships between posture and perception have been investigated in several studies. However, it remains unclear how perceptual bias and the experiential contexts of human perception affect observers' percepts when posture changes. In this study, we hypothesized that changes in perceptual probability caused by perceptual bias also depend on posture. To test this hypothesis, we used the Necker cube, which can appear to be viewed either from above or from below even though the visual input is constant, and investigated changes in the probability of each percept. Specifically, we asked observers to report the appearance of a Necker cube placed at one of five angles in a virtual reality environment. There were two patterns of neck movement, vertical and horizontal. During the experiment, pupil diameter, a cognitive index, was also measured. Results showed that when looking down vertically, the probability of the viewing-from-above perception of the Necker cube was significantly greater than when looking up. Interestingly, the pupillary results were also consistent with the probability of the perception. These results indicate that perception was modulated by the posture of the neck and suggest that neck posture is incorporated into ecological constraints.
8
Defocus curves: Focusing on factors influencing assessment. J Cataract Refract Surg 2022;48:961-968. [PMID: 35137697; DOI: 10.1097/j.jcrs.0000000000000906]
Abstract
Defocus curve assessment is used to emulate defocus over a range of distances and is a valuable tool that is used to differentiate the performance of presbyopia-correcting intraocular lenses. However, defocus curves are limited by a lack of standardization, and multiple factors can impact their generation and interpretation. This review discusses key factors that influence the assessment of defocus curves, including pupil size, level of contrast, sphere versus cylinder defocus, viewing distance, monocular versus binocular assessment, use of Snellen versus logarithm of the minimum angle of resolution charts, and diopter range and step size. There are also different methods to analyze defocus curves, including the direct comparison method, range-of-focus analysis, and area under the curve analysis, which can impact result interpretation. A good understanding of these factors and standardization of the methodology are important to ensure optimal cross-study comparisons.
9
Murdison TS, Blohm G, Bremmer F. Saccade-induced changes in ocular torsion reveal predictive orientation perception. J Vis 2019;19(11):10. [PMID: 31533148; DOI: 10.1167/19.11.10]
Abstract
Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation) but in natural circumstances, both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Germany
10
Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019;122:1946-1961. [PMID: 31533015; DOI: 10.1152/jn.00072.2019]
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target.

NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
11
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Misperception of motion in depth originates from an incomplete transformation of retinal signals. J Vis 2019;19(12):21. [PMID: 31647515; DOI: 10.1167/19.12.21]
Abstract
Depth perception requires the use of an internal model of the eye-head geometry to infer distance from binocular retinal images and extraretinal 3D eye-head information, particularly ocular vergence. Similarly, for motion in depth perception, gaze angle is required to correctly interpret the spatial direction of motion from retinal images; however, it is unknown whether the brain can make adequate use of extraretinal version and vergence information to correctly transform binocular retinal motion into 3D spatial coordinates. Here we tested this hypothesis by asking participants to reconstruct the spatial trajectory of an isolated disparity stimulus moving in depth either peri-foveally or peripherally while participants' gaze was oriented at different vergence and version angles. We found large systematic errors in the perceived motion trajectory that reflected an intermediate reference frame between a purely retinal interpretation of binocular retinal motion (not accounting for veridical vergence and version) and the spatially correct motion. We quantify these errors with a 3D reference frame model accounting for target, eye, and head position upon motion percept encoding. This model could capture the behavior well, revealing that participants tended to underestimate their version by up to 17%, overestimate their vergence by up to 22%, and underestimate the overall change in retinal disparity by up to 64%, and that the use of extraretinal information depended on retinal eccentricity. Since such large perceptual errors are not observed in everyday viewing, we suggest that both monocular retinal cues and binocular extraretinal signals are required for accurate real-world motion in depth perception.
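The reported misestimates can be summarized compactly. Writing version as γ, vergence as μ, and the change in retinal disparity as Δδ (our notation, not the paper's), the fitted behavior corresponds to reconstructing the trajectory from distorted signals of roughly

$$
\hat{\gamma} = g_{\gamma}\,\gamma,\qquad
\hat{\mu} = g_{\mu}\,\mu,\qquad
\Delta\hat{\delta} = g_{\delta}\,\Delta\delta,
\qquad g_{\gamma} \gtrsim 0.83,\;\; g_{\mu} \lesssim 1.22,\;\; g_{\delta} \gtrsim 0.36,
$$

with gains varying between these bounds and unity depending on retinal eccentricity; the perceived trajectory is then the spatial reconstruction computed from these distorted extraretinal and retinal inputs.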
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Guillaume Leclercq
- ICTEAM and Institute for Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM and Institute for Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
12
On the role of ocular torsion in binocular visual matching. Sci Rep 2018;8:10666. [PMID: 30006553; PMCID: PMC6045635; DOI: 10.1038/s41598-018-28513-8]
Abstract
When an observer scans the visual surround, the images cast on the two retinae are slightly different due to the different viewpoints of the two eyes. Objects in the horizontal plane of regard can be seen single by aligning the lines of sight without changing the torsional stance of the eyes. Due to the peculiar ocular kinematics this is not possible for objects above or below the horizontal plane of regard. We provide evidence that binocular fusion can be achieved independently of viewing direction by adjusting the mutual torsional orientation of the eyes in the frontal plane. We characterize the fusion positions of the eyes across the oculomotor range by deriving simple trigonometric equations for the required torsion as a function of gaze direction and compute the iso-torsion contours yielding binocular fusion. Finally, we provide experimental evidence that eye positions in far-to-near re-fixation saccades indeed converge towards the predicted positions by adjusting the torsion of the eyes. This is the first report that describes the three-dimensional orientation of the eyes at binocular fusion positions based on the three-dimensional ocular kinematics. It closes a gap between the sensory and the motor side of binocular vision and stereoscopy.
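The paper's trigonometric derivation is not reproduced in this abstract, but the flavor of the required relation can be illustrated with the classic false-torsion formula (given here for orientation only, not as the authors' result): for an eye obeying Listing's law, gazing at azimuth θ and elevation φ tilts the retinal vertical meridian by a torsional angle ψ with

$$
\tan\frac{\psi}{2} = \tan\frac{\theta}{2}\,\tan\frac{\phi}{2}
\quad\Longrightarrow\quad
\psi \approx \frac{\theta\,\phi}{2} \;\text{ for small angles.}
$$

Because the two eyes view an off-midline target at different azimuths, their meridians tilt by different amounts, so fusing targets above or below the horizontal plane of regard requires adjusting the mutual torsional orientation of the eyes, which is the adjustment the study measures.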
13
Harris LR, Carnevale MJ, D’Amour S, Fraser LE, Harrar V, Hoover AEN, Mander C, Pritchett LM. How our body influences our perception of the world. Front Psychol 2015;6:819. [PMID: 26124739; PMCID: PMC4464078; DOI: 10.3389/fpsyg.2015.00819]
Abstract
Incorporating the fact that the senses are embodied is necessary for an organism to interpret sensory information. Before a unified perception of the world can be formed, sensory signals must be processed with reference to body representation. The various attributes of the body such as shape, proportion, posture, and movement can be both derived from the various sensory systems and can affect perception of the world (including the body itself). In this review we examine the relationships between sensory and motor information, body representations, and perceptions of the world and the body. We provide several examples of how the body affects perception (including but not limited to body perception). First, we show that body orientation affects visual distance perception and object orientation. Also, visual-auditory crossmodal correspondences depend on the orientation of the body: audio "high" frequencies correspond to a visual "up" defined by both gravity and body coordinates. Next, we show that the perceived location of touch is affected by the orientation of the head and eyes on the body, suggesting a visual component to coding body locations. Additionally, the reference frame used for coding touch locations seems to depend on whether gaze is static or moved relative to the body during the tactile task. Perceived attributes of the body, such as body size, affect tactile perception even at the level of detection thresholds and two-point discrimination. Next, long-range tactile masking provides clues to the posture of the body in a canonical body schema. Finally, ownership of seen body parts depends on the orientation and perspective of the body part in view. Together, all of these findings demonstrate how sensory and motor information, body representations, and perceptions (of the body and the world) are interdependent.
Affiliation(s)
- Laurence R. Harris
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Michael J. Carnevale
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Sarah D’Amour
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Lindsey E. Fraser
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Vanessa Harrar
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Adria E. N. Hoover
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Charles Mander
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Lisa M. Pritchett
- Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
14
Daemi M, Crawford JD. A kinematic model for 3-D head-free gaze-shifts. Front Comput Neurosci 2015;9:72. [PMID: 26113816; PMCID: PMC4461827; DOI: 10.3389/fncom.2015.00072]
Abstract
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
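One building block of such a model, the zero-torsion (Listing) eye orientation for a desired gaze direction, is easy to state in quaternion form. The sketch below is a minimal illustration under our own conventions (scalar-first quaternions, primary position along z), not code from the model itself:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (scalar-first) for a rotation of `angle` rad about `axis`."""
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def listing_eye_orientation(gaze_dir, primary=np.array([0.0, 0.0, 1.0])):
    """Rotation taking the primary direction to `gaze_dir` whose axis lies in
    Listing's plane (the plane orthogonal to the primary direction), i.e.,
    an eye orientation with zero torsion."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    axis = np.cross(primary, gaze_dir)                 # lies in Listing's plane
    if np.linalg.norm(axis) < 1e-12:                   # already at primary position
        return np.array([1.0, 0.0, 0.0, 0.0])
    angle = np.arccos(np.clip(primary @ gaze_dir, -1.0, 1.0))
    return quat_from_axis_angle(axis, angle)

# Example: gaze 20 deg right and 10 deg up; the torsional (z) component stays 0
gaze = np.array([np.tan(np.radians(20)), np.tan(np.radians(10)), 1.0])
print(listing_eye_orientation(gaze))
```

The full model goes further, decomposing the 2-D gaze command into separate eye and head components, with the head following Fick-like constraints rather than Listing's plane.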
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; Brain in Action NSERC CREATE/DFG IRTG Program, Canada/Germany
15
Nakashima R, Shioiri S. Why do we move our head to look at an object in our peripheral region? Lateral viewing interferes with attentive search. PLoS One 2014;9:e92284. [PMID: 24647634; PMCID: PMC3960241; DOI: 10.1371/journal.pone.0092284]
Abstract
Why do we frequently fixate an object of interest presented peripherally by moving our head as well as our eyes, even when we are capable of fixating the object with an eye movement alone (lateral viewing)? Studies of eye-head coordination for gaze shifts have suggested that the degree of eye-head coupling could be determined by an unconscious weighing of the motor costs and benefits of executing a head movement. The present study investigated visual perceptual effects of head direction as an additional factor impacting on a cost-benefit organization of eye-head control. Three experiments using visual search tasks were conducted, manipulating eye direction relative to head orientation (front or lateral viewing). Results show that lateral viewing increased the time required to detect a target in a search for the letter T among letter L distractors, a serial attentive search task, but not in a search for T among letter O distractors, a parallel preattentive search task (Experiment 1). The interference could not be attributed either to a deleterious effect of lateral gaze on the accuracy of saccadic eye movements or to potentially problematic optical effects of binocular lateral viewing, because the effect of head direction was obtained under conditions in which the task was accomplished without saccades (Experiment 2) and during monocular viewing (Experiment 3). These results suggest that a difference between the head and eye directions interferes with visual processing, and that the interference can be explained by the modulation of attention by the relative positions of the eyes and head (or head direction).
Affiliation(s)
- Ryoichi Nakashima
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Core Research for Evolutional Science and Technology (CREST), Japan Science and Technology Agency, Tokyo, Japan
- Satoshi Shioiri
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Core Research for Evolutional Science and Technology (CREST), Japan Science and Technology Agency, Tokyo, Japan
16
Leclercq G, Lefèvre P, Blohm G. 3D kinematics using dual quaternions: theory and applications in neuroscience. Front Behav Neurosci 2013;7:7. [PMID: 23443667; PMCID: PMC3576712; DOI: 10.3389/fnbeh.2013.00007]
Abstract
In behavioral neuroscience, many experiments are designed in one or two spatial dimensions, but when scientists tackle problems in three dimensions (3D), they often face new challenges: results obtained in lower dimensions are not always extendable to 3D. In motor planning of eye, gaze or arm movements, or in sensorimotor transformation problems, the 3D kinematics of external objects (stimuli) or internal ones (body parts) must often be considered: how should the 3D position and orientation of these objects be described and linked together? We describe how dual quaternions provide a convenient way to describe 3D kinematics for position only (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions, or combinations of these. We also derive expressions for the velocities of points and lines as well as the transformation velocities. We then apply these tools to a motor planning task for manual tracking and to the modeling of forward and inverse kinematics of a seven-dof three-link arm to show the value of dual quaternions as a tool for building models in these kinds of applications.
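As a concrete illustration of the abstract's central tool, the sketch below packs a rotation q and translation t into a dual quaternion σ = q + ε(t q)/2, composes rigid motions by dual-quaternion multiplication, and transforms a point. It is a toy version under our own conventions (scalar-first quaternions), not the authors' toolbox:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of scalar-first quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    """Quaternion conjugate."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

class DualQuaternion:
    """Rigid motion: real part = rotation q, dual part = 0.5 * t * q."""
    def __init__(self, real, dual):
        self.real, self.dual = real, dual

    @classmethod
    def from_rot_trans(cls, q, t):
        return cls(q, 0.5 * qmul(np.concatenate([[0.0], t]), q))

    def __mul__(self, other):
        # (a + eps*b)(c + eps*d) = ac + eps*(ad + bc), since eps^2 = 0
        return DualQuaternion(qmul(self.real, other.real),
                              qmul(self.real, other.dual) + qmul(self.dual, other.real))

    def transform_point(self, p):
        """Apply the screw motion: p -> R p + t."""
        rotated = qmul(qmul(self.real, np.concatenate([[0.0], p])), qconj(self.real))[1:]
        t = 2.0 * qmul(self.dual, qconj(self.real))[1:]
        return rotated + t

# 90 deg rotation about z followed by a 1-unit translation along x,
# composed into a single screw motion (apply `rot` first, then `shift`)
rot = DualQuaternion.from_rot_trans(np.array([np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)]),
                                    np.zeros(3))
shift = DualQuaternion.from_rot_trans(np.array([1.0, 0, 0, 0]), np.array([1.0, 0, 0]))
screw = shift * rot
print(screw.transform_point(np.array([1.0, 0.0, 0.0])))  # -> approx [1, 1, 0]
```

Composing two motions is a single dual-quaternion product, which is what makes the formalism convenient for chaining eye, head, and arm transformations.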
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
17
Farshadmanesh F, Byrne P, Wang H, Corneil BD, Crawford JD. Relationships between neck muscle electromyography and three-dimensional head kinematics during centrally induced torsional head perturbations. J Neurophysiol 2012;108:2867-83. [PMID: 22956790; DOI: 10.1152/jn.00312.2012]
Abstract
The relationship between neck muscle electromyography (EMG) and torsional head rotation (about the nasooccipital axis) is difficult to assess during normal gaze behaviors with the head upright. Here, we induced acute head tilts similar to cervical dystonia (torticollis) in two monkeys by electrically stimulating 20 interstitial nucleus of Cajal (INC) sites or inactivating 19 INC sites by injection of muscimol. Animals engaged in a simple gaze fixation task while we recorded three-dimensional head kinematics and intramuscular EMG from six bilateral neck muscle pairs. We used a cross-validation-based stepwise regression to quantitatively examine the relationships between neck EMG and torsional head kinematics under three conditions: 1) unilateral INC stimulation (where the head rotated torsionally toward the side of stimulation); 2) corrective poststimulation movements (where the head returned toward upright); and 3) unilateral INC inactivation (where the head tilted toward the opposite side of inactivation). Our cross-validated results of corrective movements were slightly better than those obtained during unperturbed gaze movements and showed many more torsional terms, mostly related to velocity, although some orientation and acceleration terms were retained. In addition, several simplifying principles were identified. First, bilateral muscle pairs showed similar but opposite EMG-torsional coupling terms, i.e., a change in torsional kinematics was associated with increased muscle activity on one side and decreased activity on the other side. Second, whenever torsional terms were retained in a given muscle, they were independent of the inputs we tested, i.e., INC stimulation vs. corrective motion vs. INC inactivation, and left vs. right INC data. These findings suggest that, despite the complexity of the head-neck system, the brain can use a single, bilaterally coupled inverse model for torsional head control that is valid across different behaviors and movement directions. Combined with our previous data, these new data provide the terms for a more complete three-dimensional model of EMG-head rotation coupling for the muscles and gaze behaviors that we recorded.
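The cross-validation-based stepwise regression is described only in outline here; the sketch below shows the general recipe in Python with illustrative names (it is not the authors' analysis code): candidate kinematic terms are added greedily only while they reduce held-out prediction error.

```python
import numpy as np

def heldout_sse(X, y, k=5, seed=0):
    """k-fold cross-validated sum of squared errors for a least-squares fit."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    sse = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        sse += np.sum((y[test] - X[test] @ beta) ** 2)
    return sse

def forward_stepwise(candidates, names, y):
    """Greedily add kinematic regressors (e.g., torsional orientation, velocity,
    and acceleration terms) while they improve cross-validated prediction of EMG."""
    chosen, X = [], np.ones((len(y), 1))          # start from an intercept-only model
    best = heldout_sse(X, y)
    improved = True
    while improved:
        improved = False
        for i, name in enumerate(names):
            if name in chosen:
                continue
            sse = heldout_sse(np.column_stack([X, candidates[:, i]]), y)
            if sse < best:                         # track the single best addition
                best, pick, improved = sse, (name, i), True
        if improved:
            chosen.append(pick[0])
            X = np.column_stack([X, candidates[:, pick[1]]])
    return chosen
```

Terms that survive this procedure are, by construction, ones that generalize to held-out data rather than merely fitting noise, which is what licenses the paper's interpretation of the retained torsional terms.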
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
18
Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012;7:e41241. [PMID: 22815979; PMCID: PMC3397995; DOI: 10.1371/journal.pone.0041241]
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
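The architecture, as described, maps onto a few lines of code. The sketch below shows the information flow only; the layer sizes, input encodings, and absence of a training loop are placeholders, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def reach_depth(params, visual_pop, eye, head, vergence):
    """4-layer feed-forward sketch: distributed 3D visual inputs plus eye, head,
    and vergence signals (layer 1) -> hidden units (layer 2) -> motor-plan
    population (layer 3) -> linear read-out of reach depth (layer 4)."""
    W1, W2, w_out = params
    x = np.concatenate([visual_pop, eye, head, vergence])
    hidden = np.tanh(W1 @ x)
    population = np.tanh(W2 @ hidden)
    return w_out @ population

# Placeholder sizes: 100 visual units, 3 eye, 3 head, 1 vergence signal
params = (rng.normal(0, 0.1, (50, 107)),
          rng.normal(0, 0.1, (30, 50)),
          rng.normal(0, 0.1, 30))
depth = reach_depth(params, rng.normal(size=100),
                    np.zeros(3), np.zeros(3), np.array([0.05]))
```

In the study, the read-out weights play the role of the optimal linear estimator, and the gain-modulation effects reported in the abstract are found by analyzing the trained hidden and population layers, not built in by hand.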
19
Priot AE, Neveu P, Sillan O, Plantier J, Roumes C, Prablanc C. How perceived egocentric distance varies with changes in tonic vergence. Exp Brain Res 2012;219:457-65. [PMID: 22623089; DOI: 10.1007/s00221-012-3097-5]
Abstract
According to the eye muscle potentiation (EMP) hypothesis, sustained vergence leads to changes in egocentric perceived distance. This perceptual effect has been attributed to a change in the resting or tonic state of vergence. The goal of the present study was to test the EMP hypothesis by quantifying the relationship between prism-induced changes in tonic vergence and corresponding changes in perceived distance and by measuring the dynamics of changes in perceived distance. During a 10-min exposure to 5-diopter base-out prisms that increased the vergence demand, thirteen right-handed subjects pointed to visual targets located within reaching space using their left hand, without visual feedback. Pre- and post-exposure tests assessed tonic vergence through phoria measurements and egocentric distance estimate through pointing to visual targets with each hand successively, without visual feedback. Similar distance aftereffects were observed for both hands, although only the left hand was used during exposure, indicating that these aftereffects are mediated by visual processes rather than by visuomotor interactions. The distance aftereffects were significantly correlated with prism-induced changes in phoria, demonstrating a relationship between perceived distance and the level of tonic vergence. Changes in perceived distance increased monotonically across trials during prism exposure and remained stable during the post-test, indicating a long time constant for these perceptual effects, consistent with current models of the vergence control system. Overall, these results support the hypothesis that vergence plays a role in reduced-cue distance perception. They further illustrate that variations in tonic vergence influence perceived distance by altering the sensed vergence effort.
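The link between vergence and perceived distance that this study exploits is purely geometric (standard viewing geometry, not the paper's model). For interocular separation a and a symmetrically fixated target at distance d:

$$
\mu = 2\arctan\!\left(\frac{a}{2d}\right) \approx \frac{a}{d}
\quad\Longrightarrow\quad
\hat{d} = \frac{a}{2\tan(\hat{\mu}/2)} \approx \frac{a}{\hat{\mu}},
$$

so any adaptation that biases the sensed vergence effort upward (here, sustained convergence through base-out prisms shifting tonic vergence) makes the same target appear proportionally nearer, which is the aftereffect measured by the pointing task.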
Affiliation(s)
- Anne-Emmanuelle Priot
- Institut de recherche biomédicale des armées (IRBA), BP 73, 91223, Brétigny-sur-Orge cedex, France.
20
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2011;31:18313-26. [PMID: 22171035; DOI: 10.1523/jneurosci.0990-11.2011]
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 units) or were partially characterized from each of three initial fixation points (31 units). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
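The predictive sum-of-squares method is only named here; the sketch below illustrates the general idea in Python (the Gaussian-kernel response-field fit and all variable names are our assumptions, not the authors' implementation): fit the response field in each candidate coordinate frame and keep the frame whose held-out predictions are best.

```python
import numpy as np

def press_statistic(coords, rates, bandwidth=5.0):
    """Leave-one-out predictive sum of squares for a nonparametric response-field
    fit in one candidate coordinate frame (Gaussian-kernel smoother)."""
    sse = 0.0
    for i in range(len(rates)):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        w[i] = 0.0                                  # hold out trial i
        sse += (rates[i] - np.sum(w * rates) / np.sum(w)) ** 2
    return sse

# The same trials expressed in different candidate coordinates
# (e.g., target-in-eye vs. target-in-head); illustrative random data
rng = np.random.default_rng(1)
frames = {"target-in-eye": rng.normal(size=(200, 2)),
          "target-in-head": rng.normal(size=(200, 2))}
rates = rng.poisson(10, 200).astype(float)
best = min(frames, key=lambda f: press_statistic(frames[f], rates))
print("best-fitting frame:", best)
```

Because every frame uses the same spikes and only the coordinates of each trial change, the comparison isolates the reference frame itself rather than the shape of the receptive field.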
21
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011;34:309-31. [PMID: 21456958; DOI: 10.1146/annurev-neuro-061010-113749]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Department of Psychology, York University, Toronto, Ontario, Canada M3J 1P3
22
Blohm G, Lefèvre P. Visuomotor velocity transformations for smooth pursuit eye movements. J Neurophysiol 2010;104:2103-15. [PMID: 20719930; DOI: 10.1152/jn.00728.2009]
Abstract
Smooth pursuit eye movements are driven by retinal motion signals. These retinal motion signals are converted into motor commands that obey Listing's law (i.e., no accumulation of ocular torsion). The fact that smooth pursuit follows Listing's law is often taken as evidence that no explicit reference frame transformation between the retinal velocity input and the head-centered motor command is required. Such eye-position-dependent reference frame transformations between eye- and head-centered coordinates have been well-described for saccades to static targets. Here we suggest that such an eye (and head)-position-dependent reference frame transformation is also required for target motion (i.e., velocity) driving smooth pursuit eye movements. Therefore we tested smooth pursuit initiation under different three-dimensional eye positions and compared human performance to model simulations. We specifically tested if the ocular rotation axis changed with vertical eye position, if the misalignment of the spatial and retinal axes during oblique fixations was taken into account, and if ocular torsion (due to head roll) was compensated for. If no eye-position-dependent velocity transformation was used, the pursuit initiation should follow the retinal direction, independently of eye position; in contrast, a correct visuomotor velocity transformation would result in spatially correct pursuit initiation. Overall, subjects accounted for all three components of the visuomotor velocity transformation, but we did observe differences in the compensatory gains between individual subjects. We concluded that the brain does perform a visuomotor velocity transformation but that this transformation was prone to noise and inaccuracies of the internal model.
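The transformation at issue can be stated in one line (our notation, not the paper's). To drive pursuit in the correct spatial direction, retinal target velocity must be rotated through the current 3-D eye orientation:

$$
\dot{\mathbf{x}}_{\mathrm{head}} \;=\; R(q_{\mathrm{eye}})\,\dot{\mathbf{x}}_{\mathrm{retina}},
$$

where the rotation includes any ocular torsion induced by head roll. A controller that omits this step (setting the rotation to the identity) predicts pursuit initiation along the raw retinal direction at every eye position, which is the null hypothesis these experiments reject.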
Affiliation(s)
- Gunnar Blohm
- Centre for Neuroscience Studies, Department of Physiology and Faculty of Arts and Science, Queen's University, Kingston, Ontario, Canada
- Centre for Systems Engineering and Applied Mechanics and Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Philippe Lefèvre
- Centre for Systems Engineering and Applied Mechanics and Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
23
Priot AE, Laboissière R, Sillan O, Roumes C, Prablanc C. Adaptation of egocentric distance perception under telestereoscopic viewing within reaching space. Exp Brain Res 2010;202:825-36. [DOI: 10.1007/s00221-010-2188-4]