51 | Constantin AG, Wang H, Martinez-Trujillo JC, Crawford JD. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex. J Neurophysiol 2007; 98:696-709. [PMID: 17553952] [DOI: 10.1152/jn.00206.2007]
Abstract
Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 microA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2, with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), the gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze commands in eye-fixed coordinates but are also subtly modulated by initial gaze position.
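The frame comparison summarized above can be illustrated with a minimal sketch (Python; not the authors' analysis code): each stimulation-evoked gaze endpoint is re-expressed in a candidate frame by applying the inverse of that frame's initial orientation, and the winning frame is the one in which endpoints cluster most tightly. The rotation convention, the variance-based convergence score, and all data values below are assumptions for illustration.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the vertical axis (horizontal gaze rotation)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def to_frame(endpoints_space, frame_orientations):
    """Express space-fixed gaze endpoints in a moving frame (eye or head):
    apply the inverse of the frame's orientation to each endpoint vector."""
    return np.array([R.T @ g for R, g in zip(frame_orientations, endpoints_space)])

def convergence(endpoints):
    """Smaller total scatter about the mean = more convergent distribution."""
    return np.sum(np.var(endpoints, axis=0))

# Hypothetical data: one site stimulated from several initial gaze directions.
initial_gaze_deg = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
eye_R = [rot_z(d) for d in initial_gaze_deg]               # initial eye-in-space orientation
fixed_goal_eye = rot_z(8.0) @ np.array([1.0, 0.0, 0.0])    # same goal direction in eye coordinates
endpoints_space = np.array([R @ fixed_goal_eye for R in eye_R])

print("scatter in space coordinates:", convergence(endpoints_space))
print("scatter in eye coordinates:  ", convergence(to_frame(endpoints_space, eye_R)))
```

Because the synthetic endpoints are generated from a single eye-fixed goal, the scatter collapses in eye coordinates but not in space coordinates, which is the signature the study tested for.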
52 | Vesia M, Monteon JA, Sergio LE, Crawford JD. Hemispheric asymmetry in memory-guided pointing during single-pulse transcranial magnetic stimulation of human parietal cortex. J Neurophysiol 2006; 96:3016-27. [PMID: 17005619] [DOI: 10.1152/jn.00411.2006]
Abstract
Dorsal posterior parietal cortex (PPC) has been implicated in the spatial guidance of reaching and pointing movements through single-unit recordings, neuroimaging data, and studies of brain-damaged humans. The present study examines the causal effect of single-pulse transcranial magnetic stimulation (TMS) over the left and right dorsal posterior parietal cortex during a memory-guided "reach-to-touch" movement task in six human subjects. Stimulation of the left parietal hemisphere significantly increased endpoint variability, independent of visual field, with no horizontal bias. In contrast, right parietal stimulation did not increase variability, but instead produced a significant, systematic leftward directional shift in pointing (contralateral to the stimulation site) in both visual fields. Furthermore, the same lateralized pattern persisted with left-hand movement, suggesting that these aspects of parietal control of pointing movements are spatially fixed. To test whether the right parietal TMS shift occurs in visual or motor coordinates, we trained subjects to point correctly to optically reversed peripheral targets, viewed through a left-right Dove reversing prism. After prism adaptation, the horizontal pointing direction for a given visual target reversed, but the direction of shift during right parietal TMS did not reverse. Taken together, these data suggest that induction of a focal current reveals a hemispheric asymmetry in the early stages of the putative spatial processing in PPC. These results also suggest that a brief TMS pulse modifies the output of the right PPC in motor coordinates downstream from the adapted visuomotor reversal, rather than modifying the upstream visual coordinates of the memory representation.
53 | Ren L, Khan AZ, Blohm G, Henriques DYP, Sergio LE, Crawford JD. Proprioceptive guidance of saccades in eye-hand coordination. J Neurophysiol 2006; 96:1464-77. [PMID: 16707717] [DOI: 10.1152/jn.01012.2005]
Abstract
The saccade generator updates memorized target representations for saccades during eye and head movements. Here, we tested if proprioceptive feedback from the arm can also update handheld object locations for saccades, and what intrinsic coordinate system(s) is used in this transformation. We measured radial saccades beginning from a central light-emitting diode to 16 target locations arranged peripherally in eight directions and two eccentricities on a horizontal plane in front of subjects. Target locations were either indicated 1) by a visual flash, 2) by the subject actively moving the handheld central target to a peripheral location, 3) by the experimenter passively moving the subject's hand, or 4) through a combination of the above proprioceptive and visual stimuli. Saccade direction was relatively accurate, but subjects showed task-dependent systematic overshoots and variable errors in radial amplitude. Visually guided saccades showed the smallest overshoot, followed by saccades guided by both vision and proprioception, whereas proprioceptively guided saccades showed the largest overshoot. In most tasks, the overall distribution of saccade endpoints was shifted and expanded in a gaze- or head-centered cardinal coordinate system. However, the active proprioception task produced a tilted pattern of errors, apparently weighted toward a limb-centered coordinate system. This suggests the saccade generator receives an efference copy of the arm movement command but fails to compensate for the arm's inertia-related directional anisotropy. Thus the saccade system is able to transform hand-centered somatosensory signals into oculomotor coordinates and combine somatosensory signals with visual inputs, but it seems to have a poorly calibrated internal model of limb properties.
54 | Prime SL, Niemeier M, Crawford JD. Transsaccadic integration of visual features in a line intersection task. Exp Brain Res 2005; 169:532-48. [PMID: 16374631] [DOI: 10.1007/s00221-005-0164-1]
Abstract
Transsaccadic integration (TSI) refers to the perceptual integration of visual information collected across separate gaze fixations. Current theories of TSI disagree on whether it relies solely on visual algorithms or also uses extra-retinal signals. We designed a task in which subjects had to rely on internal oculomotor signals to synthesize remembered stimulus features presented within separate fixations. Using a mouse-controlled pointer, subjects estimated the intersection point of two successively presented bars, in the dark, under two conditions: Saccade task (bars viewed in separate fixations) and Fixation task (bars viewed in one fixation). Small, but systematic biases were observed in both intersection tasks, including position-dependent vertical undershoots and order-dependent horizontal biases. However, the magnitude of these errors was statistically indistinguishable in the Saccade and Fixation tasks. Moreover, part of the errors in the Saccade task were dependent on saccade metrics, showing that egocentric oculomotor signals were used to fuse remembered location and orientation features across saccades. We hypothesize that these extra-retinal signals are normally used to reduce the computational load of calculating visual correspondence between fixations. We further hypothesize that TSI may be implemented within dynamically updated recurrent feedback loops that interconnect a common eye-centered map in occipital cortex with both the "dorsal" and "ventral" streams of visual analysis.
55 | Khan AZ, Pisella L, Vighetto A, Cotton F, Luauté J, Boisson D, Salemme R, Crawford JD, Rossetti Y. Optic ataxia errors depend on remapped, not viewed, target location. Nat Neurosci 2005; 8:418-20. [PMID: 15768034] [DOI: 10.1038/nn1425]
Abstract
Optic ataxia is a disorder associated with posterior parietal lobe lesions, in which visually guided reaching errors typically occur for peripheral targets. It has been assumed that these errors are related to a faulty sensorimotor transformation of inputs from the 'ataxic visual field'. However, we show here that the errors observed in the contralesional field in optic ataxia depend on a dynamic gaze-centered internal representation of reach space.
56 | Prime SL, Niemeier M, Crawford JD. Trans-saccadic integration of the orientation and location features of linear objects. J Vis 2004. [DOI: 10.1167/4.8.742]
57 | Marotta JJ, Keith GP, Crawford JD. Is reversing prism adaptation global or modular? J Vis 2004. [DOI: 10.1167/4.8.290]
58 |
Abstract
Eye–hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye–hand visuomotor system incorporates closed-loop visual feedback, but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye–head–shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
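A minimal sketch of the kind of transformation described above, assuming a simplified planar geometry and made-up orientations: the desired displacement is first computed in a gaze-centered frame and must then be rotated by eye-in-head and head-on-body orientation before it can serve as a shoulder-centered motor command. The real eye-head-shoulder linkage also involves translations between centers of rotation, which are omitted here.

```python
import numpy as np

def rot_z(deg):
    """Rotation about the vertical axis; a stand-in for 3-D eye/head orientation."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical geometry (all values invented for illustration).
R_eye_in_head = rot_z(15.0)     # eye orientation relative to the head
R_head_in_body = rot_z(-10.0)   # head orientation relative to the body/shoulder

target_eye = np.array([0.3, 0.1, 0.0])   # target location in gaze-centered coordinates
hand_eye   = np.array([0.1, -0.2, 0.0])  # hand position expressed in the same frame

# The desired displacement is computed early, in the gaze-centered frame ...
motor_error_eye = target_eye - hand_eye

# ... but must be rotated by eye and head orientation before it can drive the arm.
motor_error_body = R_head_in_body @ (R_eye_in_head @ motor_error_eye)
print(motor_error_body)
```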
59 | Crawford JD, Martinez-Trujillo JC, Klier EM. Neural control of three-dimensional eye and head movements. Curr Opin Neurobiol 2004; 13:655-62. [PMID: 14662365] [DOI: 10.1016/j.conb.2003.10.009]
Abstract
Although the eyes and head can potentially rotate about any three-dimensional axis during orienting gaze shifts, behavioral recordings have shown that certain lawful strategies--such as Listing's law and Donders' law--determine which axis is used for a particular sensory input. Here, we review recent advances in understanding the neuromuscular mechanisms for these laws, the neural mechanisms that control three-dimensional head posture, and the neural mechanisms that coordinate three-dimensional eye orientation with head motion. Finally, we consider how the brain copes with the perceptual consequences of these motor acts.
60 | Marotta JJ, Medendorp WP, Crawford JD. Kinematic rules for upper and lower arm contributions to grasp orientation. J Neurophysiol 2003; 90:3816-27. [PMID: 12930815] [DOI: 10.1152/jn.00418.2003]
Abstract
The purpose of the current study was to investigate the contribution of upper and lower arm torsion to grasp orientation during a reaching and grasping movement. In particular, we examined how the visuomotor system deals with the conflicting demands of coordinating upper and lower arm torsion and maintaining Donders' Law of the upper arm (a behavioral restriction of the axes of arm rotation to a two-dimensional "surface"). In experiment 1, subjects reached out and grasped a target block that was presented in one of 19 orientations (5 degrees clockwise increments from horizontal to vertical) at one position in a vertical presentation board. In experiment 2, target blocks were presented in one of three orientations (horizontal, three-quarter, and vertical) at nine different positions in the presentation board. If reach and grasp commands control the proximal and distal arms separately, then one would only expect the lower arm to contribute to grasp orientations and that Donders' Law would hold for the upper arm-independent of grasp orientations. Instead, as the required grasp orientation increased from horizontal to vertical, there was a significant clockwise torsional rotation in the upper arm, which accounted for 9% of the final vertical grasp orientation, and the lower arm, which accounted for 42%. A linear relationship existed between the torsional rotations of the upper and lower arm, which indicates that the components of the arm rotate in coordination with one another. The location-dependent aspects of upper and lower arm torsion remained invariant, however, yielding consistently shaped Donders' "surfaces" (with different torsional offsets) for different grasp orientations. These observations suggest that the entire arm-hand system contributes to grasp orientation, and therefore, the reach/grasp distinction is not directly reflected in proximal-distal kinematics but is better reflected in the distinction between these coordinated orienting rules and the location-dependent kinematic rules for the upper arm that result in Donders' Law for one given grasp orientation.
61 | Henriques DYP, Medendorp WP, Gielen CCAM, Crawford JD. Geometric computations underlying eye-hand coordination: orientations of the two eyes and the head. Exp Brain Res 2003; 152:70-8. [PMID: 12827330] [DOI: 10.1007/s00221-003-1523-4]
Abstract
Eye-hand coordination is geometrically complex. To compute the location of a visual target relative to the hand, the brain must consider every anatomical link in the chain from retinas to fingertips. Here we focus on the first three links, studying how the brain handles information about the angles of the two eyes and the head. It is known that people, even in darkness, reach more accurately when the eye looks toward the target, rather than right or left of it. We show that reaching is also impaired when the binocular fixation point is displaced from the target in depth: reaching becomes not just sloppy, but systematically inaccurate. Surprisingly, though, in normal Gaze-On-Target reaching we found no strong correlations between errors in aiming the eyes and hand onto the target site. We also asked people to reach when the head was not facing the target. When the eyes were on-target, people reached accurately, but when gaze was off-target, performance degraded. Taking all these findings together, we suggest that the brain's computational networks have learned the complex geometry of reaching for well-practiced tasks, but that the networks are poorly calibrated for less common tasks such as Gaze-Off-Target reaching.
62 |
Abstract
Eye-hand coordination is complicated by the fact that the eyes are constantly in motion relative to the head. This poses problems in interpreting the spatial information gathered from the retinas and using this to guide hand motion. In particular, eye-centered visual information must somehow be spatially updated across eye movements to be useful for future actions, and these representations must then be transformed into commands appropriate for arm motion. In this review, we present evidence that early visuomotor representations for arm movement are remapped relative to the gaze direction during each saccade. We find that this mechanism holds for targets in both far and near visual space. We then show how the brain incorporates the three-dimensional, rotary geometry of the eyes when interpreting retinal images and transforming these into commands for arm movement. Next, we explore the possibility that hand-eye alignment is optimized for the eye with the best field of view. Finally, we describe how head orientation influences the linkage between oculocentric visual frames and bodycentric motor frames. These findings are framed in terms of our 'conversion-on-demand' model, in which only those representations selected for action are put through the complex visuomotor transformations required for interaction with objects in personal space, thus providing a virtual on-line map of visuomotor space.
63 | Henriques DYP, Medendorp WP, Khan AZ, Crawford JD. Visuomotor transformations for eye-hand coordination. Prog Brain Res 2003; 140:329-40. [PMID: 12508600] [DOI: 10.1016/s0079-6123(02)40060-x]
Abstract
In recent years the scientific community has come to appreciate that the early cortical representations for visually guided arm movements are probably coded in a visual frame, i.e. relative to retinal landmarks. While this scheme accounts for many behavioral and neurophysiological observations, it also poses certain problems for manual control. For example, how are these oculocentric representations updated across eye movements, and how are they then transformed into useful commands for accurate movements of the arm relative to the body? Also, since we have two eyes, which is used as the reference point in eye-hand alignment tasks like pointing? We show that patterns of errors in human pointing suggest that early oculocentric representations for arm movement are remapped relative to the gaze direction during each saccade. To then transform these oculocentric representations into useful commands for accurate movements of the arm relative to the body, the brain correctly incorporates the three-dimensional, rotary geometry of the eyes when interpreting retinal images. We also explore the possibility that the eye-hand coordination system uses a strategy like ocular dominance, but switches alignment between the left and right eye in order to maximize eye-hand coordination in the best field of view. Finally, we describe the influence of eye position on eye-hand alignment, and then consider how head orientation influences the linkage between oculocentric visual frames and bodycentric motor frames. These findings are framed in terms of our 'conversion-on-demand' model, which suggests a virtual representation of egocentric space, i.e. one in which only those representations selected for action are put through the complex visuomotor transformations required for interaction with actual objects in personal space.
64 | Klier EM, Henriques DYP, Crawford JD. Visual-motor transformations account for three-dimensional eye position. Arch Ital Biol 2002; 140:193-201. [PMID: 12173522]
65 | Henriques DYP, Crawford JD, Vilis T. The visuomotor transformation for arm movement accounts for 3-D eye orientation and retinal geometry. Ann N Y Acad Sci 2002; 956:515-9. [PMID: 11960855] [DOI: 10.1111/j.1749-6632.2002.tb02870.x]
66 | Henriques DYP, Crawford JD. Role of eye, head, and shoulder geometry in the planning of accurate arm movements. J Neurophysiol 2002; 87:1677-85. [PMID: 11929889] [DOI: 10.1152/jn.00509.2001]
Abstract
Eye-hand coordination requires the brain to integrate visual information with the continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines the consequences of this geometry for planning accurate arm movements in a pointing task with the head at different orientations. When asked to point at an object, subjects oriented their arm to position the fingertip on the line running from the target to the viewing eye. But this eye-target line shifts when the eyes translate with each new head orientation, thereby requiring a new arm pointing direction. We confirmed that subjects do realign their fingertip with the eye-target line during closed-loop pointing across various horizontal head orientations when gaze is on target. More importantly, subjects also showed this head-position-dependent pattern of pointing responses for the same paradigm performed in complete darkness. However, when gaze was not on target, compensation for these translations in the rotational centers partially broke down. As a result, subjects tended to overshoot the target direction relative to current gaze; perhaps explaining previously reported errors in aiming the arm to retinally peripheral targets. These results suggest that knowledge of head position signals and the resulting relative displacements in the centers of rotation of the eye and shoulder are incorporated using open-loop mechanisms for eye-hand coordination, but these translations are best calibrated for foveated, gaze-on-target movements.
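The geometric point above, that head rotation translates the eye relative to the shoulder and thereby shifts the eye-target line that the fingertip must intercept, can be sketched in two dimensions. All lengths and angles below are invented for illustration and are not the study's parameters.

```python
import numpy as np

# Illustrative planar geometry (centimetres): the eye sits in front of the head's
# rotation centre, so turning the head translates the eye relative to the shoulder.
eye_offset = np.array([0.0, 10.0])        # eye relative to the head rotation centre (forward)
shoulder_to_head = np.array([15.0, 5.0])  # head rotation centre relative to the shoulder
target = np.array([0.0, 60.0])            # target relative to the shoulder (straight ahead, 60 cm)

def eye_position(head_deg):
    """Eye location in shoulder coordinates for a given horizontal head angle."""
    r = np.radians(head_deg)
    R = np.array([[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]])
    return shoulder_to_head + R @ eye_offset

def pointing_direction(head_deg):
    """Direction (deg) of the eye-target line, i.e. where the fingertip should aim."""
    d = target - eye_position(head_deg)
    return np.degrees(np.arctan2(d[0], d[1]))

for head in (-30, 0, 30):
    print(f"head {head:+d} deg -> eye-target line at {pointing_direction(head):+.1f} deg")
```

The required fingertip direction changes with head orientation even though the target never moves, which is the compensation the study probed with head-position-dependent pointing.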
67 | Smith MA, Crawford JD. Implications of ocular kinematics for the internal updating of visual space. J Neurophysiol 2001; 86:2112-7. [PMID: 11600667] [DOI: 10.1152/jn.2001.86.4.2112]
Abstract
Recent studies have suggested that during saccades cortical and subcortical representations of visual targets are represented and remapped in retinal coordinates. If this is correct, then the remapping processes must incorporate the noncommutativity of rotations. For example, our three-dimensional (3-D) simulations of the commutative vector-subtraction model of retinocentric remapping predicted centripetal errors in saccade trajectories between "remembered" eccentric targets, whereas our noncommutative model predicted accurate saccades. We tested between these two models in five head-fixed human subjects. Typically, a central fixation light appeared and two peripheral targets were flashed. With all targets extinguished, subjects were required to saccade to the remembered location of one of the peripheral targets and saccade between their remembered locations. Subjects showed minor misestimations of the spatial locations of targets, but failed to show the cumulative pattern of errors predicted by the commutative model. This experiment indicates that if targets are remapped in a retinal frame, then the remapping process also takes the noncommutativity of 3-D eye rotations into account. Unlike other noncommutative aspects of eye rotations that may have mechanical explanations, the noncommutative aspects of this process must be entirely internal.
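The noncommutativity at issue can be demonstrated directly: composing the same two 3-D rotations in different orders yields different eye orientations, so remapping a remembered target by simple (commutative) vector subtraction cannot be exact. Below is a hedged sketch, with arbitrary axes and angles, of the order dependence and of remapping by re-expressing a space-fixed target in the new eye frame; it is not the authors' simulation code.

```python
import numpy as np

def rot(axis, deg):
    """Right-handed rotation matrix about 'x', 'y', or 'z' by deg."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Two 30 degree eye rotations: one horizontal (about z), one vertical (about y).
Rh, Rv = rot('z', 30.0), rot('y', 30.0)

# Rotations do not commute: the final orientation depends on the order.
print(np.allclose(Rh @ Rv, Rv @ Rh))   # False

# Remapping a remembered target across a saccade:
target_space = np.array([1.0, 0.4, 0.2])     # space-fixed target direction
eye_before = rot('z', 20.0)                  # eye orientation before the saccade
eye_after = rot('y', 25.0) @ eye_before      # eye orientation after the saccade

# Correct (noncommutative) remapping: re-express the target in the new eye frame.
retinal_after = eye_after.T @ target_space
print(retinal_after)
```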
68 | Smith MA, Crawford JD. Self-organizing task modules and explicit coordinate systems in a neural network model for 3-D saccades. J Comput Neurosci 2001; 10:127-50. [PMID: 11361255] [DOI: 10.1023/a:1011264913465]
Abstract
The goal of this study was to train an artificial neural network to generate accurate saccades in Listing's plane and then determine how the hidden units performed the visuomotor transformation. A three-layer neural network was successfully trained, using back-prop, to take in oculocentric retinal error vectors and three-dimensional eye orientation and to generate the correct head-centric motor error vector within Listing's plane. Analysis of the hidden layer of trained networks showed that explicit representations of desired target direction and eye orientation were not employed. Instead, the hidden-layer units consistently divided themselves into four parallel modules: a dominant "vector-propagation" class (approximately 50% of units) with similar visual and motor tuning but negligible position sensitivity and three classes with specific spatial relations between position, visual, and motor tuning. Surprisingly, the vector-propagation units, and only these, formed a highly precise and consistent orthogonal coordinate system aligned with Listing's plane. Selective "lesions" confirmed that the vector-propagation module provided the main drive for saccade magnitude and direction, whereas a balance between activity in the other modules was required for the correct eye-position modulation. Thus, contrary to popular expectation, error-driven learning in itself was sufficient to produce a "neural" algorithm with discrete functional modules and explicit coordinate systems, much like those observed in the real saccade generator.
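A toy version of the training setup described above, purely illustrative: a three-layer network maps a retinal error vector plus an eye-orientation signal to a motor error vector (here simply the retinal vector rotated by eye orientation), trained with plain backpropagation. The 2-D simplification, network size, and training parameters are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n):
    """Inputs: 2-D retinal error + scalar eye orientation; target: rotated vector."""
    retinal = rng.uniform(-1, 1, size=(n, 2))
    eye = rng.uniform(-np.pi / 4, np.pi / 4, size=(n, 1))
    c, s = np.cos(eye[:, 0]), np.sin(eye[:, 0])
    motor = np.stack([c * retinal[:, 0] - s * retinal[:, 1],
                      s * retinal[:, 0] + c * retinal[:, 1]], axis=1)
    return np.hstack([retinal, eye]), motor

# Three-layer network: 3 inputs -> 20 tanh hidden units -> 2 linear outputs.
W1 = rng.normal(0, 0.5, (3, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, (20, 2)); b2 = np.zeros(2)
lr = 0.05

for step in range(5000):
    x, y = make_batch(64)
    h = np.tanh(x @ W1 + b1)              # hidden layer
    yhat = h @ W2 + b2                    # output layer
    err = yhat - y                        # gradient of 0.5*MSE w.r.t. yhat
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backpropagate through tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

x, y = make_batch(5)
print(np.round(np.tanh(x @ W1 + b1) @ W2 + b2 - y, 3))  # residual errors after training
```

Analyses like those in the paper would then examine the hidden-layer tuning of such a trained network, rather than its behavior alone.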
69 | Klier EM, Wang H, Crawford JD. The superior colliculus encodes gaze commands in retinal coordinates. Nat Neurosci 2001; 4:627-32. [PMID: 11369944] [DOI: 10.1038/88450]
Abstract
The superior colliculus (SC) has a topographic map of visual space, but the spatial nature of its output command for orienting gaze shifts remains unclear. Here we show that the SC codes neither desired gaze displacement nor gaze direction in space (as debated previously), but rather, desired gaze direction in retinal coordinates. Electrical micro-stimulation of the SC in two head-free (non-immobilized) monkeys evoked natural-looking, eye-head gaze shifts, with anterior sites producing small, fixed-vector movements and posterior sites producing larger, strongly converging movements. However, when correctly calculated in retinal coordinates, all of these trajectories became 'fixed-vector.' Moreover, our data show that this eye-centered SC command is then further transformed, as a function of eye and head position, by downstream mechanisms into the head- and body-centered commands for coordinated eye-head gaze shifts.
70 |
Abstract
Ocular dominance is the tendency to prefer visual input from one eye to the other [e.g. Porac, C. & Coren, S. (1976). The dominant eye. Psychological Bulletin 83(5), 880-897]. In standard sighting tests, most people consistently fall into either the left- or right eye-dominant category [Miles, W. R. (1930). Ocular dominance in human adults. Journal of General Psychology 3, 412-420]. Here we show this static concept to be flawed, being based on the limited results of sighting with gaze pointed straight ahead. In a reach-grasp task for targets within the binocular visual field, subjects switched between left and right eye dominance depending on horizontal gaze angle. On average, ocular dominance switched at gaze angles of only 15.5 degrees off center.
71 |
Abstract
To achieve stereoscopic vision, the brain must search for corresponding image features on the two retinas. As long as the eyes stay still, corresponding features are confined to narrow bands called epipolar lines. But when the eyes change position, the epipolar lines migrate on the retinas. To find the matching features, the brain must either search different retinal bands depending on current eye position, or search retina-fixed zones that are large enough to cover all usual locations of the epipolar lines. Here we show, using a new type of stereogram in which the depth image vanishes at certain gaze elevations, that the search zones are retina-fixed. This being the case, motor control acquires a crucial function in depth vision: we show that the eyes twist about their lines of sight in a way that reduces the motion of the epipolar lines, allowing stereopsis to get by with smaller search zones and thereby lightening its computational load.
72 | Fletcher LB, Crawford JD. Acoustic detection by sound-producing fishes (Mormyridae): the role of gas-filled tympanic bladders. J Exp Biol 2001; 204:175-83. [PMID: 11136604] [DOI: 10.1242/jeb.204.2.175]
Abstract
Mormyrid electric fish use sounds for communication and have unusual ears. Each ear has a small gas-filled tympanic bladder coupled to the sacculus. Although it has long been thought that this gas-filled structure confers acoustic pressure sensitivity, this has never been evaluated experimentally. We examined tone detection thresholds by measuring behavioral responses to sounds in normal fish and in fish with manipulations to one or to both of the tympanic bladders. We found that the tympanic bladders increase auditory sensitivity by approximately 30 dB in the middle of the animal's hearing range (200–1200 Hz). Normal fish had their best tone detection thresholds in the range 400–500 Hz, with thresholds of approximately 60 dB (re 1 microPa). When the gas was displaced from the bladders with physiological saline, the animals showed a dramatic loss of auditory sensitivity. In contrast, control animals in which only one bladder was manipulated or in which a sham operation had been performed on both sides had normal hearing.
73 | Medendorp WP, Crawford JD, Henriques DY, Van Gisbergen JA, Gielen CC. Kinematic strategies for upper arm-forearm coordination in three dimensions. J Neurophysiol 2000; 84:2302-16. [PMID: 11067974] [DOI: 10.1152/jn.2000.84.5.2302]
Abstract
This study addressed the question of how the three-dimensional (3-D) control strategy for the upper arm depends on what the forearm is doing. Subjects were instructed to point a laser, attached in line with the upper arm, toward various visual targets, such that two-dimensional (2-D) pointing directions of the upper arm were held constant across different tasks. For each such task, subjects maintained one of several static upper arm-forearm configurations, i.e., each with a set elbow angle and forearm orientation. Upper arm, forearm, and eye orientations were measured with the use of 3-D search coils. The results confirmed that Donders' law (a behavioral restriction of 3-D orientation vectors to a 2-D "surface") does not hold across all pointing tasks, i.e., for a given pointing target, upper arm torsion varied widely. However, for any one static elbow configuration, torsional variance was considerably reduced and was independent of previous arm position, resulting in a thin, Donders-like surface of orientation vectors. More importantly, the shape of this surface (which describes upper arm torsion as a function of its 2-D pointing direction) depended on both elbow angle and forearm orientation. For pointing with the arm fully extended or with the elbow flexed in the horizontal plane, a Listing's-law-like strategy was observed, minimizing shoulder rotations to and from center at the cost of position-dependent tilts in the forearm. In contrast, when the arm was bent in the vertical plane, the surface of best fit showed a Fick-like twist that increased continuously as a function of static elbow flexion, thereby reducing position-dependent tilts of the forearm with respect to gravity. In each case, the torsional variance from these surfaces remained constant, suggesting that Donders' law was obeyed equally well for each task condition. Further experiments established that these kinematic rules were independent of gaze direction and eye orientation, suggesting that Donders' law of the arm does not coordinate with Listing's law for the eye. These results revive the idea that Donders' law is an important governing principle for the control of arm movements but also suggest that its various forms may only be limited manifestations of a more general set of context-dependent kinematic rules. We propose that these rules are implemented by neural velocity commands arising as a function of initial arm orientation and desired pointing direction, calculated such that the torsional orientation of the upper arm is implicitly coordinated with desired forearm posture.
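Surface fits of the kind referred to above are typically second-order fits of torsion against the two pointing-direction components, with the cross-term capturing a Fick-like twist and the residual standard deviation measuring the "thickness" of the Donders surface. A hedged sketch on synthetic data follows; the coefficient values and noise level are made up, and the exact fitting procedure used in the study may differ.

```python
import numpy as np

# Hypothetical orientation data: horizontal/vertical pointing components (h, v) and
# torsion t for one task condition, expressed as rotation-vector components.
rng = np.random.default_rng(1)
h = rng.uniform(-0.3, 0.3, 200)
v = rng.uniform(-0.3, 0.3, 200)
fick_twist = 0.8                          # assumed cross-coupling for a "Fick-like" strategy
t = fick_twist * h * v + 0.01 * rng.normal(size=200)   # torsion with measurement noise

# Second-order surface fit: t = a1 + a2*h + a3*v + a4*h^2 + a5*h*v + a6*v^2.
X = np.column_stack([np.ones_like(h), h, v, h ** 2, h * v, v ** 2])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)

residual_sd = np.std(t - X @ coef)        # "thickness" of the Donders surface
print("twist score (h*v coefficient):", round(coef[4], 3))
print("surface thickness (SD of residual torsion):", round(residual_sd, 4))
```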
74 | Rivkees SA, Crawford JD. Dexamethasone treatment of virilizing congenital adrenal hyperplasia: the ability to achieve normal growth. Pediatrics 2000; 106:767-73. [PMID: 11015521] [DOI: 10.1542/peds.106.4.767]
Abstract
OBJECTIVE: To assess whether treatment of virilizing congenital adrenal hyperplasia (CAH) with long-acting glucocorticoids is associated with favorable growth outcomes. METHOD: We examined the long-term growth of 17 boys and 9 girls with CAH treated with dexamethasone (0.27 +/- 0.01 mg/m(2)/day). RESULTS: For individuals with comparable bone age (BA) and chronological age (CA) at the onset of dexamethasone therapy, males were 2.8 +/- 0.8 years (mean +/- standard error of the mean; n = 13) and females were 2.4 +/- 1.0 years (n = 6). Males were treated for 7.3 +/- 1.1 years (ΔCA), over which time the change in BA (ΔBA) was 7.0 +/- 1.3 years, and the change in height age (ΔHA) was 6.9 +/- 1.1 years. Females were treated for 6.8 +/- 1.3 years, over which time the ΔBA was 6.5 +/- 1.0 years, and the ΔHA was 6.3 +/- 0.8 years. During treatment, 17-ketosteroid excretion rates were normal for age and 17-hydroxyprogesterone values were 69.6 +/- 18 ng/dL. Testicular enlargement was first detected at 10.7 +/- 0.8 years and breast tissue at 9.9 +/- 1.2 years. Three boys and 1 girl had final heights of 171.8 +/- 6 cm and 161 cm, respectively, compared with midparental heights of 176.1 +/- 4.1 cm and 160 cm. Predicted adult heights for 6 other boys and 5 girls were 176.8 +/- 2.0 cm and 161.4 +/- 2.8 cm, respectively, compared with midparental heights of 174.6 +/- 1.4 cm and 158.2 +/- 2.0 cm. Statural outcomes were less favorable for 7 children started on dexamethasone when BAs were considerably advanced, although height predictions increased during therapy. CONCLUSIONS: These observations show that children treated with dexamethasone for CAH can achieve normal growth with the convenience of once-a-day dosing in most cases. Keywords: congenital adrenal hyperplasia, dexamethasone, growth.
75 | Marvit P, Crawford JD. Auditory discrimination in a sound-producing electric fish (Pollimyrus): tone frequency and click-rate difference detection. J Acoust Soc Am 2000; 108:1819-25. [PMID: 11051508] [DOI: 10.1121/1.1287845]
Abstract
Pollimyrus adspersus is a fish that uses simple sounds for communication and has auditory specializations for sound-pressure detection. The sounds are species-specific, and the sounds of individuals are sufficiently stereotyped that they could mediate individual recognition. Behavioral measurements are presented indicating that Pollimyrus probably can make species and individual discriminations on the basis of acoustic cues. Interclick interval (ICI; 10-40 ms) and frequency (100-1400 Hz) discrimination was assessed using modulations of the fish's electric organ discharge rate in the presence of a target stimulus presented in alternation with an ongoing base stimulus. Tone frequency discrimination was best in the 200-600-Hz range, with the best threshold of 1.7% +/- 0.4% standard error at 500 Hz (or 8.5 Hz +/- 1.9 SE). The just noticeable differences (jnd's) were relatively constant from 100 to 500 Hz (mean 8.7 Hz), then increased at a rate of 13.3 Hz per 100 Hz. For click trains, jnd's increased linearly with ICI. The mean jnd's for 10- and 15-ms ICI were both 300 µs (SE = 0.8 ms at 10-ms ICI, SE = 0.11 ms at 15-ms ICI). The jnd at 20-ms ICI was only 1.1 ms +/- 0.25 SE.