1. Ellery A. Bio-Inspired Strategies Are Adaptable to Sensors Manufactured on the Moon. Biomimetics (Basel) 2024; 9:496. PMID: 39194475. DOI: 10.3390/biomimetics9080496.
Abstract
Bio-inspired strategies for robotic sensing are essential for in situ manufactured sensors on the Moon. Sensors are one crucial component of robots that should be manufactured from lunar resources to industrialize the Moon at low cost. We are concerned with two classes of sensor: (a) position sensors and derivatives thereof are the most elementary of measurements; and (b) light sensing arrays provide for distance measurement within the visible waveband. Terrestrial approaches to sensor design cannot be accommodated within the severe limitations imposed by the material resources and expected manufacturing competences on the Moon. Displacement and strain sensors may be constructed as potentiometers with aluminium extracted from anorthite. Anorthite is also a source of silica from which quartz may be manufactured. Thus, piezoelectric sensors may be constructed. Silicone plastic (siloxane) is an elastomer that may be derived from lunar volatiles. This offers the prospect for tactile sensing arrays. All components of photomultiplier tubes may be constructed from lunar resources. However, the spatial resolution of photomultiplier tubes is limited so only modest array sizes can be constructed. This requires us to exploit biomimetic strategies: (i) optical flow provides the visual navigation competences of insects implemented through modest circuitry, and (ii) foveated vision trades the visual resolution deficiencies with higher resolution of pan-tilt motors enabled by micro-stepping. Thus, basic sensors may be manufactured from lunar resources. They are elementary components of robotic machines that are crucial for constructing a sustainable lunar infrastructure. Constraints imposed by the Moon may be compensated for using biomimetic strategies which are adaptable to non-Earth environments.
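The "modest circuitry" claim for insect-style optic-flow navigation can be made concrete with a standard result: the time to contact an approaching surface is recoverable from image expansion alone, with no distance or speed measurement. A minimal sketch (function names and values are ours, not from the paper):

```python
def time_to_contact(angular_size, expansion_rate):
    """tau = theta / (d(theta)/dt): seconds until contact, computed
    purely from the retinal image, without knowing distance or speed."""
    return angular_size / expansion_rate

# An object of physical size s at distance d subtends theta ~ s/d and,
# approaching at speed v, expands at d(theta)/dt ~ s*v/d**2, so
# tau = d/v. Illustrative check: s = 1 m, d = 10 m, v = 2 m/s.
s, d, v = 1.0, 10.0, 2.0
print(time_to_contact(s / d, s * v / d**2))  # 5.0 (= d/v)
```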
Affiliation(s)
- Alex Ellery
- Centre for Self-Replication Research (CESER), Department of Mechanical & Aerospace Engineering, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
2. Tang X, Yu S, Takahashi S, Yang J, Ejima Y, Gao Y, Wu Q, Wu J. The human brain deals with violating general color or depth knowledge in different time courses. Neuropsychologia 2024; 201:108941. PMID: 38908477. DOI: 10.1016/j.neuropsychologia.2024.108941.
Abstract
Utilizing the high temporal resolution of event-related potentials (ERPs), we compared the time course of processing incongruent color versus 3D-depth information. Participants were asked to judge whether the food color (color condition) or 3D structure (3D-depth condition) was congruent or incongruent with their previous knowledge and experience. The behavioral results showed that the reaction times in the congruent 3D-depth condition were slower than those in the congruent color condition. The reaction times in the incongruent 3D-depth condition were slower than those in the incongruent color condition. The ERP results showed that incongruent color stimuli induced a larger N270, larger P300, and smaller N400 components in the fronto-central region than the congruent color stimuli. Incongruent 3D-depth stimuli induced a smaller N1 in the occipital region, larger P300 and smaller N400 in the parietal-occipital region than congruent 3D-depth stimuli. The time-frequency analysis found that incongruent color stimuli induced a larger theta band (360-580 ms) activation in the fronto-central region than congruent color stimuli. Incongruent 3D-depth stimuli induced larger alpha and beta bands (240-350 ms) activation in the parietal region than congruent 3D-depth stimuli. Our results suggest that the human brain deals with violating general color or depth knowledge in different time courses. We speculate that the depth perception conflict was dominated by solving the problem with visual processing, whereas the color perception conflict was dominated by solving the problem with semantic violation.
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China.
- Shilong Yu
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Jiajia Yang
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Yoshimichi Ejima
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Yulin Gao
- Department of Psychology, Jilin University, Changchun, China
- Qiong Wu
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China; Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
3. Steering Transforms the Cortical Representation of Self-Movement from Direction to Destination. J Neurosci 2016; 35:16055-63. PMID: 26658859. DOI: 10.1523/jneurosci.2368-15.2015.
Abstract
Steering demands rapid responses to heading deviations and uses optic flow to redirect self-movement toward the intended destination. We trained monkeys in a naturalistic steering paradigm and recorded dorsal medial superior temporal area (MSTd) cortical neuronal responses to the visual motion and spatial location cues in optic flow. We found that neuronal responses to the initial heading direction are dominated by the optic flow's global radial pattern cue. Responses to subsequently imposed heading deviations are dominated by the local direction of motion cue. Finally, as the monkey steers its heading back to the goal location, responses are dominated by the spatial location cue, the screen location of the flow field's center of motion. We conclude that MSTd responses are not rigidly linked to specific stimuli, but rather are transformed by the task relevance of cues that guide performance in learned, naturalistic behaviors.
SIGNIFICANCE STATEMENT: Unplanned heading changes trigger lifesaving steering back to a goal. Conventionally, such behaviors are thought of as cortical sensory-motor reflex arcs. We find that a more reciprocal process underlies such cycles of perception and action, rapidly transforming visual processing to suit each stage of the task. When monkeys monitor their simulated self-movement, dorsal medial superior temporal area (MSTd) neurons represent their current heading direction. When monkeys steer to recover from an unplanned change in heading direction, MSTd shifts toward representing the goal location. We hypothesize that this transformation reflects the reweighting of bottom-up visual motion signals and top-down spatial location signals, reshaping MSTd's response properties through task-dependent interactions with adjacent cortical areas.
4. Correia Grácio BJ, Bos JE, van Paassen MM, Mulder M. Perceptual scaling of visual and inertial cues: effects of field of view, image size, depth cues, and degree of freedom. Exp Brain Res 2013; 232:637-46. PMID: 24292492. DOI: 10.1007/s00221-013-3772-1.
Abstract
In the field of motion-based simulation, it was found that a visual amplitude equal to the inertial amplitude does not always provide the best perceived match between visual and inertial motion. This result is thought to be caused by the "quality" of the motion cues delivered by the simulator motion and visual systems. This paper studies how different visual characteristics, like field of view (FoV) and size and depth cues, influence the scaling between visual and inertial motion in a simulation environment. Subjects were exposed to simulator visuals with different fields of view and different visual scenes and were asked to vary the visual amplitude until it matched the perceived inertial amplitude. This was done for motion profiles in surge, sway, and yaw. Results showed that the subjective visual amplitude was significantly affected by the FoV, visual scene, and degree-of-freedom. When the FoV and visual scene were closer to what one expects in the real world, the scaling between the visual and inertial cues was closer to one. For yaw motion, the subjective visual amplitudes were approximately the same as the real inertial amplitudes, whereas for sway and especially surge, the subjective visual amplitudes were higher than the inertial amplitudes. This study demonstrated that visual characteristics affect the scaling between visual and inertial motion which leads to the hypothesis that this scaling may be a good metric to quantify the effect of different visual properties in motion-based simulation.
Affiliation(s)
- B J Correia Grácio
- Faculty of Aerospace Engineering, Control and Simulation Division, Delft University of Technology, P. O. Box 5058, 2600 GB, Delft, The Netherlands
5. Eckmeier D, Kern R, Egelhaaf M, Bischof HJ. Encoding of naturalistic optic flow by motion sensitive neurons of nucleus rotundus in the zebra finch (Taeniopygia guttata). Front Integr Neurosci 2013; 7:68. PMID: 24065895. PMCID: PMC3778379. DOI: 10.3389/fnint.2013.00068.
Abstract
The retinal image changes that occur during locomotion, the optic flow, carry information about self-motion and the three-dimensional structure of the environment. Especially fast-moving animals with little binocular vision depend on these depth cues for maneuvering. They actively control their gaze to facilitate perception of depth based on cues in the optic flow. In the visual system of birds, nucleus rotundus neurons were originally found to respond to object motion but not to background motion. However, when background and object were both moving, responses increased the more the direction and velocity of object and background motion on the retina differed. These properties may play a role in representing depth cues in the optic flow. We therefore investigated how neurons in nucleus rotundus respond to optic flow that contains depth cues. We presented simplified and naturalistic optic flow on a panoramic LED display while recording from single neurons in nucleus rotundus of anaesthetized zebra finches. Unlike most studies on motion vision in birds, our stimuli included depth information. We found extensive responses of motion selective neurons in nucleus rotundus to optic flow stimuli. Simplified stimuli revealed preferences for optic flow reflecting translational or rotational self-motion. Naturalistic optic flow stimuli elicited complex response modulations, but the presence of objects was signaled by only a few neurons. The neurons that did respond to objects in the optic flow, however, show interesting properties.
Affiliation(s)
- Dennis Eckmeier
- Neuroethology Group, Department of Behavioural Biology, Bielefeld University Bielefeld, Germany
6. Vision and agility training in community dwelling older adults: incorporating visual training into programs for fall prevention. Gait Posture 2012; 35:585-9. PMID: 22206782. PMCID: PMC3405148. DOI: 10.1016/j.gaitpost.2011.11.029.
Abstract
This study aimed to examine the effect of visual training on obstacle course performance of independent community dwelling older adults. Agility is the ability to rapidly alter ongoing motor patterns, an important aspect of mobility which is required in obstacle avoidance. However, visual information is also a critical factor in successful obstacle avoidance. We compared obstacle course performance of a group that trained in visually driven body movements and agility drills to a group that trained only in agility drills. We also included a control group that followed the American College of Sports Medicine exercise recommendations for older adults. Significant gains in fitness, mobility and power were observed across all training groups. Obstacle course results revealed that the visually trained group showed the greatest improvement in performance (22%) following the 12-week training program. These results suggest that visual training may be an important consideration for fall prevention programs.
7. Reed-Jones RJ, Vallis LA. Modulation of Visually Evoked Movement Responses in Moving Virtual Environments. Perception 2009; 38:652-63. DOI: 10.1068/p6086.
Abstract
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided ‘real-world’ visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
Affiliation(s)
- Rebecca J Reed-Jones
- Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, Ontario N1G 2W1, Canada
- Lori Ann Vallis
- Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, Ontario N1G 2W1, Canada
8. Zhong H, Cornilleau-Pérès V, Cheong LF, Yeow GM, Droulez D. The visual perception of plane tilt from motion in small field and large field: psychophysics and theory. Vision Res 2006; 46:3494-513. PMID: 16769100. DOI: 10.1016/j.visres.2006.04.003.
Abstract
Subjects indicated the tilt of dotted planes rotating in depth, in monocular viewing, under perspective projection. The responses depended on the FOV (field of view) and on the angle W between the tilt and frontal translation (orthogonal to the rotation axis). Response accuracy increased with the FOV, and decreased with W. Our results support the processing of the second-order optic flow in all cases, but indicate that this flow is quantitatively small in small-field, leading to tilt ambiguities. We examine computational models based on the affine components of the optic flow to interpret our results.
Affiliation(s)
- H Zhong
- Department of Cognitive Sciences, University of California, Irvine, USA
9. Ennaceur A, Michalikova S, Chazot PL. Models of anxiety: responses of rats to novelty in an open space and an enclosed space. Behav Brain Res 2006; 171:26-49. PMID: 16678277. DOI: 10.1016/j.bbr.2006.03.016.
Abstract
Exposure to novelty has been shown to induce anxiety responses in a variety of behavioural paradigms. The purpose of the present study was to investigate whether exposure of naïve rats to novelty would result in a comparable or a different pattern of responses in an open space versus an enclosed space, with or without the presence of an object in the centre of the field. Lewis and Wistar rats of both genders were used to illustrate and discuss the value and validity of these anxiety paradigms. We examined a wide range of measures, which cover several aspects of animals' responses. The results of this study revealed significant differences between the behaviour of animals in an open space and in the enclosed space. It also revealed significant differences in animals' responses to the presence and absence of an object in the open space and in the enclosed space. In the enclosed space, rats spent most of their time in the outer area with a lower number of exits and avoided the object area except when there was an object, while in the open space rats displayed frequent short-duration re-entries in the outer area and spent longer time in the object area in the presence of an object. The time spent in the inner area (away from the outer area and the object area) was significantly longer and the number of faecal boli was significantly higher in the open space than in the enclosed space. In the present report, we will discuss the fundamental differences between enclosed space and open space models, and we will examine some methodological issues related to the current animal models of human behaviour in anxiety. In the enclosed space, animals can avoid the potential threat associated with the centre area of a box and choose the safety of walls and corners, whereas in the open space animals have to avoid every part of the field, from which there is no safe escape.
The present studies revealed no correlations between the measures of behaviour in enclosed space and the measures of behaviour in open space, which suggests that these two models do not involve the same construct. Our results suggest that the enclosed space model involves avoidance responses while the open space model involves anxiety responses. The open space model can be very useful in understanding the underlying neural mechanisms of anxiety responses, and in assessing the effects of potential anxiolytic drugs.
Affiliation(s)
- A Ennaceur
- University of Sunderland, Sunderland Pharmacy School, UK.
10. Johnson AP, Barnes WJP, Macauley MWS. Effects of light intensity and pattern contrast on the ability of the land crab, Cardisoma guanhumi, to separate optic flow-field components. Vis Neurosci 2005; 21:895-904. PMID: 15733344. DOI: 10.1017/s0952523804216091.
Abstract
Using a novel suite of computer-generated visual stimuli that mimicked components of optic flow, we investigated the visual responses of the tropical land crab, Cardisoma guanhumi. We show that crabs are normally successful in distinguishing the rotational and translational components of the optic flow field, showing strong optokinetic responses to the former but not the latter. This ability was not dependent on the orientation of the crab, occurring both in “forwards-walking” and “sideways-walking” configurations. However, under conditions of low overall light intensity and/or low object/background contrast, the separation mechanism shows partial failure causing the crab to generate compensatory eye movements to translation, particularly in response to low-frequency (low-velocity) stimuli. Using this discovery, we then tested the ability of crabs to separate rotational and translational components in a combined rotation/translation flow field under different conditions. We demonstrate that, while crabs can successfully separate such a combined flow field under normal circumstances, showing compensatory eye movements only to the rotational component, they are unable to make this separation under conditions of low overall light intensity and low object/background contrast. Here, the responses to both flow-field components show summation when they are in phase, but, surprisingly, there is little reduction in the amplitude of responses to rotation when the translational component is in antiphase. Our results demonstrate that the crab's visual system finds separation of flow-field components a harder task than detection of movement, since the former shows partial failure at light intensities and/or object/background contrasts at which movement of the world around the crab is still generating high-gain optokinetic responses.
Affiliation(s)
- Aaron P Johnson
- Division of Environmental and Evolutionary Biology, Institute of Biomedical and Life Sciences, University of Glasgow, Glasgow, Scotland, UK.
11. Naji JJ, Freeman TCA. Perceiving depth order during pursuit eye movement. Vision Res 2004; 44:3025-34. PMID: 15474575. DOI: 10.1016/j.visres.2004.07.007.
Abstract
Pursuit eye movements alter retinal motion cues to depth. For instance, the sinusoidal retinal velocity profile produced by a translating, corrugated surface resembles a sinusoidal shear during pursuit. One way to recover the correct spatial phase of the corrugation's profile (i.e. which part is near and which part is far) is to combine estimates of shear with extra-retinal estimates of translation. In support of this hypothesis, we found the corrugation's spatial phase appeared ambiguous when retinal shear was viewed without translation, but unambiguous when translated and viewed with or without a pursuit eye movement. The eyes lagged the sinusoidal translation by a small but persistent amount, raising the possibility that retinal slip could serve as the disambiguating cue in the eye-moving condition. A yoked control was therefore performed in which measured horizontal slip was fed back into a fixated shearing stimulus on a trial-by-trial basis. The results showed that the corrugation's phase was only seen unambiguously during the real eye movement. This supports the idea that extra-retinal estimates of eye velocity can help disambiguate ordinal depth structure within moving retinal images.
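The disambiguation the authors describe reduces to a sign rule: during pursuit, as with motion parallax generally, points nearer than fixation slip on the retina opposite to the eye's motion, while farther points slip with it. A toy sketch of that rule, with names of our own choosing:

```python
def depth_order(retinal_slip, eye_velocity):
    """Combine a retinal motion estimate with an extra-retinal estimate
    of eye velocity: opposite signs -> nearer than fixation, same
    sign -> farther. Zero slip corresponds to the fixation distance."""
    if retinal_slip == 0 or eye_velocity == 0:
        return "at fixation"
    return "near" if retinal_slip * eye_velocity < 0 else "far"

print(depth_order(-0.5, 2.0))  # near: slip opposes the pursuit
print(depth_order(0.5, 2.0))   # far: slip follows the pursuit
```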
Affiliation(s)
- Jenny J Naji
- School of Psychology, Cardiff University, Tower Building, Park Place, CF10 3AT, Wales, UK
12.
Abstract
Image movement provides one of the most potent two-dimensional cues for depth. From motion cues alone, the brain is capable of deriving a three-dimensional representation of distant objects. For many decades, theoretical and empirical investigations into this ability have interpreted these percepts as faithful copies of the projected 3-D structures. Here we review empirical findings showing that perceived 3-D shape from motion is not veridical and cannot be accounted for by the current models. We present a probabilistic model based on a local analysis of optic flow. Although such a model does not guarantee a correct reconstruction of 3-D shape, it is shown to be consistent with human performance.
Affiliation(s)
- Fulvio Domini
- Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912-1978, USA.
13. Kral K. Behavioural-analytical studies of the role of head movements in depth perception in insects, birds and mammals. Behav Processes 2003; 64:1-12. PMID: 12914988. DOI: 10.1016/s0376-6357(03)00054-8.
Abstract
In this review, studies of the role of head movements in generating motion parallax, which is used in depth perception, are examined. The methods used and the definitiveness of the results vary with the animal groups studied. In the case of insects, studies which quantify motor outputs have provided clear evidence that motion parallax evoked by head movements is used for distance estimation and depth perception. In the case of birds and rodents, training studies and analyses of the head movements themselves have provided similar indications. In the case of larger mammals, due to a lack of systematic experiments, the evidence is less conclusive.
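For the insect peering studies reviewed here, the underlying geometry is simple: a lateral head movement at speed v makes a stationary target directly ahead sweep across the retina at omega ≈ v/d, so distance falls out as d = v/omega. A sketch under those idealized assumptions (names and values are illustrative):

```python
def distance_from_parallax(head_speed, retinal_angular_speed):
    """Distance from self-generated motion parallax: d = v / omega,
    for a target perpendicular to a lateral head translation."""
    return head_speed / retinal_angular_speed

# a 0.02 m/s peering movement producing 0.2 rad/s of retinal motion
print(distance_from_parallax(0.02, 0.2))  # 0.1 m
```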
Affiliation(s)
- Karl Kral
- Neurobiology Department, Institute of Zoology, University of Graz, A-8010, Graz, Austria
14. Campos JJ, Anderson DI, Barbu-Roth MA, Hubbard EM, Hertenstein MJ, Witherington D. Travel Broadens the Mind. Infancy 2000; 1:149-219. DOI: 10.1207/s15327078in0102_1.
15. Dijkerman HC, Milner AD, Carey DP. Motion parallax enables depth processing for action in a visual form agnosic when binocular vision is unavailable. Neuropsychologia 1999; 37:1505-10. PMID: 10617271. DOI: 10.1016/s0028-3932(99)00063-9.
Abstract
Visual-form agnosic patient DF, who has severe difficulties in using visual information about size, shape and orientation for perceptual report, can nevertheless, under normal viewing conditions, use the same information to accurately guide her hand movements. However, her performance of prehension tasks requiring the analysis of visual depth is severely disrupted when binocular vision is prevented. We have suggested that this deterioration in visuomotor control is due to an inability to use pictorial depth cues to compensate for the removal of binocular vision. In the current study we investigated whether DF was able to use motion parallax as an alternative to binocular cues. We asked her to grasp a square plaque slanted at different orientations in depth, under two monocular testing conditions. In one condition her head remained stationary on a chin rest, and in the other condition she made large lateral head movements just prior to each prehension movement. The results confirmed that DF is impaired in adjusting her hand orientation to the orientation of the target object when reaching monocularly with her head stationary. In contrast, when she made head movements, her manual performance was restored to almost normal levels. Our results are consistent with the idea that the processing of pictorial depth cues depends on the cortical ventral stream, which is known to be disrupted by DF's lesion. They further indicate that orientation in depth can be computed from motion parallax just as well as from binocular cues in the absence of a normally functioning ventral stream.
Affiliation(s)
- H C Dijkerman
- School of Psychology, University of St. Andrews, Scotland, UK.
16. Busettini C, Masson GS, Miles FA. Radial optic flow induces vergence eye movements with ultra-short latencies. Nature 1997; 390:512-5. PMID: 9394000. DOI: 10.1038/37359.
Abstract
An observer moving forwards through the environment experiences a radial pattern of image motion on each retina. Such patterns of optic flow are a potential source of information about the observer's rate of progress, direction of heading and time to reach objects that lie ahead. As the viewing distance changes there must be changes in the vergence angle between the two eyes so that both foveas remain aligned on the object of interest in the scene ahead. Here we show that radial optic flow can elicit appropriately directed (horizontal) vergence eye movements with ultra-short latencies (roughly 80 ms) in human subjects. Centrifugal flow, signalling forwards motion, increases the vergence angle, whereas centripetal flow decreases the vergence angle. These vergence eye movements are still evident when the observer's view of the flow pattern is restricted to the temporal hemifield of one eye, indicating that these responses do not result from anisotropies in motion processing but from a mechanism that senses the radial pattern of flow. We hypothesize that flow-induced vergence is but one of a family of rapid ocular reflexes, mediated by the medial superior temporal cortex, compensating for translational disturbance of the observer.
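The response the authors report can be caricatured as a detector for the radial component of the flow field: sum the projection of each flow vector onto the outward radial direction; a positive sum (centrifugal flow, forwards motion) should increase the vergence angle, a negative sum (centripetal flow) decrease it. A minimal sketch with our own naming, not the authors' analysis:

```python
import math

def vergence_command(points, flows):
    """Classify a flow field sampled at image points (x, y) with flow
    vectors (u, v): centrifugal flow (vectors point away from the
    center of motion) -> 'converge'; centripetal -> 'diverge'."""
    s = 0.0
    for (x, y), (u, v) in zip(points, flows):
        r = math.hypot(x, y)
        if r > 0:
            s += (x * u + y * v) / r   # radial component of the flow
    return "converge" if s > 0 else "diverge"

# forward self-motion: expansion about the image centre
pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
expand = [(0.2, 0), (0, 0.2), (-0.2, 0), (0, -0.2)]
print(vergence_command(pts, expand))  # converge
```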
Affiliation(s)
- C Busettini
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892, USA