1
Krabben K, Mann D, Lojanica M, Mueller D, Dominici N, van der Kamp J, Savelsbergh G. How wide should you view to fight? Establishing the size of the visual field necessary for grip fighting in judo. J Sports Sci 2021;40:236-247. [PMID: 34617503] [DOI: 10.1080/02640414.2021.1987721]
Abstract
Peripheral vision is often considered vital in (combat) sports, yet most experimental paradigms (e.g., eye tracking) ignore peripheral information or struggle to make inferences about the role of peripheral vision in an in-situ performance environment. This study aimed to determine where visual information is located in the peripheral field during an in-situ combat sports task. Eight advanced judokas competed in grip-fighting exchanges while wearing a mobile eye-tracker to locate gaze direction. Three-dimensional position data of the head and hands were tracked using a VICON motion capture system. Gaze analysis through automatic feature detection showed that participants predominantly fixated on their opponent's chest. Kinematic data were used to calculate the angles between the opponent's hands and the gaze-anchor point on the chest of the opponent. Results revealed a nonlinear relationship between visual field (VF) size and visibility of the hands, with athletes needing a VF of at least 30-40 degrees radius to simultaneously monitor both hands of the opponent most of the time. These findings hold implications for the regulation of Paralympic judo for athletes with vision impairment, suggesting that a less severe degree of impairment should be required to qualify than the current criterion of 20 degrees radius.
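The angular computation behind the reported 30-40 degree figure can be sketched in a few lines: given 3D positions of an eye, a gaze anchor, and another point, the point's eccentricity in the visual field is the angle between the gaze direction and the eye-to-point direction. The positions below are hypothetical stand-ins, not the study's motion-capture data.

```python
import math

def eccentricity_deg(eye, gaze_anchor, point):
    """Angle (degrees) between the gaze direction (eye -> anchor)
    and the direction from the eye to another point."""
    g = [a - e for a, e in zip(gaze_anchor, eye)]
    p = [a - e for a, e in zip(point, eye)]
    dot = sum(gi * pi for gi, pi in zip(g, p))
    ng = math.sqrt(sum(gi * gi for gi in g))
    npt = math.sqrt(sum(pi * pi for pi in p))
    # Clamp to guard against floating-point values just outside [-1, 1].
    cosang = max(-1.0, min(1.0, dot / (ng * npt)))
    return math.degrees(math.acos(cosang))

# Hypothetical positions (metres): eye at origin, gaze anchored on the
# opponent's chest 0.8 m ahead; hands displaced up and out from the chest.
eye = (0.0, 0.0, 0.0)
chest = (0.0, 0.0, 0.8)
left_hand = (0.35, 0.25, 0.7)
right_hand = (-0.40, 0.20, 0.7)

# Smallest visual-field radius that keeps both hands visible at once.
radius_needed = max(eccentricity_deg(eye, chest, h)
                    for h in (left_hand, right_hand))
both_visible_at_20 = radius_needed <= 20.0  # current Paralympic criterion
```

With these made-up positions, both hands fall outside a 20-degree radius but inside a 40-degree one, mirroring the pattern the study reports.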
Affiliation(s)
- Kai Krabben, David Mann, Maria Lojanica, Daniel Mueller, Nadia Dominici, John van der Kamp, Geert Savelsbergh
- Department of Human Movement Sciences, Faculty of Behaviour and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam, The Netherlands
2
Carter OL, Campbell TG, Liu GB, Wallis G. Contradictory influence of context on predominance during binocular rivalry. Clin Exp Optom 2004;87:153-62. [PMID: 15186206] [DOI: 10.1111/j.1444-0938.2004.tb03168.x]
Abstract
BACKGROUND: Binocular rivalry is a complex process characterised by alternations in perceptual suppression and dominance that result when two different images are presented simultaneously to the left and right eyes. It has recently been reported that the addition of contextual cues promotes the predominance of the context-consistent rivalry target. In contrast to Levelt's second proposition (1965), this effect has been found to result exclusively from an increase in the dominance phase duration, while the suppression phase duration remains unaffected.
METHODS: Human subjects were simultaneously presented with a small (2 degrees) disc consisting of gratings (four cycles per degree) of different orientations to the two eyes. Four experiments were conducted to ascertain the effects of background gratings and contextual colour information on target predominance and phase duration. For each of the four experimental conditions, the orientation and colour of the target gratings and surrounding contextual background were systematically manipulated.
RESULTS: In this study, we report an effect opposite to that of Levelt. Contradictory contextual information increases target predominance and phase duration during binocular rivalry. Our results demonstrate that it is possible to promote the dominance of the context-contradictory percept with co-linearity, co-chromaticity and orientation cues. In line with previous studies involving context, we find that this effect on predominance is due to an increase in the duration of the dominance rather than the suppression phase.
DISCUSSION: We discuss our findings with respect to those from previous studies and consider high- and low-level processes that may be responsible for these apparently 'contradictory' roles of context on binocular rivalry. In addition, we discuss how the apparent 'anti-Levelt' effect of context can be reinterpreted in a manner that brings it back in line with Levelt's second proposition, raising the question of whether 'suppressability' plays a disproportionately large role in determining the duration of perceptual phases in binocular rivalry.
Affiliation(s)
- Olivia L Carter
- Vision Touch and Hearing Research Centre, University of Queensland, 4072, Australia
3
Woldegiorgis BH, Lin CJ, Liang WZ. Impact of parallax and interpupillary distance on size judgment performances of virtual objects in stereoscopic displays. Ergonomics 2019;62:76-87. [PMID: 30235062] [DOI: 10.1080/00140139.2018.1526328]
Abstract
Effective interactions in both real and stereoscopic environments require accurate perceptions of size and position. This study investigated the effects of parallax and interpupillary distance (IPD) on size perception of virtual objects in widescreen stereoscopic environments. Twelve participants viewed virtual spherical targets displayed at seven different depth positions, based on seven parallax levels. A perceptual matching task using five circular plates of different sizes was used to report the size judgment. The results indicated that the virtual objects were perceived as larger and smaller than the corresponding theoretical sizes in negative and positive parallaxes, respectively. Similarly, the estimates from participants with small IPDs were greater than the predicted estimates. The findings of this study help explain human-factors issues such as the inaccurate depth judgments in virtual environments, where compression is widely reported, especially at farther egocentric distances. Furthermore, a multiple regression model was developed to describe how perceived size was affected by parallax and IPD. Practitioner summary: The study investigates the effects of parallax and interpupillary distance on size perception of virtual targets in a stereoscopic environment. Virtual objects were perceived as larger in negative and smaller in positive parallax. Also, size estimates were greater than the theoretical sizes for participants with smaller IPDs. A multiple-regression model explains the impact of parallax and measured IPD. Abbreviations: IPD, interpupillary distance; VR, virtual reality; HMD, head-mounted display; 2AFC, two-alternative forced choice; IOD, interocular distance; PD, pupillary distance; ANOVA, analysis of variance.
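The parallax geometry the study manipulates follows from similar triangles: for a viewer at distance D from the screen with interpupillary distance e, a screen parallax p places the fused virtual point at distance D·e/(e − p). A minimal sketch, with illustrative (not the study's) viewing-distance and IPD values:

```python
def perceived_distance(screen_distance_m, ipd_m, parallax_m):
    """Distance of the fused virtual point from the eyes, by similar
    triangles. Positive (uncrossed) parallax places the point behind
    the screen; negative (crossed) parallax places it in front.
    Requires parallax < IPD."""
    return screen_distance_m * ipd_m / (ipd_m - parallax_m)

D, ipd = 2.0, 0.063           # 2 m screen distance, 63 mm IPD
behind = perceived_distance(D, ipd, +0.02)   # uncrossed: beyond the screen
front = perceived_distance(D, ipd, -0.02)    # crossed: in front of the screen

# For the same parallax, a smaller IPD pushes the fused point further
# from the screen plane, one way the model can couple IPD to size error.
behind_small_ipd = perceived_distance(D, 0.055, +0.02)
```

This is only the geometric prediction; the study's regression additionally captures the perceptual over- and under-estimation around it.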
Affiliation(s)
- Bereket Haile Woldegiorgis
- Department of Industrial Management, National Taiwan University of Science and Technology, Taipei City, Taiwan
- Faculty of Mechanical and Industrial Engineering, Bahir Dar University, Bahir Dar, Ethiopia
- Chiuhsiang Joe Lin
- Department of Industrial Management, National Taiwan University of Science and Technology, Taipei City, Taiwan
- Wei-Zhe Liang
- National Chung-Shan Institute of Science and Technology, Taoyuan City, Taiwan
4
Kusano T, Shimono K. Slant of a Surface Shifts Binocular Visual Direction. Vision (Basel) 2018;2:20. [PMID: 31735884] [PMCID: PMC6836083] [DOI: 10.3390/vision2020020]
Affiliation(s)
- Tsutomu Kusano
- Faculty of Human Sciences, Kanagawa University, Yokohama-shi, Kanagawa Prefecture 221-8686, Japan
- Correspondence: Tel.: +81-90-5782-7308
- Koichi Shimono
- Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Koto-ku, Tokyo 135-8533, Japan
5
Elbaum T, Wagner M, Botzer A. Cyclopean, Dominant, and Non-dominant Gaze Tracking for Smooth Pursuit Gaze Interaction. J Eye Mov Res 2017;10. [PMID: 33828647] [PMCID: PMC7141094] [DOI: 10.16910/jemr.10.1.2]
Abstract
User-centered design questions in gaze interfaces have been explored in a multitude of empirical investigations. Interestingly, the question of which eye should serve as the input device has never been studied. We compared tracking accuracy between the "cyclopean" (i.e., midpoint between the eyes), dominant, and non-dominant eyes. In two experiments, participants performed tracking tasks. In Experiment 1, participants did not use a crosshair. Results showed that mean distance from the target was smaller with the cyclopean eye than with the dominant or non-dominant eye. In Experiment 2, participants controlled a crosshair with their cyclopean, dominant and non-dominant eyes intermittently and had to align the crosshair with the target. Overall tracking accuracy was highest with the cyclopean eye, yet similar between the cyclopean and dominant eyes in the second half of the experiment. From a theoretical viewpoint, our findings correspond with the cyclopean eye theory of egocentric direction and provide an indication of eye dominance, in accordance with the hemispheric laterality approach. From a practical viewpoint, we show that which eye to use as input should be a design consideration in gaze interfaces.
6
Metabolic Changes in the Bilateral Visual Cortex of the Monocular Blind Macaque: A Multi-Voxel Proton Magnetic Resonance Spectroscopy Study. Neurochem Res 2016;42:697-708. [PMID: 27909856] [DOI: 10.1007/s11064-016-2126-3]
Abstract
The metabolic changes that accompany adaptive plasticity in the visual cortex after early monocular visual loss remain unclear. In this study, we measured metabolic changes in the bilateral visual cortex of normal macaques (group A) and monocularly blind macaques (group B) using multi-voxel proton magnetic resonance spectroscopy (1H-MRS) 32 months after right optic nerve transection, to study this adaptive plasticity. We then compared the N-acetyl aspartate (NAA)/creatine (Cr), myoinositol (Ins)/Cr, choline (Cho)/Cr and Glx (glutamate + glutamine)/Cr ratios in the visual cortex between the two groups, as well as between the left and right visual cortex within groups A and B. Compared with group A, group B showed decreased NAA/Cr and Glx/Cr ratios in the bilateral visual cortex, most clearly in the right visual cortex, whereas the Ins/Cr and Cho/Cr ratios of group B were increased. All of these findings were further confirmed by immunohistochemical staining. In conclusion, differences in metabolic ratios between groups A and B can be detected in the visual cortex by multi-voxel 1H-MRS, which is valuable for investigating the adaptive plasticity of the monocularly blind macaque.
7
Mapp AP, Ono H, Khokhotva M. Hitting the Target: Relatively Easy, Yet Absolutely Difficult. Perception 2007;36:1139-51. [DOI: 10.1068/p5677]
Abstract
It is generally agreed that absolute-direction judgments require information about eye position, whereas relative-direction judgments do not. The source of this eye-position information, particularly during monocular viewing, is a matter of debate. It may be either binocular eye position, or the position of the viewing-eye only, that is crucial. Using more ecologically valid stimulus situations than the traditional LED in the dark, we performed two experiments. In experiment 1, observers threw darts at targets that were fixated either monocularly or binocularly. In experiment 2, observers aimed a laser gun at targets while fixating either the rear or the front gunsight monocularly, or the target either monocularly or binocularly. We measured the accuracy and precision of the observers' absolute- and relative-direction judgments. We found that (a) relative-direction judgments were precise and independent of phoria, and (b) monocular absolute-direction judgments were inaccurate, and the magnitude of the inaccuracy was predictable from the magnitude of phoria. These results confirm that relative-direction judgments do not require information about eye position. Moreover, they show that binocular eye-position information is crucial when judging the absolute direction of both monocular and binocular targets.
Affiliation(s)
- Alistair P Mapp, Hiroshi Ono, Mykola Khokhotva
- Centre for Vision Research, York University, Toronto, Ontario, Canada
8
Porac C. More Than a Left Hand. Laterality 2016. [DOI: 10.1016/b978-0-12-801239-0.00011-9]
9
On binocular vision: The geometric horopter and Cyclopean eye. Vision Res 2015;119:73-81. [PMID: 26548811] [DOI: 10.1016/j.visres.2015.11.001]
Abstract
We study geometric properties of horopters defined by the criterion of equality of angle. Our primary goal is to derive the precise geometry for anatomically correct horopters. When eyes fixate on points along a curve in the horizontal visual plane for which the vergence remains constant, this curve is the larger arc of a circle connecting the eyes' rotation centers. This isovergence circle is known as the Vieth-Müller circle. We show that, along the isovergence circular arc, there is an infinite family of horizontal horopters formed by circular arcs connecting the nodal points. These horopters intersect at the point of symmetric convergence. We prove that the family of 3D geometric horopters consists of two perpendicular components. The first component consists of the horizontal horopters parametrized by vergence, the point of the isovergence circle, and the choice of the nodal point location. The second component is formed by straight lines parametrized by vergence. Each of these straight lines is perpendicular to the visual plane and passes through the point of symmetric convergence. Finally, we evaluate the difference between the geometric horopter and the Vieth-Müller circle for typical near fixation distances and discuss its possible significance for depth discrimination and other related functions of vision that make use of disparity processing.
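The defining property of the isovergence (Vieth-Müller) circle, constant vergence along the arc, is the inscribed-angle theorem and can be checked numerically. The eye separation and circle centre below are illustrative values, not the paper's:

```python
import math

def vergence_at(point, left_eye, right_eye):
    """Angle (radians) subtended at `point` by the two eyes."""
    a1 = math.atan2(left_eye[1] - point[1], left_eye[0] - point[0])
    a2 = math.atan2(right_eye[1] - point[1], right_eye[0] - point[0])
    d = abs(a1 - a2)
    return min(d, 2 * math.pi - d)

# Eyes (rotation centres) 6.4 cm apart; a circle through both eyes
# with its centre on the midline.
half_iod = 0.032
left, right = (-half_iod, 0.0), (half_iod, 0.0)
cy = 0.30                            # circle centre on the y-axis (metres)
r = math.hypot(half_iod, cy)         # radius so the circle passes through both eyes

# Sample fixation points on the arc in front of the eyes.
points = [(r * math.cos(t), cy + r * math.sin(t))
          for t in (1.0, 1.3, 1.57, 1.8, 2.1)]
angles = [vergence_at(p, left, right) for p in points]
spread = max(angles) - min(angles)   # ~0: vergence is constant on the arc
```

By the inscribed-angle theorem, each sampled vergence equals half the central angle of the interocular chord, i.e. atan(half_iod / cy) here.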
10
Ono H, Saqib Y. The reference point for monocular visual direction can, sometimes, be one of the eyes rather than the cyclopean eye. Perception 2015;44:597-603. [PMID: 26422906] [DOI: 10.1068/p7934]
Abstract
We found that the imaginary line passing through two stimuli that points to an eye appears to do so when seen monocularly, which is consistent with Porterfield's axiom but inconsistent with Wells's proposition regarding visual direction. We also found that the imaginary line appears to point to the bridge of the nose when the near stimulus is seen binocularly and the far one is seen monocularly, which is consistent with Wells's proposition but inconsistent with Porterfield's axiom. We argue that these findings themselves do not necessarily vitiate the axiom or the proposition and that one should explore the different experimental conditions and hypothesize about the processes that might be involved.
11
Barendregt M, Harvey BM, Rokers B, Dumoulin SO. Transformation from a retinal to a cyclopean representation in human visual cortex. Curr Biol 2015;25:1982-7. [PMID: 26144967] [DOI: 10.1016/j.cub.2015.06.003]
Abstract
We experience our visual world as seen from a single viewpoint, even though our two eyes receive slightly different images. One role of the visual system is to combine the two retinal images into a single representation of the visual field, sometimes called the cyclopean image [1]. Conventional terminology, i.e. retinotopy, implies that the topographic organization of visual areas is maintained throughout visual cortex [2]. However, following the hypothesis that a transformation occurs from a representation of the two retinal images (retinotopy) to a representation of a single cyclopean image (cyclopotopy), we set out to identify the stage in visual processing at which this transformation occurs in the human brain. Using binocular stimuli, population receptive field mapping (pRF), and ultra-high-field (7 T) fMRI, we find that responses in striate cortex (V1) best reflect stimulus position in the two retinal images. In extrastriate cortex (from V2 to LO), on the other hand, responses better reflect stimulus position in the cyclopean image. These results pinpoint the location of the transformation from a retinal to a cyclopean representation and contribute to an understanding of the transition from sensory to perceptual stimulus space in the human brain.
Affiliation(s)
- Martijn Barendregt
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
- Ben M Harvey
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Faculty of Psychology and Education Sciences, University of Coimbra, 3001-802 Coimbra, Portugal
- Bas Rokers
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
- Serge O Dumoulin
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands
12
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015;113:1377-99. [PMID: 25475344] [DOI: 10.1152/jn.00273.2014]
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli that nonetheless call for identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
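The simplest ingredient of such a transformation, compensating retinal motion for ocular torsion such as head roll-induced counterroll, reduces to a 2D rotation. A minimal sketch (the angle and velocity values are illustrative, not the model's parameters):

```python
import math

def rotate_2d(vel, angle_deg):
    """Rotate a 2D velocity vector by a torsion angle (degrees)."""
    t = math.radians(angle_deg)
    vx, vy = vel
    return (vx * math.cos(t) - vy * math.sin(t),
            vx * math.sin(t) + vy * math.cos(t))

# A purely horizontal spatial target motion, viewed with 10 deg of
# ocular counterroll, lands on the retina rotated by -10 deg; undoing
# that rotation recovers the spatial direction a pursuit command needs.
spatial_motion = (10.0, 0.0)                  # deg/s, space-fixed
retinal = rotate_2d(spatial_motion, -10.0)    # what the retina sees
command = rotate_2d(retinal, 10.0)            # torsion-compensated output
```

In the network this compensation emerges from gain modulation rather than an explicit rotation matrix; the sketch only illustrates the geometry being solved.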
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
13
Radhakrishnan A, Sawides L, Dorronsoro C, Peli E, Marcos S. Single neural code for blur in subjects with different interocular optical blur orientation. J Vis 2015;15:15. [PMID: 26114678] [PMCID: PMC4484355] [DOI: 10.1167/15.8.15]
Abstract
The ability of the visual system to compensate for differences in blur orientation between eyes is not well understood. We measured the orientation of the internal blur code in both eyes of the same subject monocularly by presenting pairs of images blurred with real ocular point spread functions (PSFs) of similar blur magnitude but varying in orientations. Subjects assigned a level of confidence to their selection of the best perceived image in each pair. Using a classification-images-inspired paradigm and applying a reverse correlation technique, a classification map was obtained from the weighted averages of the PSFs, representing the internal blur code. Positive and negative neural PSFs were obtained from the classification map, representing the neural blur for best and worse perceived blur, respectively. The neural PSF was found to be highly correlated in both eyes, even for eyes with different ocular PSF orientations (rPos = 0.95; rNeg = 0.99; p < 0.001). We found that in subjects with similar and with different ocular PSF orientations between eyes, the orientation of the positive neural PSF was closer to the orientation of the ocular PSF of the eye with the better optical quality (average difference was ∼10°), while the orientation of the positive and negative neural PSFs tended to be orthogonal. These results suggest a single internal code for blur with orientation driven by the orientation of the optical blur of the eye with better optical quality.
14
Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012;7:e41241. [PMID: 22815979] [PMCID: PMC3397995] [DOI: 10.1371/journal.pone.0041241]
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
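The vergence signal such a network receives constrains target depth through simple triangulation: with interocular distance IOD and a symmetric vergence angle γ, the eyes and fixation point form an isosceles triangle, so fixation distance is (IOD/2)/tan(γ/2). A minimal sketch with illustrative values (not the paper's simulation parameters):

```python
import math

def depth_from_vergence(vergence_deg, iod_m=0.064):
    """Fixation distance implied by a symmetric vergence angle:
    d = (IOD / 2) / tan(vergence / 2)."""
    return (iod_m / 2.0) / math.tan(math.radians(vergence_deg) / 2.0)

near = depth_from_vergence(7.3)   # a large vergence angle: roughly 0.5 m
far = depth_from_vergence(1.8)    # a small vergence angle: roughly 2 m
```

Note the nonlinearity: vergence changes carry much less depth information at far distances, which is one reason gain-modulated combination with eye and head signals matters for reach planning.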
15
Ono H, Wade NJ. Two historical strands in studying visual direction. Jpn Psychol Res 2012. [DOI: 10.1111/j.1468-5884.2011.00506.x]
16
Hoover AEN, Harris LR, Steeves JKE. Sensory compensation in sound localization in people with one eye. Exp Brain Res 2011;216:565-74. [DOI: 10.1007/s00221-011-2960-0]
17
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011;34:309-31. [PMID: 21456958] [DOI: 10.1146/annurev-neuro-061010-113749]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3
18
Shimono K, Higashiyama A. Dual-Egocentre Hypothesis on Angular Errors in Visually Directed Pointing. Perception 2011;40:805-21. [DOI: 10.1068/p6604]
Abstract
We examined the hypothesis that angular errors in visually directed pointing, in which an unseen target is pointed to after its direction has been seen, are attributed to the difference between the locations of the visual and kinesthetic egocentres. Experiment 1 showed that in three of four cases, angular errors in visually directed pointing equaled those in kinesthetically directed pointing, in which a visual target was pointed to after its direction had been felt. Experiment 2 confirmed the results of experiment 1 for the targets at two different egocentric distances. Experiment 3 showed that when the kinesthetic egocentre was used as the reference of direction, angular errors in visually directed pointing equaled those in visually directed reaching, in which an unseen target is reached after its location has been seen. These results suggest that in the visually and the kinesthetically directed pointing, the egocentric directions represented in the visual space are transferred to the kinesthetic space and vice versa.
Affiliation(s)
- Koichi Shimono
- Department of Logistics & Information Sciences, Tokyo University of Marine Science and Technology, Ettchujima 2-1-6, Koto-ku, Tokyo 135-8533, Japan
- Atsuki Higashiyama
- Department of Psychology, Ritsumeikan University, Tojiin Kitamachi 56-1, Kita-ku, Kyoto 603-8577, Japan
19
Sridhar D, Bedell HE. Relative contributions of the two eyes to perceived egocentric visual direction in normal binocular vision. Vision Res 2011;51:1075-85. [PMID: 21371491] [PMCID: PMC3092072] [DOI: 10.1016/j.visres.2011.02.023]
Abstract
Perceived egocentric visual direction (EVD) is based on the sensed position of the eyes in the orbit and on oculocentric (eye-centered) visual direction (OVD). Previous reports indicate that in some subjects eye-position information from the two eyes contributes unequally to perceived EVD. Findings from other studies indicate that retinal information from the two eyes may not always contribute equally to perceived OVD. The goal of this study was to assess whether these two sources of information covary similarly within the same individuals. Open-loop pointing responses to an isolated target presented randomly at several horizontal locations were collected from 13 subjects during different magnitudes of asymmetric vergence, to estimate the contribution of position information from each eye to perceived EVD. For the same subjects, the relative contribution of each eye's retinal information was estimated from the direction at which a horizontally or vertically disparate target with different interocular contrast or luminance ratios appeared aligned with a non-disparate target. The results show that the eye-position and retinal information vary similarly in most subjects, which is consistent with a modified version of Hering's law of visual direction.
Collapse
Affiliation(s)
- Deepika Sridhar
- College of Optometry, University of Houston. 505 J Armistead Bldg, Houston, TX 77204-2020, USA
| | - Harold E. Bedell
- College of Optometry, University of Houston. 505 J Armistead Bldg, Houston, TX 77204-2020, USA
- Center for NeuroEngineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA,
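The weighted binocular combination this abstract invokes can be illustrated numerically. A minimal sketch, assuming a simple linear form of the modified Hering's law in which perceived egocentric visual direction is a weighted average of the two eyes' directions; the function name, the weights, and the example angles are hypothetical and not taken from the study:

```python
def perceived_direction(theta_left, theta_right, w_left=0.5, w_right=0.5):
    """Weighted version of Hering's law of visual direction: perceived
    egocentric visual direction (degrees) as a weighted average of the two
    eyes' directions. Classical Hering's law is the equal-weight case."""
    total = w_left + w_right
    return (w_left * theta_left + w_right * theta_right) / total

# Symmetric observer: a target 4 deg left in the left eye and 4 deg right
# in the right eye is perceived straight ahead.
print(perceived_direction(-4.0, 4.0))  # 0.0
# An observer whose right eye contributes more heavily perceives the same
# target shifted toward the right eye's direction.
print(perceived_direction(-4.0, 4.0, w_left=0.3, w_right=0.7))
```

With equal weights this reduces to the classical prediction; unequal weights model the unequal contributions reported in some subjects.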
21
Harris JM, Wilcox LM. The role of monocularly visible regions in depth and surface perception. Vision Res 2009; 49:2666-85. [PMID: 19577589] [DOI: 10.1016/j.visres.2009.06.021]
Abstract
The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.
Affiliation(s)
- Julie M Harris
- School of Psychology, University of St. Andrews, South St., St. Andrews, KY169JP Scotland, United Kingdom.
22
Ono H, Wade NJ, Lillakas L. Binocular vision: defining the historical directions. Perception 2009; 38:492-507. [DOI: 10.1068/p6130]
Abstract
Ever since Kepler described the image-forming properties of the eye (400 years ago) there has been a widespread belief, which remains to this day, that an object seen with one eye is always seen where it is. Predictions made by Ptolemy in the first century, Alhazen in the eleventh, and Wells in the eighteenth, and supported by Towne, Hering, and LeConte in the nineteenth century, however, are contrary to this claimed veridicality. We discuss how the erroneous idea persisted among eighteenth- and nineteenth-century British researchers, particularly Porterfield, Brewster, and Wheatstone, and also why observations made by Wells were neither understood nor appreciated. Finally, we discuss recent data, obtained with a new method, that further support Wells's predictions and show that a distinction between headcentric and relative direction tasks is needed to appreciate the predictions.
Affiliation(s)
- Hiroshi Ono
- Department of Psychology, York University, Toronto, Ontario M3J 1P3, Canada
- Nicholas J Wade
- School of Psychology, University of Dundee, Dundee DD1 4HN, Scotland, UK
23
Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2009; 19:1372-93. [PMID: 18842662] [DOI: 10.1093/cercor/bhn177]
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
Collapse
Affiliation(s)
- Gunnar Blohm
- Centre for Vision Research, York University, Toronto, Ontario, Canada
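The architecture described above can be sketched as a plain feed-forward pass. This is a structural illustration only: the layer sizes, the tanh nonlinearity, and the random initialization are assumptions made for the sketch, not the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights and zero biases for an untrained sketch.
    return rng.normal(0.0, 0.1, (n_out, n_in)), np.zeros(n_out)

# Inputs: 2D retinal hand position, 2D retinal target position,
# 3D eye orientation, 3D head orientation -> 10 input units (assumed sizes).
n_in, n_hidden1, n_hidden2, n_out = 10, 40, 40, 3

W1, b1 = layer(n_in, n_hidden1)
W2, b2 = layer(n_hidden1, n_hidden2)
W3, b3 = layer(n_hidden2, n_out)

def reach_vector(x):
    """4-layer feed-forward pass: input -> two hidden layers -> 3D reach vector."""
    h1 = np.tanh(W1 @ x + b1)  # intermediate units: the layers analyzed for RFs
    h2 = np.tanh(W2 @ h1 + b2)
    return W3 @ h2 + b3        # linear output: the reach vector

x = np.concatenate([[0.1, -0.2], [0.3, 0.0],
                    [0.0, 0.05, 0.0], [0.02, 0.0, 0.0]])
print(reach_vector(x).shape)  # (3,)
```

The gain-field-like modulations and shifting receptive fields reported in the paper are emergent properties of training such a network on the visuomotor task; this untrained pass shows only the input/output plumbing.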
24
Abstract
In presbyopia, patients can no longer obtain clear vision at distance and near. Monovision is a method of correcting presbyopia where one eye is focussed for distance vision and the other for near. Monovision is a fairly common method of correcting presbyopia with contact lenses and has received renewed interest with the increase in refractive surgery. The present paper is a review of the literature on monovision. The success rate of monovision in adapted contact lens wearers is 59-67%. The main limitations are problems with suppressing the blurred image when driving at night and the need for a third focal length, for example with computer screens at intermediate distances. Stereopsis is impaired in monovision, but most patients do not seem to notice this. These limitations highlight the need to take account of occupational factors. Monovision could cause a binocular vision anomaly to decompensate, so the pre-fitting screening should include an assessment of orthoptic function. Various methods have been used to determine which eye should be given the distance vision contact lens and the literature on tests of ocular dominance is reviewed. It is concluded that tests of blur suppression are most likely to be relevant, but that ocular dominance is not fixed but is rather a fluid, adaptive, phenomenon in most patients. Suitable patients can often be given trial lenses that allow them to experiment with monovision in real world situations and this can be a useful way of revealing the preferred eye for each distance. Of course, no patient should drive or operate machinery until successfully adapted to monovision. Surgically induced monovision is less easily reversed than contact lens-induced monovision, and is only appropriate after a successful trial of monovision with contact lenses.
Affiliation(s)
- Bruce J W Evans
- Neville Chappell Research Clinic, Institute of Optometry, 56-62 Newington Causeway, London SE1 6DS, UK.
25
Ono H, Mapp AP, Mizushina H. The cyclopean illusion unleashed. Vision Res 2007; 47:2067-75. [PMID: 17574645] [DOI: 10.1016/j.visres.2007.03.001]
Abstract
The cyclopean illusion is the apparent lateral shift of stationary stimuli on a visual axis that occurs when vergence changes. This illusion is predictable from the rules of visual direction. There are three stimulus situations reported in the literature, however, in which the illusion does not occur. In the three experiments reported here we examine those stimulus situations. Experiment 1 showed that an afterimage seen on a stimulus moving on the visual axis does not produce the illusion, as reported in the literature, but that an afterimage seen on a screen does. Experiment 2 showed that the illusion occurs for an intermittently presented stimulus, in contrast to what has been reported previously. Experiment 3 showed that a monocular stimulus presented against a random-dot background produced the illusion, also in contrast to what has been reported. The results were consistent with the rules of visual direction.
Affiliation(s)
- Hiroshi Ono
- Department of Psychology and Centre for Vision Research, York University, Toronto, Ont., Canada M3J 1P3.
26
Shimono K, Tam WJ, Ono H. Apparent motion of monocular stimuli in different depth planes with lateral head movements. Vision Res 2007; 47:1027-35. [PMID: 17337029] [DOI: 10.1016/j.visres.2007.01.012]
Abstract
A stationary monocular stimulus appears to move concomitantly with lateral head movements when it is embedded in a stereogram representing two front-facing rectangular areas, one above the other at two different distances. In Experiment 1, we found that the extent of perceived motion of the monocular stimulus covaried with the amplitude of head movement and the disparity between the two rectangular areas (composed of random dots). In Experiment 2, we found that the extent of perceived motion of the monocular stimulus was reduced compared to that in Experiment 1 when the rectangular areas were defined only by an outline rather than by random dots. These results are discussed using the hypothesis that a monocular stimulus takes on features of the binocular surface area in which it is embedded and is perceived as though it were a binocular stimulus with regard to its visual direction and visual depth.
Affiliation(s)
- K Shimono
- Department of Marine Technology, Tokyo University of Marine Science and Technology, Ettchujima, Tokyo 135-8533, Japan.
27
Khokhotva M, Ono H, Mapp AP. The cyclopean eye is relevant for predicting visual direction. Vision Res 2005; 45:2339-45. [PMID: 15921718] [DOI: 10.1016/j.visres.2005.04.007]
Abstract
Wells-Hering's laws summarize how we process direction and predict that monocular stimuli appear displaced with respect to the viewer, but not with respect to other seen objects. Erkelens and van Ee [Erkelens, C. J., & van Ee, R. (2002). The role of the cyclopean eye in vision: sometimes inappropriate, always irrelevant. Vision Research, 42, 1157-1163] criticized this view and claimed that there is no perceptual displacement of these stimuli. We challenge their claim and improve on shortcomings of past studies. LEDs were presented monocularly to observers, without their knowledge of which eye was being stimulated. Viewing distance was 9-10 cm; fixation distance was 30 cm. Observers reported the perceived relative and absolute directions of the monocular stimuli. Our results are consistent with Wells-Hering's laws.
Affiliation(s)
- Mykola Khokhotva
- Centre for Vision Research, York University, 4700 Keele Street, Toronto, Ontario, Canada M3J1P3
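The displacement that Wells-Hering's laws predict for a monocular target can be worked through with simple geometry: headcentric direction is judged from the cyclopean eye midway between the two eyes, so a near target lying on one eye's axis appears displaced relative to the viewer. A sketch under assumed values; the 6.4 cm interocular distance and the coordinates are illustrative, not the study's:

```python
import math

def cyclopean_direction(target_xy):
    """Headcentric visual direction (deg) of a target as predicted by
    Wells-Hering's laws: judged from the cyclopean eye at the origin,
    midway between the eyes, whichever eye views the target.
    Coordinates in cm: x lateral (right positive), y straight ahead."""
    x, y = target_xy
    return math.degrees(math.atan2(x, y))

def monocular_direction(target_xy, eye_x):
    """Direction of the same target measured from a single eye at lateral
    offset eye_x; differs from the cyclopean direction for near targets."""
    x, y = target_xy
    return math.degrees(math.atan2(x - eye_x, y))

# A target 30 cm straight ahead of the left eye (interocular distance 6.4 cm):
target = (-3.2, 30.0)
print(monocular_direction(target, -3.2))  # 0.0 deg on the left eye's axis
print(cyclopean_direction(target))        # about -6.1 deg: appears shifted left
```

This is the predicted displacement "with respect to the viewer" that Erkelens and van Ee disputed and that the experiments above set out to measure.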
28
Abstract
Ocular dominance manifests itself in tests that contain stereo-objects with a disparity beyond Panum's area, e.g., in pointing a finger. These tests force subjects to decide in favour of one or the other eye. In contrast, ocular prevalence is determined using stereo-targets imaged within Panum's areas. These tests allow a graded quantification of the balance between the eyes. Here we present the computer-based Freiburg Ocular Prevalence Test, in which stereo-disparate targets have to be aligned, and compare it with the Haase Stereo-balance Test, which requires an estimation of the horizontal distance between stationary stereo-disparate objects. In addition, we compare ocular prevalence with ocular dominance.
METHODS: (1) We measured the influence of a neutral-grey filter in front of one eye to assess the suitability of the Freiburg and the Haase Tests in revealing graded amounts of ocular prevalence. (2) Twenty subjects with equal vision in their two eyes underwent the Freiburg and the Haase Tests for ocular prevalence, and Parson's Monoptoscope Test for ocular dominance.
RESULTS: (1) In both the Freiburg and the Haase Tests, the neutral-grey filter shifted ocular prevalence by about 50%. (2) An ocular prevalence of more than 10% occurred in 13 of the 20 subjects using the Freiburg Test, and in 14 using the Haase Test. On average, ocular prevalence was 24.1 ± 3.8% in the Freiburg and 32.0 ± 8.2% in the Haase Test. The dominant eye coincided with the prevalent eye in 15 of the 20 subjects.
DISCUSSION: The effect of the neutral-grey filter indicated that both the Freiburg and the Haase Tests can be used to measure fractions of ocular prevalence, although the Freiburg Test has higher reproducibility. Spontaneous ocular prevalence occurs frequently in persons with equal vision in their two eyes, suggesting that ocular prevalence does not represent a condition that requires treatment. Rather, partial suppression of one eye, the correlate of ocular prevalence, may play a physiological role in that it helps to disregard double images at stereo-disparities close to the limits of Panum's area.
29
Mapp AP, Ono H, Barbeito R. What does the dominant eye dominate? A brief and somewhat contentious review. Percept Psychophys 2003; 65:310-7. [PMID: 12713246] [DOI: 10.3758/bf03194802]
Abstract
We examine a set of implicit and explicit claims about the concept of eye dominance that have been made over the years and note that the new literature on eye dominance does not reflect the old literature from the first half of the last century. We argue that the visual and oculomotor function of the dominant eye (defined by such criteria as asymmetry in acuity, rivalry, or sighting) remains unknown and that the usefulness of the concept for understanding its function is yet to be determined. We suggest that the sighting-dominant eye is the eye used for monocular tasks and has no unique functional role in vision.