1. Xia Z, Zhang Y, Ma F, Cheng C, Hu F. Effect of spatial distortions in head-mounted displays on visually induced motion sickness. Opt Express 2023;31:1737-1754. [PMID: 36785202] [DOI: 10.1364/oe.478455]
Abstract
Incomplete optical distortion correction in virtual reality head-mounted displays (VR HMDs) leads to spatial dynamic distortion, which is a potential cause of visually induced motion sickness (VIMS). A perception experiment with three spatial distortion levels was designed to investigate this, with the subjective Simulator Sickness Questionnaire (SSQ), a five-scale VIMS level rating, and objective postural instability adopted as evaluation metrics. The results show that spatial distortion level has a significant effect on the increments of all metrics (p < 0.05): as the spatial distortion level drops off, the increments in VIMS symptoms decrease. The study highlights the importance of perfect spatial distortion correction in VR HMDs for eliminating the potential VIMS aggravation effect.
2. Williford K, Bennequin D, Friston K, Rudrauf D. The Projective Consciousness Model and Phenomenal Selfhood. Front Psychol 2018;9:2571. [PMID: 30618988] [PMCID: PMC6304424] [DOI: 10.3389/fpsyg.2018.02571]
Abstract
We summarize our recently introduced Projective Consciousness Model (PCM) (Rudrauf et al., 2017) and relate it to outstanding conceptual issues in the theory of consciousness. The PCM combines a projective geometrical model of the perspectival phenomenological structure of the field of consciousness with a variational Free Energy minimization model of active inference, yielding an account of the cybernetic function of consciousness, viz., the modulation of the field's cognitive and affective dynamics for the effective control of embodied agents. The geometrical and active inference components are linked via the concept of projective transformation, which is crucial to understanding how conscious organisms integrate perception, emotion, memory, reasoning, and perspectival imagination in order to control behavior, enhance resilience, and optimize preference satisfaction. The PCM makes substantive empirical predictions and fits well into a (neuro)computationalist framework. It also helps us to account for aspects of subjective character that are sometimes ignored or conflated: pre-reflective self-consciousness, the first-person point of view, the sense of mineness or ownership, and social self-consciousness. We argue that the PCM, though still in development, offers us the most complete theory to date of what Thomas Metzinger has called "phenomenal selfhood."
Affiliation(s)
- Kenneth Williford, Department of Philosophy and Humanities, University of Texas at Arlington, Arlington, TX, United States
- Daniel Bennequin, Department of Mathematics, Mathematics Institute of Jussieu–Paris Rive Gauche, University of Paris 7, Paris, France
- Karl Friston, Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
- David Rudrauf, Faculty of Psychology and Education Sciences, Section of Psychology, Swiss Center for Affective Sciences, Centre Universitaire d’Informatique, University of Geneva, Geneva, Switzerland
3. Tugac N, Gonzalez D, Noguchi K, Niechwiej-Szwedo E. The role of somatosensory input in target localization during binocular and monocular viewing while performing a high precision reaching and placement task. Exp Eye Res 2018;183:76-83. [PMID: 30125540] [DOI: 10.1016/j.exer.2018.08.013]
Abstract
Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is possible that other sensory inputs could be used to obtain reliable depth information when binocular vision is not available. However, it is currently unknown whether depth information from another modality improves target localization in depth during action execution. Therefore, the goal of this study was to assess whether somatosensory input improves target localization during the performance of a precision placement task. Visually normal young adults (n = 15) performed a bead-threading task during binocular and monocular viewing in two experimental conditions where needle location was specified by 1) vision only, or 2) vision and somatosensory input, which was provided by the non-dominant limb. Performance on the task was assessed using spatial and temporal kinematic measures. In accordance with the hypothesis, results showed that the interval spent placing the bead on the needle was significantly shorter during monocular viewing when somatosensory input was available, in comparison to the vision-only condition. In contrast, results showed no evidence that somatosensory input about the needle location affects trajectory control. These findings demonstrate that the central nervous system relies predominantly on visual input during reach execution; however, somatosensory input can be used to facilitate performance of the precision placement task.
Affiliation(s)
- Naime Tugac, Department of Kinesiology, University of Waterloo, Waterloo, Canada
- David Gonzalez, Department of Kinesiology, University of Waterloo, Waterloo, Canada
- Kimihiro Noguchi, Department of Mathematics, Western Washington University, Bellingham, USA
4. Elbaum T, Wagner M, Botzer A. Cyclopean, Dominant, and Non-dominant Gaze Tracking for Smooth Pursuit Gaze Interaction. J Eye Mov Res 2017;10. [PMID: 33828647] [PMCID: PMC7141094] [DOI: 10.16910/jemr.10.1.2]
Abstract
User-centered design questions in gaze interfaces have been explored in a multitude of empirical investigations. Interestingly, the question of which eye should serve as the input device has never been studied. We compared tracking accuracy among the “cyclopean” eye (i.e., the midpoint between the eyes), the dominant eye, and the non-dominant eye. In two experiments, participants performed tracking tasks. In Experiment 1, participants did not use a crosshair. Results showed that mean distance from the target was smaller with the cyclopean eye than with the dominant or non-dominant eye. In Experiment 2, participants controlled a crosshair with their cyclopean, dominant, and non-dominant eye intermittently and had to align the crosshair with the target. Overall tracking accuracy was highest with the cyclopean eye, yet similar between the cyclopean and dominant eye in the second half of the experiment. From a theoretical viewpoint, our findings correspond with the cyclopean-eye theory of egocentric direction and provide an indication of eye dominance effects, in accordance with the hemispheric laterality approach. From a practical viewpoint, we show that which eye to use as input should be a design consideration in gaze interfaces.
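The “cyclopean” signal this abstract compares is simply the midpoint of the left- and right-eye gaze samples, scored by distance from the target. A minimal sketch; all coordinates are invented for illustration:

```python
import numpy as np

# Gaze samples for each eye and a target, in screen pixels (invented).
left = np.array([512.0, 300.0])    # left-eye gaze sample
right = np.array([532.0, 304.0])   # right-eye gaze sample
target = np.array([520.0, 300.0])  # target position

cyclopean = (left + right) / 2.0            # midpoint between the eyes
error = np.linalg.norm(cyclopean - target)  # tracking error (distance)
```

The same per-sample distance, averaged over a trial, gives the “mean distance from target” metric the study reports.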
5. Mapp AP, Ono H, Khokhotva M. Hitting the Target: Relatively Easy, Yet Absolutely Difficult. Perception 2016;36:1139-51. [DOI: 10.1068/p5677]
Abstract
It is generally agreed that absolute-direction judgments require information about eye position, whereas relative-direction judgments do not. The source of this eye-position information, particularly during monocular viewing, is a matter of debate: it may be either binocular eye position or the position of the viewing eye only that is crucial. Using more ecologically valid stimulus situations than the traditional LED in the dark, we performed two experiments. In experiment 1, observers threw darts at targets that were fixated either monocularly or binocularly. In experiment 2, observers aimed a laser gun at targets while fixating either the rear or the front gunsight monocularly, or the target either monocularly or binocularly. We measured the accuracy and precision of the observers' absolute- and relative-direction judgments. We found that (a) relative-direction judgments were precise and independent of phoria, and (b) monocular absolute-direction judgments were inaccurate, and the magnitude of the inaccuracy was predictable from the magnitude of phoria. These results confirm that relative-direction judgments do not require information about eye position. Moreover, they show that binocular eye-position information is crucial when judging the absolute direction of both monocular and binocular targets.
Affiliation(s)
- Alistair P Mapp, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hiroshi Ono, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Mykola Khokhotva, Centre for Vision Research, York University, Toronto, Ontario, Canada
6. Ono H, Saqib Y. The reference point for monocular visual direction can, sometimes, be one of the eyes rather than the cyclopean eye. Perception 2015;44:597-603. [PMID: 26422906] [DOI: 10.1068/p7934]
Abstract
We found that the imaginary line passing through two stimuli that points to an eye appears to do so when seen monocularly, which is consistent with Porterfield's axiom but inconsistent with Wells's proposition regarding visual direction. We also found that the imaginary line appears to point to the bridge of the nose when the near stimulus is seen binocularly and the far one is seen monocularly, which is consistent with Wells's proposition but inconsistent with Porterfield's axiom. We argue that these findings themselves do not necessarily vitiate the axiom or the proposition and that one should explore the different experimental conditions and hypothesize about the processes that might be involved.
7. Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015;113:1377-99. [PMID: 25475344] [DOI: 10.1152/jn.00273.2014]
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli that nevertheless give rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
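The geometric core of the transformation this network learns, remapping retinal motion into spatial coordinates given the eye's 3D orientation, can be sketched for the simplest case mentioned above, torsion from head-roll-induced ocular counterroll. The angle and velocity vector are illustrative assumptions, not values from the paper:

```python
import numpy as np

def retinal_to_spatial(retinal_vel, torsion_deg):
    """Rotate a 2D retinal velocity vector into head-fixed spatial
    coordinates when the eye is rolled (torted) by torsion_deg.
    A toy stand-in for the network's full 3D transformation."""
    t = np.deg2rad(torsion_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ np.asarray(retinal_vel, dtype=float)

# With 90 deg of torsion (exaggerated for clarity), purely horizontal
# retinal slip corresponds to purely vertical motion in space:
spatial = retinal_to_spatial([1.0, 0.0], 90.0)
```

A pursuit controller that ignored torsion would drive the eye along the raw retinal vector and miss the target; the network's gain modulation implicitly applies this kind of rotation.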
Affiliation(s)
- T Scott Murdison, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq, ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre, ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
8. Carey DP, Hutchinson CV. Looking at eye dominance from a different angle: is sighting strength related to hand preference? Cortex 2012;49:2542-52. [PMID: 23357202] [DOI: 10.1016/j.cortex.2012.11.011]
Abstract
Sighting dominance (the behavioural preference for one eye over the other under monocular viewing conditions) has traditionally been thought of as a robust individual trait. However, Khan and Crawford (2001) have shown that, under certain viewing conditions, eye preference reverses as a function of horizontal gaze angle. Remarkably, the reversal of sighting from one eye to the other depends on which hand is used to reach out and grasp the target. Their procedure provides an ideal way to measure the strength of monocular preference for sighting, which may be related to other indicators of hemispheric specialisation for speech, language and motor function. Therefore, we hypothesised that individuals with consistent side preferences (e.g., right hand, right eye) should have more robust sighting dominance than those with crossed lateral preferences. To test this idea, we compared strength of eye dominance in individuals who are consistently right or left sided for hand and foot preference with those who are not. We also modified their procedure in order to minimise a potential image size confound, suggested by Banks et al. (2004) as an explanation of Khan and Crawford's results. We found that the sighting dominance switch occurred at similar eccentricities when we controlled for effects of hand occlusion and target size differences. We also found that sighting dominance thresholds change predictably with the hand used. However, we found no evidence for relationships between strength of hand preference as assessed by questionnaire or by pegboard performance and strength of sighting dominance. Similarly, participants with consistent hand and foot preferences did not show stronger eye preference as assessed using the Khan and Crawford procedure. These data are discussed in terms of indirect relationships between sighting dominance, hand preference and cerebral specialisation for language and motor control.
Affiliation(s)
- David P Carey, School of Psychology, Bangor University, Gwynedd LL57 2AS, UK
9. Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012;7:e41241. [PMID: 22815979] [PMCID: PMC3397995] [DOI: 10.1371/journal.pone.0041241]
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
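The readout stage described here, a population code for depth decoded by an optimal linear estimator, can be sketched with Gaussian depth tuning. Unit count, tuning width, and the training grid are invented for illustration and are not the paper's values:

```python
import numpy as np

prefs = np.linspace(0.2, 2.0, 50)  # preferred depths of 50 units (m)
sigma = 0.3                        # tuning width (m), illustrative

def population(depth):
    """Activity of the whole population for a target at this depth
    (Gaussian tuning curves, the toy 3rd layer)."""
    return np.exp(-(depth - prefs) ** 2 / (2 * sigma ** 2))

# Fit the optimal linear estimator by least squares on known depths,
# mimicking the linear read-out (4th) layer.
train = np.linspace(0.2, 2.0, 200)
acts = np.stack([population(d) for d in train])
weights, *_ = np.linalg.lstsq(acts, train, rcond=None)

decoded = float(population(1.1) @ weights)  # decode a probe depth
```

Because the readout is linear, any gain modulation of the population by eye, head, or vergence signals shifts the decoded depth, which is the mechanism the network analysis points to.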
10. Ono H, Wade NJ. Two historical strands in studying visual direction. Jpn Psychol Res 2012. [DOI: 10.1111/j.1468-5884.2011.00506.x]
11. Shimono K, Higashiyama A. Dual-Egocentre Hypothesis on Angular Errors in Visually Directed Pointing. Perception 2011;40:805-21. [DOI: 10.1068/p6604]
Abstract
We examined the hypothesis that angular errors in visually directed pointing, in which an unseen target is pointed to after its direction has been seen, are attributed to the difference between the locations of the visual and kinesthetic egocentres. Experiment 1 showed that in three of four cases, angular errors in visually directed pointing equaled those in kinesthetically directed pointing, in which a visual target was pointed to after its direction had been felt. Experiment 2 confirmed the results of experiment 1 for the targets at two different egocentric distances. Experiment 3 showed that when the kinesthetic egocentre was used as the reference of direction, angular errors in visually directed pointing equaled those in visually directed reaching, in which an unseen target is reached after its location has been seen. These results suggest that in the visually and the kinesthetically directed pointing, the egocentric directions represented in the visual space are transferred to the kinesthetic space and vice versa.
Affiliation(s)
- Koichi Shimono, Department of Logistics & Information Sciences, Tokyo University of Marine Science and Technology, Ettchujima 2-1-6, Koto-ku, Tokyo 135-8533, Japan
- Atsuki Higashiyama, Department of Psychology, Ritsumeikan University, Tojiin Kitamachi 56-1, Kita-ku, Kyoto 603-8577, Japan
12. Ono H, Wade NJ, Lillakas L. Binocular Vision: Defining the Historical Directions. Perception 2009;38:492-507. [DOI: 10.1068/p6130]
Abstract
Ever since Kepler described the image-forming properties of the eye (400 years ago), there has been a widespread belief, which remains to this day, that an object seen with one eye is always seen where it is. Predictions made by Ptolemy in the first century, Alhazen in the eleventh, and Wells in the eighteenth, and supported by Towne, Hering, and LeConte in the nineteenth century, however, are contrary to this claimed veridicality. We discuss how, among eighteenth- and nineteenth-century British researchers, particularly Porterfield, Brewster, and Wheatstone, the erroneous idea continued, and also why observations made by Wells were neither understood nor appreciated. Finally, we discuss recent data, obtained with a new method, that further support Wells's predictions and which show that a distinction between headcentric and relative direction tasks is needed to appreciate the predictions.
Affiliation(s)
- Hiroshi Ono, Department of Psychology, York University, Toronto, Ontario M3J 1P3, Canada
- Nicholas J Wade, School of Psychology, University of Dundee, Dundee DD1 4HN, Scotland, UK
13. Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2008;19:1372-93. [PMID: 18842662] [DOI: 10.1093/cercor/bhn177]
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
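The "gain field-like modulations" this abstract describes can be shown with a single toy unit: a retinotopic receptive field whose amplitude, but not position, is scaled by eye position. All parameters are illustrative, not the paper's trained weights:

```python
import numpy as np

def unit_response(retinal_x, eye_x, rf_center=0.0, rf_width=10.0,
                  gain_slope=0.02):
    """Gaussian retinal receptive field multiplied by a planar
    eye-position gain: a textbook gain-field unit."""
    rf = np.exp(-(retinal_x - rf_center) ** 2 / (2 * rf_width ** 2))
    return (1.0 + gain_slope * eye_x) * rf

# Same retinal stimulus at two eye positions: the RF peak location is
# unchanged, only its amplitude differs.
r_left = unit_response(0.0, -20.0)
r_right = unit_response(0.0, 20.0)
```

RF mapping on such a unit finds a retinal reference frame, while the eye-position scaling lets a downstream linear readout recover spatially correct outputs, which is why the different electrophysiological probes in the paper identify different frames for the same unit.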
Affiliation(s)
- Gunnar Blohm, Centre for Vision Research, York University, Toronto, Ontario, Canada
14. Nakamizo S, Kawabata H, Ono H. Misconvergence to the stimulus plane causes apparent displacement of the stimulus elements seen monocularly. Jpn Psychol Res 2008. [DOI: 10.1111/j.1468-5884.2007.00361.x]
15. Ono H, Mapp AP, Mizushina H. The cyclopean illusion unleashed. Vision Res 2007;47:2067-75. [PMID: 17574645] [DOI: 10.1016/j.visres.2007.03.001]
Abstract
The cyclopean illusion is the apparent lateral shift of stationary stimuli on a visual axis that occurs when vergence changes. This illusion is predictable from the rules of visual direction. There are three stimulus situations reported in the literature, however, in which the illusion does not occur. In the three experiments reported here we examine those stimulus situations. Experiment 1 showed that an afterimage seen on a stimulus moving along the visual axis does not produce the illusion, as reported in the literature, but that an afterimage seen on a screen does. Experiment 2 showed that the illusion occurs for an intermittently presented stimulus, in contrast to what has been reported previously. Experiment 3 showed that a monocular stimulus presented against a random-dot background produced the illusion, also in contrast to what has been reported. The results were consistent with the rules of visual direction.
Affiliation(s)
- Hiroshi Ono, Department of Psychology and Centre for Vision Research, York University, Toronto, Ont., Canada M3J 1P3
16. Shimono K, Tam WJ, Ono H. Apparent motion of monocular stimuli in different depth planes with lateral head movements. Vision Res 2007;47:1027-35. [PMID: 17337029] [DOI: 10.1016/j.visres.2007.01.012]
Abstract
A stationary monocular stimulus appears to move concomitantly with lateral head movements when it is embedded in a stereogram representing two front-facing rectangular areas, one above the other at two different distances. In Experiment 1, we found that the extent of perceived motion of the monocular stimulus covaried with the amplitude of head movement and the disparity between the two rectangular areas (composed of random dots). In Experiment 2, we found that the extent of perceived motion of the monocular stimulus was reduced, compared to that in Experiment 1, when the rectangular areas were defined only by an outline rather than by random dots. These results are discussed using the hypothesis that a monocular stimulus takes on features of the binocular surface area in which it is embedded and is perceived as though it were a binocular stimulus with regard to its visual direction and visual depth.
Affiliation(s)
- K Shimono, Department of Marine Technology, Tokyo University of Marine Science and Technology, Ettchujima, Tokyo 135-8533, Japan