1. Peltier NE, Anzai A, Moreno-Bote R, DeAngelis GC. A neural mechanism for optic flow parsing in macaque visual cortex. Curr Biol 2024; 34:4983-4997.e9. PMID: 39389059; PMCID: PMC11537840; DOI: 10.1016/j.cub.2024.09.030.
Abstract
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
Affiliation(s)
- Nicole E Peltier: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Akiyuki Anzai: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Rubén Moreno-Bote: Center for Brain and Cognition & Department of Engineering, Universitat Pompeu Fabra, Barcelona 08002, Spain; Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona 08002, Spain
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
2. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024; 19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110.
Abstract
To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment: first, participants were shown a ball moving laterally that disappeared after a certain time. They then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction as the ball. The second task was a two-interval forced choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, while for the speed estimation task this was only the case when the ball and observer were moving in opposite directions. Our hypotheses regarding precision were largely unsupported by the data. Overall, we draw several conclusions from this experiment: first, incomplete flow parsing can affect motion prediction. Further, the results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be certain compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when self-motion is simulated only visually.
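A minimal sketch of the reasoning linking incomplete flow parsing to the predicted speed and time-to-contact biases (illustrative only; the gain value and speeds are hypothetical, not estimates from the study):

```python
# Illustrative sketch (not the study's code): how an incomplete flow-parsing
# gain (g < 1) biases perceived object speed and hence predicted
# time-to-contact (TTC). The speeds and the gain value are invented.

def perceived_speed(v_ball, v_obs, gain=0.8):
    """Flow parsing subtracts only a fraction `gain` of the self-motion-
    induced background flow from the object's retinal motion."""
    retinal = v_ball - v_obs   # object velocity on the retina
    flow = -v_obs              # background flow produced by lateral self-motion
    return retinal - gain * flow

v_ball = 2.0     # m/s, world-relative ball velocity (rightward > 0)
distance = 4.0   # m left to travel to the target rectangle

for label, v_obs in [("static", 0.0), ("same direction", 1.0), ("opposite", -1.0)]:
    v_hat = perceived_speed(v_ball, v_obs)
    ttc_hat = distance / v_hat
    print(f"{label:>14}: perceived speed {v_hat:.2f} m/s, "
          f"predicted TTC {ttc_hat:.2f} s (true {distance / v_ball:.2f} s)")
```

With these made-up numbers, self-motion opposite to the ball inflates perceived speed and shortens the predicted time-to-contact, and self-motion in the same direction does the reverse, which is the pattern of biases the study tests.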
Affiliation(s)
- Björn Jörges: Center for Vision Research, York University, Toronto, Ontario, Canada
3. Falconbridge M, Stamps RL, Edwards M, Badcock DR. Target motion misjudgments reflect a misperception of the background; revealed using continuous psychophysics. Iperception 2023; 14:20416695231214439. PMID: 38680843; PMCID: PMC11046177; DOI: 10.1177/20416695231214439.
Abstract
Determining the velocities of target objects as we navigate complex environments is made more difficult by the fact that our own motion adds systematic motion signals to the visual scene. The flow-parsing hypothesis asserts that the background motion is subtracted from visual scenes in such cases as a way for the visual system to determine target motions relative to the scene. Here, we address the question of why backgrounds are only partially subtracted in lab settings. At the same time, we probe a much-neglected aspect of scene perception in flow-parsing studies, that is, the perception of the background itself. Here, we present results from three experienced psychophysical participants and one inexperienced participant who took part in three continuous psychophysics experiments. We show that, when the background optic flow pattern is composed of local elements whose motions are congruent with the global optic flow pattern, the incompleteness of the background subtraction can be entirely accounted for by a misperception of the background. When the local velocities comprising the background are randomly dispersed around the average global velocity, an additional factor is needed to explain the subtraction incompleteness. We show that a model where background perception is a result of the brain attempting to infer scene motion due to self-motion can account for these results.
Affiliation(s)
- Michael Falconbridge: School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Robert L. Stamps: Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
- Mark Edwards: Research School of Psychology, Australian National University, Canberra, Australia
- David R. Badcock: School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
4. Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023; 46:301-320. PMID: 37428601; PMCID: PMC7616138; DOI: 10.1146/annurev-neuro-120722-100503.
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation and how they might be relied upon for sensory-driven decision-making, during, for example, spatial navigation, is yet to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Mateo Velez-Fort: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Troy W Margrie: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
5. Zhao B, Wang R, Zhu Z, Yang Q, Chen A. The computational rules of cross-modality suppression in the visual posterior sylvian area. iScience 2023; 26:106973. PMID: 37378331; PMCID: PMC10291470; DOI: 10.1016/j.isci.2023.106973.
Abstract
The macaque visual posterior sylvian area (VPS) contains neurons that respond selectively to heading direction in both the visual and vestibular modalities, but how VPS neurons combine these two sensory signals is still unknown. In contrast to the subadditive characteristics of the medial superior temporal area (MSTd), responses in VPS were dominated by vestibular signals, approximating a winner-take-all competition. A conditional Fisher information analysis shows that the VPS neural population encodes information from distinct sensory modalities under large and small offset conditions, unlike MSTd, whose neural population contains more information about visual stimuli in both conditions. However, the combined responses of single neurons in both areas can be well fit by weighted linear sums of unimodal responses. Furthermore, a normalization model captured most vestibular-visual interaction characteristics in both VPS and MSTd, indicating that the divisive normalization mechanism is widespread in the cortex.
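A toy sketch of the two descriptive forms mentioned in the abstract, a weighted linear sum of unimodal responses and a divisive-normalization combination (the weights, semi-saturation constant, and firing rates below are invented, not the study's fitted parameters):

```python
import numpy as np

# Toy versions of the two descriptive models: a weighted linear sum of the
# unimodal responses, and a divisive normalization in which the weighted
# drive is divided by the summed input plus a semi-saturation constant.
# All parameter values are arbitrary illustrations.

def weighted_linear_sum(r_vest, r_vis, w_vest=0.8, w_vis=0.2):
    return w_vest * r_vest + w_vis * r_vis

def divisive_normalization(r_vest, r_vis, w_vest=1.0, w_vis=0.4,
                           sigma=20.0, r_max=80.0):
    """Weighted drive divided by total activity plus sigma (spikes/s)."""
    drive = w_vest * r_vest + w_vis * r_vis
    return r_max * drive / (sigma + r_vest + r_vis)

r_vest = np.array([10.0, 60.0])   # vestibular response: non-preferred / preferred heading
r_vis = np.array([60.0, 10.0])    # visual response to the same two stimuli

print("weighted linear sum:   ", weighted_linear_sum(r_vest, r_vis))
print("divisive normalization:", np.round(divisive_normalization(r_vest, r_vis), 1))
```

With a large vestibular weight, both forms make the bimodal response track the vestibular input, the kind of vestibular dominance the abstract describes for VPS.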
Affiliation(s)
- Bin Zhao: Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Rong Wang: Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Zhihua Zhu: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Qianli Yang: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
6. Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. PMID: 36511417; PMCID: PMC9745880; DOI: 10.1098/rstb.2021.0450.
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
7. Layton OW, Parade MS, Fajen BR. The accuracy of object motion perception during locomotion. Front Psychol 2023; 13:1068454. PMID: 36710725; PMCID: PMC9878598; DOI: 10.3389/fpsyg.2022.1068454.
Abstract
Human observers are capable of perceiving the motion of moving objects relative to the stationary world, even while undergoing self-motion. Perceiving world-relative object motion is complicated because the local optical motion of objects is influenced by both observer and object motion, and reflects object motion in observer coordinates. It has been proposed that observers recover world-relative object motion using global optic flow to factor out the influence of self-motion. However, object-motion judgments during simulated self-motion are biased, as if the visual system cannot completely compensate for the influence of self-motion. Recently, Xie et al. demonstrated that humans are capable of accurately judging world-relative object motion when self-motion is real, actively generated by walking, and accompanied by optic flow. However, the conditions used in that study differ from those found in the real world in that the moving object was a small dot with negligible optical expansion that moved at a fixed speed in retinal (rather than world) coordinates and was only visible for 500 ms. The present study investigated the accuracy of object motion perception under more ecologically valid conditions. Subjects judged the trajectory of an object that moved through a virtual environment viewed through a head-mounted display. Judgments exhibited bias in the case of simulated self-motion but were accurate when self-motion was real, actively generated, and accompanied by optic flow. The findings are largely consistent with the conclusions of Xie et al. and demonstrate that observers are capable of accurately perceiving world-relative object motion under ecologically valid conditions.
Affiliation(s)
- Oliver W. Layton: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States; Department of Computer Science, Colby College, Waterville, ME, United States
- Melissa S. Parade: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
8. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion - A registered report protocol. PLoS One 2023; 18:e0267983. PMID: 36716328; PMCID: PMC9886253; DOI: 10.1371/journal.pone.0267983.
Abstract
To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment: first, participants are shown a ball moving laterally that disappears after a certain time. They then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or the opposite direction as the ball. The second task is a two-interval forced choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or the opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect performance in the two tasks to be correlated, in both accuracy and precision.
Affiliation(s)
- Björn Jörges: Center for Vision Research, York University, Toronto, Canada
9. Xing X, Saunders JA. Perception of object motion during self-motion: Correlated biases in judgments of heading direction and object motion. J Vis 2022; 22:8. PMID: 36223109; PMCID: PMC9583749; DOI: 10.1167/jov.22.11.8.
Abstract
This study investigated the relationship between perceived heading direction and perceived motion of an independently moving object during self-motion. Using a dual task paradigm, we tested whether object motion judgments showed biases consistent with heading perception, both across conditions and from trial to trial. Subjects viewed simulated self-motion and estimated their heading direction (Experiment 1), or walked toward a target in virtual reality with conflicting physical and visual cues (Experiment 2). During self-motion, an independently moving object briefly appeared, with varied horizontal velocity, and observers judged whether the object was moving leftward or rightward. In Experiment 1, heading estimates showed an expected center bias, and object motion judgments showed corresponding biases. Trial-to-trial variations were also correlated: on trials with a more rightward heading bias, object motion judgments were consistent with a more rightward heading, and vice versa. In Experiment 2, we estimated the relative weighting of visual and physical cues in control of walking and object motion judgments. Both were strongly influenced by nonvisual cues, with less weighting for object motion (86% vs. 63%). There were also trial-to-trial correlations between biases in walking direction and object motion judgments. The results provide evidence that shared mechanisms contribute to heading perception and perception of object motion.
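One way to picture the reported cue weights: in a cue-conflict trial the response is modeled as a weighted average of the physically and visually specified headings, and the weight is recovered from the observed response. A schematic sketch follows (the response values are invented and chosen only so that the recovered weights match the 86% and 63% quoted above):

```python
# Schematic sketch of estimating the nonvisual (physical) cue weight from
# cue-conflict trials: the response is modeled as a weighted average of the
# physically and visually specified headings. Numbers are invented.

def estimate_physical_weight(response_deg, visual_deg, physical_deg):
    """Solve response = w * physical + (1 - w) * visual for w."""
    return (response_deg - visual_deg) / (physical_deg - visual_deg)

visual_heading = 10.0    # deg, heading specified by optic flow
physical_heading = 0.0   # deg, heading specified by nonvisual (physical) cues

walking_response = 1.4        # deg, observed walking direction
object_motion_response = 3.7  # deg, heading implied by object-motion judgments

print("weight on physical cues (walking):      "
      f"{estimate_physical_weight(walking_response, visual_heading, physical_heading):.2f}")
print("weight on physical cues (object motion): "
      f"{estimate_physical_weight(object_motion_response, visual_heading, physical_heading):.2f}")
```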
Affiliation(s)
- Xing Xing: Department of Psychology, University of Hong Kong, Hong Kong
10. Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. PMID: 36123363; PMCID: PMC9485245; DOI: 10.1038/s41467-022-33245-5.
Abstract
Optic flow is a powerful cue for inferring self-motion, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in 3D spiral coordinates, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception towards the labeled lines encoded by the stimulated neurons in either context, with spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravitational vertical.
11. Warren PA, Bell G, Li Y. Investigating distortions in perceptual stability during different self-movements using virtual reality. Perception 2022; 51:3010066221116480. PMID: 35946126; PMCID: PMC9478599; DOI: 10.1177/03010066221116480.
Abstract
Using immersive virtual reality (the HTC Vive Head Mounted Display), we measured both bias and sensitivity when making judgements about the scene stability of a target object during both active (self-propelled) and passive (experimenter-propelled) observer movements. This was repeated in the same group of 16 participants for three different observer-target movement conditions in which the instability of a target was yoked to the movement of the observer. We found that in all movement conditions the target needed to move with the participant (in the same direction) to be perceived as scene-stable. Consistent with the presence of additional available information (efference copy) about self-movement during active conditions, biases were smaller and sensitivities to instability were higher in active relative to passive conditions. However, the presence of efference copy was clearly not sufficient to completely eliminate the bias, and we suggest that the presence of additional visual information about self-movement is also critical. We found some (albeit limited) evidence for correlation between appropriate metrics across different movement conditions. These results extend previous findings, providing evidence for consistency of biases across different movement types, suggestive of common processing underpinning perceptual stability judgements.
Affiliation(s)
- Paul A. Warren: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Graham Bell: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Yu Li: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
12. Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. PMID: 35642599; PMCID: PMC9159750; DOI: 10.7554/elife.74971.
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
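A simplified geometric sketch of why a disparity/parallax conflict can signal scene-relative object motion (idealized first-order geometry and invented values; this is not the neural model described in the paper):

```python
# Simplified sketch of the cue-conflict idea: during lateral self-motion with
# fixation, the motion parallax of a world-stationary point is, to first
# approximation, proportional to its depth relative to the fixation plane.
# If binocular disparity specifies that depth, the expected parallax can be
# predicted; retinal motion left over after removing it implies that the
# point is moving in the world. The gain constant and values are idealized.

def expected_parallax(relative_depth, self_motion_gain=1.5):
    """Predicted retinal velocity (deg/s) of a stationary point, given its
    disparity-defined depth relative to fixation (arbitrary units)."""
    return self_motion_gain * relative_depth

def object_motion_signal(observed_retinal_velocity, disparity_depth):
    """Residual retinal motion not explained by self-motion plus depth."""
    return observed_retinal_velocity - expected_parallax(disparity_depth)

# A stationary point: observed parallax matches the disparity-defined depth.
print(object_motion_signal(observed_retinal_velocity=1.5, disparity_depth=1.0))   # ~0

# A moving object: parallax is inconsistent with its disparity-defined depth,
# leaving a nonzero residual that a detector could read out.
print(object_motion_signal(observed_retinal_velocity=-0.5, disparity_depth=1.0))  # nonzero
```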
Affiliation(s)
- HyungGoo R Kim: Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki: Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
13. An Obstacle Detection and Distance Measurement Method for Sloped Roads Based on VIDAR. Journal of Robotics 2022. DOI: 10.1155/2022/5264347.
Abstract
Environmental perception systems can provide information on the environment around a vehicle, which is key to active vehicle safety systems. However, these systems underperform on sloped roads, where real-time obstacle detection using monocular vision is a challenging problem. In this study, an obstacle detection and distance measurement method for sloped roads based on the vision-IMU-based detection and ranging method (VIDAR) is proposed. First, the road images are collected and processed. Then, the road distance and slope information provided by a digital map is input into the VIDAR to detect and eliminate false obstacles (i.e., those for which no height can be calculated). The movement state of the obstacle is determined by tracking its lowest point. Finally, experimental analysis is carried out through simulation and real-vehicle experiments. The results show that the proposed method has higher detection accuracy than YOLO v5s in a sloped-road environment and is not susceptible to interference from false obstacles. The main contribution of this work is a sloped-road obstacle detection method that can detect all types of obstacles without prior knowledge, meeting the requirements of real-time, accurate detection of obstacles on sloped roads.
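A schematic of the general plane-consistency idea behind vision-plus-IMU obstacle checks, which is the spirit of the false-obstacle elimination described above (this is not the VIDAR algorithm itself; the flat-road camera model, tolerance, and numbers are invented):

```python
import math

# Schematic plane-consistency check (invented parameters, simplified geometry).
# A feature assumed to lie on the road plane should appear at a predictable
# distance in two frames separated by a known, IMU-measured displacement; a
# mismatch implies the feature has height above the road (a real obstacle),
# while a consistent feature is a "false obstacle" such as a road marking.

CAMERA_HEIGHT = 1.2   # m above the road plane

def ground_distance(angle_below_horizon_deg):
    """Distance to a point *assuming* it lies on the road plane."""
    return CAMERA_HEIGHT / math.tan(math.radians(angle_below_horizon_deg))

def is_real_obstacle(angle_t0_deg, angle_t1_deg, imu_displacement_m, tol=0.2):
    """If the plane assumption holds, the plane-implied distance should shrink
    by exactly the displacement the IMU reports."""
    mismatch = abs((ground_distance(angle_t0_deg) - ground_distance(angle_t1_deg))
                   - imu_displacement_m)
    return mismatch > tol

# Consistent with the road plane: treated as a false obstacle.
print(is_real_obstacle(3.0, 3.5, ground_distance(3.0) - ground_distance(3.5)))
# Large mismatch: the point must have height, so it is a real obstacle.
print(is_real_obstacle(3.0, 4.5, imu_displacement_m=1.0))
```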
14. Jörges B, Harris LR. Object speed perception during lateral visual self-motion. Atten Percept Psychophys 2022; 84:25-46. PMID: 34704212; PMCID: PMC8547725; DOI: 10.3758/s13414-021-02372-4.
Abstract
Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the Flow Parsing hypothesis, observers estimate their own motion, then subtract the corresponding retinal motion from the total retinal stimulation and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from retinal input should lead to a decrease in precision. Furthermore, when self-motion is simulated only visually, it is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction of the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.
Affiliation(s)
- Björn Jörges: Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
- Laurence R. Harris: Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
15. Candy TR, Cormack LK. Recent understanding of binocular vision in the natural environment with clinical implications. Prog Retin Eye Res 2021; 88:101014. PMID: 34624515; PMCID: PMC8983798; DOI: 10.1016/j.preteyeres.2021.101014.
Abstract
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Affiliation(s)
- T Rowan Candy: School of Optometry, Programs in Vision Science, Neuroscience and Cognitive Science, Indiana University, 800 East Atwater Avenue, Bloomington, IN 47405, USA
- Lawrence K Cormack: Department of Psychology, Institute for Neuroscience, and Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712, USA
16. Keshner EA, Lamontagne A. The Untapped Potential of Virtual Reality in Rehabilitation of Balance and Gait in Neurological Disorders. Frontiers in Virtual Reality 2021; 2:641650. PMID: 33860281; PMCID: PMC8046008; DOI: 10.3389/frvir.2021.641650.
Abstract
Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.
Affiliation(s)
- Emily A. Keshner: Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, PA, United States
- Anouk Lamontagne: School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Virtual Reality and Mobility Laboratory, CISSS Laval—Jewish Rehabilitation Hospital Site of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Laval, QC, Canada
17.
Abstract
Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.
18. The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. PMID: 33127626; PMCID: PMC7688306; DOI: 10.1523/eneuro.0259-20.2020.
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
19.
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
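The subtraction described above can be written out directly. A small vector sketch (arbitrary values) shows how increasing the flow-parsing gain moves the perceived direction from the raw retinal direction toward the world-relative direction, i.e., away from the local flow vector:

```python
import numpy as np

# Illustrative vector arithmetic for flow parsing (values are arbitrary).
# Retinal motion of an object = its world-relative motion + the optic flow
# generated at its location by the observer's self-motion. Flow parsing
# subtracts (an estimate of) that flow to recover world-relative motion.

object_world = np.array([0.0, 2.0])   # deg/s, object moves straight up in the world
local_flow = np.array([3.0, 0.0])     # deg/s, self-motion-induced flow at the object's location

retinal = object_world + local_flow   # what actually lands on the retina

for gain in (0.0, 0.6, 1.0):          # 0 = no parsing, 1 = complete parsing
    estimate = retinal - gain * local_flow
    angle = np.degrees(np.arctan2(estimate[1], estimate[0]))
    print(f"gain {gain:.1f}: perceived direction {angle:5.1f} deg "
          "(90 deg = veridical; smaller angles lie closer to the flow direction)")
```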
Affiliation(s)
- Nicole E Peltier: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York, NY, USA
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
20. Xie M, Niehorster DC, Lappe M, Li L. Roles of visual and non-visual information in the perception of scene-relative object motion during walking. J Vis 2020; 20:15. PMID: 33052410; PMCID: PMC7571284; DOI: 10.1167/jov.20.10.15.
Abstract
Perceiving object motion during self-movement is an essential ability of humans. Previous studies have reported that the visual system can use both visual information (such as optic flow) and non-visual information (such as vestibular, somatosensory, and proprioceptive information) to identify and globally subtract the retinal motion component due to self-movement to recover scene-relative object motion. In this study, we used a motion-nulling method to directly measure and quantify the contribution of visual and non-visual information to the perception of scene-relative object motion during walking. We found that about 50% of the retinal motion component of the probe due to translational self-movement was removed with non-visual information alone and about 80% with visual information alone. With combined visual and non-visual information, the self-movement component was removed almost completely. Although non-visual information played an important role in the removal of self-movement-induced retinal motion, it was associated with decreased precision of probe motion estimates. We conclude that neither non-visual nor visual information alone is sufficient for the accurate perception of scene-relative object motion during walking, which instead requires the integration of both sources of information.
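A sketch of how a motion-nulling measurement is turned into a compensation gain, under one common parameterization (the nulling speeds below are invented; only the resulting percentages echo the abstract):

```python
# Sketch of converting a motion-nulling setting into a compensation gain.
# v_self: retinal motion a world-stationary probe would have because of the
# walking self-movement. v_null: magnitude of the scene-relative velocity the
# probe must be given so that it appears stationary in the scene. If the
# visual system removed the whole self-movement component, no scene velocity
# would be needed (v_null = 0). All speeds are invented.

def compensation_gain(v_null, v_self):
    """Fraction of the self-movement component removed by the visual system."""
    return 1.0 - v_null / v_self

v_self = 10.0   # deg/s, probe retinal motion due to self-movement alone

conditions = {
    "non-visual cues only": 5.0,   # -> about 50% removed
    "visual cues only":     2.0,   # -> about 80% removed
    "visual + non-visual":  0.3,   # -> nearly complete removal
}

for name, v_null in conditions.items():
    print(f"{name:>22}: {100 * compensation_gain(v_null, v_self):.0f}% removed")
```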
Affiliation(s)
- Mingyang Xie: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China
- Markus Lappe: Institute for Psychology, University of Muenster, Muenster, Germany
- Li Li: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China; Faculty of Arts and Science, New York University Shanghai, Shanghai, China
21. Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. PMID: 32541964; PMCID: PMC7474851; DOI: 10.1038/s41593-020-0656-0.
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
22. Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. PMID: 30996126; DOI: 10.1073/pnas.1820373116.
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
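A compact illustration of the causal-inference logic in the abstract (priors, noise levels, and numbers are invented; this is not the study's fitted model): the posterior probability that the object is stationary in the world falls as the object's own speed grows, which is the pattern the stationarity reports follow.

```python
import math

# Toy causal inference on object motion during self-motion (invented values).
# C = "stationary": the object's retinal motion is entirely due to self-motion.
# C = "moving": the object also has its own velocity in the world.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_stationary(v_retinal, v_flow_expected, sigma_meas=1.0,
                 sigma_object=4.0, prior_stationary=0.5):
    """Posterior probability that the object is stationary in the world."""
    like_stat = normal_pdf(v_retinal, v_flow_expected, sigma_meas)
    like_move = normal_pdf(v_retinal, v_flow_expected,
                           math.hypot(sigma_meas, sigma_object))
    post = like_stat * prior_stationary
    return post / (post + like_move * (1 - prior_stationary))

v_flow_expected = 3.0   # deg/s of retinal motion the object would have if stationary
for v_obj_world in (0.0, 1.0, 4.0):   # slow object motion is hard to detect
    v_retinal = v_flow_expected + v_obj_world
    print(f"object world speed {v_obj_world:.0f} deg/s -> "
          f"P(stationary) = {p_stationary(v_retinal, v_flow_expected):.2f}")
```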
23. Garzorz IT, Freeman TCA, Ernst MO, MacNeilage PR. Insufficient compensation for self-motion during perception of object speed: The vestibular Aubert-Fleischl phenomenon. J Vis 2018; 18:9. DOI: 10.1167/18.13.9.
Affiliation(s)
- Isabelle T. Garzorz: German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany; Graduate School of Systemic Neurosciences (GSN), Ludwig Maximilian University, Planegg-Martinsried, Germany
- Marc O. Ernst: Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Paul R. MacNeilage: German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany; present address: Department of Psychology, Cognitive and Brain Sciences, University of Nevada, Reno, NV, USA
24. Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018; 14:e1006110. PMID: 30052625; PMCID: PMC6063401; DOI: 10.1371/journal.pcbi.1006110.
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
Affiliation(s)
- Luigi Acerbi: Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka: Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma: Center for Neural Science, New York University, New York, NY, United States of America; Department of Psychology, New York University, New York, NY, United States of America
25. Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. PMID: 29030435; DOI: 10.1523/jneurosci.1177-17.2017.
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. Significance statement: The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
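A toy demonstration of the decoding idea (linear tuning, Gaussian noise, and all parameters are invented; real MSTd responses and the published decoder are more complex): a linear readout trained on responses that mix heading and object velocity can recover heading while effectively marginalizing over object motion.

```python
import numpy as np

# Toy linear population decoding that approximates marginalization.
# Each unit's response mixes heading and object velocity; a least-squares
# readout trained across both variables learns weights that recover heading
# while averaging out the nuisance variable (object motion).

rng = np.random.default_rng(0)
n_units, n_trials = 50, 2000

mix_heading = rng.normal(size=n_units)       # per-unit sensitivity to heading
mix_object = rng.normal(size=n_units)        # per-unit sensitivity to object velocity

heading = rng.uniform(-30, 30, n_trials)     # deg
object_vel = rng.uniform(-10, 10, n_trials)  # deg/s, nuisance variable

responses = (np.outer(heading, mix_heading)
             + np.outer(object_vel, mix_object)
             + rng.normal(scale=2.0, size=(n_trials, n_units)))

# Linear decoder for heading, trained across all object velocities.
weights, *_ = np.linalg.lstsq(responses, heading, rcond=None)

# Test: decoded heading should track true heading regardless of object motion.
test_heading, test_object = 12.0, np.array([-8.0, 0.0, 8.0])
test_resp = (test_heading * mix_heading[None, :]
             + test_object[:, None] * mix_object[None, :]
             + rng.normal(scale=2.0, size=(3, n_units)))
print("decoded heading:", np.round(test_resp @ weights, 1), "(true: 12.0)")
```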
26. A Neural Model of MST and MT Explains Perceived Object Motion during Self-Motion. J Neurosci 2016; 36:8093-8102. PMID: 27488630; DOI: 10.1523/jneurosci.4593-15.2016.
Abstract
When a moving object cuts in front of a moving observer at a 90° angle, the observer correctly perceives that the object is traveling along a perpendicular path just as if viewing the moving object from a stationary vantage point. Although the observer's own (self-)motion affects the object's pattern of motion on the retina, the visual system is able to factor out the influence of self-motion and recover the world-relative motion of the object (Matsumiya and Ando, 2009). This is achieved by using information in global optic flow (Rushton and Warren, 2005; Warren and Rushton, 2009; Fajen and Matthis, 2013) and other sensory arrays (Dupin and Wexler, 2013; Fajen et al., 2013; Dokka et al., 2015) to estimate and deduct the component of the object's local retinal motion that is due to self-motion. However, this account (known as "flow parsing") is qualitative and does not shed light on mechanisms in the visual system that recover object motion during self-motion. We present a simple computational account that makes explicit possible mechanisms in visual cortex by which self-motion signals in the medial superior temporal area interact with object motion signals in the middle temporal area to transform object motion into a world-relative reference frame. The model (1) relies on two mechanisms (MST-MT feedback and disinhibition of opponent motion signals in MT) to explain existing data, (2) clarifies how pathways for self-motion and object-motion perception interact, and (3) unifies the existing flow parsing hypothesis with established neurophysiological mechanisms. Significance statement: To intercept targets, we must perceive the motion of objects that move independently from us as we move through the environment. Although our self-motion substantially alters the motion of objects on the retina, compelling evidence indicates that the visual system at least partially compensates for self-motion such that object motion relative to the stationary environment can be more accurately perceived. We have developed a model that sheds light on plausible mechanisms within the visual system that transform retinal motion into a world-relative reference frame. Our model reveals how local motion signals (generated through interactions within the middle temporal area) and global motion signals (feedback from the dorsal medial superior temporal area) contribute and offers a new hypothesis about the connection between pathways for heading and object motion perception.
Collapse
|
27
|
Niehorster DC, Li L. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion. Iperception 2017; 8:2041669517708206. [PMID: 28567272 PMCID: PMC5439648 DOI: 10.1177/2041669517708206] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
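The flow parsing gain described above lends itself to a simple worked example. The sketch below shows how a gain could be computed from a retinal-motion nulling measurement, following the logic of the abstract; the function name and numerical values are hypothetical and are not taken from the paper.

```python
# Hypothetical worked example of a flow-parsing gain computed from a nulling measurement.

def flow_parsing_gain(nulling_retinal_speed, self_motion_component):
    """
    Gain = fraction of the self-motion-induced retinal component that the visual system
    subtracts. If a probe must physically drift at `nulling_retinal_speed` (in the direction
    of the background flow) to be perceived as stationary in the scene, that drift reveals
    how much of the `self_motion_component` was parsed out.
    """
    return nulling_retinal_speed / self_motion_component

# Example: background flow of 5 deg/s at the probe location; the probe looks scene-stationary
# when it actually drifts at 3.5 deg/s with the flow.
print(flow_parsing_gain(3.5, 5.0))  # 0.7 -> parsing is incomplete (gain below unity)
```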
Collapse
Affiliation(s)
| | - Li Li
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, China
| |
Collapse
|
28
|
Eye Movements in Darkness Modulate Self-Motion Perception. eNeuro 2017; 4:eN-NWR-0211-16. [PMID: 28144623 PMCID: PMC5263893 DOI: 10.1523/eneuro.0211-16.2016] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2016] [Revised: 11/25/2016] [Accepted: 12/06/2016] [Indexed: 12/31/2022] Open
Abstract
During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Based on an oculomotor choice probability analysis, the translation with the larger eye-movement excursion was judged to be longer more often than expected by chance. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.
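The linear model mentioned above can be illustrated with a short sketch: the perceived translation is modeled as a weighted combination of a vestibular displacement estimate and an eye-movement-based estimate, with the eye-movement weight of roughly 25% taken from the abstract. The example displacements and the function name are assumptions for illustration.

```python
# Illustrative weighted-combination model; the ~25% eye-movement weight comes from the abstract,
# the displacement values are made up.

def perceived_translation(vestibular_cm, eye_based_cm, w_eye=0.25):
    """Linear combination of a vestibular estimate and an eye-movement-based estimate."""
    return (1.0 - w_eye) * vestibular_cm + w_eye * eye_based_cm

# With a body-fixed fixation point the eyes barely move, so the eye-based estimate is near zero,
# which shortens the percept relative to a world-fixed fixation point.
print(perceived_translation(vestibular_cm=20.0, eye_based_cm=0.0))    # body-fixed: ~15 cm perceived
print(perceived_translation(vestibular_cm=20.0, eye_based_cm=20.0))   # world-fixed: ~20 cm perceived
```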
Collapse
|
29
|
Kim HR, Pitkow X, Angelaki DE, DeAngelis GC. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons. J Neurophysiol 2016; 116:1449-67. [PMID: 27334948 DOI: 10.1152/jn.00005.2016] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2016] [Accepted: 06/16/2016] [Indexed: 11/22/2022] Open
Abstract
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: "congruent" and "opposite" cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
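To make the learning claim concrete, the following is a rough sketch, under invented assumptions about tuning and learning rate, of how a linear readout that approximates marginalization could be acquired online with a simple delta rule; it is not the paper's model or data.

```python
# Rough sketch: online delta-rule learning of a linear heading readout from a toy
# congruent/opposite population. All tuning curves, noise levels, and the learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 100
pref = rng.uniform(-np.pi, np.pi, n)
congruent = rng.random(n) < 0.5
vest_pref = np.where(congruent, pref, pref + np.pi)

def response(heading, obj_dir):
    vis = np.cos(pref - heading) + 0.5 * np.cos(pref - obj_dir)  # visual drive, contaminated by the object
    vest = np.cos(vest_pref - heading)                           # vestibular drive, object-free
    return vis + vest + 0.1 * rng.standard_normal(n)

W = np.zeros((n, 2))        # readout weights for (cos h, sin h)
lr = 1e-3
for _ in range(20000):      # delta-rule updates driven by the heading error on each trial
    h, o = rng.uniform(-np.pi, np.pi, 2)
    r = response(h, o)
    err = np.array([np.cos(h), np.sin(h)]) - r @ W
    W += lr * np.outer(r, err)

est = response(0.5, -2.0) @ W
print("decoded heading:", np.arctan2(est[1], est[0]), "(true 0.5)")
```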
Collapse
Affiliation(s)
- HyungGoo R Kim
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
| | - Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York;
| |
Collapse
|
30
|
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2015; 35:13599-607. [PMID: 26446214 DOI: 10.1523/jneurosci.2267-15.2015] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight ahead) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
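The optimal cue integration prediction referenced above is the standard maximum-likelihood rule in which the combined variance equals the product of the single-cue variances divided by their sum. The sketch below applies that formula to hypothetical single-cue thresholds; the numbers are not from the study.

```python
# Standard maximum-likelihood cue-integration prediction applied to hypothetical thresholds.
import math

def predicted_combined_threshold(sigma_vis, sigma_vest):
    """Optimal integration: combined variance = product of single-cue variances over their sum."""
    return math.sqrt((sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2))

# Example: 4 deg visual and 6 deg vestibular heading thresholds predict ~3.3 deg combined.
print(predicted_combined_threshold(4.0, 6.0))
```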
Collapse
|
31
|
Shinder ME, Taube JS. Resolving the active versus passive conundrum for head direction cells. Neuroscience 2014; 270:123-38. [PMID: 24704515 PMCID: PMC4067261 DOI: 10.1016/j.neuroscience.2014.03.053] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2013] [Revised: 03/25/2014] [Accepted: 03/26/2014] [Indexed: 11/27/2022]
Abstract
Head direction (HD) cells have been identified in a number of limbic system structures. These cells encode the animal's perceived directional heading in the horizontal plane and are dependent on an intact vestibular system. Previous studies have reported that the responses of vestibular neurons within the vestibular nuclei are markedly attenuated when an animal makes a volitional head turn compared to passive rotation. This finding presents a conundrum: if vestibular responses are suppressed during an active head turn, how is a vestibular signal propagated forward to drive and update the HD signal? This review identifies and discusses four possible mechanisms that could resolve this problem. These mechanisms are: (1) the ascending vestibular signal is generated by more than just vestibular-only neurons, (2) not all vestibular-only neurons contributing to the HD pathway have firing rates that are attenuated by active head turns, (3) the ascending pathway may be spared from the effects of the attenuation in that the HD system receives information from other vestibular brainstem sites that do not include vestibular-only cells, and (4) the ascending signal is affected by the inhibited vestibular signal during an active head turn, but the HD circuit compensates and uses the altered signal to accurately update the current HD. Future studies will be needed to decipher which of these possibilities is correct.
Collapse
Affiliation(s)
- M E Shinder
- Department of Psychological & Brain Sciences, Dartmouth College, United States
| | - J S Taube
- Department of Psychological & Brain Sciences, Dartmouth College, United States.
| |
Collapse
|