1
Prabhakar AT, Ninan GA, Roy A, Kumar S, Margabandhu K, Priyadarshini Michael J, Bal D, Mannam P, McKendrick AM, Carter O, Garrido MI. Self-motion induced environmental kinetopsia and pop-out illusion - Insight from a single case phenomenology. Neuropsychologia 2024; 196:108820. [PMID: 38336207] [DOI: 10.1016/j.neuropsychologia.2024.108820]
Abstract
Stable visual perception while we are moving depends on complex interactions between multiple brain regions. We report a patient with damage to the right occipital and temporal lobes who presented with a visual disturbance in which roadside buildings appeared to move inward towards the centre of his visual field, occurring only when he moved forward on his motorbike. We describe this phenomenon as "self-motion induced environmental kinetopsia". Additionally, he was found to have a second illusion, in which objects displayed on a screen appeared to pop out of the background. Here, we describe the clinical phenomena and the behavioural tasks specifically designed to document and measure this altered visual experience. Using lesion mapping and lesion network mapping, we demonstrate disrupted functional connectivity in areas that process flow parsing, such as V3A and V6, which may underpin self-motion induced environmental kinetopsia. Moreover, we suggest that altered connectivity to regions that process environmental frames of reference, such as the retrosplenial cortex (RSC), might explain the pop-out illusion. Our case adds novel and convergent lesion-based evidence for the role of these brain regions in visual processing.
Affiliation(s)
- Appawamy Thirumal Prabhakar
- Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India; Melbourne School of Psychological Sciences, University of Melbourne, Vic, Australia
- George Abraham Ninan
- Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India
- Anupama Roy
- Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India
- Sharath Kumar
- Department of Neurological Sciences, Christian Medical College, Vellore, India
- Kavitha Margabandhu
- Department of Neurological Sciences, Christian Medical College, Vellore, India
- Jessica Priyadarshini Michael
- Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India
- Deepti Bal
- Department of Neurological Sciences, Christian Medical College, Vellore, India
- Pavithra Mannam
- Department of Radiology, Christian Medical College, Vellore, India
- Allison M McKendrick
- Division of Optometry, School of Allied Health, University of Western Australia, Lions Eye Institute, Perth, Australia
- Olivia Carter
- Melbourne School of Psychological Sciences, University of Melbourne, Vic, Australia
- Marta I Garrido
- Melbourne School of Psychological Sciences, University of Melbourne, Vic, Australia; Graeme Clark Institute for Biomedical Engineering, University of Melbourne, Vic, Australia
2
Falconbridge M, Stamps RL, Edwards M, Badcock DR. Target motion misjudgments reflect a misperception of the background; revealed using continuous psychophysics. Iperception 2023; 14:20416695231214439. [PMID: 38680843] [PMCID: PMC11046177] [DOI: 10.1177/20416695231214439]
Abstract
Determining the velocities of target objects as we navigate complex environments is made more difficult by the fact that our own motion adds systematic motion signals to the visual scene. The flow-parsing hypothesis asserts that the background motion is subtracted from the visual scene in such cases, allowing the visual system to determine target motions relative to the scene. Here, we address the question of why backgrounds are only partially subtracted in lab settings. At the same time, we probe a much-neglected aspect of scene perception in flow-parsing studies, namely the perception of the background itself. We present results from three experienced psychophysical participants and one inexperienced participant who took part in three continuous psychophysics experiments. We show that, when the background optic flow pattern is composed of local elements whose motions are congruent with the global optic flow pattern, the incompleteness of the background subtraction can be entirely accounted for by a misperception of the background. When the local velocities comprising the background are randomly dispersed around the average global velocity, an additional factor is needed to explain the subtraction incompleteness. We show that a model in which background perception results from the brain attempting to infer scene motion due to self-motion can account for these results.
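The partial-subtraction account above can be illustrated with a toy model (an illustrative sketch, not the authors' implementation; the `gain` parameter and function name are assumptions introduced here):

```python
import numpy as np

def perceived_target_velocity(retinal_v, background_v, gain=0.7):
    """Flow parsing with incomplete background subtraction.

    retinal_v    : 2-vector, target motion on the retina
    background_v : 2-vector, optic-flow vector at the target's location
    gain         : fraction of the background actually subtracted
                   (gain < 1 models the 'subtraction incompleteness'
                   reported in lab settings)
    Returns the predicted scene-relative velocity percept.
    """
    return np.asarray(retinal_v) - gain * np.asarray(background_v)

# A target that is stationary in the scene while the observer moves:
# its retinal motion equals the background flow at its location.
background = np.array([2.0, 0.0])   # deg/s, rightward flow
retinal = background.copy()         # scene-stationary target

full = perceived_target_velocity(retinal, background, gain=1.0)
partial = perceived_target_velocity(retinal, background, gain=0.7)
# With full subtraction the target is correctly seen as stationary;
# with gain < 1 a residual rightward motion percept remains.
```

On this account, the residual percept with `gain < 1` could reflect either incomplete subtraction or, as the paper argues, a misperceived background vector entering the subtraction.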
Affiliation(s)
- Michael Falconbridge
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Robert L. Stamps
- Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
- Mark Edwards
- Research School of Psychology, Australian National University, Canberra, Australia
- David R. Badcock
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
3
Warren PA, Bell G, Li Y. Investigating distortions in perceptual stability during different self-movements using virtual reality. Perception 2022; 51:3010066221116480. [PMID: 35946126] [PMCID: PMC9478599] [DOI: 10.1177/03010066221116480]
Abstract
Using immersive virtual reality (the HTC Vive head-mounted display), we measured both bias and sensitivity when making judgements about the scene stability of a target object during both active (self-propelled) and passive (experimenter-propelled) observer movements. This was repeated in the same group of 16 participants for three different observer-target movement conditions in which the instability of a target was yoked to the movement of the observer. We found that in all movement conditions the target needed to move with (in the same direction as) the participant to be perceived as scene-stable. Consistent with the presence of additional information (efference copy) about self-movement during active conditions, biases were smaller and sensitivities to instability were higher in active relative to passive conditions. However, the presence of efference copy was clearly not sufficient to eliminate the bias completely, and we suggest that the presence of additional visual information about self-movement is also critical. We found some (albeit limited) evidence for correlation between appropriate metrics across different movement conditions. These results extend previous findings, providing evidence for consistency of biases across different movement types, suggestive of common processing underpinning perceptual stability judgements.
Affiliation(s)
- Paul A. Warren
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Graham Bell
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Yu Li
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
4
Maus N, Layton OW. Estimating heading from optic flow: Comparing deep learning network and human performance. Neural Netw 2022; 154:383-396. [DOI: 10.1016/j.neunet.2022.07.007]
5
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. [PMID: 35642599] [PMCID: PMC9159750] [DOI: 10.7554/elife.74971]
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
6
Self-motion illusions from distorted optic flow in multifocal glasses. iScience 2022; 25:103567. [PMID: 34988405] [PMCID: PMC8693457] [DOI: 10.1016/j.isci.2021.103567]
Abstract
Progressive addition lenses (PALs) are ophthalmic lenses that correct presbyopia by providing improved near and far vision in different areas of the lens, while distorting the periphery of the wearer's field of view. Distortion-related difficulties reported by PAL wearers include unnatural self-motion perception. Visual self-motion perception is guided by optic flow, the pattern of retinal motion produced by self-motion. We tested the influence of PAL distortions on optic flow-based heading estimation using a model of heading perception and a virtual reality-based psychophysical experiment. The model predicted changes in heading estimation along a vertical axis, depending on visual field size and gaze direction. Consistent with this prediction, participants experienced upward deviations of self-motion when gaze through the periphery of the lens was simulated, but not for gaze through the center. We conclude that PALs may lead to illusions of self-motion which could be remedied by a careful gaze strategy.
Highlights:
- Multifocal lenses impair vision of spectacle wearers with gaze-dependent distortions
- A model of heading perception from distorted optic flow suggests a misperception
- Heading perception was tested with a virtual reality-based simulation of distortions
- Distortions lead to gaze direction-dependent illusions in perceived vertical heading
7
Abstract
Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.
8
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
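The vector-sum decomposition and global subtraction described above can be sketched numerically (an illustrative toy example; the velocity values and names are hypothetical, not the study's stimuli):

```python
import numpy as np

# Retinal motion of a moving object is the vector sum of its
# world-relative motion and the optic flow caused by self-motion.
world_motion = np.array([0.0, 1.0])   # object moves upward in the scene
self_flow = np.array([3.0, 0.0])      # rightward flow at the object's location
retinal = world_motion + self_flow    # what actually lands on the retina

# Flow parsing: globally subtract the self-motion flow field to
# recover the object's world-relative motion (in retinal coordinates).
parsed = retinal - self_flow

def direction_deg(v):
    """Direction of a 2-D velocity vector in degrees (0 = rightward)."""
    return np.degrees(np.arctan2(v[1], v[0]))
```

Here the raw retinal direction is tilted toward the flow (about 18 degrees above rightward), while the parsed direction is straight up (90 degrees); the subtraction thus shifts the percept away from the local flow vector, the bias the paper measures behaviourally.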
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
9
Evans L, Champion RA, Rushton SK, Montaldi D, Warren PA. Detection of scene-relative object movement and optic flow parsing across the adult lifespan. J Vis 2020; 20:12. [PMID: 32945848] [PMCID: PMC7509779] [DOI: 10.1167/jov.20.9.12]
Abstract
Moving around safely relies critically on our ability to detect object movement. This is made difficult because retinal motion can arise from object movement or from our own movement. Here we investigate the ability to detect scene-relative object movement using a neural mechanism called optic flow parsing, which acts to subtract retinal motion caused by self-movement. Because older observers exhibit marked changes in visual motion processing, we consider performance across a broad age range (N = 30, range: 20–76 years). In Experiment 1 we measured thresholds for reliably discriminating the scene-relative movement direction of a probe presented among three-dimensional objects moving onscreen to simulate observer movement. Performance in this task did not correlate with age, suggesting that the ability to detect scene-relative object movement from retinal information is preserved in ageing. In Experiment 2 we investigated changes in the underlying optic flow parsing mechanism that supports this ability, using a well-established task that measures the magnitude of globally subtracted optic flow. We found strong evidence for a positive correlation between age and global flow subtraction. These data suggest that the ability to identify object movement during self-movement from visual information is preserved in ageing, but that there are changes in the flow parsing mechanism that underpins this ability. We suggest that these changes reflect compensatory processing required to counteract other impairments in the ageing visual system.
Affiliation(s)
- Lucy Evans
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Rebecca A Champion
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Daniela Montaldi
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Paul A Warren
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
10
Rushton SK, Chen R, Li L. Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. J Vis 2018; 18:11. [PMID: 30029224] [DOI: 10.1167/18.6.11]
Abstract
During locomotion humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading, and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found the precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
Affiliation(s)
- Simon K Rushton
- School of Psychology, Cardiff University, Cardiff, Wales, UK
- Rongrong Chen
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
11
Rogers C, Rushton SK, Warren PA. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement. Iperception 2017; 8:2041669517736072. [PMID: 29201335] [PMCID: PMC5700793] [DOI: 10.1177/2041669517736072]
Abstract
Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement.
Affiliation(s)
- Paul A Warren
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
12
Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. [PMID: 24607912] [DOI: 10.1016/j.visres.2014.02.009]
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
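The border-detection idea can be caricatured in one dimension (a minimal sketch using a simple difference-of-neighbours operator; this is not the Royden and Holloway model itself, whose operators are based on primate cortical speed and direction tuning, and all numbers here are hypothetical):

```python
import numpy as np

# 1-D sketch: expansion flow from forward self-motion, with an
# independently moving object patch inserted at positions 12-17.
x = np.linspace(-1.0, 1.0, 31)   # horizontal position (focus of expansion at 0)
flow = 2.0 * x                   # expansion: speed grows with eccentricity
flow[12:18] += 1.5               # the object adds its own motion

def opponent_response(v, threshold=0.5):
    """Crude motion-opponent operator: responds where the local
    velocity differs from the average of its two neighbours by more
    than the operator's tuning bandwidth (threshold)."""
    resp = np.zeros(v.shape, dtype=bool)
    resp[1:-1] = np.abs(v[1:-1] - (v[:-2] + v[2:]) / 2.0) > threshold
    return resp

# Responses occur only at the object's borders: inside the object and
# in the pure self-motion flow, local velocity matches its neighbours.
borders = np.where(opponent_response(flow))[0]
```

The operator stays silent wherever the flow field is locally smooth, so the self-motion component generates no response and only the object's edges are flagged, mirroring the paper's claim that such units can identify moving-object borders.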
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States