1. Peltier NE, Anzai A, Moreno-Bote R, DeAngelis GC. A neural mechanism for optic flow parsing in macaque visual cortex. Curr Biol 2024; 34:4983-4997.e9. PMID: 39389059; PMCID: PMC11537840; DOI: 10.1016/j.cub.2024.09.030.
Abstract
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Rubén Moreno-Bote
- Center for Brain and Cognition & Department of Engineering, Universitat Pompeu Fabra, Barcelona 08002, Spain; Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona 08002, Spain
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
2. Shivkumar S, DeAngelis GC, Haefner RM. Hierarchical motion perception as causal inference. bioRxiv [Preprint] 2024:2023.11.18.567582. PMID: 38014023; PMCID: PMC10680834; DOI: 10.1101/2023.11.18.567582.
Abstract
Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, and so on, and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles and offers inspiration for building better models of visual processing in general.
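The "delta component (at zero) in the prior" described in this abstract can be made concrete with a minimal sketch: deciding whether a reference frame is stationary amounts to comparing a spike-at-zero hypothesis against a broad "slab" of nonzero velocities. The mixture weight, noise level, and slab width below are illustrative choices, not parameters from the paper.

```python
import math

def gauss_pdf(x, sd):
    """Density of a zero-mean Gaussian with standard deviation sd."""
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def posterior_stationary(obs_vel, p_stationary=0.7, noise_sd=1.0, slab_sd=5.0):
    """Posterior probability that a reference frame's velocity is exactly zero.

    Prior on velocity: a delta ("spike") at zero, reflecting friction, mixed
    with a broad Gaussian ("slab") over nonzero velocities. The observation
    is the true velocity corrupted by Gaussian sensory noise.
    """
    like_spike = gauss_pdf(obs_vel, noise_sd)                       # v = 0 exactly
    like_slab = gauss_pdf(obs_vel, math.hypot(noise_sd, slab_sd))   # v drawn from slab
    num = p_stationary * like_spike
    return num / (num + (1.0 - p_stationary) * like_slab)

# A slow observed drift is attributed to noise around a stationary frame,
# while a fast drift is inferred to be genuine motion:
slow = posterior_stationary(0.1)    # close to 1: frame judged stationary
fast = posterior_stationary(10.0)   # close to 0: frame judged moving
```

Inverting the full hierarchical model would repeat this comparison at every level of the group/supergroup hierarchy; the sketch shows only the single-node computation.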
Affiliation(s)
- Sabyasachi Shivkumar
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, NY 10027, USA
- Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
3. Dong Y, Lengyel G, Shivkumar S, Anzai A, DiRisio GF, Haefner RM, DeAngelis GC. How to reward animals based on their subjective percepts: A Bayesian approach to online estimation of perceptual biases. bioRxiv [Preprint] 2024:2024.07.25.605047. PMID: 39091868; PMCID: PMC11291170; DOI: 10.1101/2024.07.25.605047.
Abstract
Elucidating the neural basis of perceptual biases, such as those produced by visual illusions, can provide powerful insights into the neural mechanisms of perceptual inference. However, studying the subjective percepts of animals poses a fundamental challenge: unlike human participants, animals cannot be verbally instructed to report what they see, hear, or feel. Instead, they must be trained to perform a task for reward, and researchers must infer from their responses what the animal perceived. However, animals' responses are shaped by reward feedback, thus raising the major concern that the reward regimen may alter the animal's decision strategy or even intrinsic perceptual biases. We developed a method that estimates perceptual bias during task performance and then computes the reward for each trial based on the evolving estimate of the animal's perceptual bias. Our approach makes use of multiple stimulus contexts to dissociate perceptual biases from decision-related biases. Starting with an informative prior, our Bayesian method updates a posterior over the perceptual bias after each trial. The prior can be specified based on data from past sessions, thus reducing the variability of the online estimates and allowing them to converge to a stable estimate over a small number of trials. After validating our method on synthetic data, we apply it to estimate perceptual biases of monkeys in a motion direction discrimination task in which varying background optic flow induces robust perceptual biases. This method overcomes an important challenge to understanding the neural basis of subjective percepts.
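The trial-by-trial posterior update described in this abstract can be sketched with a simple grid-based scheme. The logistic psychometric function, its slope, and the flat starting prior below are illustrative assumptions for the demo, not the paper's actual model (which starts from an informative prior derived from past sessions and uses multiple stimulus contexts).

```python
import numpy as np

def update_bias_posterior(prior, bias_grid, heading, choice, sigma=2.0):
    """One trial of grid-based Bayesian updating of a perceptual bias.

    The observer is assumed to report "rightward" (choice = 1) with a
    probability given by a logistic psychometric function centred on
    (heading - bias). The logistic form and sigma are illustrative choices.
    """
    p_right = 1.0 / (1.0 + np.exp(-(heading - bias_grid) / sigma))
    likelihood = p_right if choice == 1 else 1.0 - p_right
    posterior = prior * likelihood
    return posterior / posterior.sum()   # renormalize over the grid

bias_grid = np.linspace(-10.0, 10.0, 201)             # candidate biases (deg)
posterior = np.ones_like(bias_grid) / bias_grid.size  # flat prior for the demo
for heading, choice in [(1.0, 1), (-2.0, 0), (0.5, 1), (3.0, 1)]:
    posterior = update_bias_posterior(posterior, bias_grid, heading, choice)
bias_map = bias_grid[np.argmax(posterior)]            # running MAP estimate
```

In the actual method, the evolving bias estimate would then set the reward criterion for the next trial, so that the animal is rewarded relative to its subjective percept rather than the physical stimulus.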
Affiliation(s)
- Yelin Dong
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Gabor Lengyel
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Grace F DiRisio
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
4. Baltaretu BR, Schuetz I, Võ MLH, Fiehler K. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments. Sci Rep 2024; 14:15549. PMID: 38969745; PMCID: PMC11226608; DOI: 10.1038/s41598-024-66428-9.
Abstract
Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), with two anchors connected by a shelf, onto which three local objects (each congruent with one anchor) were presented (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors shifted (Shift) or not (No shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
Affiliation(s)
- Bianca R Baltaretu
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany
- Immo Schuetz
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, 60323 Frankfurt am Main, Hesse, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany
5. Sulpizio V, Teghil A, Pitzalis S, Boccia M. Common and specific activations supporting optic flow processing and navigation as revealed by a meta-analysis of neuroimaging studies. Brain Struct Funct 2024; 229:1021-1045. PMID: 38592557; PMCID: PMC11147901; DOI: 10.1007/s00429-024-02790-8.
Abstract
Optic flow provides useful information in service of spatial navigation. However, whether the brain networks supporting these two functions overlap is still unclear. Here we used Activation Likelihood Estimation (ALE) to assess the correspondence between the brain correlates of optic flow processing and spatial navigation, as well as their specific neural activations. Since computational and connectivity evidence suggests that visual input from optic flow provides information mainly during egocentric navigation, we further tested the correspondence between the brain correlates of optic flow processing and those of both egocentric and allocentric navigation. Optic flow processing shared activation with egocentric (but not allocentric) navigation in the anterior precuneus, suggesting its role in providing information about self-motion, as derived from the analysis of optic flow, in service of egocentric navigation. We further documented that optic flow perception and navigation are partially segregated into two functional and anatomical networks, i.e., the dorsal and the ventromedial networks. Present results point to a dynamic interplay between the dorsal and ventral visual pathways aimed at coordinating visually guided navigation in the environment.
Affiliation(s)
- Valentina Sulpizio
- Department of Psychology, Sapienza University, Rome, Italy
- Department of Humanities, Education and Social Sciences, University of Molise, Campobasso, Italy
- Alice Teghil
- Department of Psychology, Sapienza University, Rome, Italy
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Maddalena Boccia
- Department of Psychology, Sapienza University, Rome, Italy
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
6. Prabhakar AT, Ninan GA, Roy A, Kumar S, Margabandhu K, Priyadarshini Michael J, Bal D, Mannam P, McKendrick AM, Carter O, Garrido MI. Self-motion induced environmental kinetopsia and pop-out illusion - Insight from a single case phenomenology. Neuropsychologia 2024; 196:108820. PMID: 38336207; DOI: 10.1016/j.neuropsychologia.2024.108820.
Abstract
Stable visual perception while we are moving depends on complex interactions between multiple brain regions. We report a patient with damage to the right occipital and temporal lobes who presented with a visual disturbance in which roadside buildings appeared to move inward, towards the centre of his visual field, occurring only when he moved forward on his motorbike. We describe this phenomenon as "self-motion induced environmental kinetopsia". He was also identified to have another illusion, in which objects displayed on a screen appeared to pop out of the background. Here, we describe the clinical phenomena and the behavioural tasks specifically designed to document and measure this altered visual experience. Using the methods of lesion mapping and lesion network mapping, we were able to demonstrate disrupted functional connectivity in areas that process flow parsing, such as V3A and V6, which may underpin self-motion induced environmental kinetopsia. Moreover, we suggest that altered connectivity to regions that process environmental frames of reference, such as the retrosplenial cortex (RSC), might explain the pop-out illusion. Our case adds novel and convergent lesion-based evidence for the role of these brain regions in visual processing.
Affiliation(s)
- Appawamy Thirumal Prabhakar
- Cognitive Neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India; Melbourne School of Psychological Sciences, University of Melbourne, VIC, Australia
- George Abraham Ninan
- Cognitive Neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India
- Anupama Roy
- Cognitive Neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India
- Sharath Kumar
- Department of Neurological Sciences, Christian Medical College, Vellore, India
- Kavitha Margabandhu
- Department of Neurological Sciences, Christian Medical College, Vellore, India
- Jessica Priyadarshini Michael
- Cognitive Neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India
- Deepti Bal
- Department of Neurological Sciences, Christian Medical College, Vellore, India
- Pavithra Mannam
- Department of Radiology, Christian Medical College, Vellore, India
- Allison M McKendrick
- Division of Optometry, School of Allied Health, University of Western Australia, Lions Eye Institute, Perth, Australia
- Olivia Carter
- Melbourne School of Psychological Sciences, University of Melbourne, VIC, Australia
- Marta I Garrido
- Melbourne School of Psychological Sciences, University of Melbourne, VIC, Australia; Graeme Clark Institute for Biomedical Engineering, University of Melbourne, VIC, Australia
7. Sulpizio V, von Gal A, Galati G, Fattori P, Galletti C, Pitzalis S. Neural sensitivity to translational self- and object-motion velocities. Hum Brain Mapp 2024; 45:e26571. PMID: 38224544; PMCID: PMC10785198; DOI: 10.1002/hbm.26571.
Abstract
The ability to detect and assess world-relative object-motion is a critical computation performed by the visual system. This computation, however, is greatly complicated by the observer's movements, which generate a global pattern of motion on the observer's retina. How the visual system implements this computation is poorly understood. Since we are potentially able to detect a moving object if its motion differs in velocity (or direction) from the expected optic flow generated by our own motion, here we manipulated the relative motion velocity between the observer and the object within a stationary scene as a strategy to test how the brain accomplishes object-motion detection. Specifically, we tested the neural sensitivity of brain regions that are known to respond to egomotion-compatible visual motion (i.e., egomotion areas: cingulate sulcus visual area, posterior cingulate sulcus area, posterior insular cortex [PIC], V6+, V3A, IPSmot/VIP, and MT+) to a combination of different velocities of visually induced translational self- and object-motion within a virtual scene while participants were instructed to detect object-motion. To this aim, we combined individual surface-based brain mapping, task-evoked activity by functional magnetic resonance imaging, and parametric and representational similarity analyses. We found that all the egomotion regions (except area PIC) responded to all the possible combinations of self- and object-motion and were modulated by the self-motion velocity. Interestingly, we found that, among all the egomotion areas, only MT+, V6+, and V3A were further modulated by object-motion velocities, hence reflecting their possible role in discriminating between distinct velocities of self- and object-motion. We suggest that these egomotion regions may be involved in the complex computation required for detecting scene-relative object-motion during self-motion.
Affiliation(s)
- Valentina Sulpizio
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Psychology, Sapienza University, Rome, Italy
- Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Psychology, Sapienza University, Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
8. Vafaii H, Yates JL, Butts DA. Hierarchical VAEs provide a normative account of motion processing in the primate brain. bioRxiv [Preprint] 2023:2023.09.27.559646. PMID: 37808629; PMCID: PMC10557690; DOI: 10.1101/2023.09.27.559646.
Abstract
The relationship between perception and inference, as postulated by Helmholtz in the 19th century, is paralleled in modern machine learning by generative models like Variational Autoencoders (VAEs) and their hierarchical variants. Here, we evaluate the role of hierarchical inference and its alignment with brain function in the domain of motion perception. We first introduce a novel synthetic data framework, Retinal Optic Flow Learning (ROFL), which enables control over motion statistics and their causes. We then present a new hierarchical VAE and test it against alternative models on two downstream tasks: (i) predicting ground truth causes of retinal optic flow (e.g., self-motion); and (ii) predicting the responses of neurons in the motion processing pathway of primates. We manipulate the model architectures (hierarchical versus non-hierarchical), loss functions, and the causal structure of the motion stimuli. We find that hierarchical latent structure in the model leads to several improvements. First, it improves the linear decodability of ground truth factors and does so in a sparse and disentangled manner. Second, our hierarchical VAE outperforms previous state-of-the-art models in predicting neuronal responses and exhibits sparse latent-to-neuron relationships. These results depend on the causal structure of the world, indicating that alignment between brains and artificial neural networks depends not only on architecture but also on matching ecologically relevant stimulus statistics. Taken together, our results suggest that hierarchical Bayesian inference underlies the brain's understanding of the world, and hierarchical VAEs can effectively model this understanding.
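The two ingredients that make a VAE hierarchical, the reparameterization trick and a per-level KL penalty against a conditional prior, can be sketched in a few lines. This toy two-level forward pass is a generic illustration of the recipe, not the paper's architecture; the shapes, the "flow patch" input, and the identity-like encoders are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(np.shape(mu))

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

x = rng.standard_normal(8)                 # stand-in for an optic-flow patch
# Top level: a global latent (think: self-motion) inferred from the input.
mu2, lv2 = np.array([x.mean()]), np.zeros(1)
z2 = reparam(mu2, lv2)
# Lower level: local latents whose prior is centred on the top-level sample,
# tying local structure to the inferred global cause.
mu1, lv1 = x[:4] + z2, np.zeros(4)
z1 = reparam(mu1, lv1)
# KL of each level against its (conditional) prior; for unit variances,
# KL(N(mu1, I) || N(z2, I)) equals KL(N(mu1 - z2, I) || N(0, I)).
kl_total = kl_to_standard_normal(mu2, lv2) + kl_to_standard_normal(mu1 - z2, lv1)
```

In a trained model these KL terms would be combined with a reconstruction term to form the ELBO; the sketch stops at the latent hierarchy, which is the part that distinguishes hierarchical from flat VAEs.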
9. Falconbridge M, Stamps RL, Edwards M, Badcock DR. Target motion misjudgments reflect a misperception of the background; revealed using continuous psychophysics. Iperception 2023; 14:20416695231214439. PMID: 38680843; PMCID: PMC11046177; DOI: 10.1177/20416695231214439.
Abstract
Determining the velocities of target objects as we navigate complex environments is made more difficult by the fact that our own motion adds systematic motion signals to the visual scene. The flow-parsing hypothesis asserts that the background motion is subtracted from visual scenes in such cases as a way for the visual system to determine target motions relative to the scene. Here, we address the question of why backgrounds are only partially subtracted in lab settings. At the same time, we probe a much-neglected aspect of scene perception in flow-parsing studies: the perception of the background itself. We present results from three experienced psychophysical participants and one inexperienced participant who took part in three continuous psychophysics experiments. We show that, when the background optic flow pattern is composed of local elements whose motions are congruent with the global optic flow pattern, the incompleteness of the background subtraction can be entirely accounted for by a misperception of the background. When the local velocities comprising the background are randomly dispersed around the average global velocity, an additional factor is needed to explain the subtraction incompleteness. We show that a model in which background perception results from the brain attempting to infer scene motion due to self-motion can account for these results.
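The account described in this abstract, that incomplete subtraction reflects a misperceived background rather than a weak subtraction, can be summarized with a pair of gains. The sketch below uses hypothetical velocities and gain values, not estimates from the study:

```python
def perceived_target_velocity(target_retinal_v, background_v,
                              background_gain=0.85, subtraction_gain=1.0):
    """Flow parsing as subtraction of the *perceived* background flow.

    background_gain < 1 models a misperceived (underestimated) background;
    subtraction_gain < 1 would model additional incomplete subtraction.
    All parameter values are illustrative.
    """
    perceived_background = background_gain * background_v
    return target_retinal_v - subtraction_gain * perceived_background

# A physically stationary target (retinal motion equal to the background flow)
# is perceived to drift in the flow direction when the background is
# underestimated, even though subtraction itself is complete:
residual = perceived_target_velocity(3.0, 3.0)  # 3.0 - 0.85 * 3.0 = 0.45 deg/s
```

In the congruent-background condition of the paper, a fitted `background_gain` alone accounted for the misjudgments; the incongruent condition would require lowering `subtraction_gain` (or an equivalent extra factor) as well.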
Affiliation(s)
- Michael Falconbridge
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Robert L. Stamps
- Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
- Mark Edwards
- Research School of Psychology, Australian National University, Canberra, Australia
- David R. Badcock
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
10. Sun Q, Zhan LZ, Zhang BY, Jia S, Gong XM. Heading perception from optic flow occurs at both perceptual representation and working memory stages with EEG evidence. Vision Res 2023; 208:108235. PMID: 37094419; DOI: 10.1016/j.visres.2023.108235.
Abstract
Psychophysical studies have demonstrated that heading perception from optic flow occurs in perceptual and post-perceptual stages. The post-perceptual stage is a broad concept that includes working memory. The current study examined whether working memory is involved in heading perception from optic flow by asking participants to perform a heading perception task while their scalp EEG was recorded. On each trial, an optic flow display was presented, followed by a blank display. Participants were then asked to report their perceived heading. Because previously presented headings were unrelated to the current heading judgment, participants would tend to forget them to save cognitive resources, so previous headings could not be decoded from the EEG data of current trials. More importantly, successfully decoding the current heading while the blank display was presented would implicate working memory in heading perception, whereas decoding it during the optic flow display would implicate the perceptual representation stage. Our results showed that decoding accuracy was significantly higher than chance level during both the optic flow and blank displays. Therefore, the current study provides electrophysiological evidence that heading perception from optic flow occurs in both the perceptual representation and working memory stages, contrary to the previous purely perceptual account.
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Bao-Yuan Zhang
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Shiwei Jia
- School of Psychology, Shandong Normal University, Jinan, China
- Xiu-Mei Gong
- School of Psychology, Zhejiang Normal University, Jinhua, China
11. Kooijman L, Asadi H, Mohamed S, Nahavandi S. A virtual reality study investigating the train illusion. R Soc Open Sci 2023; 10:221622. PMID: 37063997; PMCID: PMC10090874; DOI: 10.1098/rsos.221622.
Abstract
The feeling of self-movement that occurs in the absence of physical motion is often referred to as vection, which is commonly exemplified using the train illusion analogy (TIA). Limited research exists on whether the TIA accurately exemplifies the experience of vection in virtual environments (VEs). Few studies complemented their vection research with participants' qualitative feedback or by recording physiological responses, and most studies used stimuli that contextually differed from the TIA. We investigated whether vection is experienced differently in a VE replicating the TIA compared to a VE depicting optic flow by recording subjective and physiological responses. Additionally, we explored participants' experience through an open question survey. We expected the TIA environment to induce enhanced vection compared to the optic flow environment. Twenty-nine participants were visually and audibly immersed in VEs that either depicted optic flow or replicated the TIA. Results showed optic flow elicited more compelling vection than the TIA environment and no consistent physiological correlates to vection were identified. The post-experiment survey revealed discrepancies between participants' quantitative and qualitative feedback. Although the dynamic content may outweigh the ecological relevance of the stimuli, it was concluded that more qualitative research is needed to understand participants' vection experience in VEs.
Affiliation(s)
- Lars Kooijman
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Houshyar Asadi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Shady Mohamed
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Harvard Paulson School of Engineering and Applied Sciences, Harvard University, Allston, MA 02134, USA
12. Layton OW, Parade MS, Fajen BR. The accuracy of object motion perception during locomotion. Front Psychol 2023; 13:1068454. PMID: 36710725; PMCID: PMC9878598; DOI: 10.3389/fpsyg.2022.1068454.
Abstract
Human observers are capable of perceiving the motion of moving objects relative to the stationary world, even while undergoing self-motion. Perceiving world-relative object motion is complicated because the local optical motion of objects is influenced by both observer and object motion, and reflects object motion in observer coordinates. It has been proposed that observers recover world-relative object motion using global optic flow to factor out the influence of self-motion. However, object-motion judgments during simulated self-motion are biased, as if the visual system cannot completely compensate for the influence of self-motion. Recently, Xie et al. demonstrated that humans are capable of accurately judging world-relative object motion when self-motion is real, actively generated by walking, and accompanied by optic flow. However, the conditions used in that study differ from those found in the real world in that the moving object was a small dot with negligible optical expansion that moved at a fixed speed in retinal (rather than world) coordinates and was only visible for 500 ms. The present study investigated the accuracy of object motion perception under more ecologically valid conditions. Subjects judged the trajectory of an object that moved through a virtual environment viewed through a head-mounted display. Judgments exhibited bias in the case of simulated self-motion but were accurate when self-motion was real, actively generated, and accompanied by optic flow. The findings are largely consistent with the conclusions of Xie et al. and demonstrate that observers are capable of accurately perceiving world-relative object motion under ecologically valid conditions.
Affiliation(s)
- Oliver W. Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States; Department of Computer Science, Colby College, Waterville, ME, United States
- Melissa S. Parade
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
13. Guénot J, Trotter Y, Fricker P, Cherubini M, Soler V, Cottereau BR. Optic Flow Processing in Patients With Macular Degeneration. Invest Ophthalmol Vis Sci 2022; 63:21. DOI: 10.1167/iovs.63.12.21.
Affiliation(s)
- Jade Guénot
- Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR 5549, Toulouse, France
- Yves Trotter
- Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR 5549, Toulouse, France
- Paul Fricker
- Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR 5549, Toulouse, France
- Marta Cherubini
- Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR 5549, Toulouse, France
- Vincent Soler
- Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR 5549, Toulouse, France
- Unité de rétine, consultation d'ophtalmologie, hôpital Pierre-Paul-Riquet, CHU Toulouse, Toulouse, France
- Benoit R. Cottereau
- Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR 5549, Toulouse, France
14
French RL, DeAngelis GC. Scene-relative object motion biases depth percepts. Sci Rep 2022; 12:18480. [PMID: 36323845] [PMCID: PMC9630409] [DOI: 10.1038/s41598-022-23219-4]
Abstract
An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion. To correctly compute depth from motion parallax, only the component of image motion caused by self-motion should be used by the brain. Previous experimental and theoretical work on perception of depth from motion parallax has assumed that objects are stationary in the world. Thus, it is unknown whether perceived depth based on motion parallax is biased by object motion relative to the scene. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object could be either stationary or moving laterally at different velocities, and subjects were asked to judge the depth of the object relative to the plane of fixation. Subjects showed a far bias when object and observer moved in the same direction, and a near bias when object and observer moved in opposite directions. This pattern of biases is expected if subjects confound image motion due to self-motion with that due to scene-relative object motion. These biases were large when the object was viewed monocularly, and were greatly reduced, but not eliminated, when binocular disparity cues were provided. Our findings establish that scene-relative object motion can confound perceptual judgements of depth during self-motion.
Affiliation(s)
- Ranran L. French
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
15
Sun Q, Yan R, Wang J, Li X. Heading perception from optic flow is affected by heading distribution. Iperception 2022; 13:20416695221133406. [PMID: 36457854] [PMCID: PMC9706071] [DOI: 10.1177/20416695221133406]
Abstract
Recent studies have revealed a central tendency in the perception of physical features: the perceived feature is biased toward the mean of recently experienced features (i.e., the previous feature distribution). However, no study had explored whether heading perception shows this central tendency. Here, we conducted three experiments to answer this question. The results showed that perceived heading was not biased toward the mean of the previous heading distribution, suggesting that heading perception does not show the central tendency. However, perceived headings were biased overall toward the left side, where headings rarely appeared in the right-heavy distribution (Experiment 3), suggesting that heading perception from optic flow is affected by previously seen headings. This indicates that participants learned the heading distributions and used them to adjust their heading perception. Our study reveals that heading perception from optic flow is not purely perceptual and that post-perceptual stages (e.g., attention and working memory) may be involved.
Affiliation(s)
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, People's Republic of China
- Ruifang Yan
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Jingyi Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Xinyu Li
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, People's Republic of China
16
Xing X, Saunders JA. Perception of object motion during self-motion: Correlated biases in judgments of heading direction and object motion. J Vis 2022; 22:8. [PMID: 36223109] [PMCID: PMC9583749] [DOI: 10.1167/jov.22.11.8]
Abstract
This study investigated the relationship between perceived heading direction and perceived motion of an independently moving object during self-motion. Using a dual task paradigm, we tested whether object motion judgments showed biases consistent with heading perception, both across conditions and from trial to trial. Subjects viewed simulated self-motion and estimated their heading direction (Experiment 1), or walked toward a target in virtual reality with conflicting physical and visual cues (Experiment 2). During self-motion, an independently moving object briefly appeared, with varied horizontal velocity, and observers judged whether the object was moving leftward or rightward. In Experiment 1, heading estimates showed an expected center bias, and object motion judgments showed corresponding biases. Trial-to-trial variations were also correlated: on trials with a more rightward heading bias, object motion judgments were consistent with a more rightward heading, and vice versa. In Experiment 2, we estimated the relative weighting of visual and physical cues in control of walking and object motion judgments. Both were strongly influenced by nonvisual cues, with less weighting for object motion (86% vs. 63%). There were also trial-to-trial correlations between biases in walking direction and object motion judgments. The results provide evidence that shared mechanisms contribute to heading perception and perception of object motion.
Affiliation(s)
- Xing Xing
- Department of Psychology, University of Hong Kong, Hong Kong
17
Falconbridge M, Hewitt K, Haille J, Badcock DR, Edwards M. The induced motion effect is a high-level visual phenomenon: Psychophysical evidence. Iperception 2022; 13:20416695221118111. [PMID: 36092511] [PMCID: PMC9459461] [DOI: 10.1177/20416695221118111]
Abstract
Induced motion is the illusory motion of a target away from the direction of motion of the unattended background. If it results from assigning background motion to self-motion and judging target motion relative to the scene, as the flow-parsing hypothesis suggests, then the effect must be mediated at higher levels of the visual motion pathway, where self-motion is assessed. We provide evidence for a high-level mechanism in two broad ways. First, we show that the effect is insensitive to a set of low-level spatial aspects of the scene, namely the spatial arrangement, spatial frequency content, and orientation content of the background relative to the target. Second, we show that the effect is the same whether the target and background are composed of the same kind of local elements, one-dimensional (1D) or two-dimensional (2D), or one is composed of one kind and the other of the other. The latter finding is significant because 1D and 2D local elements are integrated by two different mechanisms, so the induced motion effect is likely mediated in a visual motion processing area that follows the two separate integration mechanisms. Area medial superior temporal in monkeys, and its human equivalent, is suggested as a viable site. We present a simple flow-parsing-inspired model and demonstrate a good fit to our data and to data from a previous induced motion study.
18
Warren PA, Bell G, Li Y. Investigating distortions in perceptual stability during different self-movements using virtual reality. Perception 2022; 51:3010066221116480. [PMID: 35946126] [PMCID: PMC9478599] [DOI: 10.1177/03010066221116480]
Abstract
Using immersive virtual reality (the HTC Vive head-mounted display), we measured both bias and sensitivity when making judgements about the scene stability of a target object during both active (self-propelled) and passive (experimenter-propelled) observer movements. This was repeated in the same group of 16 participants for three different observer-target movement conditions in which the instability of the target was yoked to the movement of the observer. We found that in all movement conditions the target needed to move with (in the same direction as) the participant to be perceived as scene-stable. Consistent with the presence of additional information (efference copy) about self-movement during active conditions, biases were smaller and sensitivities to instability were higher in active relative to passive conditions. However, efference copy was clearly not sufficient to completely eliminate the bias, and we suggest that additional visual information about self-movement is also critical. We found some (albeit limited) evidence for correlation between appropriate metrics across different movement conditions. These results extend previous findings by providing evidence for consistency of biases across different movement types, suggestive of common processing underpinning perceptual stability judgements.
Affiliation(s)
- Paul A. Warren
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Graham Bell
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Yu Li
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
19
Egomotion-related visual areas respond to goal-directed movements. Brain Struct Funct 2022; 227:2313-2328. [PMID: 35763171] [DOI: 10.1007/s00429-022-02523-9]
Abstract
Integration of proprioceptive signals from the various effectors with visual feedback of self-motion from the retina is necessary for whole-body movement and locomotion. Here, we tested whether the human visual motion areas involved in processing optic flow signals simulating self-motion are also activated by goal-directed movements (such as saccades or pointing) performed with different effectors (eye, hand, and foot), suggesting a role in visually guiding movements through the external environment. To this end, we used a combined fMRI approach of task-evoked activity and effective connectivity (psychophysiological interaction, PPI). We localized a set of six egomotion-responsive visual areas with the flow field stimulus and classified them as visual (pIPS/V3A, V6+, IPSmot/VIP) or visuomotor (pCi, CSv, PIC) areas according to recent literature. We tested their response to a visuomotor task involving spatially directed delayed eye, hand, and foot movements. We observed a posterior-to-anterior gradient of preference for eye-to-foot movements, with posterior (visual) regions showing a preference for saccades and anterior (visuomotor) regions showing a preference for foot pointing. No region showed a clear preference for hand pointing. Effective connectivity analysis showed that the visual areas were more strongly connected to each other than the visuomotor areas were, particularly during saccades. We suggest that visual and visuomotor egomotion regions play different roles within a network that integrates sensory-motor signals to guide movement through the external environment.
20
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. [PMID: 35642599] [PMCID: PMC9159750] [DOI: 10.7554/elife.74971]
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
21
Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022; 18:e1009575. [PMID: 35192614] [PMCID: PMC8896712] [DOI: 10.1371/journal.pcbi.1009575]
Abstract
We examine the structure of the visual motion projected onto the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixating stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measure this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and find features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
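The curl and divergence operations applied to retinal flow can be computed from a sampled vector field by finite differences. A minimal sketch on a synthetic expanding (outflow) field; the grid spacing and the field itself are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Sample a pure-expansion flow field on a 41x41 grid of visual
# directions, with the fixation point at the origin.
y, x = np.mgrid[-1:1:41j, -1:1:41j]
u, v = x.copy(), y.copy()          # outflow away from fixation

dx = x[0, 1] - x[0, 0]             # uniform grid spacing
dy = y[1, 0] - y[0, 0]

# div F = du/dx + dv/dy ; curl_z F = dv/dx - du/dy
div = np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)
curl = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
```

For this linear field the divergence is uniformly positive (expansion everywhere) and the curl is zero, as expected for pure outflow; a rotational component of the trajectory relative to the gaze point would show up as nonzero foveal curl instead.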
Affiliation(s)
- Jonathan Samir Matthis
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen
- School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
22
Niehorster DC. Optic Flow: A History. Iperception 2021; 12:20416695211055766. [PMID: 34900212] [PMCID: PMC8652193] [DOI: 10.1177/20416695211055766]
Abstract
The concept of optic flow, a global pattern of visual motion that is both caused by and signals self-motion, is canonically ascribed to James Gibson's 1950 book "The Perception of the Visual World." There have, however, been several other developments of this concept, chiefly by Gwilym Grindley and Edward Calvert. Based on rarely referenced scientific literature and archival research, this article describes the development of the concept of optic flow by the aforementioned authors and several others. The article furthermore presents the available evidence for interactions between these authors, focusing on whether parts of Gibson's proposal were derived from the work of Grindley or Calvert. While Grindley's work may have made Gibson aware of the geometrical facts of optic flow, Gibson's work is not derivative of Grindley's. It is furthermore shown that Gibson only learned of Calvert's work in 1956, almost a decade after Gibson first published his proposal. In conclusion, the development of the concept of optic flow presents an intriguing example of convergent thought in the progress of science.
Affiliation(s)
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
23
The effect of eccentricity on the linear-radial speed bias: Testing the motion-in-depth model. Vision Res 2021; 189:93-103. [PMID: 34688109] [DOI: 10.1016/j.visres.2021.09.001]
Abstract
Radial motion is perceived as faster than linear motion when local spatiotemporal properties are matched. This radial speed bias (RSB) is thought to occur because radial motion is partly interpreted as motion in depth. Geometry dictates that a fixed amount of radial expansion at increasing eccentricities is consistent with smaller motion in depth, so it is perhaps surprising that the impact of eccentricity on the RSB has not been examined. With this issue in mind, across three experiments we investigated the RSB as a function of eccentricity. In a 2IFC task, participants judged which of a linear (test, variable speed) or radial (reference, 2 or 4°/s) stimulus appeared to move faster. Linear and radial stimuli comprised four Gabor patches arranged left, right, above, and below fixation at varying eccentricities (3.5°-14°). For linear stimuli, the Gabors all drifted left or right, whereas for radial stimuli the Gabors drifted towards or away from the centre. The RSB (the difference in perceived speeds between matched linear and radial stimuli) was recovered from fitted psychometric functions. Across all three experiments we found that the RSB decreased with eccentricity, but this tendency was less marked beyond 7°; that is, at odds with the geometry, the effect did not continue to decrease as a function of eccentricity. This was true irrespective of whether stimuli were fixed in size (Experiment 1) or varied in size to account for changes in spatial scale across the retina (Experiment 2). It was also true when we removed conflicting stereo cues via monocular viewing (Experiment 3). To further investigate our data, we extended a previous model of speed perception, which suggests that perceived motion for such stimuli reflects a balance between two opposing perceptual interpretations, one for motion in depth and the other for object deformation. We propose, in the context of this model, that our data are consistent with placing greater weight on the motion-in-depth interpretation with increasing eccentricity, which is why the RSB does not continue to decrease in line with purely geometric constraints.
24
Crowe EM, Smeets JBJ, Brenner E. The response to background motion: Characteristics of a movement stabilization mechanism. J Vis 2021; 21:3. [PMID: 34617956] [PMCID: PMC8504189] [DOI: 10.1167/jov.21.11.3]
Abstract
When making goal-directed movements toward a target, our hand deviates from its path in the direction of sudden background motion. We propose that this manual following response arises because ongoing movements are constantly guided toward the planned movement endpoint. Such guidance is needed to compensate for modest, unexpected self-motion. Our proposal is that the compensation for such self-motion does not involve a sophisticated analysis of the global optic flow. Instead, we propose that any motion in the vicinity of the planned endpoint is attributed to the endpoint's egocentric position having shifted in the direction of the motion. The ongoing movement is then stabilized relative to the shifted endpoint. In six experiments, we investigate what aspects of motion determine this shift of planned endpoint. We asked participants to intercept a moving target when it reached a certain area. During the target's motion, background structures briefly moved either leftward or rightward. Participants’ hands responded to background motion even when each background structure was only briefly visible or when the vast majority of background structures remained static. The response was not restricted to motion along the target's path but was most sensitive to motion close to where the target was to be hit, both in the visual field and in depth. In this way, a movement stabilization mechanism provides a comprehensive explanation of many aspects of the manual following response.
Affiliation(s)
- Emily M Crowe
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Jeroen B J Smeets
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
25
Kashyap HJ, Fowlkes CC, Krichmar JL. Sparse Representations for Object- and Ego-Motion Estimations in Dynamic Scenes. IEEE Trans Neural Netw Learn Syst 2021; 32:2521-2534. [PMID: 32687472] [DOI: 10.1109/tnnls.2020.3006467]
Abstract
Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is important for autonomous navigation and tracking. In the dynamic image segments of a video frame containing independently moving objects, optic flow relative to the next frame is the sum of the motion fields generated due to camera and object motion. The traditional ego-motion estimation methods assume the scene to be static, and the recent deep learning-based methods do not separate pixel velocities into object- and ego-motion components. We propose a learning-based approach to predict both ego-motion parameters and object-motion field (OMF) from image sequences using a convolutional autoencoder while being robust to variations due to the unconstrained scene depth. This is achieved by: 1) training with continuous ego-motion constraints that allow solving for ego-motion parameters independently of depth and 2) learning a sparsely activated overcomplete ego-motion field (EMF) basis set, which eliminates the irrelevant components in both static and dynamic segments for the task of ego-motion estimation. In order to learn the EMF basis set, we propose a new differentiable sparsity penalty function that approximates the number of nonzero activations in the bottleneck layer of the autoencoder and enforces sparsity more effectively than L1- and L2-norm-based penalties. Unlike the existing direct ego-motion estimation methods, the predicted global EMF can be used to extract OMF directly by comparing it against the optic flow. Compared with the state-of-the-art baselines, the proposed model performs favorably on pixelwise object- and ego-motion estimation tasks when evaluated on real and synthetic data sets of dynamic scenes.
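The penalty described above approximates the number of nonzero activations (the L0 "norm") while remaining differentiable. The abstract does not give the authors' exact functional form, so the Gaussian-based smooth-L0 surrogate below is an illustrative stand-in, not the penalty from the paper:

```python
import numpy as np

def smooth_l0(a, sigma=0.1):
    """Differentiable surrogate for the L0 count of activations `a`.

    Each term tends to 1 when |a_i| >> sigma and to 0 as a_i -> 0,
    so the sum approximates how many activations are nonzero while
    remaining differentiable (unlike the true, discontinuous count).
    The form and the width `sigma` are illustrative assumptions.
    """
    a = np.asarray(a, dtype=float)
    return float(np.sum(1.0 - np.exp(-a**2 / (2.0 * sigma**2))))
```

Unlike an L1 penalty, such a surrogate saturates with activation magnitude, so it penalizes how many units are active rather than how strongly they respond, which is the property the paper exploits to learn a sparsely activated basis set.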
26
Pitzalis S, Hadj-Bouziane F, Dal Bò G, Guedj C, Strappini F, Meunier M, Farnè A, Fattori P, Galletti C. Optic flow selectivity in the macaque parieto-occipital sulcus. Brain Struct Funct 2021; 226:2911-2930. [PMID: 34043075] [DOI: 10.1007/s00429-021-02293-w]
Abstract
In humans, several neuroimaging studies have demonstrated that passive viewing of optic flow stimuli activates higher-level motion areas such as V6 and the cingulate sulcus visual area (CSv). In macaque, there are few studies on the sensitivity of V6 and CSv to egomotion-compatible optic flow. The only fMRI study on this issue revealed selectivity to egomotion-compatible optic flow in macaque CSv but not in V6 (Cottereau et al. Cereb Cortex 27(1):330-343, 2017; but see Fan et al. J Neurosci 35:16303-16314, 2015). Yet, it is unknown whether the monkey visual motion areas MT+ and V6 display any distinctive fMRI functional profile relative to optic flow stimulation, as is the case for the homologous human areas (Pitzalis et al. Cereb Cortex 20(2):411-424, 2010). Here, we describe the sensitivity of the monkey brain to two motion stimuli (radial rings and flow fields) originally used in humans to functionally map the motion middle temporal area MT+ (Tootell et al. J Neurosci 15:3215-3230, 1995a; Nature 375:139-141, 1995b) and the motion medial parietal area V6 (Pitzalis et al. 2010), respectively. In both animals, we found regions responding only to optic flow or radial rings stimulation, and regions responding to both stimuli. A region in the parieto-occipital sulcus (likely including V6) was one of the most highly selective areas for coherently moving fields of dots, further demonstrating the power of this type of stimulation to activate V6 in both humans and monkeys. We did not find any evidence that putative macaque CSv responds to flow fields.
Affiliation(s)
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Fadila Hadj-Bouziane
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, Lyon, France
- Giulia Dal Bò
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Carole Guedj
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, Lyon, France
- Martine Meunier
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, Lyon, France
- Alessandro Farnè
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, Lyon, France
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| |
Collapse
|
27
|
Motyka P, Akbal M, Litwin P. Forward optic flow is prioritised in visual awareness independently of walking direction. PLoS One 2021; 16:e0250905. [PMID: 33945563 PMCID: PMC8096117 DOI: 10.1371/journal.pone.0250905] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 04/15/2021] [Indexed: 12/31/2022] Open
Abstract
When two different images are presented separately to each eye, one experiences smooth transitions between them, a phenomenon called binocular rivalry. Previous studies have shown that exposure to signals from other senses can enhance the access of stimulation-congruent images to conscious perception. However, despite our ability to infer perceptual consequences from bodily movements, evidence that action can have an analogous influence on visual awareness is scarce and mainly limited to hand movements. Here, we investigated whether one's direction of locomotion affects perceptual access to optic flow patterns during binocular rivalry. Participants walked forwards and backwards on a treadmill while viewing highly-realistic visualisations of self-motion in a virtual environment. We hypothesised that visualisations congruent with walking direction would predominate in visual awareness over incongruent ones, and that this effect would increase with the precision of one's active proprioception. These predictions were not confirmed: optic flow consistent with forward locomotion was prioritised in visual awareness independently of walking direction and proprioceptive abilities. Our findings suggest a limited role of kinaesthetic-proprioceptive information in disambiguating the visually perceived direction of self-motion and indicate that vision might be tuned to the (expanding) optic flow patterns prevalent in everyday life.
Collapse
Affiliation(s)
- Paweł Motyka
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
| | - Mert Akbal
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Academy of Fine Arts Saar, Saarbrücken, Germany
| | - Piotr Litwin
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
| |
Collapse
|
28
|
Abstract
A universal signature of developmental dyslexia is impaired literacy acquisition. In addition, dyslexia may be related to deficits in selective spatial attention, sensitivity to global visual motion, speed processing, oculomotor coordination, and the integration of auditory and visual information. The main question of this study was whether motion-sensitive brain areas of children with dyslexia can recognize different speeds of expanding optic flow and segregate slow-speed from high-speed motion contrasts. A combined event-related EEG experiment with optic flow visual stimulation and a functional frequency-based graph approach (small-world propensity ϕ) were applied to assess the responsiveness of motion-sensitive areas, and to distinguish slow- from fast-motion conditions, in three groups of children: controls, untrained dyslexics (pre-D), and dyslexics trained (post-D) with visual intervention programs. Lower ϕ at θ, α, and γ1 frequencies (low-speed contrast) in controls than in the other groups indicates that the networks rewire, an effect expressed at β frequencies (both speed contrasts) in the post-D group, whose network was the most segregated. In the pre-D group, functional connectivity nodes were absent in the dorsal medial temporal area MT+/V5 (middle and superior temporal gyri), the left-hemispheric middle occipital gyrus/visual area V2, the ventral occipitotemporal cortex (fusiform gyrus/visual area V4), and the ventral intraparietal region (supramarginal and angular gyri), as derived from the θ-frequency network for both conditions. After visual training, compensatory mechanisms appeared to re-engage these left-hemisphere areas through plasticity across extended brain networks. Specifically, for the high-speed contrast, nodes were observed in the pre-D (θ frequency) and post-D (β2 frequency) groups, relative to controls, in a hyperactive right dorsolateral prefrontal cortex, which might account for the attentional network and oculomotor control impairments in developmental dyslexia.
Collapse
|
29
|
Vaina LM, Calabro FJ, Samal A, Rana KD, Mamashli F, Khan S, Hämäläinen M, Ahlfors SP, Ahveninen J. Auditory cues facilitate object movement processing in human extrastriate visual cortex during simulated self-motion: A pilot study. Brain Res 2021; 1765:147489. [PMID: 33882297 DOI: 10.1016/j.brainres.2021.147489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 04/12/2021] [Accepted: 04/13/2021] [Indexed: 10/21/2022]
Abstract
Visual segregation of moving objects is a considerable computational challenge when the observer moves through space. Recent psychophysical studies suggest that directionally congruent, moving auditory cues can substantially improve parsing object motion in such settings, but the exact brain mechanisms and visual processing stages that mediate these effects are still incompletely known. Here, we utilized multivariate pattern analyses (MVPA) of MRI-informed magnetoencephalography (MEG) source estimates to examine how crossmodal auditory cues facilitate motion detection during the observer's self-motion. During MEG recordings, participants identified a target object that moved either forward or backward within a visual scene that included nine identically textured objects simulating forward observer translation. Auditory motion cues 1) improved the behavioral accuracy of target localization, 2) significantly modulated the MEG source activity in the areas V2 and human middle temporal complex (hMT+), and 3) increased the accuracy at which the target movement direction could be decoded from hMT+ activity using MVPA. The increase of decoding accuracy by auditory cues in hMT+ was significant also when superior temporal activations in or near auditory cortices were regressed out from the hMT+ source activity to control for source estimation biases caused by point spread. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow in the human extrastriate visual cortex can be facilitated by crossmodal influences from the auditory system.
Collapse
Affiliation(s)
- Lucia M Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School-Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, MA, USA
| | - Finnegan J Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Abhisek Samal
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Kunjan D Rana
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
30
|
Ceyte G, Casanova R, Bootsma RJ. Reversals in Movement Direction in Locomotor Interception of Uniformly Moving Targets. Front Psychol 2021; 12:562806. [PMID: 33679504 PMCID: PMC7929975 DOI: 10.3389/fpsyg.2021.562806] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Accepted: 01/21/2021] [Indexed: 11/19/2022] Open
Abstract
Here we studied how participants steer to intercept uniformly moving targets in a virtual driving task. We tested the hypothesis that locomotor interception behavior cannot fully be explained by a strategy of nulling the rate of change in pertinent agent-target relations such as the target-heading angle or the target's bearing angle. In line with a previously reported observation and model simulations, we found that, under specific combinations of initial target eccentricity and target motion direction, locomotor paths revealed reversals in movement direction. This phenomenon is not compatible with unique reliance on first-order (i.e., rate-of-change based) information in the case of uniformly moving targets. We also found that, as expected, such reversals in movement direction were not observed consistently over all trials of the same experimental condition: their presence depended on the timing of the first steering action effected by the participant, with only early steering actions leading to reversals in movement direction. These particular characteristics of the direction-reversal phenomenon demonstrated here for a locomotor interception-by-steering task correspond to those reported for lateral manual interception. Together, these findings suggest that control strategies operating in manual and locomotor interception may at least share certain characteristics.
Collapse
Affiliation(s)
- Gwenaelle Ceyte
- Institut des Sciences du Mouvement, Aix-Marseille Université, CNRS, Marseille, France
| | - Remy Casanova
- Institut des Sciences du Mouvement, Aix-Marseille Université, CNRS, Marseille, France
| | - Reinoud J Bootsma
- Institut des Sciences du Mouvement, Aix-Marseille Université, CNRS, Marseille, France
| |
Collapse
|
31
|
Abstract
Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.
Collapse
|
32
|
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
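The global subtraction described in this abstract can be sketched directly. This is an illustrative sketch of the flow-parsing idea only; the function name and example vectors are hypothetical, not the authors' code:

```python
# Illustrative sketch of flow parsing: an object's world-relative motion is
# recovered by subtracting the optic flow vector expected from self-motion
# at the object's retinal location. Names and values are hypothetical.
import numpy as np

def parse_object_motion(retinal_motion, self_motion_flow):
    """Subtract the self-motion flow component from the retinal motion."""
    return np.asarray(retinal_motion, dtype=float) - np.asarray(self_motion_flow, dtype=float)

# Retinal motion (2, 1) observed where forward self-motion predicts a
# rightward flow vector (2, 0): the recovered world-relative motion points
# straight up, i.e., perceived direction is biased away from the flow vector.
world_motion = parse_object_motion((2.0, 1.0), (2.0, 0.0))
print(world_motion)  # [0. 1.]
```

The abstract's finding that bias size tracks the magnitude of the expected flow vector follows naturally from this framing: the larger `self_motion_flow`, the larger the difference between retinal and parsed directions.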
Collapse
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
| |
Collapse
|
33
|
Xie M, Niehorster DC, Lappe M, Li L. Roles of visual and non-visual information in the perception of scene-relative object motion during walking. J Vis 2020; 20:15. [PMID: 33052410 PMCID: PMC7571284 DOI: 10.1167/jov.20.10.15] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Perceiving object motion during self-movement is an essential ability of humans. Previous studies have reported that the visual system can use both visual information (such as optic flow) and non-visual information (such as vestibular, somatosensory, and proprioceptive information) to identify and globally subtract the retinal motion component due to self-movement to recover scene-relative object motion. In this study, we used a motion-nulling method to directly measure and quantify the contribution of visual and non-visual information to the perception of scene-relative object motion during walking. We found that about 50% of the retinal motion component of the probe due to translational self-movement was removed with non-visual information alone and about 80% with visual information alone. With combined visual and non-visual information, the self-movement component was removed almost completely. Although non-visual information played an important role in the removal of self-movement-induced retinal motion, it was associated with decreased precision of probe motion estimates. We conclude that neither non-visual nor visual information alone is sufficient for the accurate perception of scene-relative object motion during walking, which instead requires the integration of both sources of information.
Collapse
Affiliation(s)
- Mingyang Xie
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China
| | | | - Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany
| | - Li Li
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China; Faculty of Arts and Science, New York University Shanghai, Shanghai, China
| |
Collapse
|
34
|
Abstract
Previous work shows that observers can use information from optic flow to perceive the direction of self-motion (i.e. heading) and that perceived heading exhibits a bias towards the center of the display (center bias). More recent work shows that the brain is sensitive to serial correlations and the perception of current stimuli can be affected by recently seen stimuli, a phenomenon known as serial dependence. In the current study, we examined whether, apart from center bias, serial dependence could be independently observed in heading judgments and how adding noise to optic flow affected center bias and serial dependence. We found a repulsive serial dependence effect in heading judgments after factoring out center bias in heading responses. The serial effect expands heading estimates away from the previously seen heading to increase overall sensitivity to changes in heading directions. Both the center bias and repulsive serial dependence effects increased with increasing noise in optic flow, and the noise-dependent changes in the serial effect were consistent with an ideal observer model. Our results suggest that the center bias effect is due to a prior of the straight-ahead direction in the Bayesian inference account for heading perception, whereas the repulsive serial dependence is an effect that reduces response errors and has the added utility of counteracting the center bias in heading judgments.
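The Bayesian account of center bias summarized above can be illustrated with a minimal Gaussian cue-combination sketch. This is an assumption for illustration, not the authors' ideal observer model; the function and the numbers are hypothetical:

```python
# Minimal Gaussian sketch (hypothetical numbers): the posterior heading is a
# reliability-weighted average of the measured heading and a straight-ahead
# prior. As optic flow noise (likelihood variance) grows, the estimate is
# pulled toward 0 deg, reproducing a noise-dependent center bias.
def heading_estimate(measured_deg, likelihood_var, prior_var):
    prior_mean = 0.0                              # straight-ahead prior
    w = prior_var / (prior_var + likelihood_var)  # weight on the measurement
    return w * measured_deg + (1.0 - w) * prior_mean

print(heading_estimate(20.0, likelihood_var=1.0, prior_var=16.0))   # ≈ 18.8: mild bias
print(heading_estimate(20.0, likelihood_var=16.0, prior_var=16.0))  # 10.0: strong bias
```

This reproduces only the center-bias half of the abstract; the repulsive serial dependence would require an additional term conditioned on the previously seen heading.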
Collapse
Affiliation(s)
- Qi Sun
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
| | - Huihui Zhang
- School of Psychology, The University of Sydney, Sydney, Australia
| | - David Alais
- School of Psychology, The University of Sydney, Sydney, Australia
| | - Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR; Faculty of Arts and Science, New York University Shanghai, Shanghai, People's Republic of China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, People's Republic of China
| |
Collapse
|
35
|
Evans L, Champion RA, Rushton SK, Montaldi D, Warren PA. Detection of scene-relative object movement and optic flow parsing across the adult lifespan. J Vis 2020; 20:12. [PMID: 32945848 PMCID: PMC7509779 DOI: 10.1167/jov.20.9.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Moving around safely relies critically on our ability to detect object movement. This is made difficult because retinal motion can arise from object movement or our own movement. Here we investigate the ability to detect scene-relative object movement using a neural mechanism called optic flow parsing. This mechanism acts to subtract retinal motion caused by self-movement. Because older observers exhibit marked changes in visual motion processing, we consider performance across a broad age range (N = 30, range: 20–76 years). In Experiment 1 we measured thresholds for reliably discriminating the scene-relative movement direction of a probe presented among three-dimensional objects moving onscreen to simulate observer movement. Performance in this task did not correlate with age, suggesting that the ability to detect scene-relative object movement from retinal information is preserved in ageing. In Experiment 2 we investigated changes in the underlying optic flow parsing mechanism that supports this ability, using a well-established task that measures the magnitude of globally subtracted optic flow. We found strong evidence for a positive correlation between age and global flow subtraction. These data suggest that the ability to identify object movement during self-movement from visual information is preserved in ageing, but that there are changes in the flow parsing mechanism that underpins this ability. We suggest that these changes reflect compensatory processing required to counteract other impairments in the ageing visual system.
Collapse
Affiliation(s)
- Lucy Evans
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Rebecca A Champion
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | | | - Daniela Montaldi
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Paul A Warren
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| |
Collapse
|
36
|
Berti S, Keshavarz B. Neuropsychological Approaches to Visually-Induced Vection: an Overview and Evaluation of Neuroimaging and Neurophysiological Studies. Multisens Res 2020; 34:153-186. [DOI: 10.1163/22134808-bja10035] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 04/29/2020] [Indexed: 11/19/2022]
Abstract
Moving visual stimuli can elicit the sensation of self-motion in stationary observers, a phenomenon commonly referred to as vection. Despite the long history of vection research, the neuro-cognitive processes underlying vection have only recently gained increasing attention. Various neuropsychological techniques such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have been used to investigate the temporal and spatial characteristics of the neuro-cognitive processing during vection in healthy participants. These neuropsychological studies allow for the identification of different neuro-cognitive correlates of vection, which (a) will help to unravel the neural basis of vection and (b) offer opportunities for applying vection as a tool in other research areas. The purpose of the current review is to evaluate these studies in order to show the advances in neuropsychological vection research and the challenges that lie ahead. The overview of the literature will also demonstrate the large methodological variability within this research domain, limiting the integration of results. Next, we will summarize methodological considerations and suggest helpful recommendations for future vection research, which may help to enhance the comparability across neuropsychological vection studies.
Collapse
Affiliation(s)
- Stefan Berti
- Institute of Psychology, Johannes Gutenberg-University Mainz, 55099 Mainz, Germany
| | - Behrang Keshavarz
- Kite-Toronto Rehabilitation Institute, University Health Network (UHN), 550 University Ave., Toronto, ON, M5G 2A2, Canada
- Department of Psychology, Ryerson University, 350 Victoria St., Toronto, ON, M5B 2K3, Canada
| |
Collapse
|
37
|
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964 PMCID: PMC7474851 DOI: 10.1038/s41593-020-0656-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Accepted: 05/14/2020] [Indexed: 12/28/2022]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
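The two reference frames in this abstract differ by the observer's own velocity. A tiny sketch makes the required computation explicit; the function and values are hypothetical illustrations, not the paper's analysis:

```python
# Hedged sketch: head-centered object velocity plus the observer's velocity
# gives world-centered object velocity; judging motion in head coordinates
# simply ignores the self-motion term. Names and values are hypothetical.
def to_world_frame(object_vel_head, observer_vel):
    return tuple(o + s for o, s in zip(object_vel_head, observer_vel))

# An object drifting left at 1 unit/s relative to the head, seen while the
# observer translates right at 1 unit/s, is stationary in world coordinates:
print(to_world_frame((-1.0, 0.0), (1.0, 0.0)))  # (0.0, 0.0)
```

This is the sense in which self-motion information must be used for world-centered reports but ignored for head-centered ones, as the task switching in the study requires.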
Collapse
|
38
|
Field DT, Biagi N, Inman LA. The role of the ventral intraparietal area (VIP/pVIP) in the perception of object-motion and self-motion. Neuroimage 2020; 213:116679. [DOI: 10.1016/j.neuroimage.2020.116679] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 01/15/2020] [Accepted: 02/23/2020] [Indexed: 10/24/2022] Open
|
39
|
Kozhemiako N, Nunes AS, Samal A, Rana KD, Calabro FJ, Hämäläinen MS, Khan S, Vaina LM. Neural activity underlying the detection of an object movement by an observer during forward self-motion: Dynamic decoding and temporal evolution of directional cortical connectivity. Prog Neurobiol 2020; 195:101824. [PMID: 32446882 DOI: 10.1016/j.pneurobio.2020.101824] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2020] [Revised: 05/09/2020] [Accepted: 05/18/2020] [Indexed: 01/13/2023]
Abstract
Relatively little is known about how the human brain identifies movement of objects while the observer is also moving in the environment. This is, ecologically, one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task which involved nine textured spheres moving in depth, eight simulating the observer's forward motion while the ninth, the target, moved independently with a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG) we trained a Support Vector Classifier (SVC) using the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for the correct prediction was similar but not exclusive to areas underlying the evoked activity. Importantly, SVC identified frontal areas otherwise not detected with evoked activity that seem to be important for the successful performance in the task. Dynamic connectivity further supported the involvement of frontal and occipital-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during observer's self-motion.
Collapse
Affiliation(s)
- N Kozhemiako
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
| | - A S Nunes
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
| | - A Samal
- Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA.
| | - K D Rana
- Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; National Institute of Mental Health, Bethesda, MD, USA.
| | - F J Calabro
- Department of Psychiatry and Biomedical Engineering, University of Pittsburgh, PA, USA.
| | - M S Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA.
| | - S Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
| | - L M Vaina
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
40
|
Computational Mechanisms for Perceptual Stability using Disparity and Motion Parallax. J Neurosci 2020; 40:996-1014. [PMID: 31699889 DOI: 10.1523/jneurosci.0036-19.2019] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Revised: 09/24/2019] [Accepted: 10/07/2019] [Indexed: 11/21/2022] Open
Abstract
Walking and other forms of self-motion create global motion patterns across our eyes. With the resulting stream of visual signals, how do we perceive ourselves as moving through a stable world? Although the neural mechanisms are largely unknown, human studies (Warren and Rushton, 2009) provide strong evidence that the visual system is capable of parsing the global motion into two components: one due to self-motion and the other due to independently moving objects. In the present study, we use computational modeling to investigate potential neural mechanisms for stabilizing visual perception during self-motion that build on neurophysiology of the middle temporal (MT) and medial superior temporal (MST) areas. One such mechanism leverages direction, speed, and disparity tuning of cells in dorsal MST (MSTd) to estimate the combined motion parallax and disparity signals attributed to the observer's self-motion. Feedback from the most active MSTd cell subpopulations suppresses motion signals in MT that locally match the preference of the MSTd cell in both parallax and disparity. This mechanism combined with local surround inhibition in MT allows the model to estimate self-motion while maintaining a sparse motion representation that is compatible with perceptual stability. A key consequence is that after signals compatible with the observer's self-motion are suppressed, the direction of independently moving objects is represented in a world-relative rather than observer-relative reference frame. Our analysis explicates how temporal dynamics and joint motion parallax-disparity tuning resolve the world-relative motion of moving objects and establish perceptual stability. Together, these mechanisms capture findings on the perception of object motion during self-motion. SIGNIFICANCE STATEMENT: The image integrated by our eyes as we move through our environment undergoes constant flux as trees, buildings, and other surroundings stream by us.
If our view can change so radically from one moment to the next, how do we perceive a stable world? Although progress has been made in understanding how this works, little is known about the underlying brain mechanisms. We propose a computational solution whereby multiple brain areas communicate to suppress the motion attributed to our movement relative to the stationary world, which is often responsible for a large proportion of the flux across the visual field. We simulated the proposed neural mechanisms and tested model estimates using data from human perceptual studies.
|
41
|
A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019; 15:e1007397. [PMID: 31725723 PMCID: PMC6879150 DOI: 10.1371/journal.pcbi.1007397] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 11/26/2019] [Accepted: 09/12/2019] [Indexed: 12/02/2022] Open
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, since otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute, given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction represented by motion signals. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion.
Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
|
42
|
Pitzalis S, Serra C, Sulpizio V, Committeri G, de Pasquale F, Fattori P, Galletti C, Sepe R, Galati G. Neural bases of self- and object-motion in a naturalistic vision. Hum Brain Mapp 2019; 41:1084-1111. [PMID: 31713304 PMCID: PMC7267932 DOI: 10.1002/hbm.24862] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2019] [Revised: 10/19/2019] [Accepted: 10/31/2019] [Indexed: 12/16/2022] Open
Abstract
To plan movements toward objects, our brain must recognize whether retinal displacement is due to self-motion and/or to object-motion. Here, we aimed to test whether motion areas are able to segregate these types of motion. We combined an event-related functional magnetic resonance imaging experiment, brain mapping techniques, and wide-field stimulation to study the responsivity of motion-sensitive areas to pure and combined self- and object-motion conditions during virtual movies of a train running within a realistic landscape. We observed a selective response in MT to the pure object-motion condition, and in medial (PEc, pCi, CSv, and CMA) and lateral (PIC and LOR) areas to the pure self-motion condition. Some other regions (like V6) responded more to complex visual stimulation where both object- and self-motion were present. Among all, we found that some motion regions (V3A, LOR, MT, V6, and IPSmot) could extract object-motion information from the overall motion, recognizing the real movement of the train even when the images remained still (on the screen), or moved, because of self-movements. We propose that these motion areas might be good candidates for the "flow parsing mechanism," that is, the capability to extract object-motion information from retinal motion signals by subtracting out the optic flow components.
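The "flow parsing mechanism" invoked in this abstract, extracting object motion by subtracting out the optic-flow components attributable to self-motion, reduces to a vector subtraction at each retinal location. A minimal sketch with invented velocity values (a toy illustration, not the study's fMRI analysis):

```python
# Toy illustration of flow parsing (hypothetical numbers, not the study's data):
# the image motion of an object on the retina is the sum of a component caused
# by the observer's self-motion and the object's own motion in the world.
retinal_motion = (3.0, 2.0)        # measured image velocity of the object (deg/s)
self_motion_flow = (3.0, 0.0)      # optic-flow velocity expected at that location
                                   # from self-motion alone (deg/s)

# Flow parsing: subtract the self-motion component to recover the
# scene-relative (world-relative) object motion.
object_motion_world = tuple(r - s for r, s in zip(retinal_motion, self_motion_flow))
print(object_motion_world)  # (0.0, 2.0): purely vertical motion relative to the scene
```

In practice the hard part is estimating `self_motion_flow` in the first place, which is what the heading- and flow-sensitive areas listed above are thought to contribute.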
Affiliation(s)
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome Foro Italico, Rome, Italy; Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
| | - Chiara Serra
- Department of Movement, Human and Health Sciences, University of Rome Foro Italico, Rome, Italy; Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
| | - Valentina Sulpizio
- Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Giorgia Committeri
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies (ITAB), University G. d'Annunzio, Chieti, Italy
| | - Francesco de Pasquale
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies (ITAB), University G. d'Annunzio, Chieti, Italy; Faculty of Veterinary Medicine, University of Teramo, Teramo, Italy
| | - Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
| | - Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
| | - Rosamaria Sepe
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies (ITAB), University G. d'Annunzio, Chieti, Italy
| | - Gaspare Galati
- Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
| |
|
43
|
Rezai O, Stoffl L, Tripp B. How are response properties in the middle temporal area related to inference on visual motion patterns? Neural Netw 2019; 121:122-131. [PMID: 31541880 DOI: 10.1016/j.neunet.2019.08.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Revised: 08/04/2019] [Accepted: 08/22/2019] [Indexed: 10/26/2022]
Abstract
Neurons in the primate middle temporal area (MT) respond to moving stimuli, with strong tuning for motion speed and direction. These responses have been characterized in detail, but the functional significance of these details (e.g. shapes and widths of speed tuning curves) is unclear, because they cannot be selectively manipulated. To estimate their functional significance, we used a detailed model of MT population responses as input to convolutional networks that performed sophisticated motion processing tasks (visual odometry and gesture recognition). We manipulated the distributions of speed and direction tuning widths, and studied the effects on task performance. We also studied performance with random linear mixtures of the responses, and with responses that had the same representational dissimilarity as the model populations, but were otherwise randomized. The widths of speed and direction tuning both affected task performance, despite the networks having been optimized individually for each tuning variation, but the specific effects were different in each task. Random linear mixing improved performance of the odometry task, but not the gesture recognition task. Randomizing the responses while maintaining representational dissimilarity resulted in poor odometry performance. In summary, despite full optimization of the deep networks in each case, each manipulation of the representation affected performance of sophisticated visual tasks. Representation properties such as tuning width and representational similarity have been studied extensively from other perspectives, but this work provides new insight into their possible roles in sophisticated visual inference.
|
44
|
Spatial suppression promotes rapid figure-ground segmentation of moving objects. Nat Commun 2019; 10:2732. [PMID: 31266956 PMCID: PMC6606582 DOI: 10.1038/s41467-019-10653-8] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2018] [Accepted: 05/21/2019] [Indexed: 12/21/2022] Open
Abstract
Segregation of objects from their backgrounds is a fundamental visual function and one that is particularly effective when objects are in motion. Theoretically, suppressive center-surround mechanisms are well suited for accomplishing motion segregation. This longstanding hypothesis, however, has received limited empirical support. We report converging correlational and causal evidence that spatial suppression of background motion signals is critical for rapid segmentation of moving objects. Motion segregation ability is strongly predicted by both individual and stimulus-driven variations in spatial suppression strength. Moreover, aging-related superiority in perceiving background motion is associated with profound impairments in motion segregation. This segregation deficit is alleviated via perceptual learning, but only when motion segregation training also causes decreased sensitivity to background motion. We argue that perceptual insensitivity to large moving stimuli effectively implements background subtraction, which, in turn, enhances the visibility of moving objects and accounts for the observed link between spatial suppression and motion segregation. The visual system excels at segregating moving objects from their backgrounds, a key visual function hypothesized to be driven by suppressive centre-surround mechanisms. Here, the authors show that spatial suppression of background motion signals is critical for rapid segmentation of moving objects.
|
45
|
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126 DOI: 10.1073/pnas.1820373116] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
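The causal inference framework tested in this abstract can be illustrated as a Bayesian comparison of two candidate causes for the object's image motion: self-motion alone versus self-motion plus independent object motion. The sketch below is a simplified, assumption-laden version (Gaussian likelihoods, arbitrary noise parameters, a flat 0.5 prior), not the paper's fitted model:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density, used as a simple likelihood model (an assumption)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_object_stationary(observed_speed, predicted_from_self_motion,
                        sigma_meas=1.0, sigma_object=5.0, prior_stationary=0.5):
    """Posterior probability that the object is stationary in the world.

    If the object is stationary, its image motion should match the self-motion
    prediction up to measurement noise; if it moves independently, its world
    motion (drawn from a broad prior) adds extra variability to the measurement.
    All parameter values here are illustrative assumptions.
    """
    like_stationary = gaussian_pdf(observed_speed, predicted_from_self_motion,
                                   sigma_meas)
    like_moving = gaussian_pdf(observed_speed, predicted_from_self_motion,
                               math.hypot(sigma_meas, sigma_object))
    post = like_stationary * prior_stationary
    return post / (post + like_moving * (1.0 - prior_stationary))

# Slow residual object motion is attributed to self-motion ("stationary" report);
# fast residual motion is attributed to an independently moving object.
print(p_object_stationary(0.5, 0.0))  # clearly above 0.5
print(p_object_stationary(8.0, 0.0))  # near zero
```

This reproduces the qualitative pattern in the abstract: stationarity reports, and hence heading biases, fall off as object speed grows.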
|
46
|
Serra C, Galletti C, Di Marco S, Fattori P, Galati G, Sulpizio V, Pitzalis S. Egomotion-related visual areas respond to active leg movements. Hum Brain Mapp 2019; 40:3174-3191. [PMID: 30924264 DOI: 10.1002/hbm.24589] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 03/07/2019] [Accepted: 03/20/2019] [Indexed: 12/13/2022] Open
Abstract
Monkey neurophysiology and human neuroimaging studies have demonstrated that passive viewing of optic flow stimuli activates a cortical network of temporal, parietal, insular, and cingulate visual motion regions. Here, we tested whether the human visual motion areas involved in processing optic flow signals simulating self-motion are also activated by active lower limb movements, and hence are likely involved in guiding human locomotion. To this aim, we used a combined approach of task-evoked activity and resting-state functional connectivity by fMRI. We localized a set of six egomotion-responsive visual areas (V6+, V3A, intraparietal motion/ventral intraparietal [IPSmot/VIP], cingulate sulcus visual area [CSv], posterior cingulate sulcus area [pCi], posterior insular cortex [PIC]) by using optic flow. We tested their response to a motor task implying long-range active leg movements. Results revealed that, among these visually defined areas, CSv, pCi, and PIC responded to leg movements (visuomotor areas), while V6+, V3A, and IPSmot/VIP did not (visual areas). Functional connectivity analysis showed that visuomotor areas are connected to the cingulate motor areas, the supplementary motor area, and notably to the medial portion of the somatosensory cortex, which represents legs and feet. We suggest that CSv, pCi, and PIC perform the visual analysis of egomotion-like signals to provide sensory information to the motor system with the aim of guiding locomotion.
Affiliation(s)
- Chiara Serra
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
| | - Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Sara Di Marco
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
| | - Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
| | - Valentina Sulpizio
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
| |
|
47
|
Koppen M, Ter Horst AC, Medendorp WP. Weighted Visual and Vestibular Cues for Spatial Updating During Passive Self-Motion. Multisens Res 2019; 32:165-178. [PMID: 31059483 DOI: 10.1163/22134808-20191364] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Accepted: 02/12/2019] [Indexed: 11/19/2022]
Abstract
When walking or driving, it is of the utmost importance to continuously track the spatial relationship between objects in the environment and the moving body in order to prevent collisions. Although this process of spatial updating occurs naturally, it involves the processing of a myriad of noisy and ambiguous sensory signals. Here, using a psychometric approach, we investigated the integration of visual optic flow and vestibular cues in spatially updating a remembered target position during a linear displacement of the body. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They had to remember the position of a target, briefly presented before a sideward translation of the body involving supra-threshold vestibular cues and whole-field optic flow that provided slightly discrepant motion information. After the motion, using a forced-choice response, participants indicated whether the location of a brief visual probe was left or right of the remembered target position. Our results show that, in a spatial updating task involving passive linear self-motion, humans integrate optic flow and vestibular self-displacement information according to a weighted-averaging process, with, across subjects, on average about four times as much weight assigned to the visual as to the vestibular contribution (i.e., 79% visual weight). We discuss our findings with respect to previous literature on the effect of optic flow on spatial updating performance.
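The weighted-averaging process reported above can be written out directly. In the sketch below, the 0.79 visual weight is the across-subject average from the study, while the displacement values are invented for illustration:

```python
# Reliability-weighted averaging of visual (optic flow) and vestibular
# self-displacement cues. The 0.79 visual weight is the across-subject average
# reported in the study; the displacement values are hypothetical.
w_visual = 0.79
w_vestibular = 1.0 - w_visual

visual_displacement = 10.0      # cm of self-displacement signaled by optic flow
vestibular_displacement = 8.0   # cm of self-displacement signaled by the vestibular cue

combined = w_visual * visual_displacement + w_vestibular * vestibular_displacement
print(round(combined, 2))  # 9.58 -> the combined update sits much closer to the visual cue
```

Under statistically optimal integration the weights would follow from the cues' reliabilities (w_visual = sigma_vestibular^2 / (sigma_visual^2 + sigma_vestibular^2)); the study's psychometric approach instead estimates the weight behaviorally.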
Affiliation(s)
- Mathieu Koppen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Arjan C Ter Horst
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
|
48
|
Churan J, von Hopffgarten A, Bremmer F. Eye movements during path integration. Physiol Rep 2018; 6:e13921. [PMID: 30450739 PMCID: PMC6240582 DOI: 10.14814/phy2.13921] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2018] [Accepted: 10/19/2018] [Indexed: 11/24/2022] Open
Abstract
Self-motion induces spontaneous eye movements, which serve the purpose of stabilizing the visual image on the retina. Previous studies have mainly focused on their reflexive nature and how the perceptual system disentangles visual flow components caused by eye movements and self-motion. Here, we investigated the role of eye movements in distance reproduction (path integration). We used bimodal (visual-auditory) simulated self-motion: visual optic flow was paired with an auditory stimulus whose frequency was scaled with simulated speed. The task of the subjects in each trial was, first, to observe the simulated self-motion over a certain distance (Encoding phase) and, second, to actively reproduce the observed distance using only visual, only auditory, or bimodal feedback (Reproduction phase). We found that eye positions and eye speeds were strongly correlated between the Encoding and the Reproduction phases. This was the case even when reproduction relied solely on auditory information and thus no visual stimulus was presented. We believe that these correlations are indicative of a contribution of eye movements to path integration.
Affiliation(s)
- Jan Churan
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg, Marburg, Germany
| | | | - Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg, Marburg, Germany
| |
|
49
|
Pobric G, Hulleman J, Lavidor M, Silipo G, Rohrig S, Dias E, Javitt DC. Seeing the World as it is: Mimicking Veridical Motion Perception in Schizophrenia Using Non-invasive Brain Stimulation in Healthy Participants. Brain Topogr 2018; 31:827-837. [PMID: 29516204 PMCID: PMC6097741 DOI: 10.1007/s10548-018-0639-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Accepted: 02/26/2018] [Indexed: 11/06/2022]
Abstract
Schizophrenia (Sz) is a mental health disorder characterized by severe cognitive, emotional, social, and perceptual deficits. Visual deficits are found in tasks relying on the magnocellular/dorsal stream. In our first experiment, we established deficits in global motion processing in Sz patients compared to healthy controls. We used a novel task in which background optic flow produces a distortion of the apparent trajectory of a moving stimulus, leading control participants to provide biased estimates of the true motion trajectory under conditions of global stimulation. Sz patients were significantly less affected by the global background motion, and reported trajectories that were more veridical than those of controls. In order to study the mechanism of this effect, we performed a second experiment in which we applied transcranial electrical stimulation over area MT+ to selectively modify global motion processing of optic flow displays in healthy participants. Cathodal and high-frequency random noise stimulation had opposite effects on trajectory perception in optic flow. Brain stimulation over a control site and in a control task revealed that the effect of stimulation was specific to global motion processing in area MT+. These findings both support prior studies of impaired early visual processing in Sz and provide novel approaches for measurement and manipulation of the underlying circuits.
Collapse
Affiliation(s)
- Gorana Pobric
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, University of Manchester, Oxford Road, Manchester, M13 9PL, UK.
- Schizophrenia Research Division, Nathan Kline Institute, Orangeburg, NY, 10962, USA.
| | - Johan Hulleman
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, University of Manchester, Oxford Road, Manchester, M13 9PL, UK
| | - Michal Lavidor
- Department of Psychology, Bar Ilan University, Ramat Gan, Tel Aviv, Israel
| | - Gail Silipo
- Schizophrenia Research Division, Nathan Kline Institute, Orangeburg, NY, 10962, USA
| | - Stephanie Rohrig
- Schizophrenia Research Division, Nathan Kline Institute, Orangeburg, NY, 10962, USA
| | - Elisa Dias
- Schizophrenia Research Division, Nathan Kline Institute, Orangeburg, NY, 10962, USA
| | - Daniel C Javitt
- Schizophrenia Research Division, Nathan Kline Institute, Orangeburg, NY, 10962, USA
- Division of Experimental Therapeutics, Department of Psychiatry, Columbia University Medical Center, New York, NY, 10032, USA
| |
|
50
|
Abstract
Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.
Affiliation(s)
- Shin'ya Nishida
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
| | - Takahiro Kawabe
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
| | - Masataka Sawayama
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
| | - Taiki Fukiage
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
| |
|