1. Sulpizio V, von Gal A, Galati G, Fattori P, Galletti C, Pitzalis S. Neural sensitivity to translational self- and object-motion velocities. Hum Brain Mapp 2024; 45:e26571. PMID: 38224544; PMCID: PMC10785198; DOI: 10.1002/hbm.26571.
Abstract
The ability to detect and assess world-relative object-motion is a critical computation performed by the visual system. This computation, however, is greatly complicated by the observer's movements, which generate a global pattern of motion on the observer's retina. How the visual system implements this computation is poorly understood. Since we are potentially able to detect a moving object if its motion differs in velocity (or direction) from the expected optic flow generated by our own motion, here we manipulated the relative motion velocity between the observer and the object within a stationary scene as a strategy to test how the brain accomplishes object-motion detection. Specifically, we tested the neural sensitivity of brain regions that are known to respond to egomotion-compatible visual motion (i.e., egomotion areas: cingulate sulcus visual area, posterior cingulate sulcus area, posterior insular cortex [PIC], V6+, V3A, IPSmot/VIP, and MT+) to a combination of different velocities of visually induced translational self- and object-motion within a virtual scene while participants were instructed to detect object-motion. To this aim, we combined individual surface-based brain mapping, task-evoked activity by functional magnetic resonance imaging, and parametric and representational similarity analyses. We found that all the egomotion regions (except area PIC) responded to all the possible combinations of self- and object-motion and were modulated by the self-motion velocity. Interestingly, we found that, among all the egomotion areas, only MT+, V6+, and V3A were further modulated by object-motion velocities, hence reflecting their possible role in discriminating between distinct velocities of self- and object-motion. We suggest that these egomotion regions may be involved in the complex computation required for detecting scene-relative object-motion during self-motion.
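The detection strategy at the core of this abstract (flag an object when its image velocity departs from the optic flow expected under self-motion) can be sketched in a few lines. This is a minimal illustration, not the study's analysis: the flow vectors, the threshold value, and the `detect_object_motion` helper are all hypothetical.

```python
import numpy as np

def detect_object_motion(observed_flow, expected_flow, threshold=0.5):
    """Flag image locations whose observed flow vector departs from the
    optic flow expected from self-motion by more than `threshold`.
    Rows are (vx, vy) flow vectors at each location."""
    mismatch = np.linalg.norm(observed_flow - expected_flow, axis=1)
    return mismatch > threshold

# Three scene points during simulated forward self-motion; the third
# point also translates in the world, adding velocity to its image motion.
expected = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
observed = np.array([[1.0, 0.0], [0.0, 1.1], [2.0, 0.5]])
moving = detect_object_motion(observed, expected)  # only the third point
```

Velocity mismatches below the threshold (measurement noise, small depth-related speed differences) are absorbed; only a clear departure from the expected flow is attributed to world-relative object motion.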
Affiliation(s)
- Valentina Sulpizio
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Psychology, Sapienza University, Rome, Italy
- Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Psychology, Sapienza University, Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
2. Sun Q, Zhan LZ, Zhang BY, Jia S, Gong XM. Heading perception from optic flow occurs at both perceptual representation and working memory stages with EEG evidence. Vision Res 2023; 208:108235. PMID: 37094419; DOI: 10.1016/j.visres.2023.108235.
Abstract
Psychophysical studies have demonstrated that heading perception from optic flow occurs in both perceptual and post-perceptual stages. The post-perceptual stage is a broad concept that includes working memory. The current study examined whether working memory is involved in heading perception from optic flow by recording scalp EEG while participants performed a heading perception task. On each trial, an optic flow display was presented, followed by a blank display, and participants then reported their perceived heading. Because participants tend to automatically forget previous headings once they learn that these are irrelevant to the current judgment (to save cognitive resources), previous headings should not otherwise be decodable from current-trial EEG. Therefore, if previous headings could be decoded while the optic flow display was on screen, the perceptual representation stage was involved in heading perception; if they could be decoded during the subsequent blank display, working memory was involved. Decoding accuracy was significantly above chance during both the optic flow and the blank displays. The current study thus provides electrophysiological evidence that heading perception from optic flow occurs at both the perceptual representation and working memory stages, against the purely perceptual claim.
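The time-resolved decoding logic of studies like this one can be mimicked with a minimal nearest-centroid decoder applied to simulated sensor patterns. This is a hedged sketch, not the authors' pipeline: the synthetic data, the two heading classes, and the `nearest_centroid_decode` helper are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_decode(train_x, train_y, test_x):
    """Classify each test pattern by its nearest class-mean training
    pattern, a minimal stand-in for time-resolved EEG decoders."""
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Simulate 200 trials of 32-channel sensor patterns for two headings.
n_trials, n_sensors = 200, 32
labels = rng.integers(0, 2, n_trials)
templates = rng.normal(size=(2, n_sensors))   # one sensor pattern per heading
data = templates[labels] + rng.normal(size=(n_trials, n_sensors))

pred = nearest_centroid_decode(data[:150], labels[:150], data[150:])
accuracy = (pred == labels[150:]).mean()  # well above the 0.5 chance level
```

In the study's logic, above-chance decoding of the previous heading during the optic flow display would implicate the perceptual representation stage, and during the blank display, working memory.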
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Bao-Yuan Zhang
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Shiwei Jia
- School of Psychology, Shandong Normal University, Jinan, China
- Xiu-Mei Gong
- School of Psychology, Zhejiang Normal University, Jinhua, China
3. Lemaire BS, Rosa-Salva O, Fraja M, Lorenzi E, Vallortigara G. Spontaneous preference for unpredictability in the temporal contingencies between agents' motion in naive domestic chicks. Proc Biol Sci 2022; 289:20221622. PMID: 36350221; PMCID: PMC9653227; DOI: 10.1098/rspb.2022.1622.
Abstract
The ability to recognize animate agents based on their motion has been investigated in humans and animals alike. When the movements of multiple objects are interdependent, humans perceive the presence of social interactions and goal-directed behaviours. Here, we investigated how visually naive domestic chicks respond to agents whose motion was reciprocally contingent in space and time (i.e. the time and direction of motion of one object can be predicted from the time and direction of motion of another object). We presented a 'social aggregation' stimulus, in which three smaller discs repeatedly converged towards a bigger disc, moving in a manner resembling a mother hen and chicks (versus a control stimulus lacking such interactions). Remarkably, chicks preferred stimuli in which the timing of the motion of one object could not be predicted by that of other objects. This is the first demonstration of a sensitivity to the temporal relationships between the motion of different objects in naive animals, a trait that could be at the basis of the development of the perception of social interaction and goal-directed behaviours.
Affiliation(s)
- Bastien S. Lemaire
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura, 1, 38068 Rovereto, TN, Italy
- Orsola Rosa-Salva
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura, 1, 38068 Rovereto, TN, Italy
- Margherita Fraja
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura, 1, 38068 Rovereto, TN, Italy
- Elena Lorenzi
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura, 1, 38068 Rovereto, TN, Italy
- Giorgio Vallortigara
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura, 1, 38068 Rovereto, TN, Italy
4. Sun Q, Yan R, Wang J, Li X. Heading perception from optic flow is affected by heading distribution. Iperception 2022; 13:20416695221133406. PMID: 36457854; PMCID: PMC9706071; DOI: 10.1177/20416695221133406.
Abstract
Recent studies have revealed a central tendency in the perception of physical features: the perceived feature is biased toward the mean of recently experienced features (i.e., the previous feature distribution). However, no study has explored whether such a central tendency exists in heading perception. In this study, we conducted three experiments to answer this question. The results showed that the perceived heading was not biased toward the mean of the previous heading distribution, suggesting that there is no central tendency in heading perception. However, perceived headings were biased overall toward the left side, where headings rarely appeared in the right-heavy distribution (Experiment 3), suggesting that heading perception from optic flow is affected by previously seen headings. This indicates that participants learned the heading distributions and used them to adjust their heading perception. Our study reveals that heading perception from optic flow is not purely perceptual and that post-perceptual stages (e.g., attention and working memory) may be involved in heading perception from optic flow.
Affiliation(s)
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, People's Republic of China
- Ruifang Yan
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Jingyi Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Xinyu Li
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, People's Republic of China
5. Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:e74971. PMID: 35642599; PMCID: PMC9159750; DOI: 10.7554/elife.74971.
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
6.
Abstract
Previous work shows that observers can use information from optic flow to perceive the direction of self-motion (i.e. heading) and that perceived heading exhibits a bias towards the center of the display (center bias). More recent work shows that the brain is sensitive to serial correlations and the perception of current stimuli can be affected by recently seen stimuli, a phenomenon known as serial dependence. In the current study, we examined whether, apart from center bias, serial dependence could be independently observed in heading judgments and how adding noise to optic flow affected center bias and serial dependence. We found a repulsive serial dependence effect in heading judgments after factoring out center bias in heading responses. The serial effect expands heading estimates away from the previously seen heading to increase overall sensitivity to changes in heading directions. Both the center bias and repulsive serial dependence effects increased with increasing noise in optic flow, and the noise-dependent changes in the serial effect were consistent with an ideal observer model. Our results suggest that the center bias effect is due to a prior of the straight-ahead direction in the Bayesian inference account for heading perception, whereas the repulsive serial dependence is an effect that reduces response errors and has the added utility of counteracting the center bias in heading judgments.
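The attractive-versus-repulsive distinction this abstract draws is commonly measured by projecting the response error onto the direction of the previous stimulus. A toy sketch under that convention; the `serial_bias` helper and the numbers are hypothetical, not the paper's model.

```python
import numpy as np

def serial_bias(prev_heading, curr_heading, response):
    """Signed serial-dependence measure (degrees): positive values mean
    the response is pulled toward the previous heading (attractive),
    negative values mean it is pushed away from it (repulsive)."""
    error = response - curr_heading                  # signed response error
    direction = np.sign(prev_heading - curr_heading) # where "toward" points
    return direction * error

# Previous heading 10 deg to the right of the current one; the response
# lands 2 deg on the far side of the current heading -> repulsive.
bias = serial_bias(prev_heading=10.0, curr_heading=0.0, response=-2.0)
```

Averaging this quantity over trials, after first factoring out the center bias from the responses as the abstract describes, gives the sign and size of the serial dependence effect.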
Affiliation(s)
- Qi Sun
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Huihui Zhang
- School of Psychology, The University of Sydney, Sydney, Australia
- David Alais
- School of Psychology, The University of Sydney, Sydney, Australia
- Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR; Faculty of Arts and Science, New York University Shanghai, Shanghai, People's Republic of China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, People's Republic of China
7. Zhang W, Sun X, Yu Q. Moving Object Detection under a Moving Camera via Background Orientation Reconstruction. Sensors 2020; 20:3103. PMID: 32486336; PMCID: PMC7309005; DOI: 10.3390/s20113103.
Abstract
Moving object detection under a moving camera is a challenging problem, especially against a complex background. This paper proposes a background orientation field reconstruction method based on Poisson fusion for detecting moving objects under a moving camera. Exploiting the fact that the optical flow orientation of the background does not depend on scene depth, the method reconstructs the background orientation field through Poisson fusion based on the modified gradient. The motion saliency map is then calculated from the difference between the original and the reconstructed orientation fields. Based on similarity in appearance and motion, the paper also proposes a weighted accumulation enhancement method, which highlights the motion saliency of the moving objects while improving consistency within the object and background regions. Furthermore, the proposed method incorporates motion continuity to reject false positives. Experimental results on publicly available datasets indicate that the proposed method achieves excellent performance compared with current state-of-the-art methods.
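The saliency step described here, differencing the observed flow orientation against the reconstructed background orientation field, reduces to a circular angular difference per pixel. A minimal sketch, not the paper's implementation: the angle values, the 0.5 rad threshold, and the `motion_saliency` helper are illustrative assumptions.

```python
import numpy as np

def motion_saliency(flow_angle, background_angle):
    """Per-pixel motion saliency as the smallest circular difference
    between the observed flow orientation and the reconstructed
    background orientation (radians, in [0, pi])."""
    d = np.abs(flow_angle - background_angle) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

# Background flow points roughly rightward (angle ~0); one region moves up.
flow = np.array([0.05, -0.02, np.pi / 2, 0.01])
background = np.zeros(4)      # reconstructed background orientation field
saliency = motion_saliency(flow, background)
is_moving = saliency > 0.5    # illustrative threshold
```

Because orientation (unlike flow magnitude) is depth-independent for a translating camera, large values of this difference can be attributed to independent object motion rather than to parallax.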
8. Kozhemiako N, Nunes AS, Samal A, Rana KD, Calabro FJ, Hämäläinen MS, Khan S, Vaina LM. Neural activity underlying the detection of an object movement by an observer during forward self-motion: Dynamic decoding and temporal evolution of directional cortical connectivity. Prog Neurobiol 2020; 195:101824. PMID: 32446882; DOI: 10.1016/j.pneurobio.2020.101824.
Abstract
Relatively little is known about how the human brain identifies movement of objects while the observer is also moving in the environment. This is, ecologically, one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task which involved nine textured spheres moving in depth, eight simulating the observer's forward motion while the ninth, the target, moved independently with a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG) we trained a Support Vector Classifier (SVC) using the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for the correct prediction was similar but not exclusive to areas underlying the evoked activity. Importantly, SVC identified frontal areas otherwise not detected with evoked activity that seem to be important for the successful performance in the task. Dynamic connectivity further supported the involvement of frontal and occipital-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during observer's self-motion.
Affiliation(s)
- N Kozhemiako
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A S Nunes
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Samal
- Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- K D Rana
- Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; National Institute of Mental Health, Bethesda, MD, USA
- F J Calabro
- Department of Psychiatry and Biomedical Engineering, University of Pittsburgh, PA, USA
- M S Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- S Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- L M Vaina
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
9. Anthwal S, Ganotra D. An overview of optical flow-based approaches for motion segmentation. The Imaging Science Journal 2019. DOI: 10.1080/13682199.2019.1641316.
Affiliation(s)
- Shivangi Anthwal
- Department of Applied Science and Humanities, Indira Gandhi Delhi Technical University for Women, Delhi, India
- Dinesh Ganotra
- Department of Applied Science and Humanities, Indira Gandhi Delhi Technical University for Women, Delhi, India
10. Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. PMID: 30996126; DOI: 10.1073/pnas.1820373116.
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
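The causal inference prediction can be made concrete with a two-hypothesis Bayesian toy: compare the likelihood of the residual object-image speed under "stationary object" versus "independently moving object". All numbers (the prior and the two likelihood widths) are hypothetical; this is a sketch of the framework, not the paper's fitted model.

```python
import math

def p_stationary(residual_speed, prior_stationary=0.7,
                 sigma_meas=1.0, sigma_moving=5.0):
    """Posterior probability that an object is world-stationary, given
    the image speed not explained by self-motion. A stationary object
    predicts residual speed near zero (measurement noise only); a
    moving object allows a broad range of residual speeds."""
    def gauss(x, sigma):
        return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    num = prior_stationary * gauss(residual_speed, sigma_meas)
    den = num + (1 - prior_stationary) * gauss(residual_speed, sigma_moving)
    return num / den

slow = p_stationary(0.5)   # high: residual motion credited to self-motion
fast = p_stationary(8.0)   # low: object judged to move in the world
```

This reproduces the qualitative pattern in the abstract: stationarity reports fall as object speed grows, and once the object is attributed to an independent cause its flow can be discounted from the heading estimate, reducing heading bias.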
11. Lemasson B, Tanner C, Woodley C, Threadgill T, Qarqish S, Smith D. Motion cues tune social influence in shoaling fish. Sci Rep 2018; 8:9785. PMID: 29955069; PMCID: PMC6023868; DOI: 10.1038/s41598-018-27807-1.
Abstract
Social interactions have important consequences for individual fitness. Collective actions, however, are notoriously context-dependent and identifying how animals rapidly weigh the actions of others despite environmental uncertainty remains a fundamental challenge in biology. By exposing zebrafish (Danio rerio) to virtual fish silhouettes in a maze we isolated how the relative strength of a visual feature guides individual directional decisions and, subsequently, tunes social influence. We varied the relative speed and coherency with which a portion of silhouettes adopted a direction (leader/distractor ratio) and established that solitary zebrafish display a robust optomotor response to follow leader silhouettes that moved much faster than their distractors, regardless of stimulus coherency. Although recruitment time decreased as a power law of zebrafish group size, individual decision times retained a speed-accuracy trade-off, suggesting a benefit to smaller group sizes in collective decision-making. Directional accuracy improved regardless of group size in the presence of the faster moving leader silhouettes, but without these stimuli zebrafish directional decisions followed a democratic majority rule. Our results show that a large difference in movement speeds can guide directional decisions within groups, thereby providing individuals with a rapid and adaptive means of evaluating social information in the face of uncertainty.
Affiliation(s)
- Bertrand Lemasson
- Environmental Lab, U.S. Army Engineer Research and Development Center (ERDC), Newport, Oregon, USA
- Shea Qarqish
- Environmental Laboratory, ERDC, Vicksburg, MS, USA; College of Osteopathic Medicine, William Carey University, Hattiesburg, MS, USA
- David Smith
- Environmental Laboratory, ERDC, Vicksburg, MS, USA
12. Rushton SK, Chen R, Li L. Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. J Vis 2018; 18(6):11. PMID: 30029224; DOI: 10.1167/18.6.11.
Abstract
During locomotion humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found that precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
Affiliation(s)
- Simon K Rushton
- School of Psychology, Cardiff University, Cardiff, Wales, UK
- Rongrong Chen
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
13. Rogers C, Rushton SK, Warren PA. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement. Iperception 2017; 8:2041669517736072. PMID: 29201335; PMCID: PMC5700793; DOI: 10.1177/2041669517736072.
Abstract
Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement.
Affiliation(s)
- Paul A Warren
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
14. Niehorster DC, Li L. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion. Iperception 2017; 8:2041669517708206. PMID: 28567272; PMCID: PMC5439648; DOI: 10.1177/2041669517708206.
Abstract
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments; and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
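The flow-parsing gain measured in this study can be illustrated with a toy subtraction. A hedged sketch, not the authors' method: the 2-D vectors and the `parse_flow` helper are hypothetical, with `gain` playing the role of the measured flow parsing gain.

```python
import numpy as np

def parse_flow(retinal_motion, self_flow, gain=1.0):
    """Recover scene-relative object motion by subtracting the
    gain-scaled flow component attributed to self-motion. A gain of 1.0
    is complete flow parsing; gains below 1.0 (as the study reports)
    leave a residual self-motion component in the percept."""
    return retinal_motion - gain * self_flow

# Object moves straight up in the scene; self-motion adds rightward flow.
retinal = np.array([2.0, 1.0])    # motion actually present on the retina
ego_flow = np.array([2.0, 0.0])   # flow expected from self-motion alone

complete = parse_flow(retinal, ego_flow, gain=1.0)  # scene motion (0, 1)
partial = parse_flow(retinal, ego_flow, gain=0.7)   # biased estimate
```

With a gain below unity, the recovered trajectory retains part of the self-motion flow, which is why the study concludes that visual information alone does not support fully accurate perception of scene-relative motion.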
Affiliation(s)
- Li Li
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, China
15. Royden CS, Parsons D, Travatello J. The effect of monocular depth cues on the detection of moving objects by moving observers. Vision Res 2016; 124:7-14. PMID: 27264029; DOI: 10.1016/j.visres.2016.05.002.
Abstract
An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Daniel Parsons
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Joshua Travatello
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
Collapse
|
16
|
Yost WA, Zhong X, Najam A. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process. J Acoust Soc Am 2015; 138:3293-310. [PMID: 26627802 DOI: 10.1121/1.4935091]
Abstract
In four experiments, listeners were rotated or stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that world-centric localization of sound sources requires the auditory system to combine information about the auditory cues used for sound source location with cues about head position. The use of visual and vestibular information in determining rotating head position during sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone; it is a multisystem process.
Affiliation(s)
- William A Yost
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Xuan Zhong
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Anbar Najam
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA

17
Noel JP, Grivaz P, Marmaroli P, Lissek H, Blanke O, Serino A. Full body action remapping of peripersonal space: The case of walking. Neuropsychologia 2015; 70:375-84. [PMID: 25193502 DOI: 10.1016/j.neuropsychologia.2014.08.030]

18
Vaina LM, Buonanno F, Rushton SK. Spared ability to perceive direction of locomotor heading and scene-relative object movement despite inability to perceive relative motion. Med Sci Monit 2014; 20:1563-71. [PMID: 25183375 PMCID: PMC4161606 DOI: 10.12659/msm.892199]
Abstract
BACKGROUND All contemporary models of the perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. An impairment of relative motion perception would therefore be expected to affect the ability to judge heading and to perform other 3D motion tasks. MATERIAL AND METHODS We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. The patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. We therefore ran further experiments in which we isolated optic flow and scale change. RESULTS The patients' performance was within normal ranges on both tests. The finding that heading perception can be retained despite impaired relative motion judgment questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS's and SR's performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that, in spite of severe deficits for relative motion in the frontoparallel (xy) plane, information from self-motion helped the identification of objects moving along an interception trajectory in 3D. CONCLUSIONS These results suggest a potential use of a flow parsing strategy to detect the trajectory of moving objects in a 3D world when the observer is moving forward, and have implications for developing rehabilitation strategies for deficits in visually guided navigation.
Affiliation(s)
- Lucia Maria Vaina
- Brain and Vision Research Laboratory, Boston University, Boston, USA
- Ferdinando Buonanno
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Neurology of Vision Laboratory, Boston, USA
- Simon K Rushton
- School of Psychology, Cardiff University, Cardiff, United Kingdom

19
Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. [PMID: 24607912 DOI: 10.1016/j.visres.2014.02.009]
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
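As a loose illustration of the kind of unit such a model builds on, a generic MT-like tuning curve can be written as a Gaussian over direction and a Gaussian over log speed. This is not the paper's actual motion-opponent operator, and all bandwidth values below are invented for illustration:

```python
import numpy as np

def unit_response(v, pref_dir_deg, pref_speed, dir_bw=30.0, speed_bw=0.5):
    """
    Response of a single direction- and speed-tuned motion unit, loosely
    modeled on MT-like tuning: Gaussian in direction, Gaussian in log speed.
    v: 2D motion vector at the unit's image location.
    """
    speed = np.hypot(*v)
    if speed == 0:
        return 0.0
    direction = np.degrees(np.arctan2(v[1], v[0]))
    ddir = (direction - pref_dir_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    dir_term = np.exp(-ddir**2 / (2.0 * dir_bw**2))
    spd_term = np.exp(-np.log2(speed / pref_speed)**2 / (2.0 * speed_bw**2))
    return dir_term * spd_term

# A unit preferring rightward motion at 4 deg/s responds maximally to (4, 0)
# and only weakly to upward motion of the same speed.
print(unit_response((4.0, 0.0), pref_dir_deg=0.0, pref_speed=4.0))  # 1.0
print(unit_response((0.0, 4.0), pref_dir_deg=0.0, pref_speed=4.0) < 0.02)  # True
```

A population of such units, compared across neighboring image regions, responds differently on either side of a moving object's border, which is the intuition behind detecting borders from tuned responses.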
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States

20
Fajen BR, Parade MS, Matthis JS. Humans perceive object motion in world coordinates during obstacle avoidance. J Vis 2013; 13:25. [PMID: 23887048 PMCID: PMC3726133 DOI: 10.1167/13.8.25]
Abstract
A fundamental question about locomotion in the presence of moving objects is whether movements are guided based upon perceived object motion in an observer-centered or world-centered reference frame. The former captures object motion relative to the moving observer and depends on both observer and object motion. The latter captures object motion relative to the stationary environment and is independent of observer motion. Subjects walked through a virtual environment (VE) viewed through a head-mounted display and indicated whether they would pass in front of or behind a moving obstacle that was on course to cross their future path. Subjects' movement through the VE was manipulated such that object motion in observer coordinates was affected while object motion in world coordinates was the same. We found that when moving observers choose routes around moving obstacles, they rely on object motion perceived in world coordinates. This entails a process, which has been called flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a), that recovers the component of optic flow due to object motion independent of self-motion. We found that when self-motion is real and actively generated, the process by which object motion is recovered relies on both visual and nonvisual information to factor out the influence of self-motion. The remaining component contains information about object motion in world coordinates that is needed to guide locomotion.
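The distinction between the two reference frames reduces to a vector relation: observer-relative object motion is world-relative motion plus the component due to self-motion, so flow parsing amounts to subtracting that component back out. A toy sketch of this relation, with a hypothetical function name and arbitrary units:

```python
def world_motion(retinal_object_motion, self_motion_component):
    """
    Recover object motion in world coordinates by removing the component
    of retinal motion caused by the observer's own movement - the core
    idea of flow parsing (Rushton & Warren, 2005).
    """
    return tuple(r - s for r, s in zip(retinal_object_motion, self_motion_component))

# The object's image moves (3, 0) deg/s on the retina, while self-motion
# alone would move that image region (2, 0) deg/s.
print(world_motion((3.0, 0.0), (2.0, 0.0)))  # (1.0, 0.0)
```

In the study above, the self-motion component is estimated from both visual and nonvisual sources when self-motion is real and actively generated.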
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA

21
Raudies F, Neumann H. Modeling heading and path perception from optic flow in the case of independently moving objects. Front Behav Neurosci 2013; 7:23. [PMID: 23554589 PMCID: PMC3612589 DOI: 10.3389/fnbeh.2013.00023] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2012] [Accepted: 03/13/2013] [Indexed: 11/18/2022] Open
Abstract
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs.
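The linear least squares baseline referred to here can be sketched for the simplest case: pure translation with no independently moving objects. Each background flow vector must then point away from the focus of expansion (FOE), which yields one linear constraint per image point. This is a sketch under that assumption, with our own variable names, not the authors' implementation:

```python
import numpy as np

def estimate_foe(points, flows):
    """
    Least-squares focus-of-expansion estimate for pure observer translation.

    Each background flow vector should be parallel to (point - FOE), so
    cross(flow, point - FOE) = 0 gives one linear equation per point:
        fy * ex - fx * ey = fy * px - fx * py
    """
    fx, fy = flows[:, 0], flows[:, 1]
    px, py = points[:, 0], points[:, 1]
    A = np.stack([fy, -fx], axis=1)
    b = fy * px - fx * py
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from a FOE at (2, 1); speeds vary with
# a random "depth" factor, which leaves the flow directions unchanged.
rng = np.random.default_rng(0)
pts = rng.uniform(-5, 5, size=(50, 2))
foe_true = np.array([2.0, 1.0])
flo = (pts - foe_true) * rng.uniform(0.5, 2.0, size=(50, 1))
print(np.round(estimate_foe(pts, flo), 3))  # [2. 1.]
```

An independently moving object violates the radial constraint and drags this estimate off the true FOE, which is why the segmentation cues derived in the paper are needed to make the estimate robust.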
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA, USA
- Heiko Neumann
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA, USA
- Institute for Neural Information Processing, University of Ulm, Ulm, Germany

22
Interaction of cortical networks mediating object motion detection by moving observers. Exp Brain Res 2012; 221:177-89. [PMID: 22811215 DOI: 10.1007/s00221-012-3159-8]
Abstract
The task of parceling perceived visual motion into self- and object motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern connectivity among them to investigate the cortical regions of interest and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left hemisphere cluster is involved in mediating the interpretation of the stimulus for action. Our main focus was on the relationships of activations during our task among the visually responsive areas. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. 
Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
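Of the two connectivity measures used in this study, partial correlation is the simpler to reproduce: under Gaussian assumptions it can be read off the inverse covariance (precision) matrix of the region time series. A minimal sketch on synthetic data; the three-variable chain below stands in for real fMRI time series and is purely illustrative:

```python
import numpy as np

def partial_correlation(X):
    """
    Partial correlation matrix of time series X (T samples x N regions):
    entry (i, j) is the correlation between regions i and j with all other
    regions regressed out, obtained from the precision matrix.
    """
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Chain x -> y -> z: x and z are strongly correlated overall, but their
# direct (partial) connection, with y held fixed, should be near zero.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = x + 0.5 * rng.standard_normal(500)
z = y + 0.5 * rng.standard_normal(500)
pc = partial_correlation(np.column_stack([x, y, z]))
print(abs(pc[0, 2]) < 0.2)  # True
```

This is why partial correlation is useful for separating direct from indirect links when clustering areas into networks, as done above; directed influences require the complementary Granger causality analysis.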