1. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024; 19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110.
Abstract
To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate into biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants were shown a ball moving laterally that disappeared after a certain time; they then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they had observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, whereas for the speed estimation task this was only the case when the ball and observer moved in opposite directions. Our hypotheses regarding precision were largely unsupported by the data.
Overall, we draw several conclusions from this experiment. First, incomplete flow parsing can affect motion prediction. Further, the results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when that self-motion is simulated only visually.
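A minimal sketch of the logic in this abstract (our own illustration, not code or parameter values from the paper): if flow parsing subtracts only a fraction `gain` of the self-motion component from the retinal motion, perceived object speed, and hence the extrapolated time-to-contact, is biased in opposite directions for same-direction and opposite-direction self-motion. All names and the default gain are assumptions.

```python
# Hypothetical illustration of incomplete flow parsing (gain < 1).
# Lateral velocities in m/s; distance in m.

def perceived_velocity(v_object, v_self, gain=0.8):
    """Scene-relative object velocity recovered by flow parsing.

    Retinal motion is object motion minus self-motion; the observer
    adds back only a fraction `gain` of the self-motion component.
    """
    v_retinal = v_object - v_self
    return v_retinal + gain * v_self

def estimated_ttc(distance, v_object, v_self, gain=0.8):
    """Time-to-contact extrapolated from the (biased) perceived speed."""
    return distance / perceived_velocity(v_object, v_self, gain)

true_ttc = 2.0 / 1.0                          # 2 m to go at 1 m/s -> 2 s
ttc_same = estimated_ttc(2.0, 1.0, +0.5)      # self-motion with the ball
ttc_opposite = estimated_ttc(2.0, 1.0, -0.5)  # self-motion against the ball
# ttc_same > true_ttc (speed underestimated); ttc_opposite < true_ttc
```

With gain = 1 the self-motion component is fully compensated and the estimate is unbiased; the pattern of biases reported in the abstract corresponds to gain < 1.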
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Ontario, Canada
2. Falconbridge M, Stamps RL, Edwards M, Badcock DR. Target motion misjudgments reflect a misperception of the background; revealed using continuous psychophysics. Iperception 2023; 14:20416695231214439. PMID: 38680843; PMCID: PMC11046177; DOI: 10.1177/20416695231214439.
Abstract
Determining the velocities of target objects as we navigate complex environments is made more difficult by the fact that our own motion adds systematic motion signals to the visual scene. The flow-parsing hypothesis asserts that the background motion is subtracted from visual scenes in such cases as a way for the visual system to determine target motions relative to the scene. Here, we address the question of why backgrounds are only partially subtracted in lab settings. At the same time, we probe a much-neglected aspect of scene perception in flow-parsing studies, that is, the perception of the background itself. We present results from three experienced psychophysical participants and one inexperienced participant who took part in three continuous psychophysics experiments. We show that, when the background optic flow pattern is composed of local elements whose motions are congruent with the global optic flow pattern, the incompleteness of the background subtraction can be entirely accounted for by a misperception of the background. When the local velocities comprising the background are randomly dispersed around the average global velocity, an additional factor is needed to explain the subtraction incompleteness. We show that a model in which background perception results from the brain attempting to infer scene motion due to self-motion can account for these results.
Affiliation(s)
- Michael Falconbridge
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Robert L. Stamps
- Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
- Mark Edwards
- Research School of Psychology, Australian National University, Canberra, Australia
- David R. Badcock
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
3. Layton OW, Parade MS, Fajen BR. The accuracy of object motion perception during locomotion. Front Psychol 2023; 13:1068454. PMID: 36710725; PMCID: PMC9878598; DOI: 10.3389/fpsyg.2022.1068454.
Abstract
Human observers are capable of perceiving the motion of moving objects relative to the stationary world, even while undergoing self-motion. Perceiving world-relative object motion is complicated because the local optical motion of objects is influenced by both observer and object motion, and reflects object motion in observer coordinates. It has been proposed that observers recover world-relative object motion using global optic flow to factor out the influence of self-motion. However, object-motion judgments during simulated self-motion are biased, as if the visual system cannot completely compensate for the influence of self-motion. Recently, Xie et al. demonstrated that humans are capable of accurately judging world-relative object motion when self-motion is real, actively generated by walking, and accompanied by optic flow. However, the conditions used in that study differ from those found in the real world in that the moving object was a small dot with negligible optical expansion that moved at a fixed speed in retinal (rather than world) coordinates and was only visible for 500 ms. The present study investigated the accuracy of object motion perception under more ecologically valid conditions. Subjects judged the trajectory of an object that moved through a virtual environment viewed through a head-mounted display. Judgments exhibited bias in the case of simulated self-motion but were accurate when self-motion was real, actively generated, and accompanied by optic flow. The findings are largely consistent with the conclusions of Xie et al. and demonstrate that observers are capable of accurately perceiving world-relative object motion under ecologically valid conditions.
Affiliation(s)
- Oliver W. Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Department of Computer Science, Colby College, Waterville, ME, United States
- Melissa S. Parade
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
4. French RL, DeAngelis GC. Scene-relative object motion biases depth percepts. Sci Rep 2022; 12:18480. PMID: 36323845; PMCID: PMC9630409; DOI: 10.1038/s41598-022-23219-4.
Abstract
An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion. To correctly compute depth from motion parallax, only the component of image motion caused by self-motion should be used by the brain. Previous experimental and theoretical work on perception of depth from motion parallax has assumed that objects are stationary in the world. Thus, it is unknown whether perceived depth based on motion parallax is biased by object motion relative to the scene. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object could be either stationary or moving laterally at different velocities, and subjects were asked to judge the depth of the object relative to the plane of fixation. Subjects showed a far bias when object and observer moved in the same direction, and a near bias when object and observer moved in opposite directions. This pattern of biases is expected if subjects confound image motion due to self-motion with that due to scene-relative object motion. These biases were large when the object was viewed monocularly, and were greatly reduced, but not eliminated, when binocular disparity cues were provided. Our findings establish that scene-relative object motion can confound perceptual judgements of depth during self-motion.
Affiliation(s)
- Ranran L. French
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
5. Xing X, Saunders JA. Perception of object motion during self-motion: Correlated biases in judgments of heading direction and object motion. J Vis 2022; 22:8. PMID: 36223109; PMCID: PMC9583749; DOI: 10.1167/jov.22.11.8.
Abstract
This study investigated the relationship between perceived heading direction and perceived motion of an independently moving object during self-motion. Using a dual task paradigm, we tested whether object motion judgments showed biases consistent with heading perception, both across conditions and from trial to trial. Subjects viewed simulated self-motion and estimated their heading direction (Experiment 1), or walked toward a target in virtual reality with conflicting physical and visual cues (Experiment 2). During self-motion, an independently moving object briefly appeared, with varied horizontal velocity, and observers judged whether the object was moving leftward or rightward. In Experiment 1, heading estimates showed an expected center bias, and object motion judgments showed corresponding biases. Trial-to-trial variations were also correlated: on trials with a more rightward heading bias, object motion judgments were consistent with a more rightward heading, and vice versa. In Experiment 2, we estimated the relative weighting of visual and physical cues in control of walking and object motion judgments. Both were strongly influenced by nonvisual cues, with less weighting for object motion (86% vs. 63%). There were also trial-to-trial correlations between biases in walking direction and object motion judgments. The results provide evidence that shared mechanisms contribute to heading perception and perception of object motion.
Affiliation(s)
- Xing Xing
- Department of Psychology, University of Hong Kong, Hong Kong
6. Falconbridge M, Hewitt K, Haille J, Badcock DR, Edwards M. The induced motion effect is a high-level visual phenomenon: Psychophysical evidence. Iperception 2022; 13:20416695221118111. PMID: 36092511; PMCID: PMC9459461; DOI: 10.1177/20416695221118111.
Abstract
Induced motion is the illusory motion of a target away from the direction of motion of the unattended background. If it results from assigning background motion to self-motion and judging target motion relative to the scene, as suggested by the flow parsing hypothesis, then the effect must be mediated in higher levels of the visual motion pathway, where self-motion is assessed. We provide evidence for a high-level mechanism in two broad ways. Firstly, we show that the effect is insensitive to a set of low-level spatial aspects of the scene, namely the spatial arrangement, the spatial frequency content, and the orientation content of the background relative to the target. Secondly, we show that the effect is the same whether the target and background are composed of the same kind of local elements, one-dimensional (1D) or two-dimensional (2D), or the target is composed of one kind and the background of the other. The latter finding is significant because 1D and 2D local elements are integrated by two different mechanisms, so the induced motion effect is likely mediated in a visual motion processing area that follows the two separate integration mechanisms. The medial superior temporal area in monkeys, and its human equivalent, is suggested as a viable site. We present a simple flow-parsing-inspired model and demonstrate a good fit to our data and to data from a previous induced motion study.
7. Warren PA, Bell G, Li Y. Investigating distortions in perceptual stability during different self-movements using virtual reality. Perception 2022; 51:3010066221116480. PMID: 35946126; PMCID: PMC9478599; DOI: 10.1177/03010066221116480.
Abstract
Using immersive virtual reality (the HTC Vive head-mounted display), we measured both bias and sensitivity when making judgements about the scene stability of a target object during both active (self-propelled) and passive (experimenter-propelled) observer movements. This was repeated in the same group of 16 participants for three different observer-target movement conditions in which the instability of the target was yoked to the movement of the observer. We found that in all movement conditions the target needed to move with (in the same direction as) the participant to be perceived as scene-stable. Consistent with the presence of additional available information about self-movement (efference copy) during active conditions, biases were smaller and sensitivities to instability were higher in active relative to passive conditions. However, the presence of efference copy was clearly not sufficient to eliminate the bias completely, and we suggest that the presence of additional visual information about self-movement is also critical. We found some (albeit limited) evidence for correlation between appropriate metrics across different movement conditions. These results extend previous findings, providing evidence for consistency of biases across different movement types, suggestive of common processing underpinning perceptual stability judgements.
Affiliation(s)
- Paul A. Warren
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Graham Bell
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Yu Li
- Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
8. Tan MJH, Park JE, Freire-Fernández F, Guan J, Juarez XG, Odom TW. Lasing Action from Quasi-Propagating Modes. Adv Mater 2022; 34:e2203999. PMID: 35734937; DOI: 10.1002/adma.202203999.
Abstract
Band edges at the high symmetry points in reciprocal space of periodic structures hold special interest in materials engineering for their high density of states. In optical metamaterials, standing waves found at these points have facilitated lasing, bound-states-in-the-continuum, and Bose-Einstein condensation. However, because high symmetry points by definition are localized, properties associated with them are limited to specific energies and wavevectors. Conversely, quasi-propagating modes along the high symmetry directions are predicted to enable similar phenomena over a continuum of energies and wavevectors. Here, quasi-propagating modes in 2D nanoparticle lattices are shown to support lasing action over a continuous range of wavelengths and symmetry-determined directions from a single device. Using lead halide perovskite nanocrystal films as gain materials, lasing is achieved from waveguide-surface lattice resonance (W-SLR) modes that can be decomposed into propagating waves along high symmetry directions, and standing waves in the orthogonal direction that provide optical feedback. The characteristics of the lasing beams are analyzed using an analytical 3D model that describes diffracted light in 2D lattices. Demonstrations of lasing across different wavelengths and lattice designs highlight how quasi-propagating modes offer possibilities to engineer chromatic multibeam emission important in hyperspectral 3D sensing, high-bandwidth Li-Fi communication, and laser projection displays.
Affiliation(s)
- Max J H Tan
- Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Jeong-Eun Park
- Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Jun Guan
- Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Xitlali G Juarez
- Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL, 60208, USA
- Teri W Odom
- Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL, 60208, USA
9. Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. PMID: 35642599; PMCID: PMC9159750; DOI: 10.7554/elife.74971.
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
10. Jörges B, Harris LR. Object speed perception during lateral visual self-motion. Atten Percept Psychophys 2022; 84:25-46. PMID: 34704212; PMCID: PMC8547725; DOI: 10.3758/s13414-021-02372-4.
Abstract
Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the flow parsing hypothesis, observers estimate their own motion, subtract the corresponding retinal motion from the total retinal stimulation, and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, it is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction to the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.
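The adaptive procedure mentioned in this abstract can be sketched as a simplified, PEST-like staircase (an illustration of the general idea under our own assumptions, not the authors' implementation; full PEST also includes run-length rules for increasing the step size). The comparison speed is driven toward the point of subjective equality (PSE), with the step size halved at each response reversal:

```python
# Simplified PEST-like adaptive staircase (illustrative sketch only; the
# start value, step sizes, and halving-only rule are our assumptions).

def run_staircase(judge, start=10.0, step=4.0, min_step=0.05, max_trials=100):
    """Drive a comparison speed toward the PSE.

    `judge(speed)` returns True if the comparison is judged faster than
    the standard. The step is halved at every response reversal; the run
    ends when the step falls below `min_step`.
    """
    speed, last_dir = start, 0
    for _ in range(max_trials):
        direction = -1 if judge(speed) else +1   # judged too fast -> go down
        if last_dir != 0 and direction != last_dir:
            step /= 2.0                          # reversal: halve the step
            if step < min_step:
                break
        speed += direction * step
        last_dir = direction
    return speed

# Deterministic simulated observer whose PSE is 6.0 deg/s:
final_speed = run_staircase(lambda s: s > 6.0)
# final_speed converges near the simulated PSE of 6.0
```

In a real experiment `judge` would be a participant's 2AFC response, which is stochastic near the PSE; the final speed then estimates the PSE rather than hitting it exactly.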
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
- Laurence R. Harris
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
11. Lubetzky AV, Coker E, Arie L, Aharoni MMH, Krasovsky T. Postural Control under Cognitive Load: Evidence of Increased Automaticity Revealed by Center-of-Pressure and Head Kinematics. J Mot Behav 2021; 54:466-479. PMID: 34902292; DOI: 10.1080/00222895.2021.2013768.
Abstract
How postural responses change with sensory perturbations while also performing a cognitive task is still debated. This study investigated this question via a comprehensive assessment of postural sway, head kinematics, and their coupling. Twenty-three healthy young adults stood in tandem with eyes open or wearing the HTC Vive head-mounted display (HMD) with a static or dynamic (i.e., moving in the anterior-posterior direction by 5 mm or 32 mm at 0.2 Hz) three-wall stars display. On half of the trials, participants performed a cognitive serial subtraction task. The medio-lateral center-of-pressure (COP) path significantly increased with the cognitive task, particularly with dynamic visuals, whereas medio-lateral variance decreased with the cognitive task. Head path and velocity significantly increased with the cognitive task in both directions, while variance decreased. Head-COP cross-correlations ranged between 0.66 and 0.78. These findings, accompanied by frequency analysis, suggest that postural control switched to relying primarily on somatosensory input under challenging cognitive load conditions. Several differences between head and COP measures suggest that head kinematics contribute an important additional facet of postural control, and that the relationship between head and COP may depend on task and stance position. The potential of HMDs for clinical assessments of balance needs to be further explored.
Affiliation(s)
- Anat V Lubetzky
- Department of Physical Therapy, Steinhardt School of Culture Education and Human Development, New York University, New York, New York, USA
- Elizabeth Coker
- Department of Dance, Tisch School of the Arts, New York University, New York, New York, USA
- Liraz Arie
- Department of Physical Therapy, Steinhardt School of Culture Education and Human Development, New York University, New York, New York, USA
- Moshe M H Aharoni
- Physical Therapy Department, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Tal Krasovsky
- Physical Therapy Department, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Pediatric Rehabilitation Department, Sheba Medical Center, Ramat Gan, Israel
12. Niehorster DC. Optic Flow: A History. Iperception 2021; 12:20416695211055766. PMID: 34900212; PMCID: PMC8652193; DOI: 10.1177/20416695211055766.
Abstract
The concept of optic flow, a global pattern of visual motion that is both caused by and signals self-motion, is canonically ascribed to James Gibson's 1950 book "The Perception of the Visual World." There have, however, been several other developments of this concept, chiefly by Gwilym Grindley and Edward Calvert. Based on rarely referenced scientific literature and archival research, this article describes the development of the concept of optic flow by the aforementioned authors and several others. The article furthermore presents the available evidence for interactions between these authors, focusing on whether parts of Gibson's proposal were derived from the work of Grindley or Calvert. While Grindley's work may have made Gibson aware of the geometrical facts of optic flow, Gibson's work is not derivative of Grindley's. It is furthermore shown that Gibson only learned of Calvert's work in 1956, almost a decade after Gibson first published his proposal. In conclusion, the development of the concept of optic flow presents an intriguing example of convergent thought in the progress of science.
Affiliation(s)
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
13. Dupuis F, Sole G, Wassinger CA, Osborne H, Beilmann M, Mercier C, Campeau-Lecours A, Bouyer LJ, Roy J. The impact of experimental pain on shoulder movement during an arm elevated reaching task in a virtual reality environment. Physiol Rep 2021; 9:e15025. PMID: 34542241; PMCID: PMC8451030; DOI: 10.14814/phy2.15025.
Abstract
BACKGROUND: People with chronic shoulder pain have been shown to present with motor adaptations during arm movements. These adaptations may create abnormal physical stress on shoulder tendons and muscles. However, how and why these adaptations develop from the acute stage of pain is still not well understood. OBJECTIVE: To investigate motor adaptations following acute experimental shoulder pain during upper limb reaching. METHODS: Forty participants were assigned to the Control or Pain group. They completed a task consisting of reaching targets in a virtual reality environment at three time points: (1) baseline (both groups pain-free), (2) experimental phase (Pain group experiencing acute shoulder pain induced by injecting hypertonic saline into the subacromial space), and (3) post-experimental phase (both groups pain-free). Electromyographic (EMG) activity, kinematics, and performance data were collected. RESULTS: The Pain group showed altered movement planning and execution, as shown by a significantly increased delay to reach muscle EMG peaks and a loss of accuracy, compared to controls, who decreased their mean delay to reach muscle peaks and improved their movement speed across the phases. The Pain group also showed protective kinematic adaptations, using less shoulder elevation and elbow flexion, which persisted when they no longer felt the experimental pain. CONCLUSION: Acute experimental pain altered movement planning and execution, which affected task performance. Kinematic data also suggest that such adaptations may persist over time, which could explain those observed in chronic pain populations.
Affiliation(s)
- Frédérique Dupuis
- Faculty of Medicine, Université Laval, Quebec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Quebec City, Canada
- Gisela Sole
- Centre for Health, Activity and Rehabilitation Research, School of Physiotherapy, University of Otago, Dunedin, New Zealand
- Craig A. Wassinger
- Physical Therapy Program, East Tennessee State University, Johnson City, TN, USA
- Hamish Osborne
- Department of Medicine, Otago Medical School, University of Otago, Dunedin, New Zealand
- Mathieu Beilmann
- Faculty of Medicine, Université Laval, Quebec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Quebec City, Canada
- Catherine Mercier
- Faculty of Medicine, Université Laval, Quebec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Quebec City, Canada
- Alexandre Campeau‐Lecours
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Quebec City, Canada
- Faculty of Science and Engineering, Université Laval, Quebec City, Canada
- Laurent J. Bouyer
- Faculty of Medicine, Université Laval, Quebec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Quebec City, Canada
- Jean‐Sébastien Roy
- Faculty of Medicine, Université Laval, Quebec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Quebec City, Canada
14
Vaina LM, Calabro FJ, Samal A, Rana KD, Mamashli F, Khan S, Hämäläinen M, Ahlfors SP, Ahveninen J. Auditory cues facilitate object movement processing in human extrastriate visual cortex during simulated self-motion: A pilot study. Brain Res 2021; 1765:147489. [PMID: 33882297 DOI: 10.1016/j.brainres.2021.147489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 04/12/2021] [Accepted: 04/13/2021] [Indexed: 10/21/2022]
Abstract
Visual segregation of moving objects is a considerable computational challenge when the observer moves through space. Recent psychophysical studies suggest that directionally congruent, moving auditory cues can substantially improve parsing object motion in such settings, but the exact brain mechanisms and visual processing stages that mediate these effects are still incompletely known. Here, we utilized multivariate pattern analyses (MVPA) of MRI-informed magnetoencephalography (MEG) source estimates to examine how crossmodal auditory cues facilitate motion detection during the observer's self-motion. During MEG recordings, participants identified a target object that moved either forward or backward within a visual scene that included nine identically textured objects simulating forward observer translation. Auditory motion cues 1) improved the behavioral accuracy of target localization, 2) significantly modulated the MEG source activity in the areas V2 and human middle temporal complex (hMT+), and 3) increased the accuracy at which the target movement direction could be decoded from hMT+ activity using MVPA. The increase in decoding accuracy produced by auditory cues in hMT+ remained significant when superior temporal activations in or near auditory cortices were regressed out of the hMT+ source activity to control for source-estimation biases caused by point spread. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow in the human extrastriate visual cortex can be facilitated by crossmodal influences from the auditory system.
Affiliation(s)
- Lucia M Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Neurology, Harvard Medical School, Massachusetts General Hospital and Brigham and Women's Hospital, MA, USA
- Finnegan J Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Abhisek Samal
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Kunjan D Rana
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
15
Abstract
Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.
16
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
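The vector decomposition described in this abstract can be sketched in a few lines. This is a minimal illustration only; the vectors and values below are hypothetical, not taken from the study:

```python
import numpy as np

# Illustrative 2D retinal velocities in deg/s (hypothetical values).
retinal_motion = np.array([3.0, 1.0])    # vector sum observed on the retina
self_motion_flow = np.array([2.0, 0.0])  # optic flow at the object's location due to self-motion

# Flow parsing: globally subtract the self-motion flow field to recover
# the object's world-relative motion (expressed in retinal coordinates).
world_relative_motion = retinal_motion - self_motion_flow
print(world_relative_motion)  # [1. 1.]
```

The bias the paper describes follows directly from this subtraction: the recovered direction is pushed away from the flow vector at the object's location.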
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
17
Xie M, Niehorster DC, Lappe M, Li L. Roles of visual and non-visual information in the perception of scene-relative object motion during walking. J Vis 2020; 20:15. [PMID: 33052410 PMCID: PMC7571284 DOI: 10.1167/jov.20.10.15] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Perceiving object motion during self-movement is an essential ability of humans. Previous studies have reported that the visual system can use both visual information (such as optic flow) and non-visual information (such as vestibular, somatosensory, and proprioceptive information) to identify and globally subtract the retinal motion component due to self-movement to recover scene-relative object motion. In this study, we used a motion-nulling method to directly measure and quantify the contribution of visual and non-visual information to the perception of scene-relative object motion during walking. We found that about 50% of the retinal motion component of the probe due to translational self-movement was removed with non-visual information alone and about 80% with visual information alone. With combined visual and non-visual information, the self-movement component was removed almost completely. Although non-visual information played an important role in the removal of self-movement-induced retinal motion, it was associated with decreased precision of probe motion estimates. We conclude that neither non-visual nor visual information alone is sufficient for the accurate perception of scene-relative object motion during walking, which instead requires the integration of both sources of information.
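The nulling gains reported here imply a simple residual-motion calculation. A back-of-the-envelope sketch, where the approximate gains (~0.5 non-visual, ~0.8 visual, ~1.0 combined) come from the abstract but every other number is invented for illustration:

```python
# Fraction of the self-movement-induced retinal motion removed under each
# cue condition (approximate values reported in the abstract).
GAINS = {"non-visual": 0.5, "visual": 0.8, "combined": 1.0}

def residual_self_motion(component_deg_s: float, gain: float) -> float:
    """Retinal motion (deg/s) left over after partial flow subtraction."""
    return (1.0 - gain) * component_deg_s

self_motion_component = 10.0  # hypothetical retinal speed caused by walking
for label, gain in GAINS.items():
    print(label, round(residual_self_motion(self_motion_component, gain), 2))
```

With complete subtraction (gain 1.0) the object's perceived motion reflects only its world-relative component; with partial subtraction, the leftover self-motion component appears as bias in perceived object motion.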
Affiliation(s)
- Mingyang Xie
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China
- Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany
- Li Li
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China; Faculty of Arts and Science, New York University Shanghai, Shanghai, China
18
Evans L, Champion RA, Rushton SK, Montaldi D, Warren PA. Detection of scene-relative object movement and optic flow parsing across the adult lifespan. J Vis 2020; 20:12. [PMID: 32945848 PMCID: PMC7509779 DOI: 10.1167/jov.20.9.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Moving around safely relies critically on our ability to detect object movement. This is made difficult because retinal motion can arise from object movement or our own movement. Here we investigate the ability to detect scene-relative object movement using a neural mechanism called optic flow parsing. This mechanism acts to subtract retinal motion caused by self-movement. Because older observers exhibit marked changes in visual motion processing, we consider performance across a broad age range (N = 30, range: 20–76 years). In Experiment 1 we measured thresholds for reliably discriminating the scene-relative movement direction of a probe presented among three-dimensional objects moving onscreen to simulate observer movement. Performance in this task did not correlate with age, suggesting that the ability to detect scene-relative object movement from retinal information is preserved in ageing. In Experiment 2 we investigated changes in the underlying optic flow parsing mechanism that supports this ability, using a well-established task that measures the magnitude of globally subtracted optic flow. We found strong evidence for a positive correlation between age and global flow subtraction. These data suggest that the ability to identify object movement during self-movement from visual information is preserved in ageing, but that there are changes in the flow parsing mechanism that underpins this ability. We suggest that these changes reflect compensatory processing required to counteract other impairments in the ageing visual system.
Affiliation(s)
- Lucy Evans
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Rebecca A Champion
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Daniela Montaldi
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Paul A Warren
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
19
Jost TA, Nelson B, Rylander J. Quantitative analysis of the Oculus Rift S in controlled movement. Disabil Rehabil Assist Technol 2019; 16:632-636. [DOI: 10.1080/17483107.2019.1688398] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- Tyler A. Jost
- Department of Mechanical Engineering, Baylor University, Waco, TX, USA
- Bradley Nelson
- Department of Mechanical Engineering, Baylor University, Waco, TX, USA
- Jonathan Rylander
- Department of Mechanical Engineering, Baylor University, Waco, TX, USA
20
A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019; 15:e1007397. [PMID: 31725723 PMCID: PMC6879150 DOI: 10.1371/journal.pcbi.1007397] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 11/26/2019] [Accepted: 09/12/2019] [Indexed: 12/02/2022] Open
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer’s retina and radically influences an object’s retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object—otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to the self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object’s retinal motion, improving the accuracy of the object’s movement direction represented by motion signals. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion.
Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
21
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126 DOI: 10.1073/pnas.1820373116] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
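The core of such a causal-inference account is a posterior over whether the object's image motion is fully explained by self-motion. A toy two-hypothesis sketch with Gaussian likelihoods; all numbers and the specific distributions are invented for illustration and are not the authors' model:

```python
import math

def gauss(x: float, mu: float, sigma: float) -> float:
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

observed = 4.0        # image speed of the object (deg/s)
predicted_flow = 3.0  # flow expected at the object's location given the heading estimate
noise = 1.0           # sensory noise (deg/s)

# H1: object stationary in the world -> image motion should match the predicted flow.
like_stationary = gauss(observed, predicted_flow, noise)
# H2: object moving in the world -> image motion drawn from a much broader distribution.
like_moving = gauss(observed, predicted_flow, 5.0)

prior_stationary = 0.5
post_stationary = like_stationary * prior_stationary / (
    like_stationary * prior_stationary + like_moving * (1.0 - prior_stationary))
print(round(post_stationary, 2))  # → 0.76
```

As the observed motion deviates further from the predicted flow, the posterior shifts toward "moving", which is the regime in which the paper reports reduced heading biases.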
22
Morone G, Spitoni GF, De Bartolo D, Ghanbari Ghooshchy S, Di Iulio F, Paolucci S, Zoccolotti P, Iosa M. Rehabilitative devices for a top-down approach. Expert Rev Med Devices 2019; 16:187-195. [PMID: 30677307 DOI: 10.1080/17434440.2019.1574567] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
INTRODUCTION In recent years, neurorehabilitation has moved from a 'bottom-up' to a 'top-down' approach. This change has also involved the technological devices developed for motor and cognitive rehabilitation. It implies that during a task or during therapeutic exercises, new 'top-down' approaches are being used to stimulate the brain in a more direct way to elicit plasticity-mediated motor re-learning. This is opposed to 'bottom-up' approaches, which act at the physical level and attempt to bring about changes at the level of the central nervous system. AREAS COVERED In the present unsystematic review, we present the most promising innovative technological devices that can effectively support rehabilitation based on a top-down approach, according to the most recent neuroscientific and neurocognitive findings. In particular, we explore if and how the use of new technological devices comprising serious exergames, virtual reality, robots, brain-computer interfaces, rhythmic music, and biofeedback devices might provide a top-down based approach. EXPERT COMMENTARY Motor and cognitive systems are strongly harnessed in humans and thus cannot be separated in neurorehabilitation. Recently developed technologies in motor-cognitive rehabilitation might have a greater positive effect than conventional therapies.
Affiliation(s)
- Giovanni Morone
- Private Inpatient Unit, Santa Lucia Foundation IRCCS, Rome, Italy; Clinical Laboratory of Experimental Neurorehabilitation, Santa Lucia Foundation IRCCS, Rome, Italy
- Grazia Fernanda Spitoni
- Department of Psychology, Sapienza University of Rome, Rome, Italy; Laboratory of Neuropsychology, IRCCS Santa Lucia Foundation, Rome, Italy
- Daniela De Bartolo
- Clinical Laboratory of Experimental Neurorehabilitation, Santa Lucia Foundation IRCCS, Rome, Italy; Department of Psychology, Sapienza University of Rome, Rome, Italy
- Sheida Ghanbari Ghooshchy
- Clinical Laboratory of Experimental Neurorehabilitation, Santa Lucia Foundation IRCCS, Rome, Italy; Department of Psychology, Sapienza University of Rome, Rome, Italy
- Fulvia Di Iulio
- UOC 3 Neurorehabilitation, Santa Lucia Foundation IRCCS, Rome, Italy
- Stefano Paolucci
- Private Inpatient Unit, Santa Lucia Foundation IRCCS, Rome, Italy; Clinical Laboratory of Experimental Neurorehabilitation, Santa Lucia Foundation IRCCS, Rome, Italy
- Marco Iosa
- Clinical Laboratory of Experimental Neurorehabilitation, Santa Lucia Foundation IRCCS, Rome, Italy
23
Rushton SK, Chen R, Li L. Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. J Vis 2018; 18:11. [PMID: 30029224 DOI: 10.1167/18.6.11] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
During locomotion humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found that precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
Affiliation(s)
- Simon K Rushton
- School of Psychology, Cardiff University, Cardiff, Wales, UK
- Rongrong Chen
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC