1. What Happens in Your Brain When You Walk Down the Street? Implications of Architectural Proportions, Biophilia, and Fractal Geometry for Urban Science. Urban Sci 2022. [DOI: 10.3390/urbansci6010003]
Abstract
This article reviews current research in visual urban perception. The temporal sequence of the first few milliseconds of visual stimulus processing sheds light on the historically ambiguous topic of aesthetic experience. Automatic fractal processing triggers initial attraction/avoidance evaluations of an environment's salubriousness and its potentially positive or negative impacts upon an individual. As repeated cycles of visual perception occur, the attractiveness of urban form affects the user experience far more than had previously been suspected. These perceptual mechanisms promote walkability and intuitive navigation, and so they support the urban and civic interactions for which we establish communities and cities in the first place. Multiple fractals therefore need to be reintegrated with biophilic and traditional architecture in urban design, given their proven positive effects on health and well-being, including striking reductions in observers' stress and mental fatigue. Because of their costs to individual well-being, urban performance, environmental quality, and climatic adaptation, this paper recommends that nontraditional styles hereafter be applied judiciously to the built environment.
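The review above leans on fractal geometry as a quantitative handle on architectural form. As a hedged aside (not the authors' method), the standard box-counting estimate of fractal dimension for a binarized facade or skyline image can be sketched as follows; the grid, box sizes, and test patterns are illustrative assumptions:

```python
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary 2D grid.

    Counts boxes of side s that contain any filled cell, then fits
    log N(s) against log s by least squares; the slope's magnitude
    approximates the dimension D.
    """
    log_s, log_n = [], []
    rows, cols = len(img), len(img[0])
    for s in sizes:
        occupied = set()
        for r in range(rows):
            for c in range(cols):
                if img[r][c]:
                    occupied.add((r // s, c // s))
        log_s.append(math.log(s))
        log_n.append(math.log(len(occupied)))
    n = len(sizes)
    mx, my = sum(log_s) / n, sum(log_n) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log_s, log_n))
             / sum((x - mx) ** 2 for x in log_s))
    return -slope

# Sanity checks: a filled square is 2-dimensional, a single line 1-dimensional.
square = [[1] * 64 for _ in range(64)]
line = [[1 if r == 32 else 0 for _ in range(64)] for r in range(64)]
```

Natural and traditional-architectural scenes typically fall between these extremes, with dimensions around 1.3 to 1.5 often cited as preferred.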
2. Eye-hand coordination: memory-guided grasping during obstacle avoidance. Exp Brain Res 2021; 240:453-466. [PMID: 34787684] [DOI: 10.1007/s00221-021-06271-w]
Abstract
When reaching to grasp previously seen, now out-of-view objects, we rely on stored perceptual representations, likely encoded by the ventral visual stream, to guide our actions. Such memory-guided actions are numerous in daily life, for instance when we reach to grasp a coffee cup hidden behind the morning newspaper. Little research has examined obstacle avoidance during memory-guided grasping, though it is possible that obstacles with increased perceptual salience provoke exacerbated avoidance maneuvers, such as exaggerated deviations of eye and hand position away from obtrusive obstacles. We examined the obstacle avoidance strategies adopted as subjects reached to grasp a 3D target object under visually guided (closed loop, or open loop with full vision prior to movement onset) and memory-guided (short- or long-delay) conditions. On any given trial, subjects reached between a pair of flanker obstacles to grasp a target object. The positions and widths of the obstacles were manipulated, though their inner edges remained a constant distance apart. While reach and grasp behavior was consistent with the obstacle avoidance literature, in that reach, grasp, and gaze positions were biased away from the obstacles most obtrusive to the reaching hand, our results reveal that the avoidance strategies adopted depend on the availability of visual feedback. Contrary to expectation, subjects reaching to grasp after a long delay in the absence of visual feedback failed to modify their final fixation and grasp positions to accommodate the different obstacle positions, demonstrating a more moderate, rather than exaggerated, obstacle avoidance strategy.
3. Gredin NV, Bishop DT, Williams AM, Broadbent DP. Integrating explicit contextual priors and kinematic information during anticipation. J Sports Sci 2020; 39:783-791. [PMID: 33320053] [DOI: 10.1080/02640414.2020.1845494]
Abstract
We examined the interaction between explicit contextual priors and kinematic information during anticipation in soccer. We employed a video-based anticipation task in which skilled soccer players had to predict the direction of the imminent actions of an attacking opponent in possession of the ball. The players performed the task both with and without explicit contextual priors pertaining to the opponent's action tendencies. The strength of the opponent's action tendencies was altered in order to manipulate the reliability of the contextual priors (low vs. high). Moreover, the reliability of kinematic information (low vs. high) was manipulated using the temporal occlusion paradigm. The explicit provision of contextual priors biased anticipation towards the most likely direction, given the opponent's action tendencies, and resulted in enhanced performance. This effect was greater when the reliability of kinematic information was low rather than high. When the reliability of kinematic information was high, the players used explicit contextual priors of high, but not low, reliability to inform their judgements. The findings suggest that athletes employ reliability-based strategies when integrating contextual priors with kinematic information during anticipation, and that the impact of explicit contextual priors depends on the reliability of both the priors and the evolving kinematic information.
Affiliation(s)
- N Viktor Gredin
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK
- Daniel T Bishop
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK; Centre for Cognitive Neuroscience, College of Health and Life Sciences, Brunel University London, London, UK
- A Mark Williams
- Department of Health, Kinesiology, and Recreation, University of Utah, Salt Lake City, UT, USA
- David P Broadbent
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK; Centre for Cognitive Neuroscience, College of Health and Life Sciences, Brunel University London, London, UK
4. Valdés BA, Khoshnam M, Neva JL, Menon C. Robotics-assisted visual-motor training influences arm position sense in three-dimensional space. J Neuroeng Rehabil 2020; 17:96. [PMID: 32664955] [PMCID: PMC7362539] [DOI: 10.1186/s12984-020-00727-w]
Abstract
Background: Performing activities of daily living depends, among other factors, on awareness of the position and movements of the limbs. Neural injuries, such as stroke, can impair this awareness and, consequently, degrade quality of life and lengthen the motor recovery process. With the goal of improving the sense of hand position in three-dimensional (3D) space, we investigated the effects of integrating a pertinent training component within a robotic reaching task.
Methods: In this proof-of-concept study, 12 healthy participants, during a single session, used their dominant hand to reach without vision to two targets in 3D space, placed at locations resembling the functional task of self-feeding. After each attempt, participants received visual and haptic feedback about their hand's position so that they could accurately locate the target. Performance was evaluated at the beginning and end of the session during an assessment in which participants reached, without visual or haptic feedback, to three targets: the two targets employed during the training phase and an additional one to evaluate the generalization of training.
Results: The data showed a statistically significant reduction of 39.81% (p = 0.001) in end-position reaching error when results for all targets were combined. End-position error for the generalization target was reduced by 15.47%, although this change was not statistically significant.
Conclusions: These results support the effectiveness of combining an arm position sense training component with functional motor tasks, which could be implemented in the design of future robot-assisted rehabilitation paradigms to potentially expedite the recovery of individuals with neurological injuries.
Affiliation(s)
- Bulmaro A Valdés
- Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, BC, Canada
- Mahta Khoshnam
- Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, BC, Canada
- Jason L Neva
- Université de Montréal, École de kinésiologie et des sciences de l'activité physique, Faculté de médecine, Montréal, QC, Canada; Centre de recherche de l'institut universitaire de gériatrie de Montréal, Montréal, QC, Canada
- Carlo Menon
- Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, BC, Canada
5. Magnaguagno L, Hossner EJ. The impact of self-generated and explicitly acquired contextual knowledge on anticipatory performance. J Sports Sci 2020; 38:2108-2117. [PMID: 32501176] [DOI: 10.1080/02640414.2020.1774142]
Abstract
The present study investigated the impact of self-generated and explicitly acquired contextual knowledge of teammates' defensive qualities on anticipatory performance in a complex sensorimotor task. Twelve expert and twelve near-expert handball players were examined in a domain-specific defence task presented in an immersive virtual-reality environment. In two-thirds of the trials, 1:1 situations (i.e., teammate versus opponent) were presented in which the teammate next to the participant played a specific role: the weak teammate lost every situation, requiring the participant to block a throw, whereas the strong teammate won every situation, requiring the participant to stay in position. Since explicit knowledge of this pattern was provided only in a later phase of the experiment, participants had to generate the respective knowledge themselves beforehand. To this end, the following variables were analysed: the detection of the experimentally induced patterns, the correctness of the participants' motor responses, and their positioning as a function of the respective teammate's defensive quality. The main results showed that experts were better able to utilize both self-generated and explicitly acquired knowledge regarding teammates' defensive qualities, whereas near-experts' performance was enhanced only by explicitly provided contextual knowledge.
6. McDonough KL, Costantini M, Hudson M, Ward E, Bach P. Affordance matching predictively shapes the perceptual representation of others' ongoing actions. J Exp Psychol Hum Percept Perform 2020; 46:847-859. [PMID: 32378934] [PMCID: PMC7391862] [DOI: 10.1037/xhp0000745]
Abstract
Predictive processing accounts of social perception argue that action observation is a predictive process, in which inferences about others' goals are tested against the perceptual input, inducing a subtle perceptual confirmation bias that distorts observed action kinematics toward the inferred goals. Here we test whether such biases are induced even when goals are not explicitly given but have to be derived from the unfolding action kinematics. In 2 experiments, participants briefly saw an actor reach ambiguously toward a large object and a small object, with either a whole-hand power grip or an index-finger and thumb precision grip. During its course, the hand suddenly disappeared, and participants reported its last seen position on a touch-screen. As predicted, judgments were consistently biased toward apparent action targets, such that power grips were perceived closer to large objects and precision grips closer to small objects, even if the reach kinematics were identical. Strikingly, these biases were independent of participants' explicit goal judgments. They were of equal size when action goals had to be explicitly derived in each trial (Experiment 1) or not (Experiment 2) and, across trials and across participants, explicit judgments and perceptual biases were uncorrelated. This provides evidence, for the first time, that people make online adjustments of observed actions based on the match between hand grip and object goals, distorting their perceptual representation toward implied goals. These distortions may not reflect high-level goal assumptions, but emerge from relatively low-level processing of kinematic features within the perceptual system.
7.
Abstract
Information stored in working memory (WM) is incorporated into many daily decisions and actions, and many complex decisions involve WM; however, there has been little work investigating what WM information is used in memory decisions. Here we draw connections between WM and decision making by manipulating prior beliefs in a standard WM task with rewards. We use this paradigm to show that WM contains a representation of the trial-by-trial uncertainty of visual stimuli, and that this uncertainty is incorporated into rewarded decisions along with other information, such as expectations about the environment. By studying WM in parallel with decision making, we can gain new insight into how these systems work together.

Working memory (WM) plays an important role in action planning and decision making; however, both the informational content of memory and how that information is used in decisions remain poorly understood. To investigate this, we used a color WM task in which subjects viewed colored stimuli and reported both an estimate of a stimulus color and a measure of memory uncertainty, obtained through a rewarded decision. Reported memory uncertainty was correlated with memory error, showing that people incorporate their trial-to-trial memory quality into rewarded decisions. Moreover, memory uncertainty can be combined with other sources of information: after inducing expectations (prior beliefs) about stimulus probabilities, we found that estimates shifted toward expected colors, with the shift increasing with reported uncertainty. The data are best fit by models in which people combine their trial-to-trial memory uncertainty with potential rewards and prior beliefs. Our results suggest that WM represents uncertainty information and that this can be combined with prior beliefs. This highlights the potential complexity of WM representations and shows that rewarded decisions can be a powerful tool for examining WM and for informing and constraining theoretical, computational, and neurobiological models of memory.
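The reported shift of estimates toward expected colors, growing with memory uncertainty, is what a precision-weighted (Bayesian) combination predicts. A minimal sketch with a Gaussian prior and likelihood; the numbers are hypothetical, and the circularity of color space is ignored for simplicity:

```python
def posterior_estimate(memory_sample, memory_sd, prior_mean, prior_sd):
    """Precision-weighted average of a noisy memory sample and a prior.

    The weight on the prior grows with memory uncertainty (memory_sd),
    reproducing the reported shift of estimates toward expected stimuli.
    """
    w_mem = 1.0 / memory_sd ** 2
    w_prior = 1.0 / prior_sd ** 2
    return (w_mem * memory_sample + w_prior * prior_mean) / (w_mem + w_prior)

# Same remembered value, increasing memory noise: the estimate is pulled
# further toward the expected value (here 0).
low_noise = posterior_estimate(10.0, memory_sd=2.0, prior_mean=0.0, prior_sd=5.0)
high_noise = posterior_estimate(10.0, memory_sd=8.0, prior_mean=0.0, prior_sd=5.0)
```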
8
|
Brenner E, Smeets JBJ. Continuously updating one’s predictions underlies successful interception. J Neurophysiol 2018; 120:3257-3274. [DOI: 10.1152/jn.00517.2018] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
This paper reviews our understanding of the interception of moving objects. Interception is a demanding task that requires both spatial and temporal precision. The required precision must be achieved on the basis of imprecise and sometimes biased sensory information. We argue that people make precise interceptive movements by continuously adjusting their movements. Initial estimates of how the movement should progress can be quite inaccurate. As the movement evolves, the estimate of how the rest of the movement should progress gradually becomes more reliable as prediction is replaced by sensory information about the progress of the movement. The improvement is particularly important when things do not progress as anticipated. Constantly adjusting one’s estimate of how the movement should progress combines the opportunity to move in a way that one anticipates will best meet the task demands with correcting for any errors in such anticipation. The fact that the ongoing movement might have to be adjusted can be considered when determining how to move, and any systematic anticipation errors can be corrected on the basis of the outcome of earlier actions.
Affiliation(s)
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
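The continuous-updating idea can be illustrated with a toy re-planning loop: every frame, the predicted meeting point is recomputed from the newest sensory sample, so early prediction errors are corrected as the movement unfolds. This is a sketch under simplifying assumptions (constant-velocity extrapolation from the two latest samples, a fixed arrival time), not the authors' model:

```python
def intercept(target_positions, dt=0.1, total_time=1.0):
    """Re-plan an interception point from each new target sample.

    Each frame, a velocity estimate from the two most recent samples is
    extrapolated over the remaining movement time, so prediction is
    gradually replaced by sensory information about the target.
    """
    plans = []
    for i in range(1, len(target_positions)):
        velocity = (target_positions[i] - target_positions[i - 1]) / dt
        time_left = total_time - i * dt
        plans.append(target_positions[i] + velocity * time_left)
    return plans

# A target moving at 1 unit/s makes an unanticipated 0.3-unit jump at
# sample 6; later plans converge on the new, correct meeting point.
samples = [0.1 * i for i in range(6)] + [0.1 * i + 0.3 for i in range(6, 11)]
plans = intercept(samples)
```

The transient overshoot in the plans just after the jump, followed by convergence, mirrors how corrections matter most when things do not progress as anticipated.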
9
|
Deravet N, Blohm G, de Xivry JJO, Lefèvre P. Weighted integration of short-term memory and sensory signals in the oculomotor system. J Vis 2018; 18:16. [PMID: 29904791 DOI: 10.1167/18.5.16] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed, and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by the reliability of visual and prior information. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which supports the modeling work. Furthermore, we suggest that the saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
Affiliation(s)
- Nicolas Deravet
- Institute of Information and Communication Technologies, Electronics, and Applied Mathematics and Institute of Neuroscience, Université catholique de Louvain, B-1348 Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network (CAPnet)
- Jean-Jacques Orban de Xivry
- Department of Kinesiology, Movement Control and Neuroplasticity Research Group, and Leuven Brain Institute, Katholieke Universiteit Leuven, Leuven, Belgium
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics, and Applied Mathematics and Institute of Neuroscience, Université catholique de Louvain, B-1348 Louvain-La-Neuve, Belgium
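The recurrent reliability-based integration the abstract describes has the shape of a one-dimensional Kalman filter: a short-term memory of target motion is updated by each visual sample in proportion to their relative reliabilities. A minimal sketch with hypothetical numbers (real pursuit models operate on retinal slip and richer dynamics):

```python
def kalman_track(measurements, meas_var, prior_mean=0.0, prior_var=100.0,
                 process_var=0.01):
    """Recurrent reliability-weighted (Kalman) estimate of target velocity.

    Each sample is weighted by its reliability relative to the current
    short-term-memory estimate, and the memory is then updated.
    """
    mean, var = prior_mean, prior_var
    estimates = []
    for z in measurements:
        var += process_var                # memory uncertainty grows between samples
        gain = var / (var + meas_var)     # reliability-based weight on the new sample
        mean += gain * (z - mean)
        var *= (1.0 - gain)
        estimates.append(mean)
    return estimates

# Noisy samples of a 10 deg/s target: the estimate converges toward 10,
# and more slowly when the visual measurements are less reliable.
samples = [9.0, 11.0, 10.5, 9.5, 10.0]
reliable = kalman_track(samples, meas_var=0.5)
unreliable = kalman_track(samples, meas_var=50.0)
```

The same update could serve both pursuit and saccade programming if, as the authors suggest, the two systems share the memory.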
10. Oostwoud Wijdenes L, Medendorp WP. State Estimation for Early Feedback Responses in Reaching: Intramodal or Multimodal? Front Integr Neurosci 2017; 11:38. [PMID: 29311860] [PMCID: PMC5742230] [DOI: 10.3389/fnint.2017.00038]
Abstract
Humans are highly skilled in controlling their reaching movements, making fast and task-dependent movement corrections to unforeseen perturbations. To guide these corrections, the neural control system requires a continuous, instantaneous estimate of the current state of the arm and body in the world. According to Optimal Feedback Control theory, this estimate is multimodal and constructed based on the integration of forward motor predictions and sensory feedback, such as proprioceptive, visual and vestibular information, modulated by context, and shaped by past experience. But how can a multimodal estimate drive fast movement corrections, given that the involved sensory modalities have different processing delays, different coordinate representations, and different noise levels? We develop the hypothesis that the earliest online movement corrections are based on multiple single modality state estimates rather than one combined multimodal estimate. We review studies that have investigated online multimodal integration for reach control and offer suggestions for experiments to test for the existence of intramodal state estimates. If proven true, the framework of Optimal Feedback Control needs to be extended with a stage of intramodal state estimation, serving to drive short-latency movement corrections.
Affiliation(s)
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
11.
Abstract
Investigation of natural behavior has contributed a number of insights to our understanding of visual guidance of actions by highlighting the importance of behavioral goals and focusing attention on how vision and action play out in time. In this context, humans make continuous sequences of sensory-motor decisions to satisfy current behavioral goals, and the role of vision is to provide the relevant information for making good decisions in order to achieve those goals. This conceptualization of visually guided actions as a sequence of sensory-motor decisions has been formalized within the framework of statistical decision theory, which structures the problem and provides the context for much recent progress in vision and action. Components of a good decision include the task, which defines the behavioral goals, the rewards and costs associated with those goals, uncertainty about the state of the world, and prior knowledge.
Affiliation(s)
- Mary M Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Texas 78712
12. Brenner E, Smeets JB. Accumulating visual information for action. Prog Brain Res 2017; 236:75-95. [DOI: 10.1016/bs.pbr.2017.07.007]
13. Decision theory, motor planning, and visual memory: deciding where to reach when memory errors are costly. Exp Brain Res 2016; 234:1589-1597. [PMID: 26821320] [DOI: 10.1007/s00221-016-4553-4]
Abstract
Limitations in visual working memory (VWM) have been extensively studied in psychophysical tasks, but it is not well understood how these memory limits translate to performance in more natural domains. For example, when reaching to grasp an object based on a spatial memory representation, overshooting the intended target may be more costly than undershooting, such as when reaching for a cup of hot coffee. The current body of literature lacks a detailed account of how the costs or consequences of memory error influence what we encode in visual memory and how we act on the basis of remembered information. Here, we study how externally imposed monetary costs influence behavior in a motor decision task that involves reach planning based on recalled information from VWM. We approach this from a decision-theoretic perspective, viewing decisions of where to aim in relation to the utility of their outcomes given the uncertainty of memory representations. Our results indicate that subjects accounted for the uncertainty in their visual memory, showing a significant difference in their reach planning when monetary costs were imposed for memory errors. However, our findings indicate that subjects' memory representations per se were not biased by the imposed costs; rather, subjects adopted a near-optimal post-mnemonic decision strategy in their motor planning.
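The decision-theoretic logic, aiming where expected cost under memory noise is lowest, can be sketched with a Monte-Carlo sweep. The cost ratio, noise level, and candidate grid are illustrative assumptions, not the study's values; with overshoots costlier than undershoots, the optimal aim shifts short of the remembered target:

```python
import random

def expected_loss(aim, memory_sd, overshoot_cost=4.0, undershoot_cost=1.0,
                  n=20000, seed=0):
    """Monte-Carlo expected cost of an aim point under Gaussian memory noise.

    The remembered target sits at 0; overshooting (err > 0) is costlier
    than undershooting, as when reaching near a cup of hot coffee.
    A fixed seed gives common random numbers across candidate aims.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        err = aim + rng.gauss(0.0, memory_sd)   # landing relative to target
        total += overshoot_cost * err if err > 0 else -undershoot_cost * err
    return total / n

# Sweep candidate aim points; the lowest-cost aim undershoots the target.
aims = [x / 10.0 for x in range(-20, 6)]
best_aim = min(aims, key=lambda a: expected_loss(a, memory_sd=1.0))
```

Analytically, the optimal aim places the overshoot probability at undershoot_cost / (overshoot_cost + undershoot_cost), here 0.2, i.e. roughly 0.84 standard deviations short of the target.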
14
|
Issen L, Huxlin KR, Knill D. Spatial integration of optic flow information in direction of heading judgments. J Vis 2015; 15:14. [PMID: 26024461 DOI: 10.1167/15.6.14] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world.
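The cue-perturbation paradigm has a simple estimator at its core: if perception is a weighted average of per-quadrant heading signals, the mean shift in responses divided by the perturbation recovers the perturbed quadrant's weight. A sketch with hypothetical numbers (the study's actual analysis fits weights across conditions):

```python
def quadrant_weight(responses_perturbed, responses_baseline, perturbation_deg):
    """Estimate a quadrant's perceptual weight from a perturbation experiment.

    Under a weighted-average model, the mean response shift equals the
    perturbed quadrant's weight times the perturbation size.
    """
    shift = (sum(responses_perturbed) / len(responses_perturbed)
             - sum(responses_baseline) / len(responses_baseline))
    return shift / perturbation_deg

# A simulated observer who weights an upper-field quadrant at 0.35:
baseline = [0.0, 0.2, -0.1, -0.1]                # reported headings, deg
perturbed = [r + 0.35 * 4.0 for r in baseline]   # +4 deg quadrant perturbation
w = quadrant_weight(perturbed, baseline, 4.0)
```

An ideal observer's weights track the reliability of the flow in each quadrant; the reported upper-field overweighting would show up as weights exceeding those reliability predictions.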
15
|
Zhang L, Yang J, Inai Y, Huang Q, Wu J. Effects of aging on pointing movements under restricted visual feedback conditions. Hum Mov Sci 2014; 40:1-13. [PMID: 25506638 DOI: 10.1016/j.humov.2014.11.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2014] [Revised: 11/11/2014] [Accepted: 11/13/2014] [Indexed: 11/17/2022]
Abstract
The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM), and no visual feedback of target location (NT). The results suggest that Fitts' law holds for the pointing movements of elderly adults under the different visual restriction conditions. Moreover, a significant main effect of aging on movement time was found in all four tasks; peripheral and central changes may be the key factors underlying these differences. Furthermore, no significant main effect of age on mean accuracy rate was found under restricted visual feedback, suggesting that the elderly subjects made very similar use of the available sensory information as the young subjects. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects.
Affiliation(s)
- Liancun Zhang
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-Naka, Kita-Ku, Okayama 700-8530, Japan
- Jiajia Yang
- Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-Naka, Kita-Ku, Okayama 700-8530, Japan
- Yoshinobu Inai
- Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-Naka, Kita-Ku, Okayama 700-8530, Japan
- Qiang Huang
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China
- Jinglong Wu
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-Naka, Kita-Ku, Okayama 700-8530, Japan
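Fitts' law, which the study reports holds for elderly adults even under visual restriction, predicts movement time from an index of difficulty. A sketch with illustrative constants a and b (real values are fitted per subject and condition):

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts' law: MT = a + b * log2(2D / W).

    a (intercept) and b (slope) are empirically fitted; harder pointing
    (farther targets, smaller widths) yields a larger index of difficulty
    and thus longer movement times.
    """
    index_of_difficulty = math.log2(2.0 * distance / width)
    return a + b * index_of_difficulty

# Illustrative constants: each extra bit of difficulty adds b seconds.
easy = fitts_movement_time(0.2, 0.1, distance=8.0, width=4.0)   # ID = 2 bits
hard = fitts_movement_time(0.2, 0.1, distance=16.0, width=1.0)  # ID = 5 bits
```

Aging effects of the kind reported would appear as larger fitted a and b for the elderly group, with the law's logarithmic form unchanged.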
16. An active system for visually-guided reaching in 3D across binocular fixations. ScientificWorldJournal 2014; 2014:179391. [PMID: 24672295] [PMCID: PMC3932251] [DOI: 10.1155/2014/179391]
Abstract
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is itself generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation, based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator that performs the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
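Phase-based disparity estimation of the kind the paper's Gabor module performs can be sketched in one dimension: correlate each eye's signal with a complex Gabor and convert the interocular phase difference into a shift. The filter parameters and test signals below are illustrative assumptions, not the paper's implementation (which uses a 2D filter bank over multiple orientations):

```python
import math

def gabor_phase(signal, center, freq, sigma=4.0):
    """Local phase via correlation with a complex Gabor, exp(-i*freq*u)."""
    re = im = 0.0
    for x, v in enumerate(signal):
        u = x - center
        g = math.exp(-u * u / (2.0 * sigma * sigma))
        re += v * g * math.cos(freq * u)
        im -= v * g * math.sin(freq * u)   # minus: conjugate carrier
    return math.atan2(im, re)

def phase_disparity(left, right, center, freq=0.5):
    """Disparity from the interocular phase difference: d = dphi / freq."""
    dphi = gabor_phase(left, center, freq) - gabor_phase(right, center, freq)
    dphi = (dphi + math.pi) % (2.0 * math.pi) - math.pi   # wrap to (-pi, pi]
    return dphi / freq

# The right image is the left pattern shifted by 2 samples.
left = [math.sin(0.5 * x) for x in range(64)]
right = [math.sin(0.5 * (x - 2)) for x in range(64)]
d = phase_disparity(left, right, center=32)
```

Phase wrapping limits each filter to disparities below half its carrier wavelength, which is why real systems pool estimates across spatial frequencies.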
17
|
Kalia AA, Schrater PR, Legge GE. Combining path integration and remembered landmarks when navigating without vision. PLoS One 2013; 8:e72170. [PMID: 24039742 PMCID: PMC3764103 DOI: 10.1371/journal.pone.0072170] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2013] [Accepted: 07/12/2013] [Indexed: 11/23/2022] Open
Abstract
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
Affiliation(s)
- Amy A. Kalia
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Paul R. Schrater
- Department of Psychology, University of Minnesota Twin-Cities, Minneapolis, Minnesota, United States of America
- Gordon E. Legge
- Department of Psychology, University of Minnesota Twin-Cities, Minneapolis, Minnesota, United States of America
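The gated combination described in the abstract above can be sketched as follows: the two location cues are fused with inverse-variance (reliability) weights only when their conflict is small relative to the combined cue noise; otherwise the landmark is discarded and the estimate falls back on path integration alone. The gate threshold and noise values here are illustrative assumptions, not fitted parameters from the study.

```python
import numpy as np

def gated_combination(landmark, sigma_l, path, sigma_p, gate=2.0):
    """Fuse a remembered-landmark cue with a path-integration cue only
    when they are congruent (conflict below `gate`, in units of the
    combined cue noise). Congruent cues are averaged with
    inverse-variance weights; otherwise the landmark is ignored."""
    conflict = abs(landmark - path) / np.hypot(sigma_l, sigma_p)
    if conflict > gate:                            # incongruent: gate closed
        return path, sigma_p
    w = sigma_p**2 / (sigma_l**2 + sigma_p**2)     # weight on the landmark
    fused = w * landmark + (1 - w) * path
    fused_sigma = np.sqrt(sigma_l**2 * sigma_p**2 / (sigma_l**2 + sigma_p**2))
    return fused, fused_sigma

# congruent cues: fused estimate lies between the cues and is more precise
est, sd = gated_combination(landmark=5.0, sigma_l=1.0, path=6.0, sigma_p=2.0)
# incongruent cues: large conflict, estimate reverts to path integration
est2, sd2 = gated_combination(landmark=5.0, sigma_l=1.0, path=20.0, sigma_p=2.0)
```

Note that the fused standard deviation is always smaller than either single-cue value, matching the reported precision gain for congruent cues.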
18
Byrne PA, Henriques DYP. When more is less: increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process. Neuropsychologia 2012; 51:26-37. [PMID: 23142707 DOI: 10.1016/j.neuropsychologia.2012.10.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2012] [Revised: 08/16/2012] [Accepted: 10/05/2012] [Indexed: 10/27/2022]
Abstract
When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality.
Affiliation(s)
- Patrick A Byrne
- Centre for Vision Research, Science, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3.
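The claim above, that MLE combination can beat either single cue even when one cue carries a small bias, is easy to check by simulation. The sketch below uses invented noise and bias values: a precise but biased "visual" cue, an unbiased but noisy "proprioceptive" cue, and their inverse-variance-weighted combination compared by root-mean-square error.

```python
import numpy as np

rng = np.random.default_rng(0)
target, n = 0.0, 100_000

# proprioceptive cue: unbiased but noisy; visual cue: precise but biased
prop = rng.normal(target, 2.0, n)
vis = rng.normal(target + 0.5, 1.0, n)       # small constant bias of 0.5

# inverse-variance MLE weights computed from cue noise, ignoring the bias
w_vis = 2.0**2 / (2.0**2 + 1.0**2)
mle = w_vis * vis + (1 - w_vis) * prop

def rmse(est):
    return float(np.sqrt(np.mean((est - target)**2)))

# total error per strategy: (proprioception alone, vision alone, combined)
errors = (rmse(prop), rmse(vis), rmse(mle))
```

With these numbers the combined estimate inherits a fraction of the visual bias, yet its variance reduction more than compensates, so it still has the lowest total error.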
19
Hesse C, Schenk T, Deubel H. Attention is needed for action control: Further evidence from grasping. Vision Res 2012; 71:37-43. [DOI: 10.1016/j.visres.2012.08.014] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2012] [Revised: 08/07/2012] [Accepted: 08/21/2012] [Indexed: 10/27/2022]
20
Abstract
Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis-one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit-provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM.
Affiliation(s)
- Chris R Sims
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627, USA.
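The rate-distortion logic above can be illustrated with the classical Gaussian distortion-rate bound, D(R) = σ²·2^(−2R): if a fixed capacity of C bits is divided evenly over k remembered items, per-item error grows lawfully with set size. This even-allocation toy model is only one of the capacity-allocation schemes such a framework can express, and the numbers are illustrative.

```python
import numpy as np

def expected_error(capacity_bits, set_size, sigma=1.0):
    """Distortion-rate bound for a Gaussian source: with total capacity C
    (bits) split evenly over k items, each item is encoded at R = C/k
    bits, giving minimum mean squared error D = sigma^2 * 2**(-2R)."""
    rate_per_item = capacity_bits / set_size
    return sigma**2 * 2.0 ** (-2.0 * rate_per_item)

# under a fixed capacity, memory precision degrades smoothly with set
# size, with no need for a hard item limit
errors = [expected_error(capacity_bits=6.0, set_size=k) for k in (1, 2, 4, 8)]
```

The smooth set-size curve is the qualitative signature that distinguishes capacity-limited models of this kind from fixed-slot accounts.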
21
Knowing how much you don't know: a neural organization of uncertainty estimates. Nat Rev Neurosci 2012; 13:572-86. [PMID: 22781958 DOI: 10.1038/nrn3289] [Citation(s) in RCA: 191] [Impact Index Per Article: 15.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
How we estimate uncertainty is important in decision neuroscience and has wide-ranging implications in basic and clinical neuroscience, from computational models of optimality to ideas on psychopathological disorders including anxiety, depression and schizophrenia. Empirical research in neuroscience, which has been based on divergent theoretical assumptions, has focused on the fundamental question of how uncertainty is encoded in the brain and how it influences behaviour. Here, we integrate several theoretical concepts about uncertainty into a decision-making framework. We conclude that the currently available evidence indicates that distinct neural encoding (including summary statistic-type representations) of uncertainty occurs in distinct neural systems.
22
Vilares I, Kording K. Bayesian models: the structure of the world, uncertainty, behavior, and the brain. Ann N Y Acad Sci 2011; 1224:22-39. [PMID: 21486294 DOI: 10.1111/j.1749-6632.2011.05965.x] [Citation(s) in RCA: 89] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Experiments on humans and other animals have shown that uncertainty due to unreliable or incomplete information affects behavior. Recent studies have formalized uncertainty and asked which behaviors would minimize its effect. This formalization results in a wide range of Bayesian models that derive from assumptions about the world, and it often seems unclear how these models relate to one another. In this review, we use the concept of graphical models to analyze differences and commonalities across Bayesian approaches to the modeling of behavioral and neural data. We review behavioral and neural data associated with each type of Bayesian model and explain how these models can be related. We finish with an overview of different theories that propose possible ways in which the brain can represent uncertainty.
Affiliation(s)
- Iris Vilares
- Departments of Physical Medicine and Rehabilitation, Physiology, and Applied Mathematics, Northwestern University, Chicago, Illinois; Rehabilitation Institute of Chicago, Northwestern University, Chicago, Illinois; International Neuroscience Doctoral Programme, Champalimaud Neuroscience Programme, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Konrad Kording
- Departments of Physical Medicine and Rehabilitation, Physiology, and Applied Mathematics, Northwestern University, Chicago, Illinois; Rehabilitation Institute of Chicago, Northwestern University, Chicago, Illinois; International Neuroscience Doctoral Programme, Champalimaud Neuroscience Programme, Instituto Gulbenkian de Ciência, Oeiras, Portugal
23
Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating. Vision Res 2010; 50:2661-70. [PMID: 20816887 DOI: 10.1016/j.visres.2010.08.038] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2010] [Revised: 08/16/2010] [Accepted: 08/31/2010] [Indexed: 11/22/2022]
Abstract
Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduce RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME depended only on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain-damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action.
24
Wu J, Yang J, Honda T. Fitts' law holds for pointing movements under conditions of restricted visual feedback. Hum Mov Sci 2010; 29:882-92. [PMID: 20659774 DOI: 10.1016/j.humov.2010.03.009] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2009] [Revised: 03/19/2010] [Accepted: 03/21/2010] [Indexed: 10/19/2022]
Abstract
Fitts' law robustly predicts the time required to move rapidly to a target. However, it is unclear whether Fitts' law holds for visually guided actions under visually restricted conditions. We tested whether Fitts' law applies under various conditions of visual restriction and compared pointing movements in each condition. Ten healthy participants performed four pointing movement tasks under different visual feedback conditions, including full-vision (FV), no-hand-movement (NM), no-target-location (NT), and no-vision (NV) feedback conditions. The movement times (MTs) for each task exhibited highly linear relationships with the index of difficulty (r(2)>.96). These findings suggest that pointing movements follow Fitts' law even when visual feedback is restricted or absent. However, the MTs and accuracy of pointing movements decreased for difficult tasks involving visual restriction.
Affiliation(s)
- Jinglong Wu
- Graduate School of Natural Science and Technology, Okayama University, Okayama, 3-1-1 Tsushima-Naka, Kita-Ku, Okayama 700-8530, Japan.
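Fitts' law itself is easy to state in code. The sketch below uses the Shannon formulation ID = log2(D/W + 1) and fits MT = a + b·ID by least squares, mirroring the linearity check reported in the abstract; the movement-time data are invented for illustration, not taken from the study.

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits (Shannon formulation):
    ID = log2(D/W + 1)."""
    return np.log2(distance / width + 1)

def fit_fitts(distances, widths, movement_times):
    """Least-squares fit of MT = a + b * ID; returns (a, b, r_squared)."""
    ids = index_of_difficulty(np.asarray(distances, float),
                              np.asarray(widths, float))
    mt = np.asarray(movement_times, float)
    b, a = np.polyfit(ids, mt, 1)              # slope first, then intercept
    r2 = np.corrcoef(ids, mt)[0, 1] ** 2
    return a, b, r2

# hypothetical data: target distances and widths in mm, MT in seconds
a, b, r2 = fit_fitts([80, 160, 320, 640], [20, 20, 20, 20],
                     [0.35, 0.45, 0.55, 0.66])
```

An r² above roughly .96, as in the study, indicates that movement time is essentially a linear function of task difficulty under the tested feedback conditions.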
25
Byrne PA, Crawford JD. Cue Reliability and a Landmark Stability Heuristic Determine Relative Weighting Between Egocentric and Allocentric Visual Information in Memory-Guided Reach. J Neurophysiol 2010; 103:3054-69. [DOI: 10.1152/jn.01008.2009] [Citation(s) in RCA: 57] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark “shift” during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric–allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration—despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment—had a strong influence on egocentric–allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
Affiliation(s)
- Patrick A. Byrne
- Centre for Vision Research,
- Canadian Action and Perception Network, and
- J. Douglas Crawford
- Centre for Vision Research,
- Canadian Action and Perception Network, and
- Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
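One toy reading of the model described above: start from the standard inverse-variance (MLE) weight on the allocentric cue and scale it by a stability factor between 0 and 1, so that vibrating landmarks down-weight allocentric information even when allocentric reliability is unchanged. This is an illustrative simplification with invented numbers, not the paper's fitted model.

```python
def ego_allo_weight(sigma_ego, sigma_allo, stability=1.0):
    """Weight on the allocentric (landmark) cue: the inverse-variance MLE
    weight scaled by a landmark-stability factor in [0, 1]
    (1 = stable landmark, 0 = landmark judged unstable)."""
    w_mle = sigma_ego**2 / (sigma_ego**2 + sigma_allo**2)
    return stability * w_mle

def reach_estimate(ego, allo, sigma_ego, sigma_allo, stability=1.0):
    """Weighted combination of egocentric and allocentric target estimates."""
    w = ego_allo_weight(sigma_ego, sigma_allo, stability)
    return w * allo + (1 - w) * ego

# equal cue reliabilities: a stable landmark gets the full MLE weight,
# while a vibrated (unstable) landmark pulls the reach back toward the
# egocentric estimate despite unchanged allocentric reliability
stable = reach_estimate(0.0, 1.0, 1.0, 1.0, stability=1.0)
vibrated = reach_estimate(0.0, 1.0, 1.0, 1.0, stability=0.4)
```

The key property, matching the paper's main result, is that the stability factor can shift cue weighting without any change in the variability (reliability) of either cue.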
26
Baldauf D, Deubel H. Attentional landscapes in reaching and grasping. Vision Res 2010; 50:999-1013. [DOI: 10.1016/j.visres.2010.02.008] [Citation(s) in RCA: 116] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2009] [Revised: 02/06/2010] [Accepted: 02/10/2010] [Indexed: 11/30/2022]