1. Chen Y, Mou W. Path integration, rather than being suppressed, is used to update spatial views in familiar environments with constantly available landmarks. Cognition 2024; 242:105662. PMID: 37952370. DOI: 10.1016/j.cognition.2023.105662.
Abstract
This project tested three hypotheses conceptualizing the interaction between path integration based on self-motion and piloting based on landmarks in a familiar environment with persistent landmarks. The first hypothesis posits that path integration functions automatically, as in environments lacking persistent landmarks (environment-independent hypothesis). The second hypothesis suggests that persistent landmarks suppress path integration (suppression hypothesis). The third hypothesis proposes that path integration updates the spatial views of the environment (updating-spatial-views hypothesis). Participants learned a specific object's location. Subsequently, they undertook an outbound path originating from the object and then indicated the object's location (homing). In Experiments 1 and 1b, landmarks were present throughout the first 9 trials. On some later trials, the landmarks were presented during the outbound path but unexpectedly removed during homing (catch trials). On the last trials, no landmarks were present at any point (baseline trials). Experiments 2-3 were similar but added two identical objects (the original one and a rotated distractor) during homing on the catch and baseline trials. Experiment 4 replaced the two identical objects with two groups of landmarks. The results showed that in Experiments 1 and 1b, homing angular error on the first catch trial was significantly larger than on the matched baseline trial, undermining the environment-independent hypothesis. Conversely, in Experiments 2-4, the proportion of participants who recognized the original object or landmarks was similar between the first catch trial and the matched baseline trial, favoring the updating-spatial-views hypothesis over the suppression hypothesis. Therefore, while mismatches between updated spatial views and the actual views caused by unexpected removal of landmarks impair homing performance, the updated spatial views help disambiguate targets or landmarks within the familiar environment.
Affiliation(s)
- Yue Chen
- Department of Psychology, University of Alberta, P217 Biological Sciences Bldg., Edmonton, Alberta T6G 2E9, Canada.
- Weimin Mou
- Department of Psychology, University of Alberta, P217 Biological Sciences Bldg., Edmonton, Alberta T6G 2E9, Canada.
2. Solbach MD, Tsotsos JK. The psychophysics of human three-dimensional active visuospatial problem-solving. Sci Rep 2023; 13:19967. PMID: 37968501. PMCID: PMC10651907. DOI: 10.1038/s41598-023-47188-4.
Abstract
Our understanding of how visual systems detect, analyze and interpret visual stimuli has advanced greatly. However, the visual systems of all animals do much more; they enable visual behaviours. How well the visual system performs while interacting with the visual environment, and how vision is used in the real world, are far from fully understood, especially in humans. It has been suggested that comparison is the most primitive of psychophysical tasks. Thus, as a probe into these active visual behaviours, we use a same-different task: are two physical 3D objects visually the same? This task taps a fundamental cognitive ability. We pose this question to human subjects who are free to move about and examine two real objects in a physical 3D space. The experimental design is such that all behaviours are directed to viewpoint change. Without any training, our participants achieved a mean accuracy of 93.82%. No learning effect was observed on accuracy over many trials, but some effect was seen for response time, number of fixations and extent of head movement. Our probe task, even though easily executed at high performance levels, uncovered a surprising variety of complex strategies for viewpoint control, suggesting that solutions were developed dynamically and deployed in a seemingly directed hypothesize-and-test manner tailored to the specific task. Subjects need not acquire task-specific knowledge; instead, they formulate effective solutions from the outset, and as they engage in a series of attempts, those solutions are progressively refined, becoming more efficient without compromising accuracy.
Affiliation(s)
- Markus D Solbach
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada.
- John K Tsotsos
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada.
3. Peacock CE, Hall EH, Henderson JM. Objects are selected for attention based upon meaning during passive scene viewing. Psychon Bull Rev 2023; 30:1874-1886. PMID: 37095319. PMCID: PMC11164276. DOI: 10.3758/s13423-023-02286-2.
Abstract
While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or to salience. To answer these questions, we used a mixed modeling approach in which we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than on low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning objects than to low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are selected for attention, in part, on the basis of meaning during passive scene viewing.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA.
- Department of Psychology, University of California, Davis, CA, USA.
- Elizabeth H Hall
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA.
- Department of Psychology, University of California, Davis, CA, USA.
- John M Henderson
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA.
- Department of Psychology, University of California, Davis, CA, USA.
4. Jeung S, Hilton C, Berg T, Gehrke L, Gramann K. Virtual Reality for Spatial Navigation. Curr Top Behav Neurosci 2023; 65:103-129. PMID: 36512288. DOI: 10.1007/7854_2022_403.
Abstract
Immersive virtual reality (VR) allows its users to experience physical space in a non-physical world. It has developed into a powerful research tool for investigating the neural basis of human spatial navigation as an embodied experience. The task of wayfinding can be carried out using a wide range of strategies, leading to the recruitment of various sensory modalities and brain areas in real-life scenarios. While traditional desktop-based VR setups primarily support vision-based navigation, immersive VR setups, especially mobile variants, can efficiently account for the motor processes that constitute locomotion in the physical world, such as head-turning and walking. When used in combination with mobile neuroimaging methods, immersive VR affords a natural mode of locomotion and high immersion in experimental settings, creating an embodied spatial experience. This in turn facilitates ecologically valid investigation of the neural underpinnings of spatial navigation.
Affiliation(s)
- Sein Jeung
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Christopher Hilton
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Timotheus Berg
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Lukas Gehrke
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Klaus Gramann
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany.
- Center for Advanced Neurological Engineering, University of California, San Diego, CA, USA.
5. Reggente N. VR for Cognition and Memory. Curr Top Behav Neurosci 2023; 65:189-232. PMID: 37440126. DOI: 10.1007/7854_2023_425.
Abstract
This chapter will provide a review of research into human cognition through the lens of VR-based paradigms for studying memory. Emphasis is placed on why VR increases the ecological validity of memory research and the implications of such enhancements.
Affiliation(s)
- Nicco Reggente
- Institute for Advanced Consciousness Studies, Santa Monica, CA, USA.
6. Nardini M. Merging familiar and new senses to perceive and act in space. Cogn Process 2021; 22:69-75. PMID: 34410554. PMCID: PMC8423643. DOI: 10.1007/s10339-021-01052-3.
Abstract
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants’ behaviour with the predictions of alternative information processing models. This lets us see when and how—during development, and with experience—the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
Affiliation(s)
- Marko Nardini
- Department of Psychology, Durham University, Science Site, Durham, DH1 3LE, UK.
7. Alignment Effects in Spatial Perspective Taking from an External Vantage Point. Brain Sci 2021; 11(2):204. PMID: 33562245. PMCID: PMC7915451. DOI: 10.3390/brainsci11020204.
Abstract
In three experiments, we examined, using a perceptual task, the difficulties of spatial perspective taking. Participants imagined adopting perspectives around a table and pointed from them towards the positions of a target. Depending on the condition, the scene was presented on a virtual screen in Virtual Reality or projected on an actual screen in the real world (Experiment 1), or viewed as immediate in Virtual Reality (Experiment 2). Furthermore, participants pointed with their arm (Experiments 1 and 2) vs. a joystick (Experiment 3). Results showed a greater alignment effect (i.e., a larger difference in performance between trials with imagined perspectives that were aligned vs. misaligned with the orientation of the participant) when executing the task in a virtual rather than in the real environment, suggesting that visual access to body information and room geometry, which is typically lacking in Virtual Reality, influences perspective taking performance. The alignment effect was equal across the Virtual Reality conditions of Experiment 1 and Experiment 2, suggesting that being an internal (compared to an external) observer to the scene induces no additional difficulties for perspective taking. Equal alignment effects were also found when pointing with the arm vs. a joystick, indicating that a body-dependent response mode such as pointing with the arm creates no further difficulties for reasoning from imagined perspectives.
8. Iaria G, Slone E. The relationship between mental and physical space and its impact on topographical disorientation. Handb Clin Neurol 2021; 178:195-211. PMID: 33832677. DOI: 10.1016/b978-0-12-821377-3.00009-x.
Abstract
We generate mental representations of space to facilitate our ability to remember things and navigate our environment. Many studies implicitly assume that these representations simply reflect the environments that they represent without considering other factors that influence the extent to which this is the case. Here, we bring together findings from cognitive psychology, environmental psychology, geography, urban planning, and neuroscience to discuss how internalizing the environment involves a complex interplay between bottom-up and top-down mental processes and depends on key characteristics of the physical environment itself. We describe how mental space is structured, the ways in which mental and physical space converge and diverge, and the disparate but complementary techniques used to assess these relationships. Finally, we contextualize this knowledge in the clinical populations affected by acquired and developmental topographical disorientation, exploring mechanisms that cause these patients to get lost in familiar surroundings.
Affiliation(s)
- Giuseppe Iaria
- Department of Psychology, University of Calgary, Calgary, AB, Canada.
- Edward Slone
- Department of Psychology, University of Calgary, Calgary, AB, Canada.
9. Heywood-Everett E, Baker DH, Hartley T. Testing the precision of spatial memory representations using a change-detection task: effects of viewpoint change. J Cogn Psychol 2020. DOI: 10.1080/20445911.2020.1863414.
Affiliation(s)
- Edward Heywood-Everett
- Department of Psychology and York Biomedical Research Institute, University of York, York, UK
- Daniel H. Baker
- Department of Psychology and York Biomedical Research Institute, University of York, York, UK
- Tom Hartley
- Department of Psychology and York Biomedical Research Institute, University of York, York, UK
10. Updating perception and action across real-world viewpoint changes. Atten Percept Psychophys 2020; 82:2603-2617. PMID: 32333370. DOI: 10.3758/s13414-020-02026-x.
Abstract
A growing body of research suggests that performing actions can distort the perception of size, distance, and other visual information. These distortions have been observed under a variety of circumstances, and appear to persist in both perception and memory. However, it is unclear whether these distortions persist as observers move to new viewpoints. To address this issue, the present study assessed whether action-specific distortions persist across changes in viewpoint. Participants viewed an object that was projected onto a table, then reached for it with their index finger or a reach-extending tool. After reaching for the object, participants remained stationary or moved to a new viewpoint, then estimated the object's distance from their current viewpoint. When participants remained stationary, using a reach-extending tool led them to report shorter distance estimates. However, when participants moved to a new viewpoint, these distortions were eliminated. Similar effects were observed when participants produced different types of movement, including when participants rotated in place, moved to a new location, or simply walked in place. Together, these findings suggest that action-specific distortions are eliminated when observers move and perform other actions.
11. Wang Y, Yu X, Dou Y, McNamara TP, Li J. Mental representations of recently learned nested environments. Psychol Res 2020; 85:2922-2934. PMID: 33211160. DOI: 10.1007/s00426-020-01447-5.
Abstract
Two experiments investigated the mental representations of objects' locations in a virtual nested environment. In Experiment 1, participants learned the locations of objects (buildings or related accessories) in an exterior environment and then learned the locations of objects inside one of the centrally located buildings (the interior environment). Participants completed judgments of relative direction in which the imagined heading was established by pairs of objects from the interior environment and the target was one of the objects in the exterior environment. Performance was best for the imagined heading and allocentric target direction parallel to the learning heading of the exterior environment, but the effect of allocentric target direction was only significant for imagined headings aligned with the reference axes of both environments; in addition, performance was best along the front-back egocentric axis (parallel to the imagined heading). Experiment 2 used the same learning procedure. After learning, the viewpoint was moved from the exterior environment along a smooth path into a side entrance of the building/interior environment. There, participants saw the array of interior objects in the orientation consistent with their movement (correct cue), the array in an orientation inconsistent with their movement (misleading cue), or no array at all (no cue), and then pointed to objects in the exterior environment. Pointing performance was best in the correct-cue condition. Collectively, the results indicated that memories of nested spaces are segregated by spatial conceptual level, and that spatial relations between levels are specified in terms of the dominant reference directions.
Affiliation(s)
- Yao Wang
- School of Psychology, Nanjing Normal University, Nanjing, 210097, People's Republic of China
- Xiaohan Yu
- School of Psychology, Nanjing Normal University, Nanjing, 210097, People's Republic of China
- Yan Dou
- School of Psychology, Nanjing Normal University, Nanjing, 210097, People's Republic of China
- Jing Li
- School of Psychology, Nanjing Normal University, Nanjing, 210097, People's Republic of China.
12. The Difficulty of Effectively Using Allocentric Prior Information in a Spatial Recall Task. Sci Rep 2020; 10:7000. PMID: 32332793. PMCID: PMC7181880. DOI: 10.1038/s41598-020-62775-5.
Abstract
Prior information represents the long-term statistical structure of an environment. For example, colds develop more often than throat cancer, making the former a more likely diagnosis for a sore throat. There is ample evidence for effective use of prior information during a variety of perceptual tasks, including the ability to recall locations using an egocentric (self-based) frame. However, it is not yet known if people can use prior information effectively when using an allocentric (world-based) frame. Forty-eight adults were shown sixty sets of three target locations in a sparse virtual environment with three beacons. The targets were drawn from one of four prior distributions. They were then asked to point to the targets after a delay and a change in perspective. While searches were biased towards the beacons, we did not find any evidence that participants successfully exploited the prior distributions of targets. These results suggest that allocentric reasoning does not conform to normative Bayesian models: we saw no evidence for use of priors in our cognitively-complex (allocentric) task, unlike in previous, simpler (egocentric) recall tasks. It is possible that this reflects the high biological cost of processing precise allocentric information.
13. Janzen G, van Roij CJM, Oosterman JM, Kessels RPC. Egocentric and Allocentric Spatial Memory in Korsakoff's Amnesia. Front Hum Neurosci 2020; 14:121. PMID: 32296321. PMCID: PMC7136515. DOI: 10.3389/fnhum.2020.00121.
Abstract
The goal of the present study was to investigate spatial memory in a group of patients with amnesia due to Korsakoff's syndrome (KS). We used a virtual spatial memory task that allowed us to separate the use of egocentric and allocentric spatial reference frames to determine object locations. Research investigating the ability of patients with Korsakoff's amnesia to use different reference frames is scarce, and it remains unclear whether these patients are impaired in using egocentric and allocentric reference frames to the same extent. Twenty Korsakoff patients and 24 matched controls watched an animation of a bird flying into one of three trees standing in a virtual environment. After the bird disappeared, the camera turned around, so that the trees were briefly out of sight, and then turned back to the center of the environment. Participants were asked in which tree the bird was hiding. In half of the trials, a landmark was shown. Half of the trials required an immediate response, whereas the other half included a delay of 10 s. Patients performed significantly worse than controls. For all participants, trials with a landmark were easier than trials without, and trials without a delay were easier than trials with a delay. While controls performed above chance on all trials, patients were at chance on allocentric trials without a landmark present and with a memory delay. Patients showed no difference between the egocentric and the allocentric condition. Together, the findings suggest that despite the amnesia, spatial memory, and especially the use of egocentric and allocentric reference frames, is spared in Korsakoff patients.
Affiliation(s)
- Gabriele Janzen
- Behavioral Science Institute, Radboud University Nijmegen, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Claudette J M van Roij
- Centre of Excellence for Neuropsychiatry, Vincent van Gogh Institute for Psychiatry, Venray, Netherlands
- Joukje M Oosterman
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Roy P C Kessels
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Centre of Excellence for Korsakoff and Alcohol-Related Cognitive Disorders, Vincent van Gogh Institute for Psychiatry, Venray, Netherlands
- Department of Medical Psychology, Radboud University Medical Center, Nijmegen, Netherlands
14. Kitazaki M. Virtual Walking Sensation by Prerecorded Oscillating Optic Flow and Synchronous Foot Vibration. Iperception 2019; 10:2041669519882448. PMID: 31662838. PMCID: PMC6796215. DOI: 10.1177/2041669519882448.
Abstract
This article reports the first psychological evidence that the combination of oscillating optic flow and synchronous foot vibration evokes a walking sensation. In this study, we first captured a walker's first-person-view scenes with footstep timings. Participants observed the naturally oscillating scenes on a head-mounted display with vibrations on their feet and rated walking-related sensations using a Visual Analogue Scale. They perceived stronger sensations of self-motion, walking, leg action, and telepresence from the oscillating visual flow with foot vibrations than with randomized-timing vibrations or without vibrations. The artificial delay of foot vibrations with respect to the scenes diminished the walking-related sensations. These results suggest that the oscillating visual scenes and synchronous foot vibrations are effective for creating virtual walking sensations.
Affiliation(s)
- Michiteru Kitazaki
- Department of Computer Science and Engineering, Toyohashi University of Technology, Japan
15. No single, stable 3D representation can explain pointing biases in a spatial updating task. Sci Rep 2019; 9:12578. PMID: 31467296. PMCID: PMC6715735. DOI: 10.1038/s41598-019-48379-8.
Abstract
People are able to keep track of objects as they navigate through space, even when the objects are out of sight. This requires some kind of representation of the scene and of the observer's location, but the form this might take is debated. We tested the accuracy and reliability of observers' estimates of the visual direction of previously viewed targets. Participants viewed four objects from one location, with binocular vision and small head movements; then, without any further sight of the targets, they walked to another location and pointed towards them. All conditions were tested in an immersive virtual environment, and some were also carried out in a real scene. Participants made large, consistent pointing errors that are poorly explained by any stable 3D representation. Any explanation based on a 3D representation would have to posit a different layout of the remembered scene depending on the orientation of the obscuring wall at the moment the participant points. Our data show that the mechanisms for updating the visual direction of unseen targets are not based on a stable 3D model of the scene, even a distorted one.
16. Li P, Abarbanell L. Alternative spin on phylogenetically inherited spatial reference frames. Cognition 2019; 191:103983. PMID: 31254747. DOI: 10.1016/j.cognition.2019.05.020.
Abstract
People make use of different frames of reference (north-south; left-right) to talk about space. To explore the cognitive capacity that children bring to learning spatial language, Haun, Rapold, Call, Janzen, and Levinson (2006) examined children's ability to notice and abstract invariant frames of references across instances. They found that 4-year-olds and non-human great apes often noticed environment-defined allocentric relations and not body-defined egocentric ones, leading them to conclude that preschoolers are ready to learn environment-defined terms (e.g. "uphill"), but not body-defined ones (e.g., "left"). However, such a conclusion may be premature. In four new experiments we demonstrate that the previous findings could be an artifact of specific task constraints. With minor experiment modifications, similar-aged children readily noticed egocentric relations. Reviewing additional research, we provide an account of what makes acquiring frames of reference easy or difficult, and why full mastery of terms like "left" and "right" may take many years under normal circumstances.
Affiliation(s)
- Peggy Li
- Harvard University, United States.
17. Wilke F, Bender A, Beller S. Flexibility in adopting relative frames of reference in dorsal and lateral settings. Q J Exp Psychol (Hove) 2019; 72:2393-2407. PMID: 30874472. DOI: 10.1177/1747021819841310.
Abstract
The relative frame of reference (FoR) is used to describe spatial relations between two objects from an observer's perspective. Standard, frontal referencing situations with objects located in the observer's visual field afford three well-established variants: translation, reflection, and rotation. Here, we focus on references in non-standard situations with objects located at the back or at the side of an observer (dorsal and lateral, respectively). We scrutinise the consistency assumption, which was introduced to infer the covert strategy used in dorsal tasks from an ambiguous overt response: that, when confronted with a non-standard situation, people adopt a strategy consistent with how they construct the relative FoR in frontal situations. Lateral tasks enable us to disentangle the ambiguous response. The results of a study in Norway and Germany support the consistency assumption in part: Nearly all participants with a preference for translation in frontal tasks applied translation in lateral tasks, and some participants with a preference for reflection in frontal tasks turned towards the objects before applying reflection in lateral tasks. Most other participants with a preference for reflection in frontal tasks, however, switched to translation in lateral tasks. The latter may be due to a specific affordance of the lateral arrangements, which invite translation as the easier strategy compared to the alternative derived from reflection. Our findings indicate that people do not apply their preferred variant of the relative FoR to all kinds of situations, but rather flexibly adapt their strategy when it is more convenient to do so.
Affiliation(s)
- Fiona Wilke: Department of Psychology, University of Freiburg, Freiburg, Germany
- Andrea Bender: Department of Psychosocial Science, University of Bergen, Bergen, Norway; SFF Centre for Early Sapiens Behaviour (SapienCE), University of Bergen, Bergen, Norway
- Sieghard Beller: Department of Psychosocial Science, University of Bergen, Bergen, Norway; SFF Centre for Early Sapiens Behaviour (SapienCE), University of Bergen, Bergen, Norway

18
Descloux V, Maurer R. Perspective taking to assess topographical disorientation: Group study and preliminary normative data. Appl Neuropsychol Adult 2019; 27:199-218. [DOI: 10.1080/23279095.2018.1528262] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Virginie Descloux: Faculty of Psychology and Educational Sciences, University of Geneva, Genève, Switzerland
- Roland Maurer: Faculty of Psychology and Educational Sciences, University of Geneva, Genève, Switzerland

19
Rigutti S, Stragà M, Jez M, Baldassi G, Carnaghi A, Miceu P, Fantoni C. Don't worry, be active: how to facilitate the detection of errors in immersive virtual environments. PeerJ 2018; 6:e5844. [PMID: 30397547 PMCID: PMC6211266 DOI: 10.7717/peerj.5844] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 09/26/2018] [Indexed: 11/23/2022] Open
Abstract
The current research examines the link between the type of vision experienced in a collaborative immersive virtual environment (active vs. multiple passive), the type of error sought during a cooperative multi-user exploration of a design project (affordance vs. perceptual violations), and the type of setting in which the users perform (field in Experiment 1 vs. laboratory in Experiment 2). The relevance of this link is backed by the lack of conclusive evidence for an active vs. passive vision advantage in cooperative search tasks within software based on immersive virtual reality (IVR). Using a yoking paradigm based on the mixed usage of simultaneous active and multiple passive viewings, we found that the likelihood of error detection in a complex 3D environment showed an active vs. multi-passive viewing advantage depending on: (1) the degree of knowledge dependence of the type of error the passive/active observers were looking for (low for perceptual violations vs. high for affordance violations), as the advantage tended to manifest itself irrespective of the setting for affordance violations but not for perceptual violations; and (2) the degree of social desirability possibly induced by the setting in which the task was performed, as the advantage occurred irrespective of the type of error in the laboratory (Experiment 2) but not in the field (Experiment 1) setting. The results are relevant to the future development of cooperative IVR software used to support design review. A multi-user design review experience in which designers, engineers, and end-users all cooperate actively within the IVR, each wearing their own head-mounted display, seems more suitable for the detection of relevant errors than standard systems characterized by a mixed usage of active and passive viewing.
Affiliation(s)
- Sara Rigutti: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Marta Stragà: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Marco Jez: Area Science Park, Arsenal S.r.L, Trieste, Italy
- Giulio Baldassi: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Andrea Carnaghi: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Piero Miceu: Area Science Park, Arsenal S.r.L, Trieste, Italy
- Carlo Fantoni: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy

20
Holmes CA, Newcombe NS, Shipley TF. Move to learn: Integrating spatial information from multiple viewpoints. Cognition 2018; 178:7-25. [DOI: 10.1016/j.cognition.2018.05.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Revised: 04/26/2018] [Accepted: 05/01/2018] [Indexed: 12/27/2022]
21
Frick A. Spatial transformation abilities and their relation to later mathematics performance. Psychol Res 2018; 83:1465-1484. [DOI: 10.1007/s00426-018-1008-5] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2017] [Accepted: 04/03/2018] [Indexed: 12/23/2022]
22
Bülthoff I, Mohler BJ, Thornton IM. Face recognition of full-bodied avatars by active observers in a virtual environment. Vision Res 2018; 157:242-251. [PMID: 29274811 DOI: 10.1016/j.visres.2017.12.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 12/01/2017] [Accepted: 12/13/2017] [Indexed: 10/18/2022]
Abstract
Viewing faces in motion or attached to a body instead of isolated static faces improves their subsequent recognition. Here we enhanced the ecological validity of face encoding by having observers physically move within a virtual room populated by life-size avatars. We compared the recognition performance of this active group to two control groups. The first control group watched a passive reenactment of the visual experience of the active group. The second control group saw static screenshots of the avatars. All groups performed the same old/new recognition task after learning. Half of the learned faces were shown at test in an orientation close to that experienced during learning, while the others were viewed from a new viewing angle. All observers found novel views more difficult to recognize than familiar ones. Overall, the active group performed better than both other groups. Furthermore, the group learning faces from static images was the only one to perform at chance level in the novel-view condition. These findings suggest that active exploration combined with a dynamic experience of the to-be-learned faces allows for more robust face recognition, and they highlight the value of such techniques for integrating facial visual information and enhancing recognition from novel viewpoints.
Affiliation(s)
- Isabelle Bülthoff: Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
- Betty J Mohler: Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
- Ian M Thornton: Department of Cognitive Science, University of Malta, Malta

23
Krüger M, Krist H. Does the Motor System Facilitate Spatial Imagery? Z Entwicklungspsychol Padagog Psychol 2017. [DOI: 10.1026/0049-8637/a000175] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent studies have ascertained a link between the motor system and imagery in children. A motor effect on imagery is demonstrated by the influence of stimulus-related movement constraints (i.e., constraints defined by the musculoskeletal system) on mental rotation, or by interference effects due to participants' own body movements or body postures. This link is usually seen as qualitatively different or stronger in children as opposed to adults. In the present research, we put this interpretation to further scrutiny using a new paradigm: in a motor condition, we asked our participants (kindergartners and third-graders) to manually rotate a circular board with a covered picture on it. This condition was compared with a perceptual condition in which the board was rotated by an experimenter. Additionally, in a pure imagery condition, children were instructed to merely imagine the rotation of the board. The children's task was to mark the presumed end position of a salient detail of the respective picture. The children's performance was clearly the worst in the pure imagery condition. However, contrary to what embodiment theories would suggest, there was no difference in participants' performance between the active rotation (i.e., motor) and the passive rotation (i.e., perception) conditions. Control experiments revealed that this was also the case when, in the perception condition, gaze shifting was controlled for and when the board was rotated mechanically rather than by the experimenter. Our findings indicate that young children depend heavily on external support when imagining physical events. Furthermore, they indicate that motor-assisted imagery is not generally superior to perceptually driven dynamic imagery.
Affiliation(s)
- Markus Krüger: Ernst-Moritz-Arndt-Universität Greifswald, Institut für Psychologie
- Horst Krist: Ernst-Moritz-Arndt-Universität Greifswald, Institut für Psychologie

24
Xie C, Li S, Tao W, Wei Y, Sun HJ. Representing Spatial Layout According to Intrinsic Frames of Reference: Limitations From Position Regularity and Instructions. Psychol Rep 2017; 120:846-869. [PMID: 28580837 DOI: 10.1177/0033294117711129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Mou and McNamara have suggested that object locations are represented according to intrinsic reference frames. In three experiments, we investigated the limitations of intrinsic reference frames as a means of representing object locations in spatial memory. Participants learned the locations of seven or eight common objects in a rectangular room and then made judgments of relative direction based on their memory of the layout. The results of all experiments showed that when all objects were positioned regularly, judgments of relative direction were faster or more accurate for novel headings that were aligned with the primary intrinsic structure than for other novel headings; however, when one irregularly positioned object was added to the layout, this advantage was eliminated. The experiments further indicated that with a single view at study, participants could represent the layout from either an egocentric orientation or a different orientation, according to experimental instructions. Together, these results suggest that environmental reference frames and intrinsic axes can influence performance for novel headings, but their role in spatial memory depends on egocentric experience, layout regularity, and instructions.
Affiliation(s)
- Chaoxiang Xie: Faculty of Education, Guangxi Normal University, China; Faculty of Psychology, Southwest University, Chongqing, China
- Shiyi Li: Academy of Psychology and Behaviour, Tianjin Normal University, China
- Weidong Tao: Department of Psychology, School of Education, Lingnan Normal University, Guangdong, China
- Yiping Wei: Faculty of Education, Guangxi Normal University, China
- Hong-Jin Sun: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada

25
Studying visual attention using the multiple object tracking paradigm: A tutorial review. Atten Percept Psychophys 2017; 79:1255-1274. [DOI: 10.3758/s13414-017-1338-1] [Citation(s) in RCA: 62] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
26
Huang X, Voyer D. Timing and sex effects on the "Spatial Orientation Test": A World War II map reading test. Spat Cogn Comput 2017. [DOI: 10.1080/13875868.2017.1319836] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Affiliation(s)
- Xing Huang: Department of Psychology, University of New Brunswick, Fredericton, NB, Canada
- Daniel Voyer: Department of Psychology, University of New Brunswick, Fredericton, NB, Canada

27
Nakashima R, Kumada T. Peripersonal versus extrapersonal visual scene information for egocentric direction and position perception. Q J Exp Psychol (Hove) 2017; 71:1090-1099. [PMID: 28326888 DOI: 10.1080/17470218.2017.1310267] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
When perceiving the visual environment, people simultaneously perceive their own direction and position in the environment (i.e., egocentric spatial perception). This study investigated what visual information in a scene is necessary for egocentric spatial perceptions. In two perception tasks (the egocentric direction and position perception tasks), observers viewed two static road images presented sequentially. In Experiment 1, the critical manipulation involved occluding a region of the road image: an extrapersonal region (far-occlusion) or a peripersonal region (near-occlusion). Egocentric direction perception was worse in the far-occlusion condition than in the no-occlusion condition, and egocentric position perception was worse in both the far- and near-occlusion conditions than in the no-occlusion condition. In Experiment 2, we conducted the same tasks while manipulating the observers' gaze location in the scene: an extrapersonal region (far-gaze), a peripersonal region (near-gaze), and the intermediate region between the two (middle-gaze). Egocentric direction perception performance was best in the far-gaze condition, and egocentric position perception performance did not differ among gaze location conditions. These results suggest that egocentric direction perception is based on fine visual information about the extrapersonal region in a road landscape, whereas egocentric position perception is based on information about the entire visual scene.
Affiliation(s)
- Ryoichi Nakashima: RIKEN BSI-TOYOTA Collaboration Center, RIKEN, Saitama, Japan; The University of Tokyo, Tokyo, Japan
- Takatsune Kumada: RIKEN BSI-TOYOTA Collaboration Center, RIKEN, Saitama, Japan; Kyoto University, Kyoto, Japan

28
Negen J, Heywood-Everett E, Roome HE, Nardini M. Development of allocentric spatial recall from new viewpoints in virtual reality. Dev Sci 2017; 21. [DOI: 10.1111/desc.12496] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2016] [Accepted: 07/22/2016] [Indexed: 12/01/2022]
Affiliation(s)
- James Negen: Department of Psychology, Durham University, Durham, UK
- Marko Nardini: Department of Psychology, Durham University, Durham, UK

29
Kaplan R, Bush D, Bisby JA, Horner AJ, Meyer SS, Burgess N. Medial Prefrontal-Medial Temporal Theta Phase Coupling in Dynamic Spatial Imagery. J Cogn Neurosci 2017; 29:507-519. [PMID: 27779906 PMCID: PMC5321531 DOI: 10.1162/jocn_a_01064] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Hippocampal-medial prefrontal interactions are thought to play a crucial role in mental simulation. Notably, the frontal midline/medial pFC (mPFC) theta rhythm in humans has been linked to introspective thought and working memory. In parallel, theta rhythms have been proposed to coordinate processing in the medial temporal cortex, retrosplenial cortex (RSc), and parietal cortex during the movement of viewpoint in imagery, extending their association with physical movement in rodent models. Here, we used noninvasive whole-head MEG to investigate theta oscillatory power and phase-locking during the 18-sec postencoding delay period of a spatial working memory task, in which participants imagined previously learned object sequences either on a blank background (object maintenance), from a first-person viewpoint in a scene (static imagery), or moving along a path past the objects (dynamic imagery). We found increases in 4- to 7-Hz theta power in mPFC when comparing the delay period with a preencoding baseline. We then examined whether the mPFC theta rhythm was phase-coupled with ongoing theta oscillations elsewhere in the brain. The same mPFC region showed significantly higher theta phase coupling with the posterior medial temporal lobe/RSc for dynamic imagery versus either object maintenance or static imagery. mPFC theta phase coupling was not observed with any other brain region. These results implicate oscillatory coupling between mPFC and medial temporal lobe/RSc theta rhythms in the dynamic mental exploration of imagined scenes.
Affiliation(s)
- Raphael Kaplan: University College London; Universitat Pompeu Fabra, Barcelona, Spain

30
Abstract
Path integration and cognitive mapping are two of the most important mechanisms for navigation. Path integration is a primitive navigation system that computes a homing vector based on an animal's self-motion estimation, while a cognitive map is an advanced spatial representation containing richer spatial information about the environment that is persistent and can be used to guide flexible navigation to multiple locations. Most theories of navigation conceptualize them as two distinct, independent mechanisms, although the path integration system may provide useful information for the integration of cognitive maps. This paper demonstrates a fundamentally different scenario, in which a cognitive map is constructed in three simple steps by assembling multiple path integrators and extending their basic features. The fact that a collection of path integration systems can be turned into a cognitive map suggests the possibility that cognitive maps may have evolved directly from the path integration system.
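The homing-vector computation at the core of path integration can be sketched in a few lines. The function name, planar coordinates, and two-leg outbound path below are illustrative assumptions, not taken from the paper:

```python
import math

def update_homing_vector(home_dx, home_dy, step_length, heading):
    """One path-integration step: subtract the agent's latest
    displacement from the running vector pointing back to the start."""
    home_dx -= step_length * math.cos(heading)
    home_dy -= step_length * math.sin(heading)
    return home_dx, home_dy

# Illustrative outbound path: one unit east, then one unit north.
hx, hy = 0.0, 0.0
hx, hy = update_homing_vector(hx, hy, 1.0, 0.0)          # eastward leg
hx, hy = update_homing_vector(hx, hy, 1.0, math.pi / 2)  # northward leg
# The homing vector is now approximately (-1, -1), pointing back to the start.
```

A collection of such accumulators, one anchored at each remembered location, is the kind of assembly the abstract describes as the building block of a map.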
31
Sulpizio V, Boccia M, Guariglia C, Galati G. Implicit coding of location and direction in a familiar, real-world "vista" space. Behav Brain Res 2016; 319:16-24. [PMID: 27840248 DOI: 10.1016/j.bbr.2016.10.052] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2016] [Revised: 10/27/2016] [Accepted: 10/31/2016] [Indexed: 02/02/2023]
Abstract
Keeping oriented in the surrounding space requires an accurate representation of one's spatial position and facing direction. Although previous studies provided evidence of specific spatial codes for position and direction within room-sized and large-scale navigational environments, little is known about the mechanisms by which these spatial quantities are represented in a real small-scale environment. Here, we used two spatial tasks requiring participants to encode their own position and facing direction on a series of pictures taken from a familiar circular square. Crucially, directions and positions were incidentally manipulated, so that when participants were required to encode their current position in the square, the perceived direction across consecutive trials was the same, and vice versa. We found a behavioral advantage (priming effect: reduced reaction times and increased accuracy) for repeated directions and positions, even in the absence of any explicit demand to encode either of them. The advantage was higher for repeated directions, indicating that representation of one's own direction is more automatic than representation of one's own location. Furthermore, priming effects were partially mediated by gender: females (but not males) showed a stronger priming effect for repeated directions than for repeated positions. Finally, although priming effects were not linearly related to the physical distances between consecutive positions and directions, they revealed a rough preservation of real-world distance relationships.
Affiliation(s)
- Valentina Sulpizio: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Unit of Motor and Cognitive Rehabilitation, Santa Lucia Foundation, Rome, Italy
- Maddalena Boccia: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Unit of Motor and Cognitive Rehabilitation, Santa Lucia Foundation, Rome, Italy
- Cecilia Guariglia: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Unit of Motor and Cognitive Rehabilitation, Santa Lucia Foundation, Rome, Italy
- Gaspare Galati: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Unit of Motor and Cognitive Rehabilitation, Santa Lucia Foundation, Rome, Italy

32
Ebersbach M, Nawroth C. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds' Search Performance in Spatial Rotation Tasks. Front Psychol 2016; 7:1648. [PMID: 27812346 PMCID: PMC5071628 DOI: 10.3389/fpsyg.2016.01648] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2016] [Accepted: 10/07/2016] [Indexed: 01/25/2023] Open
Abstract
Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds' success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve with differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: this bias emerged in the same-colors condition but not in the different-colors condition. Results are discussed in view of particular challenges that might facilitate or impair young children's performance in spatial rotation tasks.
Affiliation(s)
- Christian Nawroth: School of Biological and Chemical Sciences, Queen Mary University of London, London, UK

33
Hazenberg SJ, van Lier R. Touching and Hearing Unseen Objects: Multisensory Effects on Scene Recognition. Iperception 2016; 7:2041669516664530. [PMID: 27698985 PMCID: PMC5030757 DOI: 10.1177/2041669516664530] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After participants first explored the scene, two objects were swapped, and the task was to report which of the objects had swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2, the objects comprised toy animals that were matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but now a tactile-auditory object identification task preceded the experiment, in which the participants learned to identify the objects based on tactile and auditory input. In each experiment, the results revealed a significant performance increase only after the switch from bimodal to unimodal. Thus, it appears that switching from audio-tactile to tactile-only identification produces a benefit that is not achieved in the reversed order, in which sound is added after experience with haptic-only exploration. We conclude that task-related factors other than mere bimodal identification cause the facilitation when switching from bimodal to unimodal conditions.
Affiliation(s)
- Simon J Hazenberg: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Rob van Lier: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands

34
Motes MA, Finlay CA, Kozhevnikov M. Scene Recognition following Locomotion around a Scene. Perception 2006; 35:1507-20. [PMID: 17286121 DOI: 10.1068/p5459] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In Experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or different views (i.e., scenes were rotated or observers moved around the scene, both from 40° to 360°). In Experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36° to 180°. Across experiments, scene-recognition RT increased (and in Experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about the conditions in which locomotion produces spatially updated representations of scenes.
Affiliation(s)
- Michael A Motes: Department of Psychology, Rutgers University, 333 Smith Hall, 101 Warren Street, Newark, NJ 07102, USA

35
Abstract
Transformations of visuospatial mental images are important for action, navigation, and reasoning. They depend on representations in multiple spatial reference frames, implemented in the posterior parietal cortex and other brain regions. The multiple systems framework proposes that different transformations can be distinguished in terms of which spatial reference frame is updated. In an object-based transformation, the reference frame of an object moves relative to those of the observer and the environment. In a perspective transformation, the observer's egocentric reference frame moves relative to those of the environment and of salient objects. These two types of spatial reference frame updating rely on distinct neural processing resources in the parietal, occipital, and temporal cortex. They are characterized by different behavioral patterns and unique individual differences. Both object-based transformations and perspective transformations interact with posterior frontal cortical regions subserving the simulation of body movements. These interactions indicate that multiple systems coordinate to support everyday spatial problem solving.
36
Slone E, Burles F, Iaria G. Environmental layout complexity affects neural activity during navigation in humans. Eur J Neurosci 2016; 43:1146-55. [PMID: 26990572 DOI: 10.1111/ejn.13218] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2015] [Revised: 02/09/2016] [Accepted: 02/15/2016] [Indexed: 11/29/2022]
Abstract
Navigating large-scale surroundings is a fundamental ability. In humans, it is commonly assumed that navigational performance is affected by individual differences, such as age, sex, and cognitive strategies adopted for orientation. We recently showed that the layout of the environment itself also influences how well people are able to find their way within it, yet it remains unclear whether differences in environmental complexity are associated with changes in brain activity during navigation. We used functional magnetic resonance imaging to investigate how the brain responds to a change in environmental complexity by asking participants to perform a navigation task in two large-scale virtual environments that differed solely in interconnection density, a measure of complexity defined as the average number of directional choices at decision points. The results showed that navigation in the simpler, less interconnected environment was faster and more accurate relative to the complex environment, and such performance was associated with increased activity in a number of brain areas (i.e. precuneus, retrosplenial cortex, and hippocampus) known to be involved in mental imagery, navigation, and memory. These findings provide novel evidence that environmental complexity not only affects navigational behaviour, but also modulates activity in brain regions that are important for successful orientation and navigation.
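Interconnection density as defined in this abstract (the average number of directional choices at decision points) is straightforward to compute once the environment is represented as a connectivity graph. The dictionary representation and the 3-connection threshold for a decision point below are assumptions for illustration, not the authors' code:

```python
def interconnection_density(adjacency):
    """Average number of directional choices at decision points,
    where a decision point is any location with 3+ connections."""
    decision_points = [n for n, nbrs in adjacency.items() if len(nbrs) >= 3]
    choices = sum(len(adjacency[n]) for n in decision_points)
    return choices / len(decision_points)

# Toy maze: a single junction 'A' with three dead-end corridors.
maze = {"A": ["B", "C", "D"], "B": ["A"], "C": ["A"], "D": ["A"]}
# interconnection_density(maze) -> 3.0
```

On this measure, the study's "simpler" environment is simply one whose junctions offer fewer directional choices on average.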
Affiliation(s)
- Edward Slone: NeuroLab, Department of Psychology, Hotchkiss Brain Institute, Alberta Children's Hospital Research Institute, University of Calgary, Admin 062, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Ford Burles: NeuroLab, Department of Psychology, Hotchkiss Brain Institute, Alberta Children's Hospital Research Institute, University of Calgary, Admin 062, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Giuseppe Iaria: NeuroLab, Department of Psychology, Hotchkiss Brain Institute, Alberta Children's Hospital Research Institute, University of Calgary, Admin 062, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada

37
Modality dependence and intermodal transfer in the Corsi Spatial Sequence Task: Screen vs. Floor. Exp Brain Res 2016; 234:1849-1862. [DOI: 10.1007/s00221-016-4582-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2015] [Accepted: 01/30/2016] [Indexed: 01/29/2023]
38
Krüger M, Jahn G. Children's Spatial Representations: 3- and 4-Year-Olds are Affected by Irrelevant Peripheral References. Front Psychol 2015; 6:1677. [PMID: 26617537 PMCID: PMC4639604 DOI: 10.3389/fpsyg.2015.01677] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Accepted: 10/19/2015] [Indexed: 11/13/2022] Open
Abstract
Children as young as 3 years can remember an object's location within an arrangement and can retrieve it from a novel viewpoint (Nardini et al., 2006). However, this ability is impaired if the arrangement is rotated to compensate for the novel viewpoint, or if the arrangement is rotated and children stand still. There are two dominant explanations for this phenomenon. First, self-motion induces an automatic spatial updating process, which is beneficial if children move around the arrangement, but misleading if the children's movement is matched by the arrangement, and not activated if children stand still and only the arrangement is moved (see spatial updating; Simons and Wang, 1998). The second explanation concerns reference frames: spatial representations might depend on peripheral spatial relations to the surrounding room instead of on proximal relations within the arrangement, even if these proximal relations are sufficient or more informative. To evaluate these possibilities, we rotated children (N = 120) aged between 3 and 6 years together with an occluded arrangement. When the arrangement was in misalignment with the surrounding room, 3- and 4-year-olds' spatial memory was impaired and 5-year-olds' was slightly impaired, suggesting that they relied on peripheral references of the surrounding room for retrieval. In contrast, 6-year-olds' spatial representations seemed robust against misalignment, indicating a successful integration of spatial representations.
Affiliation(s)
- Markus Krüger
- Entwicklungspsychologie und Pädagogische Psychologie, Institut für Psychologie, Ernst-Moritz-Arndt-Universität Greifswald, Greifswald, Germany
- Georg Jahn
- Institute for Multimedia and Interactive Systems, University of Lübeck, Lübeck, Germany

39
Sulpizio V, Committeri G, Lambrey S, Berthoz A, Galati G. Role of the human retrosplenial cortex/parieto-occipital sulcus in perspective priming. Neuroimage 2015; 125:108-119. [PMID: 26484830] [DOI: 10.1016/j.neuroimage.2015.10.040]
Abstract
The ability to imagine the world from a different viewpoint is a fundamental competence for spatial reorientation and for imagining what another individual sees in the environment. Here, we investigated the neural bases of such an ability using functional magnetic resonance imaging. Healthy participants detected target displacements across consecutive views of a familiar virtual room, either from the perspective of an avatar (primed condition) or in the absence of such a prime (unprimed condition). In the primed condition, the perspective at test always corresponded to the avatar's perspective, while in the unprimed condition it was randomly chosen as 0°, 45°, or 135° of viewpoint rotation. We observed a behavioral advantage in performing a perspective transformation during the primed condition as compared to an equivalent amount of unprimed perspective change. Although many cortical regions (dorsal parietal, parieto-temporo-occipital junction, precuneus, and retrosplenial cortex/parieto-occipital sulcus or RSC/POS) were involved in encoding and retrieving target location from different perspectives and were modulated by the amount of viewpoint rotation, the RSC/POS was the only area showing decreased activity in the primed as compared to the unprimed condition, suggesting that this region anticipates the upcoming perspective change. The RSC/POS thus appears to play a special role in the allocentric coding of heading direction.
Affiliation(s)
- Valentina Sulpizio
- Department of Psychology, Sapienza Università di Roma, Italy; Laboratory of Neuropsychology, Fondazione Santa Lucia IRCCS, Roma, Italy
- Giorgia Committeri
- Department of Neuroscience, Imaging and Clinical Sciences, and ITAB, Institute for Advanced Biomedical Technologies, University G. d'Annunzio, Chieti, Italy
- Simon Lambrey
- LPPA, Collège de France-CNRS, Paris, France; Service de Psychiatrie Adulte, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
- Gaspare Galati
- Department of Psychology, Sapienza Università di Roma, Italy; Laboratory of Neuropsychology, Fondazione Santa Lucia IRCCS, Roma, Italy

40
Abstract
Performance on visual short-term memory for features has been known to depend on stimulus complexity, spatial layout, and feature context. However, with few exceptions, memory capacity has been measured for abruptly appearing, single-instance displays. In everyday life, objects often have a spatiotemporal history as they or the observer move around. In three experiments, we investigated the effect of spatiotemporal history on explicit memory for color. Observers saw a memory display emerge from behind a wall, after which it disappeared again. The test display then emerged from either the same side as the memory display or the opposite side. In the first two experiments, memory improved for intermediate set sizes when the test display emerged in the same way as the memory display. A third experiment then showed that the benefit was tied to the original motion trajectory and not to the display object per se. The results indicate that memory for color is embedded in a richer episodic context that includes the spatiotemporal history of the display.
41
Jiang YV, Won BY. Spatial scale, rather than nature of task or locomotion, modulates the spatial reference frame of attention. J Exp Psychol Hum Percept Perform 2015; 41:866-78. [PMID: 25867510] [DOI: 10.1037/xhp0000056]
Abstract
Visuospatial attention is strongly biased to locations that had frequently contained a search target before. However, the function of this bias depends on the reference frame in which attended locations are coded. Previous research has shown a striking difference between tasks administered on a computer monitor and those administered in a large environment, with the former inducing viewer-centered learning and the latter environment-centered learning. Why does environment-centered learning fail on a computer? Here, we tested three possibilities: differences in spatial scale, the nature of the task, and locomotion may each influence the reference frame of attention. Participants searched for a target on a monitor placed flat on a stand. On each trial, they stood at a different location around the monitor. The target was frequently located in a fixed area of the monitor, but changes in participants' perspective rendered this area random relative to the participants. Under incidental learning conditions, participants failed to acquire environment-centered learning even when (a) the task and display resembled those of a large-scale task and (b) the search task required locomotion. The difficulty in inducing environment-centered learning on a computer underscores the egocentric nature of visual attention. It supports the idea that spatial scale modulates the reference frame of attention.
42
Banta Lavenex P, Boujon V, Ndarugendamwo A, Lavenex P. Human short-term spatial memory: Precision predicts capacity. Cogn Psychol 2015; 77:1-19. [DOI: 10.1016/j.cogpsych.2015.02.001]
43
Intraub H, Morelli F, Gagnier KM. Visual, haptic and bimodal scene perception: evidence for a unitary representation. Cognition 2015; 138:132-47. [PMID: 25725370] [DOI: 10.1016/j.cognition.2015.01.010]
Abstract
Participants studied seven meaningful scene-regions bordered by removable boundaries (30 s each). In Experiment 1 (N = 80), participants used visual or haptic exploration and then, minutes later, reconstructed boundary position using the same or the alternate modality. Participants in all groups shifted boundary placement outward (boundary extension), but visual study yielded the greater error. Critically, this modality-specific difference in boundary extension transferred without cost in the cross-modal conditions, suggesting a functionally unitary scene representation. In Experiment 2 (N = 20), bimodal study led to boundary extension that did not differ from haptic exploration alone, suggesting that bimodal spatial memory was constrained by the more "conservative" haptic modality. In Experiment 3 (N = 20), as in picture studies, boundary memory was tested 30 s after viewing each scene-region, and as with pictures, boundary extension still occurred. Results suggest that scene representation is organized around an amodal spatial core that organizes bottom-up information from multiple modalities in combination with top-down expectations about the surrounding world.
44
Gomez A, Rousset S, Bonniot C, Charnallet A, Moreaud O. Deficits in egocentric-updating and spatial context memory in a case of developmental amnesia. Neurocase 2015; 21:226-43. [PMID: 24579921] [DOI: 10.1080/13554794.2014.890730]
Abstract
Patients with developmental amnesia usually suffer from both episodic and spatial memory deficits. DM, a developmental amnesic, was impaired in her ability to process self-motion (i.e., idiothetic) information, while her ability to process external stable landmarks (i.e., allothetic) was preserved when no self-motion processing was required. On a naturalistic and incidental episodic task, DM was, as predicted, severely impaired on both free and cued recall tasks. Interestingly, when cued, she was more impaired at recalling spatial context than factual or temporal information. Theoretical implications of this co-occurrence of deficits and these dissociations are discussed, and testable cerebral hypotheses are proposed.
Affiliation(s)
- A Gomez
- LPNC, CNRS UMR 5105, Université Grenoble Alpes, Grenoble, France

45
Pani JR, Chariker JH, Naaz F, Mattingly W, Roberts J, Sephton SE. Learning with interactive computer graphics in the undergraduate neuroscience classroom. Adv Health Sci Educ Theory Pract 2014; 19:507-28. [PMID: 24449123] [PMCID: PMC4107209] [DOI: 10.1007/s10459-013-9483-3]
Abstract
Instruction of neuroanatomy depends on graphical representation and extended self-study. As a consequence, computer-based learning environments that incorporate interactive graphics should facilitate instruction in this area. The present study evaluated such a system in the undergraduate neuroscience classroom. The system used the method of adaptive exploration, in which exploration in a high-fidelity graphical environment is integrated with immediate testing and feedback in repeated cycles of learning. The results of this study were that students considered the graphical learning environment to be superior to typical classroom materials used for learning neuroanatomy. Students managed the frequency and duration of study, test, and feedback in an efficient and adaptive manner. For example, the number of tests taken before reaching a minimum test performance of 90% correct closely approximated the values seen in more regimented experimental studies. There was a wide range of student opinion regarding the choice between a simpler and a more graphically compelling program for learning sectional anatomy. Course outcomes were predicted by individual differences in the use of the software that reflected general work habits of the students, such as the amount of time committed to testing. The results of this introduction into the classroom are highly encouraging for the development of computer-based instruction in biomedical disciplines.
Affiliation(s)
- John R Pani
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA

46
Perspective image comprehension depends on both visual and proprioceptive information. Atten Percept Psychophys 2014; 76:2477-84. [PMID: 25027831] [DOI: 10.3758/s13414-014-0731-2]
Abstract
Proprioceptive information can supplement visual information in the comprehension of ambiguous perspective images. The importance of proprioceptive information in unambiguous perspective image comprehension is untested, however. We explored the role of proprioception in perspective image comprehension using three experiments in which participants took or imagined taking an upward- or downward-oriented posture and then made judgments about images viewed from below or viewed from above. Participants were faster and more accurate in their judgments when their actual or simulated posture was consistent with the posture implied by the perspective of the image they were judging. These results support a role for proprioception in the comprehension of unambiguous perspective images as well as ambiguous perspective images.
47
Frick A, Möhring W, Newcombe NS. Picturing perspectives: development of perspective-taking abilities in 4- to 8-year-olds. Front Psychol 2014; 5:386. [PMID: 24817860] [PMCID: PMC4012199] [DOI: 10.3389/fpsyg.2014.00386]
Abstract
Although the development of perspective taking has been well researched, there is no uniform methodology for assessing this ability across a wide age span when frames of reference conflict. To address this gap, we created scenes of toy photographers taking pictures of layouts of objects from different angles, and presented them to 4- to 8-year-olds (N = 80). Children were asked to choose which one of four pictures could have been taken from a specific viewpoint. Results showed that this new technique confirmed the classic pattern of developmental progress on this kind of spatial skill: (1) 4-year-olds responded near chance level, regardless of layout complexity, (2) there was a growing ability to inhibit egocentric choices around age 6 with layouts of low complexity (one object), (3) performance increased and egocentric responses decreased dramatically around age 7, (4) even at age 8, children still showed considerable individual variability. This perspective taking task can thus be used to address important questions about the supports for early spatial development and the structure of early intellect.
Affiliation(s)
- Andrea Frick
- Department of Psychology, Temple University, Philadelphia, PA, USA; Department of Psychology, University of Bern, Bern, Switzerland
- Wenke Möhring
- Department of Psychology, Temple University, Philadelphia, PA, USA
- Nora S Newcombe
- Department of Psychology, Temple University, Philadelphia, PA, USA

48
van den Brink D, Janzen G. Visual spatial cue use for guiding orientation in two-to-three-year-old children. Front Psychol 2013; 4:904. [PMID: 24368903] [PMCID: PMC3857639] [DOI: 10.3389/fpsyg.2013.00904]
Abstract
In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2- to 3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month-olds and 35-month-olds performed an on-screen virtual reality (VR) orientation task, searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland Screener, were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of only 5 months. Furthermore, this study indicates that, rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences.
Affiliation(s)
- Danielle van den Brink
- Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Gabriele Janzen
- Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands

49
Wilkins LK, Girard TA, King J, King MJ, Herdman KA, Christensen BK. Spatial-memory deficit in schizophrenia spectrum disorders under viewpoint-independent demands in the virtual courtyard task. J Clin Exp Neuropsychol 2013; 35:1082-93. [DOI: 10.1080/13803395.2013.857389]
50
Abstract
Identifying the neural mechanisms underlying spatial orientation and navigation has long posed a challenge for researchers. Multiple approaches incorporating a variety of techniques and animal models have been used to address this issue. More recently, virtual navigation has become a popular tool for understanding navigational processes. Although combining this technique with functional imaging can provide important information on many aspects of spatial navigation, it is important to recognize some of the limitations these techniques have for gaining a complete understanding of the neural mechanisms of navigation. Foremost among these is that, when participants perform a virtual navigation task in a scanner, they are lying motionless in a supine position while viewing a video monitor. Here, we provide evidence that spatial orientation and navigation rely to a large extent on locomotion and its accompanying activation of motor, vestibular, and proprioceptive systems. Researchers should therefore consider the impact of the absence of these motion-based systems when interpreting virtual navigation/functional imaging experiments, to achieve a more accurate understanding of the mechanisms underlying navigation.