1. Does path integration contribute to human navigation in large-scale space? Psychon Bull Rev 2022. DOI: 10.3758/s13423-022-02216-8
2. Yang Y, Merrill EC. Wayfinding in Children: A Descriptive Literature Review of Research Methods. J Genet Psychol 2022; 183:580-608. DOI: 10.1080/00221325.2022.2103789
Affiliations:
- Yingying Yang: Department of Psychology, Montclair State University, Montclair, New Jersey, USA
- Edward C. Merrill: Department of Psychology, University of Alabama, Tuscaloosa, Alabama, USA
3. Nardi D, Singer KJ, Price KM, Carpenter SE, Bryant JA, Hatheway MA, Johnson JN, Pairitz AK, Young KL, Newcombe NS. Navigating without vision: spontaneous use of terrain slant in outdoor place learning. Spat Cogn Comput 2021. DOI: 10.1080/13875868.2021.1916504
Affiliations:
- Daniele Nardi, Katelyn J. Singer, Krista M. Price, Joseph A. Bryant, Jada N. Johnson, Annika K. Pairitz, Keldyn L. Young: Department of Psychological Science, Ball State University, Muncie, IN, USA
- Nora S. Newcombe: Department of Psychology, Temple University, Philadelphia, PA, USA
4. Gourgou E, Adiga K, Goettemoeller A, Chen C, Hsu AL. Caenorhabditis elegans learning in a structured maze is a multisensory behavior. iScience 2021; 24:102284. PMID: 33889812. PMCID: PMC8050377. DOI: 10.1016/j.isci.2021.102284
Abstract: We show that C. elegans nematodes learn to associate food with a combination of proprioceptive cues and information about the structure of their surroundings (the maze), perceived through mechanosensation. Using the custom-made Worm-Maze platform, we demonstrate that C. elegans young adults can locate food in T-shaped mazes and, following that experience, learn to reach a specific maze arm. C. elegans learning inside the maze is possible after a single training session, resembles working memory, and prevails over conflicting environmental cues. We provide evidence that the observed learning is a food-triggered multisensory behavior, which requires mechanosensory and proprioceptive input and utilizes cues about the structural features of the nematodes' environment and their body actions. The CREB-like transcription factor and dopamine signaling are also involved in maze performance. Lastly, we show that the aging-driven decline of C. elegans learning ability in the maze can be reversed by starvation.
Affiliations:
- Eleni Gourgou: Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA; Institute of Gerontology, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Kavya Adiga: Department of Internal Medicine, Division of Geriatrics & Palliative Medicine, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Anne Goettemoeller: Neuroscience Program, College of Literature, Science and the Arts, University of Michigan, Ann Arbor, MI 48109, USA
- Chieh Chen: Institute of Biochemistry and Molecular Biology, National Yang Ming University, Taipei 112, Taiwan
- Ao-Lin Hsu: Department of Internal Medicine, Division of Geriatrics & Palliative Medicine, University of Michigan Medical School, Ann Arbor, MI 48109, USA; Institute of Biochemistry and Molecular Biology, National Yang Ming University, Taipei 112, Taiwan; Research Center for Healthy Aging and Institute of New Drug Development, China Medical University, Taichung 404, Taiwan
5. Nardi D, Carpenter SE, Johnson SR, Gilliland GA, Melo VL, Pugliese R, Coppola VJ, Kelly DM. Spatial reorientation with a geometric array of auditory cues. Q J Exp Psychol (Hove) 2020; 75:362-373. PMID: 32111145. DOI: 10.1177/1747021820913295
Abstract: A visuocentric bias has dominated the literature on spatial navigation and reorientation. Studies on visually accessed environments indicate that, during reorientation, human and non-human animals encode the geometric shape of the environment, even if this information is unnecessary and insufficient for the task. In an attempt to extend our limited knowledge of the similarities and differences between visual and non-visual navigation, here we examined whether the same phenomenon would be observed during auditory-guided reorientation. Provided with a rectangular array of four distinct auditory landmarks, blindfolded, sighted participants had to learn the location of a target object situated on a panel of an octagonal arena. Subsequent test trials were administered to understand how the task was acquired. Crucially, in a condition in which the auditory cues were indistinguishable (same sound sample), participants could still identify the correct target location, suggesting that the rectangular array of auditory landmarks was encoded as a geometric configuration. This is the first evidence of incidental encoding of geometric information with auditory cues and, consistent with the theory of functional equivalence, it supports the generalisation of mechanisms of spatial learning across encoding modalities.
Affiliations:
- Daniele Nardi, Somer R Johnson, Greg A Gilliland, Viveka L Melo: Department of Psychological Science, Ball State University, Muncie, IN, USA
- Roberto Pugliese: Academy of Fine Arts, University of the Arts Helsinki, Helsinki, Finland
- Vincent J Coppola: Department of Psychology, Eastern Illinois University, Charleston, IL, USA
- Debbie M Kelly: Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada
6. Nardi D, Twyman AD, Holden MP, Clark JM. Tuning in: can humans use auditory cues for spatial reorientation? Spat Cogn Comput 2019. DOI: 10.1080/13875868.2019.1702665
Affiliations:
- Daniele Nardi: Department of Psychological Science, Ball State University, Muncie, IN, USA; Department of Psychology, Eastern Illinois University, Charleston, IL, USA
- Alexandra D. Twyman: Department of Psychology, University of Calgary, Calgary, Canada; Department of Psychology, Mount Royal University, Calgary, Canada; Department of Psychology, Athabasca University, Athabasca, Canada
- Mark P. Holden: Department of Psychology, University of Calgary, Calgary, Canada
- Josie M. Clark: Department of Educational Leadership, Southern Illinois University Edwardsville, Edwardsville, IL, USA
7. Du Y, Mou W, Zhang L. Unidirectional influence of vision on locomotion in multimodal spatial representations acquired from navigation. Psychol Res 2018; 84:1284-1303. PMID: 30542972. DOI: 10.1007/s00426-018-1131-3
Abstract: Visual and idiothetic information is coupled in forming multimodal spatial representations during navigation (Tcheang et al. in Proc Natl Acad Sci USA 108(3):1152-1157, 2011). We investigated whether idiothetic representations activate visual representations but not vice versa (unidirectional coupling) or whether these two representations activate each other (bidirectional coupling). In a virtual reality environment, participants actively rotated in place to face certain orientations to become adapted to a new vision-locomotion relationship (gain). In particular, the visual turning angle was equal to 0.7 times the physical turning angle. After adaptation, participants walked a path with a turn in darkness (idiothetic input only) or watched a video of the traversed path (visual input only). Then, the participants pointed to the origin of the path. The participants who were presented with only idiothetic input showed that their pointing responses were influenced by the new gain (adaptation effect). By contrast, the participants who were presented with only visual input did not show any adaptation effect. These results suggest that idiothetic input contributed to spatial representations indirectly via the coupling, which resulted in the adaptation effect, whereas vision alone contributed to spatial representations directly, which did not result in the adaptation effect. Hence, the coupling between vision and locomotion is unidirectional.
Affiliations:
- Yu Du, Weimin Mou, Lei Zhang: Department of Psychology, University of Alberta, P-217 Biological Science Building, Edmonton, AB T6G 2E9, Canada
8.
Abstract: Spatial memories are often hierarchically organized, with different regions of space represented in unique clusters within the hierarchy. Each cluster is thought to be organized around its own microreference frame selected during learning, whereas relationships between clusters are organized by a macroreference frame. Two experiments were conducted in order to better understand important characteristics of macroreference frames. Participants learned overlapping spatial layouts of objects within a room-sized environment before performing a perspective-taking task from memory. Of critical importance were between-layout judgments thought to reflect the macroreference frame. The results indicate that (1) macroreference frames characterize overlapping spatial layouts, (2) macroreference frames are used even when microreference frames are aligned with one another, and (3) macroreference frame selection depends on an interaction between the global macroaxis (defined by characteristics of the layout of all learned objects), the relational macroaxis (defined by characteristics of the two layouts being related on a perspective-taking trial), and the learning view. These results refine the current understanding of macroreference frames and document their broad role in spatial memory.
9. Nardi D, Anzures BJ, Clark JM, Griffith BV. Spatial reorientation with non-visual cues: Failure to spontaneously use auditory information. Q J Exp Psychol (Hove) 2018; 72:1141-1154. DOI: 10.1177/1747021818780715
Abstract: Among the environmental stimuli that can guide navigation in space, most attention has been dedicated to visual information. The process of determining where you are and which direction you are facing (called reorientation) has been extensively examined by providing the navigator with two sources of information—typically the shape of the environment and its features—with an interest in the extent to which they are used. Similar questions with non-visual cues are lacking. Here, blindfolded sighted participants had to learn the location of a target in a real-world, circular search space. In Experiment 1, two ecologically relevant non-visual cues were provided: the slope of the floor and an array of two identical auditory landmarks. Slope successfully guided behaviour, suggesting that proprioceptive/kinesthetic access is sufficient to navigate on a slanted environment. However, despite the fact that participants could localise the auditory sources, this information was not encoded. In Experiment 2, the auditory cue was made more useful for the task because it had greater predictive value and there were no competing spatial cues. Nonetheless, again, the auditory landmark was not encoded. Finally, in Experiment 3, after being prompted, participants were able to reorient by using the auditory landmark. Overall, participants failed to spontaneously rely on the auditory cue, regardless of how informative it was.
Affiliations:
- Daniele Nardi, Brian J Anzures, Josie M Clark: Department of Psychology, Eastern Illinois University, Charleston, IL, USA
10. Meilinger T, Strickrodt M, Bülthoff HH. Qualitative differences in memory for vista and environmental spaces are caused by opaque borders, not movement or successive presentation. Cognition 2016; 155:77-95. PMID: 27367592. DOI: 10.1016/j.cognition.2016.06.003
Abstract: Two classes of space define our everyday experience within our surrounding environment: vista spaces, such as rooms or streets, which can be perceived from one vantage point, and environmental spaces, such as buildings and towns, which are grasped from multiple views acquired during locomotion. However, theories of spatial representations often treat both spaces as equal. The present experiments show that this assumption cannot be upheld. Participants learned exactly the same layout of objects either within a single room or spread across multiple corridors. By utilizing a pointing and a placement task we tested the acquired configurational memory. In Experiment 1, retrieving memory of the object layout acquired in environmental space was affected by the distance of the traveled path and the order in which the objects were learned. In contrast, memory retrieval of objects learned in vista space was not bound to distance and relied on different ordering schemes (e.g., along the layout structure). Furthermore, spatial memory of the two spaces differed with respect to the employed reference frame orientation: environmental space memory was organized along the learning experience rather than the layout-intrinsic structure. In Experiment 2, participants memorized the object layout presented within the vista space room of Experiment 1 while the learning procedure emulated environmental space learning (movement, successive object presentation). Neither factor rendered results similar to those found in environmental space learning. This shows that memory differences between vista and environmental space originated mainly from the spatial compartmentalization, which was unique to environmental space learning. Our results suggest that transferring conclusions from findings obtained in vista space to environmental spaces, and vice versa, should be done with caution.
Affiliations:
- Tobias Meilinger: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
11. Intraub H, Morelli F, Gagnier KM. Visual, haptic and bimodal scene perception: evidence for a unitary representation. Cognition 2015; 138:132-47. PMID: 25725370. DOI: 10.1016/j.cognition.2015.01.010
Abstract: Participants studied seven meaningful scene-regions bordered by removable boundaries (30 s each). In Experiment 1 (N = 80), participants used visual or haptic exploration and then, minutes later, reconstructed boundary position using the same or the alternate modality. Participants in all groups shifted boundary placement outward (boundary extension), but visual study yielded the greater error. Critically, this modality-specific difference in boundary extension transferred without cost in the cross-modal conditions, suggesting a functionally unitary scene representation. In Experiment 2 (N = 20), bimodal study led to boundary extension that did not differ from haptic exploration alone, suggesting that bimodal spatial memory was constrained by the more "conservative" haptic modality. In Experiment 3 (N = 20), as in picture studies, boundary memory was tested 30 s after viewing each scene-region, and as with pictures, boundary extension still occurred. Results suggest that scene representation is organized around an amodal spatial core that organizes bottom-up information from multiple modalities in combination with top-down expectations about the surrounding world.
12. Yamamoto N, Meléndez JA, Menzies DT. Homing by path integration when a locomotion trajectory crosses itself. Perception 2015; 43:1049-60. PMID: 25509682. DOI: 10.1068/p7624
Abstract: Path integration is a process with which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, there is a possibility that this previous finding was confounded by effects of the length of a traveled path and the amount of turns experienced along the path, two factors that are known to affect path integration performance. The present study was designed to investigate whether the crossover of a locomotion trajectory truly increases errors of path integration. In an experiment, blindfolded human navigators were guided along four paths that varied in their lengths and turns, and attempted to walk directly back to the beginning of the paths. Only one of the four paths contained a crossover. Results showed that errors yielded from the path containing the crossover were not always larger than those observed in other paths, and the errors were attributed solely to the effects of longer path lengths or greater degrees of turns. These results demonstrated that path crossover does not always cause significant disruption in path integration processes. Implications of the present findings for models of path integration are discussed.
13. Viaud-Delmon I, Warusfel O. From ear to body: the auditory-motor loop in spatial cognition. Front Neurosci 2014; 8:283. PMID: 25249933. PMCID: PMC4155796. DOI: 10.3389/fnins.2014.00283
Abstract: Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was triggered only when they walked over a precise location in the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorize the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
Affiliations:
- Isabelle Viaud-Delmon, Olivier Warusfel: CNRS, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Institut de Recherche et Coordination Acoustique/Musique, UMR 9912, Paris, France; Sorbonne Universités, Université Pierre et Marie Curie, UMR 9912, Paris, France
14.
Abstract: Terrain slope can be used to encode the location of a goal. However, this directional information may be encoded using a conceptual north (i.e., invariantly with respect to the environment), or in an observer-relative fashion (i.e., varying depending on the direction one faces when learning the goal). This study examines which representation is used, whether the sensory modality in which slope is encoded (visual, kinaesthetic, or both) influences representations, and whether use of slope varies for men and women. In a square room, with a sloped floor explicitly pointed out as the only useful cue, participants encoded the corner in which a goal was hidden. Without direct sensory access to slope cues, participants used a dial to point to the goal. For each trial, the goal was hidden uphill or downhill, and the participants were informed whether they faced uphill or downhill when pointing. In support of observer-relative representations, participants pointed more accurately and quickly when facing concordantly with the hiding position. There was no effect of sensory modality, providing support for functional equivalence. Sex did not interact with the findings on modality or reference frame, but spatial measures correlated with success on the slope task differently for each sex.
Affiliations:
- Steven M Weisberg: Department of Psychology, Spatial Intelligence and Learning Center, Temple University, Philadelphia, PA, USA
15. Cross-sensory reference frame transfer in spatial memory: the case of proprioceptive learning. Mem Cognit 2013; 42:496-507. PMID: 24101554. DOI: 10.3758/s13421-013-0373-y
16. Mou W, McNamara TP, Zhang L. Global frames of reference organize configural knowledge of paths. Cognition 2013; 129:180-93. DOI: 10.1016/j.cognition.2013.06.015
17. Yamamoto N, Philbeck JW. Intrinsic frames of reference in haptic spatial learning. Cognition 2013; 129:447-56. PMID: 24007919. DOI: 10.1016/j.cognition.2013.08.011
Abstract: It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, it has only been collected from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.
Affiliations:
- Naohide Yamamoto: Department of Psychology, Cleveland State University, 2121 Euclid Avenue, Cleveland, OH 44115, USA; Department of Psychology, George Washington University, 2125 G Street NW, Washington, DC 20052, USA
18. Meilinger T, Bülthoff HH. Verbal shadowing and visual interference in spatial memory. PLoS One 2013; 8:e74177. PMID: 24019953. PMCID: PMC3760797. DOI: 10.1371/journal.pone.0074177
Abstract: Spatial memory is thought to be organized along experienced views and allocentric reference axes. Memory access from different perspectives typically yields V-patterns for egocentric encoding (monotonic decline in performance along with the angular deviation from the experienced perspectives) and W-patterns for axes encoding (better performance along parallel and orthogonal perspectives than along oblique perspectives). We showed that learning an object array with a verbal secondary task reduced W-patterns compared with learning without verbal shadowing. This suggests that axes encoding happened in a verbal format; for example, by rows and columns. Alternatively, general cognitive load from the secondary task prevented memorizing relative to a spatial axis. Independent of encoding, pointing with a surrounding room visible yielded stronger W-patterns compared with pointing with no room visible. This suggests that the visible room geometry interfered with the memorized room geometry. With verbal shadowing and without visual interference only V-patterns remained; otherwise, V- and W-patterns were combined. Verbal encoding and visual interference explain when W-patterns can be expected alongside V-patterns and thus can help in resolving different performance patterns in a wide range of experiments.
Affiliations:
- Tobias Meilinger: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Heinrich H. Bülthoff: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
19. Kelly JW, Sjolund LA, Sturz BR. Geometric cues, reference frames, and the equivalence of experienced-aligned and novel-aligned views in human spatial memory. Cognition 2013; 126:459-74. DOI: 10.1016/j.cognition.2012.11.007
|
20
|
Perception of 3-D location based on vision, touch, and extended touch. Exp Brain Res 2012; 224:141-53. [PMID: 23070234 DOI: 10.1007/s00221-012-3295-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2012] [Accepted: 09/29/2012] [Indexed: 10/27/2022]
Abstract
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
21
Integrating spatial information across experiences. Psychol Res 2012; 77:540-54. [PMID: 22941360] [DOI: 10.1007/s00426-012-0452-x]
Abstract
The current study examined the potential influence of existing spatial knowledge on the coding of new spatial information. In the Main experiment, participants learned the locations of five objects before completing a perspective-taking task. Subsequently, they studied the same five objects and five additional objects from a new location before completing a second perspective-taking task. Task performance following the first learning phase was best from perspectives aligned with the learning view. However, following the second learning phase, performance was best from the perspective aligned with the second view. A supplementary manipulation increased the salience of the initial view through environmental structure as well as the number of objects present. Results indicated that the initial learning view was preferred throughout the experiment. The role of assimilation and accommodation mechanisms in spatial memory, and the conditions under which they occur, are discussed.
22
Sapkota RP, Pardhan S, van der Linde I. Manual tapping enhances visual short-term memory performance where visual and motor coordinates correspond. Br J Psychol 2012; 104:249-64. [DOI: 10.1111/j.2044-8295.2012.02115.x]
23
Spatial memory in the real world: long-term representations of everyday environments. Mem Cognit 2011; 39:1401-8. [PMID: 21584854] [DOI: 10.3758/s13421-011-0108-x]
24
Giudice NA, Betty MR, Loomis JM. Functional equivalence of spatial images from touch and vision: evidence from spatial updating in blind and sighted individuals. J Exp Psychol Learn Mem Cogn 2011; 37:621-34. [PMID: 21299331] [PMCID: PMC5507195] [DOI: 10.1037/a0022331]
Abstract
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.
Affiliation(s)
- Nicholas A Giudice
- Department of Spatial Information Science and Engineering, University of Maine, 348 Boardman Hall, Orono, ME 04469, USA.
25
Kelly JW, Avraamides MN. Cross-sensory transfer of reference frames in spatial memory. Cognition 2011; 118:444-50. [PMID: 21227408] [DOI: 10.1016/j.cognition.2010.12.006]
Abstract
Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective taking when recalling touched objects was best from perspectives aligned with visually-defined axes, providing evidence for cross-sensory reference frame transfer. These findings advance spatial memory theory by demonstrating that multimodal spatial information can be integrated within a common spatial representation.
Affiliation(s)
- Jonathan W Kelly
- Department of Psychology, W112 Lagomarcino Hall, Iowa State University, Ames, IA 50011-3180, United States.
26
Schifferstein HNJ, Smeets MAM, Postma A. Comparing location memory for 4 sensory modalities. Chem Senses 2009; 35:135-45. [PMID: 20008894] [DOI: 10.1093/chemse/bjp090]
Abstract
Stimuli from all sensory modalities can be linked to places and thus might serve as navigation cues. We compared performance for 4 sensory modalities in a location memory task: Black-and-white drawings of free forms (vision), 1-s manipulated environmental sounds (audition), surface textures of natural and artificial materials (touch), and unfamiliar smells (olfaction) were presented in 10 cubes. In the learning stage, participants walked to a cube, opened it, and perceived its content. Subsequently, in a relocation task, they placed each stimulus back in its original location. Although the proportion of correct locations selected just failed to yield significant differences between the modalities, the proportion of stimuli placed in the vicinity of the correct location or on the correct side of the room was significantly higher for vision than for touch, olfaction, and audition. These outcomes suggest that approximate location memory is superior for vision compared with other sensory modalities.
Affiliation(s)
- Hendrik N J Schifferstein
- Department of Industrial Design, Delft University of Technology, Landbergstraat 15, 2628 CE Delft, The Netherlands.
27
Sherwood DE. Spatial error detection in rapid unimanual and bimanual aiming movements. Percept Mot Skills 2009; 108:3-14. [PMID: 19425441] [DOI: 10.2466/pms.108.1.3-14]
Abstract
According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback result in an error signal, which serves as the basis for a correction. The current study assessed whether error detection is less accurate when feedback from both hands must be analyzed than when it comes from one hand, and whether error detection is more accurate in longer movements than in shorter ones. Thirty-six college-age participants (26 women and 10 men) performed rapid aiming movements of varying distances with one hand or with both hands simultaneously. Participants verbally estimated the distance moved on all trials before knowledge of results was given. Error detection was measured by the correlation and the mean absolute difference between the actual and estimated distances. Error detection was not more accurate for the longer movements, and participants underestimated errors in all conditions. Strong positive correlations were shown for both unimanual and bimanual aiming tasks, suggesting that two streams of sensory information can be processed concurrently.
Affiliation(s)
- David E Sherwood
- Department of Integrative Physiology, 354 UCB, University of Colorado, Boulder, CO 80309-0354, USA.
28
Yamamoto N, Shelton AL. Sequential versus simultaneous viewing of an environment: effects of focal attention to individual object locations on visual spatial learning. Vis Cogn 2009. [DOI: 10.1080/13506280701653644]
Affiliation(s)
- Naohide Yamamoto
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Amy L. Shelton
- Department of Psychological and Brain Sciences, and Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
29
Orientation dependence of spatial memory acquired from auditory experience. Psychon Bull Rev 2009; 16:301-5. [PMID: 19293098] [DOI: 10.3758/pbr.16.2.301]
30
Arthur JC, Philbeck JW, Sargent J, Dopkins S. Misperception of exocentric directions in auditory space. Acta Psychol (Amst) 2008; 129:72-82. [PMID: 18555205] [DOI: 10.1016/j.actpsy.2008.04.008]
Abstract
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer's location; e.g., Philbeck et al. [Philbeck, J. W., Sargent, J., Arthur, J. C., & Dopkins, S. (2008). Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception, 37, 511-534]). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20-160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to -19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
31
Yamamoto N, Shelton AL. Path information effects in visual and proprioceptive spatial learning. Acta Psychol (Amst) 2007; 125:346-60. [PMID: 17067542] [DOI: 10.1016/j.actpsy.2006.09.001]
Abstract
Objects in an environment are often encountered sequentially during spatial learning, forming a path along which object locations are experienced. The present study investigated the effect of spatial information conveyed through the path in visual and proprioceptive learning of a room-sized spatial layout, exploring whether different modalities differentially depend on the integrity of the path. Learning object locations along a coherent path was compared with learning them in a spatially random manner. Path integrity had little effect on visual learning, whereas learning with the coherent path produced better memory performance than random order learning for proprioceptive learning. These results suggest that path information has differential effects in visual and proprioceptive spatial learning, perhaps due to a difference in the way one establishes a reference frame for representing relative locations of objects.
Affiliation(s)
- Naohide Yamamoto
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA.
32
Waller D, Lippa Y, Richardson A. Isolating observer-based reference directions in human spatial memory: head, body, and the self-to-array axis. Cognition 2007; 106:157-83. [PMID: 17316594] [PMCID: PMC2167632] [DOI: 10.1016/j.cognition.2007.01.002]
Abstract
Several lines of research have suggested the importance of egocentric reference systems for determining how the spatial properties of one's environment are mentally organized. Yet relatively little is known about the bases for egocentric reference systems in human spatial memory. In three experiments, we examine the relative importance of observer-based reference directions in human memory by controlling the orientation of head and body during acquisition. Experiment 1 suggests that spatial memory is organized by a head-aligned reference direction; however, Experiment 2 shows that a body-aligned reference direction can be more influential than a head-aligned direction when the axis defined by the relative positions of the observer and the learned environment (the "self-to-array" axis) is properly controlled. A third experiment shows that the self-to-array axis is distinct from, and can dominate, retina-, head-, and body-based egocentric reference systems.
Affiliation(s)
- David Waller
- Department of Psychology, Miami University, Oxford, OH 45056, USA.
33
Shelton AL, Pippitt HA. Fixed versus dynamic orientations in environmental learning from ground-level and aerial perspectives. Psychol Res 2006; 71:333-46. [PMID: 16957953] [DOI: 10.1007/s00426-006-0088-9]
Abstract
Ground-level and aerial perspectives in virtual space provide simplified conditions for investigating differences between exploratory navigation and map reading in large-scale environmental learning. General similarities and differences in ground-level and aerial encoding have been identified, but little is known about the specific characteristics that differentiate them. One such characteristic is the need to process orientation; ground-level encoding (and navigation) typically requires dynamic orientations, whereas aerial encoding (and map reading) is typically conducted in a fixed orientation. The present study investigated how this factor affected spatial processing by comparing ground-level and aerial encoding to a hybrid condition: aerial-with-turns. Experiment 1 demonstrated that scene recognition was sensitive to both perspective (ground-level or aerial) and orientation (dynamic or fixed). Experiment 2 investigated brain activation during encoding, revealing regions that were preferentially activated as a function of perspective, as in previous studies (Shelton and Gabrieli in J Neurosci 22:2711-2717, 2002), but also identifying regions that were preferentially activated as a function of the presence or absence of turns. Together, these results differentiated the behavioral and brain consequences attributable to changes in orientation from those attributable to other characteristics of ground-level and aerial perspectives, providing leverage on how orientation information is processed in everyday spatial learning.
Affiliation(s)
- Amy L Shelton
- Department of Psychological & Brain Sciences, Johns Hopkins University, Ames Hall/3400 N. Charles Street, Baltimore, MD 21218, USA.
34
Waller D, Greenauer N. The role of body-based sensory information in the acquisition of enduring spatial representations. Psychol Res 2006; 71:322-32. [PMID: 16953434] [DOI: 10.1007/s00426-006-0087-x]
Abstract
Although many previous studies have shown that body-based sensory modalities such as vestibular, kinesthetic, and efferent information are useful for acquiring spatial information about one's immediate environment, relatively little work has examined how these modalities affect the acquisition of long-term spatial memory. Three groups of participants learned locations along a 146 m indoor route, and subsequently pointed to these locations, estimated distances between them, and constructed maps of the environment. One group had access to visual, proprioceptive, and inertial information, another had access to matched visual and matched inertial information, and another had access only to matched visual information. In contrast to previous findings examining transient, online spatial representations, our results showed very few differences among groups in the accuracy of the spatial memories acquired. The only difference was the improved pointing accuracy of participants who had access to proprioceptive information relative to that of participants in the other conditions. Results are discussed in terms of differential sensory contributions to transient and enduring spatial representations.
Affiliation(s)
- David Waller
- Department of Psychology, Miami University, Oxford, OH 45056, USA.