1. Wang EHJ, Lai FHY, Leung WM, Shiu TY, Wong H, Tao Y, Zhao X, Zhang TYT, Yee BK. Assessing rapid spatial working memory in community-living older adults in a virtual adaptation of the rodent water maze paradigm. Behav Brain Res 2025; 476:115266. [PMID: 39341462] [DOI: 10.1016/j.bbr.2024.115266]
Abstract
Aging often leads to a decline in various cognitive domains, potentially contributing to spatial navigation challenges among older individuals. While the Morris water maze is a common tool in rodent research for evaluating allocentric spatial memory function, its translation to studying aging in humans, particularly its association with hippocampal dysfunction, has predominantly focused on spatial reference memory assessments. This study expanded the adaptation of the Morris water maze for older adults to assess flexible, rapid, one-trial working memory. This adaptation involved a spatial search task guided by allocentric cues within a 3-D virtual reality (VR) environment. The sensitivity of this approach to aging was examined in 146 community-living adults from three Chinese cities, categorized into three age groups. Significant performance deficits were observed in participants over 60 years old compared to younger adults aged between 18 and 43. However, interpreting these findings was complicated by factors such as psychomotor slowness and potential variations in task engagement, except during the probe tests. Notably, the transition from the 60s to the 70s was not associated with a substantial deterioration of performance. A distinction emerged only when the pattern of spatial search over the entire maze was examined in the probe tests, in which the target location was never revealed. The VR task's sensitivity to overall cognitive function in older adults was reinforced by the correlation between Montreal Cognitive Assessment (MoCA) scores and probe test performance, demonstrating up to 17% shared variance beyond that predicted by chronological age alone.
In conclusion, while implementing a VR-based adaptation of rodent water maze paradigms in older adults was feasible, our experience highlighted specific interpretative challenges that must be addressed before such a test can effectively supplement traditional cognitive assessment tools in evaluating age-related cognitive decline.
Affiliation(s)
- Eileen H J Wang
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
- Frank H Y Lai
- Department of Social Work, Education & Community Wellbeing, Northumbria University, Newcastle, UK; The Mental Health Research Centre, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
- Wing Man Leung
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
- Tsz Yan Shiu
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
- Hiuyan Wong
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
- Yingxia Tao
- School of Rehabilitation Science, Hangzhou Medical College, Hangzhou, China.
- Xinlei Zhao
- School of Rehabilitation Science, Hangzhou Medical College, Hangzhou, China.
- Tina Y T Zhang
- Department of Rehabilitation Science, West China Medical School, Sichuan University, Chengdu, China.
- Benjamin K Yee
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; The Mental Health Research Centre, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
2. Shayman CS, McCracken MK, Finney HC, Fino PC, Stefanucci JK, Creem-Regehr SH. Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions. J Vis 2024; 24:7. [PMID: 39382867] [PMCID: PMC11469273] [DOI: 10.1167/jov.24.11.7]
Abstract
Auditory landmarks can contribute to spatial updating during navigation with vision. Although large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether individuals optimally combine auditory cues with visual cues to decrease the amount of perceptual uncertainty, or variability, has not been well documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict where auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones, and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
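The "optimal combination" benchmark that experiments like these test against is the standard maximum-likelihood (inverse-variance) rule: each cue is weighted by its reliability, and the combined estimate should be less variable than either cue alone. A minimal numerical sketch, with illustrative variances rather than values from the study:

```python
# Maximum-likelihood (inverse-variance) cue combination.
# Single-cue variances are illustrative, not values from the study.
sigma2_visual = 4.0    # variance of the visual landmark estimate (deg^2)
sigma2_auditory = 9.0  # variance of the auditory landmark estimate (deg^2)

# Optimal weights are proportional to each cue's reliability (1/variance).
w_visual = (1 / sigma2_visual) / (1 / sigma2_visual + 1 / sigma2_auditory)
w_auditory = 1 - w_visual

# Combined estimate for two discrepant single-cue estimates (deg).
est_visual, est_auditory = 10.0, 16.0
est_combined = w_visual * est_visual + w_auditory * est_auditory

# Predicted variance of the combined estimate: lower than either cue alone,
# which is the multisensory precision benefit the experiments look for.
sigma2_combined = 1 / (1 / sigma2_visual + 1 / sigma2_auditory)

print(w_visual, est_combined, sigma2_combined)
```

Relying on vision alone, as participants in both experiments did, corresponds to fixing `w_visual` at 1 and forgoing the variance reduction the last line predicts.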
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-5487-0007
- Maggie K McCracken
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0006-5280-0546
- Hunter C Finney
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0008-2324-5007
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-8621-3706
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0003-4238-2951
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0001-7740-1118
3. Khosla A, Moscovitch M, Ryan JD. Spatial updating of gaze position in younger and older adults - A path integration-like process in eye movements. Cognition 2024; 250:105835. [PMID: 38875941] [DOI: 10.1016/j.cognition.2024.105835]
Abstract
Path integration (PI) is a navigation process that allows an organism to update its current location in reference to a starting point. PI can involve updating self-position continuously with respect to the starting point (continuous updating) or creating a map representation of the route which is then used to compute the homing vector (configural updating). One of the brain areas involved in PI, the entorhinal cortex, is modulated similarly by whole-body and eye movements, suggesting that if PI updates self-position, an analogous process may be used to update gaze position, and that this process may undergo age-related changes. Here, we created an eye-tracking version of a PI task in which younger and older participants followed routes with their eyes as guided by visual onsets; at the end of each route, participants were cued to return to the starting point or another enroute location. When only memory for the starting location was required for successful task performance, younger and older adults were generally not influenced by the number of locations, indicative of continuous updating. However, when participants could be cued to any enroute location, thereby requiring memory for the entire route, processing times increased, accuracy decreased, and overt revisits to enroute locations increased with the number of locations in a route, indicative of configural updating. Older participants showed evidence for similar updating strategies as younger participants, but they were less accurate and made more overt revisits to mid-route locations. These findings suggest that spatial updating mechanisms are generalizable across effector systems.
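In its simplest vector form, the homing computation at the heart of PI reduces to accumulating displacements and negating the sum. A minimal sketch of continuous updating, using a hypothetical route rather than the study's stimuli:

```python
# Path integration as vector accumulation: the homing vector back to the
# start is the negative of the accumulated displacement.
# Illustrative route, not the study's eye-movement stimuli.
route = [(3.0, 0.0), (0.0, 4.0), (-1.0, 1.0)]  # successive (dx, dy) steps

# Continuously updated position relative to the start.
x = sum(dx for dx, _ in route)
y = sum(dy for _, dy in route)

# Homing vector: negate the accumulated displacement.
homing = (-x, -y)
print(homing)
```

Continuous updating needs only this running sum, which is why performance is insensitive to the number of locations; configural updating would instead retain the whole `route` list so that any enroute location can be recovered.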
Affiliation(s)
- Anisha Khosla
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada.
- Morris Moscovitch
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Jennifer D Ryan
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
4. Kessler F, Frankenstein J, Rothkopf CA. Human navigation strategies and their errors result from dynamic interactions of spatial uncertainties. Nat Commun 2024; 15:5677. [PMID: 38971789] [PMCID: PMC11227593] [DOI: 10.1038/s41467-024-49722-y]
Abstract
Goal-directed navigation requires continuously integrating uncertain self-motion and landmark cues into an internal sense of location and direction, concurrently planning future paths, and sequentially executing motor actions. Here, we provide a unified account of these processes with a computational model of probabilistic path planning in the framework of optimal feedback control under uncertainty. This model gives rise to diverse human navigational strategies previously believed to be distinct behaviors and quantitatively predicts both the errors and the variability of navigation across numerous experiments. The model furthermore explains how sequential egocentric landmark observations form an uncertain allocentric cognitive map and how this internal map is used both in route planning and during the execution of movements, and it reconciles seemingly contradictory results about cue-integration behavior in navigation. Taken together, the present work provides a parsimonious explanation of how patterns of human goal-directed navigation behavior arise from the continuous and dynamic interactions of spatial uncertainties in perception, cognition, and action.
Affiliation(s)
- Fabian Kessler
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany.
- Julia Frankenstein
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
5. Shayman CS, McCracken MK, Finney HC, Katsanevas AM, Fino PC, Stefanucci JK, Creem-Regehr SH. Effects of older age on visual and self-motion sensory cue integration in navigation. Exp Brain Res 2024; 242:1277-1289. [PMID: 38548892] [PMCID: PMC11111325] [DOI: 10.1007/s00221-024-06818-7]
Abstract
Older adults demonstrate impairments in navigation that cannot be explained by general cognitive and motor declines. Previous work has shown that older adults may combine sensory cues during navigation differently than younger adults, though this work has largely been done in dark environments, where sensory integration may differ from full-cue environments. Here, we test whether aging adults optimally combine cues from two sensory systems critical for navigation: vision (landmarks) and body-based self-motion cues. Participants completed a homing (triangle completion) task using immersive virtual reality, which offered the ability to navigate in a well-lit environment including visibility of the ground plane. An optimal model, based on principles of maximum-likelihood estimation, predicts that precision in homing should increase with multisensory information in a manner consistent with each individual sensory cue's perceived reliability (measured by variability). We found that well-aging adults (with normal or corrected-to-normal sensory acuity and active lifestyles) were more variable and less accurate than younger adults during navigation. Both older and younger adults relied more on their visual systems than a maximum-likelihood estimation model would suggest. Overall, younger adults' visual weighting matched the model's predictions, whereas older adults showed sub-optimal sensory weighting. In addition, large inter-individual differences were seen in both younger and older adults. These results suggest that older adults do not optimally weight each sensory system when combined during navigation, and that older adults may benefit from interventions that help them recalibrate the combination of visual and self-motion cues for navigation.
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA.
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, USA.
- Maggie K McCracken
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Hunter C Finney
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Andoni M Katsanevas
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, USA
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
6. Scherer J, Müller MM, Unterbrink P, Meier S, Egelhaaf M, Bertrand OJN, Boeddeker N. Not seeing the forest for the trees: combination of path integration and landmark cues in human virtual navigation. Front Behav Neurosci 2024; 18:1399716. [PMID: 38835838] [PMCID: PMC11148297] [DOI: 10.3389/fnbeh.2024.1399716]
Abstract
Introduction: In order to successfully move from place to place, our brain often combines sensory inputs from various sources by dynamically weighting spatial cues according to their reliability and relevance for a given task. Two of the most important cues in navigation are the spatial arrangement of landmarks in the environment, and the continuous path integration of travelled distances and changes in direction. Several studies have shown that Bayesian integration of cues provides a good explanation for navigation in environments dominated by small numbers of easily identifiable landmarks. However, it remains largely unclear how cues are combined in more complex environments.
Methods: To investigate how humans process and combine landmarks and path integration in complex environments, we conducted a series of triangle completion experiments in virtual reality, in which we varied the number of landmarks from an open steppe to a dense forest, thus going beyond the spatially simple environments that have been studied in the past. We analysed spatial behaviour at both the population and individual level with linear regression models and developed a computational model, based on maximum likelihood estimation (MLE), to infer the underlying combination of cues.
Results: Overall homing performance was optimal in an environment containing three landmarks arranged around the goal location. With more than three landmarks, individual differences between participants in the use of cues are striking. For some, the addition of landmarks does not worsen their performance, whereas for others it seems to impair their use of landmark information.
Discussion: It appears that navigation success in complex environments depends on the ability to identify the correct clearing around the goal location, suggesting that some participants may not be able to see the forest for the trees.
Affiliation(s)
- Jonas Scherer
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Martin M Müller
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Patrick Unterbrink
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Sina Meier
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Norbert Boeddeker
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
7. Iggena D, Jeung S, Maier PM, Ploner CJ, Gramann K, Finke C. Multisensory input modulates memory-guided spatial navigation in humans. Commun Biol 2023; 6:1167. [PMID: 37963986] [PMCID: PMC10646091] [DOI: 10.1038/s42003-023-05522-6]
Abstract
Efficient navigation is supported by a cognitive map of space. The hippocampus plays a key role for this map by linking multimodal sensory information with spatial memory representations. However, in human navigation studies, the full range of sensory information is often unavailable due to the stationarity of experimental setups. We investigated the contribution of multisensory information to memory-guided spatial navigation by presenting a virtual version of the Morris water maze on a screen and in an immersive mobile virtual reality setup. Patients with hippocampal lesions and matched controls navigated to memorized object locations in relation to surrounding landmarks. Our results show that the availability of multisensory input improves memory-guided spatial navigation in both groups and has distinct effects on navigational behaviour, with greater improvement in spatial memory performance in patients. We conclude that congruent multisensory information shifts computations to extrahippocampal areas that support spatial navigation and compensates for spatial navigation deficits.
Affiliation(s)
- Deetje Iggena
- Charité - Universitätsmedizin Berlin, Department of Neurology, Augustenburger Platz 1, 13353, Berlin, Germany.
- Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Unter den Linden 6, 10099, Berlin, Germany.
- Sein Jeung
- Technische Universität Berlin, Department of Biological Psychology and Neuroergonomics, Fasanenstraße 1, 10623, Berlin, Germany
- Norwegian University of Science and Technology, Kavli Institute for Systems Neuroscience, Olav Kyrres gate 9, 7030, Trondheim, Norway
- Max-Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103, Leipzig, Germany
- Patrizia M Maier
- Charité - Universitätsmedizin Berlin, Department of Neurology, Augustenburger Platz 1, 13353, Berlin, Germany
- Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Unter den Linden 6, 10099, Berlin, Germany
- Christoph J Ploner
- Charité - Universitätsmedizin Berlin, Department of Neurology, Augustenburger Platz 1, 13353, Berlin, Germany
- Klaus Gramann
- Technische Universität Berlin, Department of Biological Psychology and Neuroergonomics, Fasanenstraße 1, 10623, Berlin, Germany
- University of California, San Diego, Center for Advanced Neurological Engineering, 9500 Gilman Dr, La Jolla, CA, 92093, USA
- Carsten Finke
- Charité - Universitätsmedizin Berlin, Department of Neurology, Augustenburger Platz 1, 13353, Berlin, Germany
- Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Unter den Linden 6, 10099, Berlin, Germany
8.
Abstract
Aims of the present article are: 1) assessing the vestibular contribution to spatial navigation, 2) exploring how age, global positioning system (GPS) use, and vestibular navigation contribute to subjective sense of direction (SOD), and 3) evaluating vestibular navigation in patients with lesions of the vestibulo-cerebellum (patients with downbeat nystagmus, DBN), which could inform on the signals carried by vestibulo-cerebellar-cortical pathways. We applied two navigation tasks on a rotating chair in the dark: return-to-start (RTS), where subjects drive the chair back to the origin after discrete angular displacement stimuli (path reversal), and complete-the-circle (CTC), where subjects drive the chair onwards, all the way round to the origin (path completion). We examined 24 normal controls (20-83 yr), five patients with DBN (62-77 yr), and, as proof of principle, two patients with early dementia (84 and 76 yr). We found a relationship between SOD, assessed by the Santa Barbara Sense of Direction Scale, and subjects' age (positive), GPS use (negative), and performance on the CTC vestibular navigation task (positive). Age-related decline in vestibular navigation was observed with the RTS task but not with the complex CTC task. Vestibular navigation was normal in patients with vestibulo-cerebellar dysfunction but abnormal, particularly CTC, in the demented patients. We conclude that vestibular navigation skills contribute to the build-up of our SOD. Unexpectedly, perceived SOD in the elderly is not inferior, possibly explained by increased GPS use by the young. Preserved vestibular navigation in cerebellar patients suggests that ascending vestibular-cerebellar projections carry velocity (not position) signals. The abnormalities in the cognitively impaired patients suggest that their vestibulo-spatial navigation is disrupted.
NEW & NOTEWORTHY Our subjective sense of direction is influenced by how good we are at spatial navigation using vestibular cues. Global positioning systems (GPS) may inhibit sense of direction. Increased use of GPS by the young may explain why the elderly's sense of direction is not worse than the young's. Patients with vestibulo-cerebellar dysfunction (downbeat nystagmus syndrome) display normal vestibular navigation, suggesting that ascending vestibulo-cerebellar-cortical pathways carry velocity rather than position signals. Pilot data indicate that dementia disrupts vestibular navigation.
Affiliation(s)
- Athena Zachou
- Neuro-otology Unit, Department of Brain Sciences, Imperial College London, Charing Cross Hospital Campus, London, United Kingdom
- 1st Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, Greece
- Adolfo M Bronstein
- Neuro-otology Unit, Department of Brain Sciences, Imperial College London, Charing Cross Hospital Campus, London, United Kingdom
- 1st Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, Greece
9. Jeung S, Hilton C, Berg T, Gehrke L, Gramann K. Virtual Reality for Spatial Navigation. Curr Top Behav Neurosci 2023; 65:103-129. [PMID: 36512288] [DOI: 10.1007/7854_2022_403]
Abstract
Immersive virtual reality (VR) allows its users to experience physical space in a non-physical world. It has developed into a powerful research tool to investigate the neural basis of human spatial navigation as an embodied experience. The task of wayfinding can be carried out by using a wide range of strategies, leading to the recruitment of various sensory modalities and brain areas in real-life scenarios. While traditional desktop-based VR setups primarily focus on vision-based navigation, immersive VR setups, especially mobile variants, can efficiently account for motor processes that constitute locomotion in the physical world, such as head-turning and walking. When used in combination with mobile neuroimaging methods, immersive VR affords a natural mode of locomotion and high immersion in experimental settings, creating an embodied spatial experience. This in turn facilitates ecologically valid investigation of the neural underpinnings of spatial navigation.
Affiliation(s)
- Sein Jeung
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Christopher Hilton
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Timotheus Berg
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Lukas Gehrke
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany
- Klaus Gramann
- Department of Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany.
- Center for Advanced Neurological Engineering, University of California, San Diego, CA, USA.
10. Ruginski I, Giudice N, Creem-Regehr S, Ishikawa T. Designing mobile spatial navigation systems from the user’s perspective: an interdisciplinary review. Spatial Cognition and Computation 2022. [DOI: 10.1080/13875868.2022.2053382]
Affiliation(s)
- Ian Ruginski
- Department of Geography, University of Zurich, Zurich, Switzerland
- Nicholas Giudice
- Spatial Computing program, School of Computing and Information Science, University of Maine, Orono, ME, USA
- Toru Ishikawa
- Department of Information Networking for Innovation and Design (INIAD), Toyo University, Tokyo, Japan
11. Stavropoulos A, Lakshminarasimhan KJ, Laurens J, Pitkow X, Angelaki D. Influence of sensory modality and control dynamics on human path integration. eLife 2022; 11:63405. [PMID: 35179488] [PMCID: PMC8856658] [DOI: 10.7554/elife.63405]
Abstract
Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.
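The distinction between the two control dynamics can be sketched with a simple Euler simulation: under velocity control the joystick sets velocity directly, while under acceleration control deflections accumulate into velocity that persists after release. Parameters are illustrative, not the paper's motion-cueing algorithm:

```python
# Euler simulation contrasting velocity control (joystick sets velocity)
# with acceleration control (joystick sets acceleration).
# Time step and joystick profile are illustrative.
dt = 0.1
inputs = [1.0] * 10 + [0.0] * 10  # push joystick for 1 s, then release


def simulate(inputs, control="velocity", dt=0.1):
    pos, vel = 0.0, 0.0
    for u in inputs:
        if control == "velocity":
            vel = u          # joystick deflection directly sets velocity
        else:
            vel += u * dt    # joystick deflection sets acceleration
        pos += vel * dt      # integrate velocity into position
    return pos


# Under acceleration control, the built-up velocity keeps displacing the
# navigator after the joystick is released, so the same joystick profile
# yields a different travelled distance than under velocity control.
print(simulate(inputs, "velocity"), simulate(inputs, "acceleration"))
```

An internal model biased toward one of these dynamics would mispredict the stopping distance whenever the true dynamics switch, which is the failure to adapt that the abstract describes.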
Affiliation(s)
- Akis Stavropoulos
- Center for Neural Science, New York University, New York, United States
- Jean Laurens
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Xaq Pitkow
- Department of Electrical and Computer Engineering, Rice University, Houston, United States; Department of Neuroscience, Baylor College of Medicine, Houston, United States; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Dora Angelaki
- Center for Neural Science, New York University, New York, United States; Department of Neuroscience, Baylor College of Medicine, Houston, United States; Tandon School of Engineering, New York University, New York, United States
12. Combination and competition between path integration and landmark navigation in the estimation of heading direction. PLoS Comput Biol 2022; 18:e1009222. [PMID: 35143474] [PMCID: PMC8865642] [DOI: 10.1371/journal.pcbi.1009222]
Abstract
Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues. 
Successful navigation requires us to combine visual information about our environment with body-based cues about our own rotations and translations. In this work we investigated how these disparate sources of information work together to compute an estimate of heading. Using a novel virtual reality task we measured how humans integrate visual and body-based cues when there is mismatch between them—that is, when the estimate of heading from visual information is different from body-based cues. By building computational models of different strategies, we reveal that humans use a hybrid strategy for integrating visual and body-based cues—combining them when the mismatch between them is small and picking one or the other when the mismatch is large.
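The hybrid strategy this entry describes can be sketched as a simple decision rule. This is an illustrative sketch, not the authors' fitted model: the threshold, variances, and the pick-the-more-reliable-cue fallback are assumptions.

```python
def estimate_heading(idiothetic, allothetic, sigma_i, sigma_a, threshold):
    """Hybrid heading estimate (degrees): combine cues by reliability
    when the mismatch is small; otherwise let the cues compete and
    keep the one with lower variance. All parameters are illustrative."""
    mismatch = abs(idiothetic - allothetic)
    if mismatch <= threshold:
        # Combination: inverse-variance (reliability) weighting
        w_i = (1 / sigma_i**2) / (1 / sigma_i**2 + 1 / sigma_a**2)
        return w_i * idiothetic + (1 - w_i) * allothetic
    # Competition: the more reliable cue wins outright
    return idiothetic if sigma_i < sigma_a else allothetic

# Small mismatch, equally reliable cues: estimates are averaged
combined = estimate_heading(90.0, 100.0, sigma_i=10.0, sigma_a=10.0, threshold=30.0)  # → 95.0
# Large mismatch: the more reliable (body-based) cue is kept
competed = estimate_heading(90.0, 180.0, sigma_i=5.0, sigma_a=10.0, threshold=30.0)  # → 90.0
```

The hard threshold stands in for what is, in the paper's modeling, a graded transition between regimes; it captures only the qualitative combine-then-compete pattern.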
|
13
|
Abstract
Spatial navigation is a complex cognitive activity that depends on perception, action, memory, reasoning, and problem-solving. Effective navigation depends on the ability to combine information from multiple spatial cues to estimate one's position and the locations of goals. Spatial cues include landmarks and other visible features of the environment, and body-based cues generated by self-motion (vestibular, proprioceptive, and efferent information). A number of projects have investigated the extent to which visual cues and body-based cues are combined optimally according to statistical principles. Possible limitations of these investigations are that they have not accounted for navigators' prior experiences with or assumptions about the task environment and have not tested complete decision models. We examine cue combination in spatial navigation from a Bayesian perspective and present the fundamental principles of Bayesian decision theory. We show that a complete Bayesian decision model with an explicit loss function can explain a discrepancy between optimal cue weights and empirical cue weights observed by Chen et al. (Cognitive Psychology, 95, 105-144, 2017) and that the use of informative priors to represent cue bias can explain the incongruity between heading variability and heading direction observed by Zhao and Warren (2015b, Psychological Science, 26(6), 915-924). We also discuss Petzschner and Glasauer's (Journal of Neuroscience, 31(47), 17220-17229, 2011) use of priors to explain biases in estimates of linear displacements during visual path integration. We conclude that Bayesian decision theory offers a productive theoretical framework for investigating human spatial navigation and believe that it will lead to a deeper understanding of navigational behaviors.
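The statistically optimal cue combination the entry refers to is precision-weighted averaging of Gaussian estimates, under which an informative prior enters as just another cue. A minimal sketch (the numeric values are assumptions for illustration):

```python
def bayes_combine(estimates, variances):
    """Precision-weighted combination of independent Gaussian estimates.
    Returns the posterior mean and variance; an informative prior can be
    included simply as another (mean, variance) pair in the inputs."""
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, estimates)) / total_precision
    return mean, 1.0 / total_precision

# Visual landmark cue (mean 10.0, variance 1.0) and body-based
# self-motion cue (mean 14.0, variance 3.0): the more precise
# visual cue dominates the combined estimate.
mean, var = bayes_combine([10.0, 14.0], [1.0, 3.0])  # → (11.0, 0.75)
```

Note the combined variance (0.75) is lower than either cue's alone, the usual signature of optimal integration; the entry's point is that a full decision model additionally needs an explicit loss function and priors to match empirical weights.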
|
14
|
Impact of a Vibrotactile Belt on Emotionally Challenging Everyday Situations of the Blind. SENSORS 2021; 21:s21217384. [PMID: 34770689 PMCID: PMC8587958 DOI: 10.3390/s21217384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 10/31/2021] [Accepted: 11/03/2021] [Indexed: 11/16/2022]
Abstract
Spatial orientation and navigation depend primarily on vision. Blind people lack this critical source of information. To facilitate wayfinding and to increase the feeling of safety for these people, the "feelSpace belt" was developed. The belt signals magnetic north as a fixed reference frame via vibrotactile stimulation. This study investigates the effect of the belt on typical orientation and navigation tasks and evaluates the emotional impact. Eleven blind subjects wore the belt daily for seven weeks. Before, during and after the study period, they filled in questionnaires to document their experiences. A small sub-group of the subjects took part in behavioural experiments before and after four weeks of training, i.e., a straight-line walking task to evaluate the belt's effect on keeping a straight heading, an angular rotation task to examine effects on egocentric orientation, and a triangle completion navigation task to test the ability to take shortcuts. The belt reduced subjective discomfort and increased confidence during navigation. Additionally, the participants felt safer wearing the belt in various outdoor situations. Furthermore, the behavioural tasks point towards an intuitive comprehension of the belt. Altogether, the blind participants benefited from the vibrotactile belt as an assistive technology in challenging everyday situations.
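The triangle completion task mentioned above has a well-defined correct answer: the homing vector obtained by summing the outbound legs and reversing. A small sketch of that geometry (the leg values are assumptions for illustration, not data from the study):

```python
import math

def homing_vector(legs):
    """Given outbound legs as (heading_deg, distance) pairs, return the
    (heading_deg, distance) of the direct shortcut back to the origin:
    the response a triangle-completion task asks for."""
    x = sum(d * math.cos(math.radians(h)) for h, d in legs)
    y = sum(d * math.sin(math.radians(h)) for h, d in legs)
    heading_back = math.degrees(math.atan2(-y, -x)) % 360
    return heading_back, math.hypot(x, y)

# Walk 3 m at 0 degrees, then 4 m at 90 degrees: the return leg is 5 m
heading, dist = homing_vector([(0.0, 3.0), (90.0, 4.0)])  # dist → 5.0
```

Comparing a participant's produced shortcut against this ideal vector yields the angular and distance errors typically reported for such tasks.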
|
15
|
Yu S, Boone AP, He C, Davis RC, Hegarty M, Chrastil ER, Jacobs EG. Age-Related Changes in Spatial Navigation Are Evident by Midlife and Differ by Sex. Psychol Sci 2021; 32:692-704. [PMID: 33819436 DOI: 10.1177/0956797620979185] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Accumulating evidence suggests that distinct aspects of successful navigation (path integration, spatial-knowledge acquisition, and navigation strategies) change with advanced age. Yet few studies have established whether navigation deficits emerge early in the aging process (prior to age 65) or whether early age-related deficits vary by sex. Here, we probed healthy young adults (ages 18-28) and midlife adults (ages 43-61) on three essential aspects of navigation. We found, first, that path-integration ability shows negligible effects of sex or age. Second, robust sex differences in spatial-knowledge acquisition are observed not only in young adulthood but also, although with diminished effect, at midlife. Third, by midlife, men and women show decreased ability to acquire spatial knowledge and increased reliance on taking habitual paths. Together, our findings indicate that age-related changes in navigation ability and strategy are evident as early as midlife and that path-integration ability is spared, to some extent, in the transition from youth to middle age.
Affiliation(s)
- Shuying Yu: Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Alexander P Boone: Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Chuanxiuyue He: Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Rie C Davis: Department of Geography, University of California, Santa Barbara
- Mary Hegarty: Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Elizabeth R Chrastil: Department of Geography, University of California, Santa Barbara; Department of Neurobiology and Behavior, University of California, Irvine
- Emily G Jacobs: Department of Psychological and Brain Sciences, University of California, Santa Barbara; Neuroscience Research Institute, University of California, Santa Barbara
|
16
|
Robinson EM, Wiener M. Dissociable neural indices for time and space estimates during virtual distance reproduction. Neuroimage 2020; 226:117607. [PMID: 33290808 DOI: 10.1016/j.neuroimage.2020.117607] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 11/23/2020] [Accepted: 11/25/2020] [Indexed: 10/22/2022] Open
Abstract
The perception and measurement of spatial and temporal dimensions have been widely studied. Yet, whether these two dimensions are processed independently is still being debated. Additionally, whether EEG components are uniquely associated with time or space, or whether they reflect a more general measure of magnitude quantity, remains unknown. While undergoing EEG, subjects performed a virtual distance reproduction task, in which they were required to first walk forward for an unknown distance or time, and then reproduce that distance or time. Walking speed was varied between estimation and reproduction phases, to prevent interference between distance and time in each estimate. Behaviorally, subjects' performance was more variable when reproducing time than when reproducing distance, but with similar patterns of accuracy. During estimation, EEG data revealed that the contingent negative variation (CNV), a measure previously associated with timing and expectation, tracked the probability of the upcoming interval for both time and distance. However, during reproduction, the CNV exclusively oriented to the upcoming temporal interval at the start of reproduction, with no change across spatial distances. Our findings indicate that time and space are neurally separable dimensions, with the CNV serving both a supramodal role in temporal and spatial expectation and an exclusive role in preparing duration reproduction.
Affiliation(s)
- Eva Marie Robinson: Department of Psychology, University of Arizona, Tucson, AZ 85721, United States; Department of Psychology, George Mason University, 4400 University Drive, 3F5, Fairfax, VA 22030, United States
- Martin Wiener: Department of Psychology, George Mason University, 4400 University Drive, 3F5, Fairfax, VA 22030, United States
|
17
|
Using virtual reality to assess dynamic self-motion and landmark cues for spatial updating in children and adults. Mem Cognit 2020; 49:572-585. [PMID: 33108632 DOI: 10.3758/s13421-020-01111-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/20/2020] [Indexed: 11/08/2022]
Abstract
The relative contribution of different sources of information for spatial updating (keeping track of one's position in an environment) has been highly debated. Further, children and adults may differ in their reliance on visual versus body-based information for spatial updating. In two experiments, we tested children (age 10-12 years) and young adult participants on a virtual point-to-origin task that varied the types of self-motion information available for translation: full-dynamic (walking), visual-dynamic (controller induced), and no-dynamic (teleporting). In Experiment 1, participants completed the three conditions in an indoor virtual environment with visual landmark cues. Adults were more accurate in the full- and visual-dynamic conditions (which did not differ from each other) than in the no-dynamic condition. In contrast, children were most accurate in the visual-dynamic condition and least accurate in the no-dynamic condition. Adults outperformed children in all conditions. In Experiment 2, we removed the potential for relying on visual landmarks by running the same paradigm in an outdoor virtual environment with no geometrical room cues. As expected, adults' errors increased in all conditions, but performance was still relatively worse in the teleporting condition. Surprisingly, children showed overall similar accuracy and patterns across locomotion conditions to adults. Together, the results support the importance of dynamic translation information (either visual or body-based) for spatial updating across both age groups, but suggest children may be more reliant on visual information than adults.
|
18
|
A comparison of virtual locomotion methods in movement experts and non-experts: testing the contributions of body-based and visual translation for spatial updating. Exp Brain Res 2020; 238:1911-1923. [PMID: 32556428 DOI: 10.1007/s00221-020-05851-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 06/10/2020] [Indexed: 10/24/2022]
Abstract
Both visual and body-based (vestibular and proprioceptive) information contribute to spatial updating, or the way a navigator keeps track of self-position during movement. Research has tested the relative contributions of these sources of information and found mixed results, with some studies demonstrating the importance of body-based information, especially for translation, and others demonstrating the sufficiency of visual information. Here, we invoke an individual differences approach to test whether some individuals may be more dependent on certain types of information than others. Movement experts tend to be dependent on motor processes in small-scale spatial tasks, which can help or hurt performance, but it is unknown whether this effect extends to large-scale spatial tasks like spatial updating. In the current study, expert dancers and non-dancers completed a virtual reality point-to-origin task with three locomotion methods that varied the availability of body-based and visual information for translation: walking, joystick, and teleporting. We predicted decrements in performance in both groups as self-motion information was reduced, and that dancers would show a larger cost. Surprisingly, both dancers and non-dancers performed with equal accuracy in the walking and joystick conditions and were impaired in the teleporting condition, with no large differences between groups. We found slower response times for both groups with reductions in self-motion information, and minimal evidence for a larger cost for dancers. While we did not see strong dance effects, more participation in spatial activities was related to decreased angular error. Together, the results suggest a flexibility in reliance on visual or body-based information for translation in spatial updating that generalizes across dancers and non-dancers, but significant decrements when both of these sources of information are removed.
|
19
|
Does active learning benefit spatial memory during navigation with restricted peripheral field? Atten Percept Psychophys 2020; 82:3033-3047. [PMID: 32346822 DOI: 10.3758/s13414-020-02038-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Spatial learning of real-world environments is impaired with severely restricted peripheral field of view (FOV). In prior research, the effects of restricted FOV on spatial learning have been studied using passive learning paradigms: learners walk along pre-defined paths and are told the location of targets to be remembered. Our research has shown that mobility demands and environmental complexity may contribute to impaired spatial learning with restricted FOV through attentional mechanisms. Here, we examine the role of active navigation, both in locomotion and in target search. First, we compared effects of active versus passive locomotion (walking with a physical guide versus being pushed in a wheelchair) on a task of pointing to remembered targets in participants with a simulated 10° FOV. We found similar performance between active and passive locomotion conditions in both simpler (Experiment 1) and more complex (Experiment 2) spatial learning tasks. Experiment 3 required active search for named targets to remember while navigating, using both a mild and a severe FOV restriction. We observed no difference in pointing accuracy between the two FOV restrictions, but an increase in attentional demands with severely restricted FOV. Experiment 4 compared active and passive search with severe FOV restriction, within subjects. We found no difference in pointing accuracy, but observed an increase in cognitive load in active versus passive search. Taken together, in the context of navigating with restricted FOV, neither locomotion method nor level of active search affected spatial learning. However, the greater cognitive demands could have counteracted the potential advantage of the active learning conditions.
|