1. Shayman CS, McCracken MK, Finney HC, Fino PC, Stefanucci JK, Creem-Regehr SH. Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions. J Vis 2024; 24:7. PMID: 39382867; PMCID: PMC11469273; DOI: 10.1167/jov.24.11.7.
Abstract
Auditory landmarks can contribute to spatial updating during navigation with vision. Although large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether individuals optimally combine auditory cues with visual cues to decrease perceptual uncertainty, or variability, has not been well documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict in which auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones, and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing than the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
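The optimal-integration benchmark against which the "no multisensory benefit" finding is judged can be sketched with inverse-variance (maximum-likelihood) weighting. The single-cue variabilities below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical single-cue standard deviations of homing error (illustrative only)
sigma_a = 12.0  # auditory-only variability
sigma_v = 4.0   # visual-only variability

# Maximum-likelihood weights are proportional to each cue's inverse variance
w_a = sigma_a**-2 / (sigma_a**-2 + sigma_v**-2)
w_v = sigma_v**-2 / (sigma_a**-2 + sigma_v**-2)

# Predicted variability of the combined estimate under optimal integration:
# always below the better single cue, which is the testable "multisensory benefit"
sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

print(round(w_v, 3))      # vision dominates when it is the more reliable cue
print(round(sigma_av, 2)) # combined SD falls below the visual-only SD
```

Finding response variability at the visual-only level rather than at `sigma_av`, as reported here, is the signature of relying on one modality instead of integrating.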
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-5487-0007
- Maggie K McCracken
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0006-5280-0546
- Hunter C Finney
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0008-2324-5007
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-8621-3706
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0003-4238-2951
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0001-7740-1118
2. Kessler F, Frankenstein J, Rothkopf CA. Human navigation strategies and their errors result from dynamic interactions of spatial uncertainties. Nat Commun 2024; 15:5677. PMID: 38971789; PMCID: PMC11227593; DOI: 10.1038/s41467-024-49722-y.
Abstract
Goal-directed navigation requires continuously integrating uncertain self-motion and landmark cues into an internal sense of location and direction, concurrently planning future paths, and sequentially executing motor actions. Here, we provide a unified account of these processes with a computational model of probabilistic path planning in the framework of optimal feedback control under uncertainty. The model gives rise to diverse human navigational strategies previously believed to be distinct behaviors and quantitatively predicts both the errors and the variability of navigation across numerous experiments. It furthermore explains how sequential egocentric landmark observations form an uncertain allocentric cognitive map, how this internal map is used both in route planning and during the execution of movements, and reconciles seemingly contradictory results about cue-integration behavior in navigation. Taken together, the present work provides a parsimonious explanation of how patterns of human goal-directed navigation behavior arise from the continuous and dynamic interactions of spatial uncertainties in perception, cognition, and action.
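The core idea of continuously integrating noisy self-motion with landmark observations can be illustrated, in drastically reduced form, by a one-dimensional Kalman filter. All parameters and the 1D setup below are invented for illustration; they are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper)
step = 1.0          # intended step length per move
sigma_motion = 0.3  # self-motion (process) noise SD
sigma_land = 0.5    # landmark observation noise SD

x_true = 0.0
x_est, p_est = 0.0, 0.0  # posterior mean and variance of position

for _ in range(50):
    # True movement with motor noise
    x_true += step + rng.normal(0.0, sigma_motion)
    # Predict: integrate self-motion; positional uncertainty grows
    x_est += step
    p_est += sigma_motion**2
    # Correct: noisy landmark fix, weighted by its relative reliability
    z = x_true + rng.normal(0.0, sigma_land)
    k = p_est / (p_est + sigma_land**2)   # Kalman gain
    x_est += k * (z - x_est)
    p_est *= (1.0 - k)

# Posterior variance stays bounded despite accumulating motion noise
print(p_est < sigma_land**2)
```

The same interplay, in higher dimensions and coupled to planning, is what produces the dynamic uncertainty interactions the abstract describes.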
Affiliation(s)
- Fabian Kessler
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Julia Frankenstein
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
3. Scherer J, Müller MM, Unterbrink P, Meier S, Egelhaaf M, Bertrand OJN, Boeddeker N. Not seeing the forest for the trees: combination of path integration and landmark cues in human virtual navigation. Front Behav Neurosci 2024; 18:1399716. PMID: 38835838; PMCID: PMC11148297; DOI: 10.3389/fnbeh.2024.1399716.
Abstract
Introduction: In order to successfully move from place to place, our brain often combines sensory inputs from various sources by dynamically weighting spatial cues according to their reliability and relevance for a given task. Two of the most important cues in navigation are the spatial arrangement of landmarks in the environment and the continuous path integration of travelled distances and changes in direction. Several studies have shown that Bayesian integration of cues provides a good explanation for navigation in environments dominated by small numbers of easily identifiable landmarks. However, it remains largely unclear how cues are combined in more complex environments.
Methods: To investigate how humans process and combine landmarks and path integration in complex environments, we conducted a series of triangle completion experiments in virtual reality, in which we varied the number of landmarks from an open steppe to a dense forest, thus going beyond the spatially simple environments that have been studied in the past. We analysed spatial behaviour at both the population and individual level with linear regression models and developed a computational model, based on maximum likelihood estimation (MLE), to infer the underlying combination of cues.
Results: Overall homing performance was optimal in an environment containing three landmarks arranged around the goal location. With more than three landmarks, individual differences between participants in the use of cues were striking. For some, the addition of landmarks did not worsen performance, whereas for others it seemed to impair their use of landmark information.
Discussion: It appears that navigation success in complex environments depends on the ability to identify the correct clearing around the goal location, suggesting that some participants may not be able to see the forest for the trees.
Affiliation(s)
- Jonas Scherer
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Martin M Müller
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Patrick Unterbrink
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Sina Meier
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Norbert Boeddeker
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
4. Müller MM, Scherer J, Unterbrink P, Bertrand OJN, Egelhaaf M, Boeddeker N. The Virtual Navigation Toolbox: Providing tools for virtual navigation experiments. PLoS One 2023; 18:e0293536. PMID: 37943845; PMCID: PMC10635524; DOI: 10.1371/journal.pone.0293536.
Abstract
Spatial navigation research in humans increasingly relies on experiments using virtual reality (VR) tools, which allow for the creation of highly flexible and immersive study environments that can react to participant interaction in real time. Despite the popularity of VR, tools simplifying the creation and data management of such experiments are rare and often restricted to a specific scope, limiting usability and comparability. To overcome those limitations, we introduce the Virtual Navigation Toolbox (VNT), a collection of interchangeable and independent tools for the development of spatial navigation VR experiments using the popular Unity game engine. The VNT's features are packaged in loosely coupled and reusable modules, facilitating convenient implementation of diverse experimental designs. Here, we show how the VNT fulfils the feature requirements of different VR environments and experiments, walking through the implementation and execution of a showcase study built with the toolbox. The showcase study reveals that homing performance in a classic triangle completion task is invariant to the translation velocity of the participant's avatar, but highly sensitive to the number of landmarks. The VNT is freely available under a Creative Commons license, and we invite researchers to contribute by extending and improving its tools via the provided repository.
Affiliation(s)
- Martin M. Müller
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Jonas Scherer
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Patrick Unterbrink
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Norbert Boeddeker
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, NRW, Germany
5. Liang Q, Liao J, Li J, Zheng S, Jiang X, Huang R. The role of the parahippocampal cortex in landmark-based distance estimation based on the contextual hypothesis. Hum Brain Mapp 2023; 44:131-141. PMID: 36066186; PMCID: PMC9783420; DOI: 10.1002/hbm.26069.
Abstract
The parahippocampal cortex (PHC) is a vital neural basis of spatial navigation. However, its functional role is still unclear. The "contextual hypothesis," which assumes that the PHC participates in processing the spatial association between landmark and destination, provides a potential answer to this question. Nevertheless, the hypothesis has previously been tested using picture categorization tasks, which are only indirectly related to spatial navigation; it still needs to be tested with a navigation-related paradigm. In the current study, we tested the hypothesis in an fMRI experiment in which participants performed a distance estimation task in a virtual environment under three different conditions: landmark free (LF), stable landmark (SL), and ambiguous landmark (AL). Analyzing the behavioral data, we found that the presence of an SL improved the participants' performance in distance estimation. Comparing brain activity in the SL-versus-LF contrast as well as the AL-versus-LF contrast, we found that the PHC was activated by the SL rather than by the AL when encoding the distance. This indicates that the PHC is elicited by strongly associated context and encodes the landmark reference for distance perception. Furthermore, assessing the representational similarity of PHC activity across conditions, we observed high similarity within the same condition but low similarity between conditions. This result indicates that the PHC sustains the contextual information needed to discriminate between scenes. Our findings provide insight into the neural correlates of landmark information processing from the perspective of the contextual hypothesis.
Affiliation(s)
- Qunjun Liang
- School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, South China Normal University, Guangzhou, Guangdong, China
- Jiajun Liao
- School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, South China Normal University, Guangzhou, Guangdong, China
- Jinhui Li
- School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, South China Normal University, Guangzhou, Guangdong, China
- Senning Zheng
- School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, South China Normal University, Guangzhou, Guangdong, China
- Xiaoqian Jiang
- School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, South China Normal University, Guangzhou, Guangdong, China
- Ruiwang Huang
- School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, South China Normal University, Guangzhou, Guangdong, China
6. Patrick SC, Assink JD, Basille M, Clusella-Trullas S, Clay TA, den Ouden OFC, Joo R, Zeyl JN, Benhamou S, Christensen-Dalsgaard J, Evers LG, Fayet AL, Köppl C, Malkemper EP, Martín López LM, Padget O, Phillips RA, Prior MK, Smets PSM, van Loon EE. Infrasound as a Cue for Seabird Navigation. Front Ecol Evol 2021. DOI: 10.3389/fevo.2021.740027.
Abstract
Seabirds are amongst the most mobile of all animal species and spend large amounts of their lives at sea. They cross vast areas of ocean that appear superficially featureless, and our understanding of the mechanisms that they use for navigation remains incomplete, especially in terms of available cues. In particular, several large-scale navigational tasks, such as homing across thousands of kilometers to breeding sites, are not fully explained by visual, olfactory or magnetic stimuli. Low-frequency inaudible sound, i.e., infrasound, is ubiquitous in the marine environment. The spatio-temporal consistency of some components of the infrasonic wavefield, and the sensitivity of certain bird species to infrasonic stimuli, suggests that infrasound may provide additional cues for seabirds to navigate, but this remains untested. Here, we propose a framework to explore the importance of infrasound for navigation. We present key concepts regarding the physics of infrasound and review the physiological mechanisms through which infrasound may be detected and used. Next, we propose three hypotheses detailing how seabirds could use information provided by different infrasound sources for navigation as an acoustic beacon, landmark, or gradient. Finally, we reflect on strengths and limitations of our proposed hypotheses, and discuss several directions for future work. In particular, we suggest that hypotheses may be best tested by combining conceptual models of navigation with empirical data on seabird movements and in-situ infrasound measurements.
7. Negen J, Bird LA, Nardini M. An adaptive cue selection model of allocentric spatial reorientation. J Exp Psychol Hum Percept Perform 2021; 47:1409-1429. PMID: 34766823; PMCID: PMC8582329; DOI: 10.1037/xhp0000950.
Abstract
After becoming disoriented, an organism must use the local environment to reorient and recover vectors to important locations. A new theory, adaptive combination, suggests that the information from different spatial cues is combined with Bayesian efficiency during reorientation. To test this further, we modified the standard reorientation paradigm to be more amenable to Bayesian cue combination analyses while still requiring reorientation in an allocentric (i.e., world-based, not egocentric) frame. Twelve adults and 20 children aged 5 to 7 years were asked to recall locations in a virtual environment after a disorientation. Results were not consistent with adaptive combination. Instead, they are consistent with the use of the most useful (nearest) single landmark in isolation. We term this adaptive selection. Experiment 2 suggests that adults also use the adaptive selection method when they are not disoriented but are still required to use a local allocentric frame. This suggests that the process of recalling a location in the allocentric frame is typically guided by the single most useful landmark rather than by a Bayesian combination of landmarks. These results illustrate that there can be important limits to Bayesian theories of cognition, particularly for complex tasks such as allocentric recall. Whether studying the development of children's spatial cognition, creating artificial intelligence with human-like capacities, or designing civic spaces, we can benefit from a strong understanding of how humans process the space around them. Here we tested a prominent theory that brings together statistical theory and psychological theory (Bayesian models of perception and memory) but found that it could not satisfactorily explain our data.
Our findings suggest that when tracking the spatial relations between objects from different viewpoints, rather than efficiently combining all the available landmarks, people often fall back on the much simpler method of tracking the spatial relation to the nearest landmark.
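The contrast between adaptive combination and the adaptive selection rule can be sketched numerically. The landmark geometry, the distance-dependent noise model, and all values below are illustrative assumptions, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three landmarks and a target; memory for a landmark-relative vector is
# assumed noisier for more distant landmarks (illustrative assumption)
landmarks = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])
target = np.array([1.0, 1.0])
dists = np.linalg.norm(landmarks - target, axis=1)
sigmas = 0.2 * dists                      # farther landmark -> noisier memory

def recall(noisy_offsets, mode):
    """Estimate the target location from per-landmark memories."""
    estimates = landmarks + noisy_offsets
    if mode == "selection":               # adaptive selection: nearest landmark only
        return estimates[dists.argmin()]
    weights = sigmas**-2 / np.sum(sigmas**-2)   # Bayesian (adaptive) combination
    return weights @ estimates

errs = {m: [] for m in ("selection", "combination")}
for _ in range(2000):
    noisy = (target - landmarks) + rng.normal(0, sigmas[:, None], landmarks.shape)
    for m in errs:
        errs[m].append(np.linalg.norm(recall(noisy, m) - target))

# Combination is the statistically efficient benchmark; the study's finding
# is that behavior instead matched the nearest-landmark (selection) rule
print(np.mean(errs["selection"]) > np.mean(errs["combination"]))  # True
```

The simulation shows why the two strategies are distinguishable in data: selection leaves a measurable efficiency gap relative to the Bayesian benchmark.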
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University
8. Mallot HA, Lancier S. Place recognition from distant landmarks: human performance and maximum likelihood model. Biol Cybern 2018; 112:291-303. PMID: 29480375; DOI: 10.1007/s00422-018-0751-4.
Abstract
We present a simple behavioral experiment on human place recognition from a configuration of four visual landmarks. Participants were asked to navigate several paths, all involving a turn at one specific point, and while doing so incidentally learned the position of that turning point. In the test phase, they were asked to return to the turning point in a reduced environment with only the four landmarks visible. Results are compared to two versions of a maximum likelihood model of place recognition using either view-based or depth-based cues. Only the depth-based model is in good qualitative agreement with the data. In particular, it reproduces landmark-configuration-dependent effects of systematic bias and statistical error distribution as well as effects of approach direction. The model is based on a place code (depth and bearing of the landmarks at the target location) and an egocentric working memory of surrounding space, including current landmark position, in a local, map-like representation. We argue that these elements are crucial for human place recognition.
9. Shamsyeh Zahedi M, Zeil J. Fractal dimension and the navigational information provided by natural scenes. PLoS One 2018; 13:e0196227. PMID: 29734381; PMCID: PMC5937794; DOI: 10.1371/journal.pone.0196227.
Abstract
Recent work on virtual reality navigation in humans has suggested that navigational success is inversely correlated with the fractal dimension (FD) of artificial scenes. Here we investigate the generality of this claim by analysing the relationship between the fractal dimension of natural insect navigation environments and a quantitative measure of the navigational information content of natural scenes. We show that the fractal dimension of natural scenes is in general inversely proportional to the information they provide to navigating agents on heading direction, as measured by the rotational image difference function (rotIDF). The rotIDF determines the precision and accuracy with which the orientation of a reference image can be recovered or maintained, and the range over which a gradient descent in image differences will find the minimum of the rotIDF, that is, the reference orientation. However, scenes with similar fractal dimension can differ significantly in the depth of the rotIDF, because FD does not discriminate between the orientations of edges, while the rotIDF is mainly affected by edge orientation parallel to the axis of rotation. We present a new equation for the rotIDF relating navigational information to quantifiable image properties such as contrast to show (1) that for any given scene the maximum value of the rotIDF (its depth) is proportional to pixel variance and (2) that FD is inversely proportional to pixel variance. This contrast dependence, together with scene differences in orientation statistics, explains why there is no strict relationship between FD and navigational information. Our experimental data and their numerical analysis corroborate these results.
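The rotational image difference function described here is straightforward to compute for a panoramic image: compare the reference panorama against every horizontal rotation of the current scene. The toy one-row panorama below is an illustrative stand-in for a natural scene, not data from the study:

```python
import numpy as np

def rot_idf(reference, scene):
    """Root-mean-square pixel difference between a reference panorama and
    the scene panorama rotated through every horizontal shift (degrees)."""
    return np.array([
        np.sqrt(np.mean((reference - np.roll(scene, s, axis=1)) ** 2))
        for s in range(scene.shape[1])
    ])

# Toy 1-row, 360-column "panorama" with a single high-contrast block
pano = np.zeros((1, 360))
pano[:, 90:180] = 1.0

idf = rot_idf(pano, pano)
print(idf.argmin())  # 0: the minimum sits at the reference orientation
print(idf.max() > 0) # the rotIDF's depth grows with pixel contrast
```

Scaling the block's contrast scales the rotIDF depth, which is the contrast dependence the abstract uses to explain why FD alone does not fix navigational information.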
Affiliation(s)
- Moosarreza Shamsyeh Zahedi
- Research School of Biology, Australian National University, Canberra, ACT, Australia
- Department of Mathematics, Payame Noor University, Tehran, Iran
- Jochen Zeil
- Research School of Biology, Australian National University, Canberra, ACT, Australia