1
Zhu C, Zhou K, Tang Y, Tang F, Si B. Adaptive learning rate in dynamical binary environments: the signature of adaptive information processing. Cogn Neurodyn 2024; 18:4009-4031. PMID: 39712114; PMCID: PMC11655807; DOI: 10.1007/s11571-024-10128-7.
Abstract
Adaptive mechanisms of learning models play critical roles in interpreting adaptive behavior of humans and animals. Different learning models, from Bayesian models, deep learning, and regression models to reward-based reinforcement learning models, adopt similar update rules. These update rules can be reduced to the same generalized mathematical form: the Rescorla-Wagner equation. In this paper, we construct a hierarchical Bayesian model with an adaptive learning rate for inferring a hidden probability in a dynamical binary environment, and analyze the adaptive behavior of the model on synthetic data. The update rule of the model state turns out to be an extension of the Rescorla-Wagner equation. The adaptive learning rate is modulated by beliefs and environmental uncertainty. Our results underscore the adaptive learning rate as a mechanistic component of efficient and accurate inference, as well as a signature of information processing in adaptive machine learning models.
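The generalized Rescorla-Wagner form referred to above fits in a few lines. The sketch below is illustrative only: `adaptive_lr` is a toy heuristic for how a learning rate might grow with surprise, not the paper's hierarchical Bayesian update.

```python
def rw_update(value, outcome, lr):
    """Generalized Rescorla-Wagner (delta) rule: move the current
    estimate toward the observed outcome by a fraction lr of the
    prediction error."""
    return value + lr * (outcome - value)

def adaptive_lr(prev_lr, prediction_error, gain=0.1, floor=0.01):
    """Toy heuristic (not the paper's model): pull the learning rate
    toward |error|, so persistent surprise -- e.g. after a hidden
    change in the outcome probability -- speeds re-learning, while
    small errors let the rate decay back toward the floor."""
    lr = prev_lr + gain * (abs(prediction_error) - prev_lr)
    return max(floor, min(1.0, lr))
```

With a fixed learning rate the estimate lags behind abrupt changes in the hidden probability; letting the rate track recent surprise trades steady-state precision for faster adaptation, which is the trade-off the paper's Bayesian model resolves normatively.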
Affiliation(s)
- Changbo Zhu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016 Liaoning China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Ke Zhou
- Faculty of Psychology, Beijing Normal University, Beijing, 100875 China
- Yandong Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016 Liaoning China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Fengzhen Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016 Liaoning China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Bailu Si
- School of Systems Science, Beijing Normal University, Beijing, 100875 China
- Chinese Institute for Brain Research, Beijing, Beijing, 102206 China
2
Shayman CS, McCracken MK, Finney HC, Fino PC, Stefanucci JK, Creem-Regehr SH. Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions. J Vis 2024; 24:7. PMID: 39382867; PMCID: PMC11469273; DOI: 10.1167/jov.24.11.7.
Abstract
Auditory landmarks can contribute to spatial updating during navigation with vision. Although large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether individuals optimally combine auditory cues with visual cues to decrease perceptual uncertainty, or variability, has not been well documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict in which auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
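The optimal-combination benchmark used in cue-combination studies like this one is reliability-weighted averaging. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
def mle_combine(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of an auditory and a visual estimate.
    Reliability = 1/variance; the fused estimate is the
    reliability-weighted mean, and the fused variance is at most the
    smaller single-cue variance (the predicted multisensory benefit)."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v
    mu = (r_a * mu_a + r_v * mu_v) / (r_a + r_v)
    return mu, 1.0 / (r_a + r_v)
```

Finding no multisensory benefit, as in both experiments here, means observed response variability with both cues did not drop below the vision-only variance the way this model predicts.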
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-5487-0007
- Maggie K McCracken
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0006-5280-0546
- Hunter C Finney
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0008-2324-5007
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-8621-3706
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0003-4238-2951
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0001-7740-1118
3
McIntire G, Dopkins S. Super-optimality and relative distance coding in location memory. Mem Cognit 2024; 52:1439-1450. PMID: 38519780; DOI: 10.3758/s13421-024-01553-4.
Abstract
The prevailing model of landmark integration in location memory is Maximum Likelihood Estimation, which assumes that each landmark implies a target-location distribution that is narrower for more reliable landmarks. This model assumes weighted linear combination of landmarks and predicts that, given optimal integration, reliability with multiple landmarks is the sum of the reliabilities with the individual landmarks. Super-optimality occurs when reliability with multiple landmarks exceeds this optimal prediction, computed by aggregating the reliability values measured with single landmarks. Past studies claiming super-optimality provided arguably impure measures of single-landmark performance: multiple landmarks were presented at study even in conditions with a single landmark at test, disrupting encoding specificity and thereby underestimating predicted optimal performance. This study, unlike those prior studies, presented only a single landmark at study and the same landmark at test in single-landmark trials, demonstrating super-optimality conclusively. Given that super-optimal information integration occurs, emergent information, that is, information available only with multiple landmarks, must be used. With the target and landmarks all in a line, as throughout this study, relative distance is the only emergent information available. Use of relative distance was confirmed here by finding that, when both landmarks were left of the target at study, the target was remembered further right of its true location the further left the left landmark was moved from study to test.
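Under the MLE account described above, reliabilities (inverse variances) add across landmarks, which gives a concrete test for super-optimality. A sketch with illustrative names:

```python
def predicted_optimal_reliability(single_cue_variances):
    """MLE prediction: multi-landmark reliability equals the sum of
    the single-landmark reliabilities (reliability = 1/variance)."""
    return sum(1.0 / v for v in single_cue_variances)

def is_super_optimal(multi_cue_variance, single_cue_variances):
    """Observed multi-landmark reliability strictly exceeding the
    optimal prediction implies emergent information (here, relative
    distance between landmarks) is being used."""
    return 1.0 / multi_cue_variance > predicted_optimal_reliability(single_cue_variances)
```

For example, two landmarks each yielding a response variance of 4 predict a combined reliability of 0.5 (combined variance 2); an observed multi-landmark variance below 2 would count as super-optimal.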
Affiliation(s)
- Gordon McIntire
- Department of Psychological and Brain Sciences, Cognitive Neuroscience Area, The George Washington University, 2013 H Street, Washington, DC, 20006, USA.
- Stephen Dopkins
- Department of Psychological and Brain Sciences, Cognitive Neuroscience Area, The George Washington University, 2013 H Street, Washington, DC, 20006, USA.
4
Shayman CS, McCracken MK, Finney HC, Katsanevas AM, Fino PC, Stefanucci JK, Creem-Regehr SH. Effects of older age on visual and self-motion sensory cue integration in navigation. Exp Brain Res 2024; 242:1277-1289. PMID: 38548892; PMCID: PMC11111325; DOI: 10.1007/s00221-024-06818-7.
Abstract
Older adults demonstrate impairments in navigation that cannot be explained by general cognitive and motor declines. Previous work has shown that older adults may combine sensory cues during navigation differently than younger adults, though this work has largely been done in dark environments where sensory integration may differ from full-cue environments. Here, we test whether aging adults optimally combine cues from two sensory systems critical for navigation: vision (landmarks) and body-based self-motion cues. Participants completed a homing (triangle completion) task in immersive virtual reality, which allowed navigation in a well-lit environment with a visible ground plane. An optimal model, based on principles of maximum-likelihood estimation, predicts that precision in homing should increase with multisensory information in a manner consistent with each individual sensory cue's perceived reliability (measured by variability). We found that well-aging adults (with normal or corrected-to-normal sensory acuity and active lifestyles) were more variable and less accurate than younger adults during navigation. Both older and younger adults relied more on their visual systems than a maximum-likelihood estimation model would suggest. Overall, younger adults' visual weighting matched the model's predictions, whereas older adults showed sub-optimal sensory weighting. In addition, high inter-individual differences were seen in both younger and older adults. These results suggest that older adults do not optimally weight each sensory system when combined during navigation, and that older adults may benefit from interventions that help them recalibrate the combination of visual and self-motion cues for navigation.
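The weighting analysis reported here can be made concrete: under maximum-likelihood estimation the predicted weight on vision follows from the two single-cue variabilities, and an empirically fitted weight above that prediction indicates over-reliance on vision. A sketch with illustrative names:

```python
def predicted_visual_weight(var_visual, var_selfmotion):
    """MLE-predicted weight on the visual (landmark) cue:
    w_v = r_v / (r_v + r_s), with reliability r = 1/variance
    estimated from single-cue homing variability."""
    r_v, r_s = 1.0 / var_visual, 1.0 / var_selfmotion
    return r_v / (r_v + r_s)

def overweights_vision(observed_w_visual, var_visual, var_selfmotion):
    """True when the fitted visual weight exceeds the MLE prediction,
    the pattern both age groups showed in this study."""
    return observed_w_visual > predicted_visual_weight(var_visual, var_selfmotion)
```

With equal single-cue variances the model predicts a 0.5 visual weight, so a fitted weight of, say, 0.8 would indicate over-reliance on landmarks.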
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA.
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, USA.
- Maggie K McCracken
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Hunter C Finney
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Andoni M Katsanevas
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, USA
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
5
Abstract
This article is an overview of the research and controversy initiated by Cheng's (Cognition, 23(2), 149-178, 1986) article hypothesizing a purely geometric module in spatial representation. Hundreds of experiments later, we know much more about spatial behavior across a very wide array of species, ages, and kinds of conditions, but there is still no consensus model of the phenomena. I argue for an adaptive combination approach that entails several principles: (1) a focus on ecological niches and the spatial information they offer; (2) an approach to development that is experience-expectant; (3) continued plasticity as environmental conditions change; (4) language as one of many cognitive tools that can support spatial behavior.
6
Freas CA, Spetch ML. A special issue honoring Ken Cheng: navigating animal minds. Learn Behav 2024; 52:9-13. PMID: 38231427; DOI: 10.3758/s13420-024-00624-5.
Affiliation(s)
- Cody A Freas
- School of Natural Sciences, Macquarie University, Sydney, NSW, Australia.
- Marcia L Spetch
- Department of Psychology, University of Alberta, Edmonton, Alberta, Canada
7
Chen Y, Mou W. Path integration, rather than being suppressed, is used to update spatial views in familiar environments with constantly available landmarks. Cognition 2024; 242:105662. PMID: 37952370; DOI: 10.1016/j.cognition.2023.105662.
Abstract
This project tested three hypotheses conceptualizing the interaction between path integration based on self-motion and piloting based on landmarks in a familiar environment with persistent landmarks. The first hypothesis posits that path integration functions automatically, as in environments lacking persistent landmarks (environment-independent hypothesis). The second hypothesis suggests that persistent landmarks suppress path integration (suppression hypothesis). The third hypothesis proposes that path integration updates the spatial views of the environment (updating-spatial-views hypothesis). Participants learned a specific object's location. Subsequently, they undertook an outbound path originating from the object and then indicated the object's location (homing). In Experiments 1 and 1b, there were landmarks throughout the first 9 trials. On some later trials, the landmarks were presented during the outbound path but unexpectedly removed during homing (catch trials). On the last trials, there were no landmarks throughout (baseline trials). Experiments 2-3 were similar but added two identical objects (the original one and a rotated distractor) during homing on the catch and baseline trials. Experiment 4 replaced the two identical objects with two groups of landmarks. The results showed that in Experiments 1 and 1b, homing angular error on the first catch trial was significantly larger than on the matched baseline trial, undermining the environment-independent hypothesis. Conversely, in Experiments 2-4, the proportion of participants who recognized the original object or landmarks was similar between the first catch trial and the matched baseline trial, favoring the updating-spatial-views hypothesis over the suppression hypothesis. Therefore, while mismatches between updated spatial views and the actual views following unexpected removal of landmarks impair homing performance, the updated spatial views help disambiguate targets or landmarks within the familiar environment.
Affiliation(s)
- Yue Chen
- Department of Psychology, University of Alberta, P217 Biological Sciences Bldg., Edmonton, Alberta T6G 2E9, Canada.
- Weimin Mou
- Department of Psychology, University of Alberta, P217 Biological Sciences Bldg., Edmonton, Alberta T6G 2E9, Canada.
8
Du YK, Liang M, McAvan AS, Wilson RC, Ekstrom AD. Frontal-midline theta and posterior alpha oscillations index early processing of spatial representations during active navigation. Cortex 2023; 169:65-80. PMID: 37862831; PMCID: PMC10841878; DOI: 10.1016/j.cortex.2023.09.005.
Abstract
Previous research has demonstrated that humans combine multiple sources of spatial information such as self-motion and landmark cues while navigating through an environment. However, it is unclear whether this involves comparing multiple representations obtained from different sources during navigation (parallel hypothesis) or building a representation first based on self-motion cues and then combining with landmarks later (serial hypothesis). We tested these two hypotheses (parallel vs serial) in an active navigation task using wireless mobile scalp EEG recordings. Participants walked through an immersive virtual hallway with or without conflicts between self-motion and landmarks (i.e., intersections) and pointed toward the starting position of the hallway. We employed the oscillatory signals recorded during mobile wireless scalp EEG as a means of identifying when participant representations based on self-motion versus landmark cues might have first emerged. We found that path segments, including intersections present early during navigation, were more strongly associated with later pointing error, regardless of when they appeared during encoding. We also found that there was sufficient information contained within the frontal-midline theta and posterior alpha oscillatory signals in the earliest segments of navigation involving intersections to decode condition (i.e., conflicting vs not conflicting). Together, these findings suggest that intersections play a pivotal role in the early development of spatial representations, suggesting that memory representations for the geometry of walked paths likely develop early during navigation, in support of the parallel hypothesis.
Affiliation(s)
- Yu Karen Du
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719, USA; Department of Psychology & Brain and Mind Institute, University of Western Ontario, London, ON N6A 3K7, Canada
- Mingli Liang
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719, USA
- Andrew S McAvan
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719, USA; Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
- Robert C Wilson
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719, USA
- Arne D Ekstrom
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719, USA.
9
Du YK, Liang M, McAvan AS, Wilson RC, Ekstrom AD. Frontal-midline theta and posterior alpha oscillations index early processing of spatial representations during active navigation. bioRxiv 2023:2023.04.22.537940 [Preprint]. PMID: 37131721; PMCID: PMC10153283; DOI: 10.1101/2023.04.22.537940.
Abstract
Previous research has demonstrated that humans combine multiple sources of spatial information such as self-motion and landmark cues while navigating through an environment. However, it is unclear whether this involves comparing multiple representations obtained from different sources during navigation (parallel hypothesis) or building a representation first based on self-motion cues and then combining with landmarks later (serial hypothesis). We tested these two hypotheses (parallel vs. serial) in an active navigation task using wireless mobile scalp EEG recordings. Participants walked through an immersive virtual hallway with or without conflicts between self-motion and landmarks (i.e., intersections) and pointed toward the starting position of the hallway. We employed the oscillatory signals recorded during mobile wireless scalp EEG as a means of identifying when participant representations based on self-motion vs. landmark cues might have first emerged. We found that path segments, including intersections present early during navigation, were more strongly associated with later pointing error, regardless of when they appeared during encoding. We also found that there was sufficient information contained within the frontal-midline theta and posterior alpha oscillatory signals in the earliest segments of navigation involving intersections to decode condition (i.e., conflicting vs. not conflicting). Together, these findings suggest that intersections play a pivotal role in the early development of spatial representations, suggesting that memory representations for the geometry of walked paths likely develop early during navigation, in support of the parallel hypothesis.
Affiliation(s)
- Yu Karen Du
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719
- Department of Psychology & Brain and Mind Institute, University of Western Ontario, London, ON, Canada N6A 3K7
- Mingli Liang
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719
- Andrew S McAvan
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719
- Department of Psychology, Vanderbilt University, Nashville, TN 37240
- Robert C Wilson
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719
- Arne D Ekstrom
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719
- Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85719