1. Stein N, Watson T, Lappe M, Westendorf M, Durant S. Eye and head movements in visual search in the extended field of view. Sci Rep 2024; 14:8907. PMID: 38632334; PMCID: PMC11023950; DOI: 10.1038/s41598-024-59657-5.
Abstract
In natural environments, head movements are required to search for objects outside the field of view (FoV). Here we investigate whether a salient target in an extended visual search array is detected faster once a head movement brings it into the FoV. We conducted two virtual reality experiments using spatially clustered sets of stimuli to observe target detection and head and eye movements during visual search. Participants completed search tasks under three conditions: (1) the target was in the initial FoV, (2) a head movement was needed to bring the target into the FoV, and (3) as in condition 2, but the periphery was initially hidden and appeared only after the head movement had brought the location of the target set into the FoV. We measured search time until participants found a more salient (O) or less salient (T) target among distractors (L). On average, Os were found faster than Ts. Gaze analysis showed that saliency facilitated search by guiding it to the target only when the target was within the initial FoV. When targets required a head movement to enter the FoV, participants followed the same search strategy as in trials without a visible target in the periphery, and the faster search times for salient targets were due solely to the time required to find the target once the target set had been reached. This suggests that the effect of stimulus saliency differs between visual search on fixed displays and active search through an extended visual field.
Affiliation(s)
- Niklas Stein: Institute for Psychology, University of Münster, 48143 Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143 Münster, Germany
- Tamara Watson: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Markus Lappe: Institute for Psychology, University of Münster, 48143 Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143 Münster, Germany
- Maren Westendorf: Institute for Psychology, University of Münster, 48143 Münster, Germany
- Szonya Durant: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
2. Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024; 14:8596. PMID: 38615047; DOI: 10.1038/s41598-024-58941-8.
Abstract
A popular technique for modulating visual input during search is the gaze-contingent window. However, such windows are often uncomfortable, giving the impression of visual impairment. To counteract this, participants in this study searched through both illuminated and dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we ran this design both in immersive virtual reality (VR; Experiment 1) and on a desktop computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for the identities of distractors in the flashlight condition in VR but not in the computer-screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments appeared only in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
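To make the paradigm concrete, the sketch below shows one way a flashlight-style contingent window can be rendered: each frame is darkened outside a soft-edged circular beam centered on a tracked point (gaze or controller). This is a minimal illustration under assumed parameters; the function name, beam radius, and falloff values are not taken from the study.

```python
# Minimal sketch of a flashlight-style contingent window. Assumes an RGB
# frame as a NumPy array and a beam center from a tracked controller or gaze
# sample; all names and parameter values are illustrative.
import numpy as np

def flashlight_mask(frame: np.ndarray, center_xy: tuple[float, float],
                    radius_px: float = 120.0, falloff_px: float = 40.0) -> np.ndarray:
    """Darken everything outside a soft-edged circular beam."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - center_xy[0], ys - center_xy[1])
    # 1.0 inside the beam, fading smoothly to 0.0 over `falloff_px`.
    weight = np.clip((radius_px + falloff_px - dist) / falloff_px, 0.0, 1.0)
    return (frame * weight[..., None]).astype(frame.dtype)
```

Re-masking every rendered frame at the latest beam position keeps only the "illuminated" region visible while the rest of the scene stays dark.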
Affiliation(s)
- Julia Beitner: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jason Helbing: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
3. Wise T, Emery K, Radulescu A. Naturalistic reinforcement learning. Trends Cogn Sci 2024; 28:144-158. PMID: 37777463; PMCID: PMC10878983; DOI: 10.1016/j.tics.2023.08.016.
Abstract
Humans possess a remarkable ability to make decisions within real-world environments that are expansive, complex, and multidimensional. Human cognitive computational neuroscience has sought to exploit reinforcement learning (RL) as a framework within which to explain human decision-making, often focusing on constrained, artificial experimental tasks. In this article, we review recent efforts that use naturalistic approaches to determine how humans make decisions in complex environments that better approximate the real world, providing a clearer picture of how humans navigate the challenges posed by real-world decisions. These studies purposely embed elements of naturalistic complexity within experimental paradigms, rather than focusing on simplification, generating insights into the processes that likely underpin humans' ability to navigate complex, multidimensional real-world environments so successfully.
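For readers unfamiliar with the framework, the sketch below shows the basic building block that the reviewed naturalistic studies extend: a tabular Q-learning agent that updates value estimates from reward prediction errors. The toy environment and parameter values are assumptions for illustration, not taken from the article.

```python
# Minimal tabular Q-learning sketch of the RL framework discussed in the
# review. The stand-in environment and parameters are illustrative only.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state: int, action: int) -> tuple[int, float]:
    """Stand-in transition function; a real task defines its own dynamics."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # Epsilon-greedy action selection.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Temporal-difference update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
```

Naturalistic RL studies replace the small discrete state space above with the expansive, multidimensional inputs the review emphasizes.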
Affiliation(s)
- Toby Wise: Department of Neuroimaging, King's College London, London, UK
- Kara Emery: Center for Data Science, New York University, New York, NY, USA
- Angela Radulescu: Center for Computational Psychiatry, Icahn School of Medicine at Mt. Sinai, New York, NY, USA
4. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. PMID: 36635491; PMCID: PMC9837148; DOI: 10.1038/s41598-023-27896-7.
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array that is presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye movements. In each task, we found that participants' search performance was impacted by increases in set size (the number of items in the visual display). Critically, we observed that participants' efficiency in classic search tasks (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
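Search efficiency in this sense is the slope of response time against set size (ms per item). A minimal sketch of how such a slope is computed, with made-up numbers rather than the study's data:

```python
# Hypothetical mean response times at four set sizes; the slope of the
# fitted line is the classic efficiency measure (ms per item).
import numpy as np

set_sizes = np.array([4, 8, 16, 32])                  # items in the display
mean_rt_ms = np.array([620.0, 710.0, 930.0, 1350.0])  # illustrative means

slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rt_ms, 1)
print(f"search slope: {slope_ms_per_item:.1f} ms/item")  # ~26.3 ms/item here
```

A shallower slope indicates more efficient search; the study's key result is that this slope, measured on a screen, predicts the corresponding slope in immersive scenes.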
5. Moskowitz JB, Berger SA, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. The influence of movement-related costs when searching to act and acting to search. J Neurophysiol 2023; 129:115-130. PMID: 36475897; DOI: 10.1152/jn.00305.2022.
Abstract
Real-world search behavior often involves limb movements, either during search or after search. Here we investigated whether movement-related costs influence search behavior in two kinds of search tasks. In our visual search tasks, participants made saccades to find a target object among distractors and then moved a cursor, controlled by the handle of a robotic manipulandum, to the target. In our manual search tasks, participants moved the cursor to perform the search, placing it onto objects to reveal their identity as either a target or a distractor. In all tasks, there were multiple targets. Across experiments, we manipulated either the effort or time costs associated with movement such that these costs varied across the search space. We varied effort by applying different resistive forces to the handle, and we varied time costs by altering the speed of the cursor. Our analysis of cursor and eye movements during manual and visual search, respectively, showed that effort influenced manual search but did not influence visual search. In contrast, time costs influenced both visual and manual search. Our results demonstrate that, in addition to perceptual and cognitive factors, movement-related costs can also influence search behavior.

NEW & NOTEWORTHY: Numerous studies have investigated the perceptual and cognitive factors that influence decision making about where to look, or move, in search tasks. However, little is known about how search is influenced by movement-related costs associated with acting on an object once it has been visually located or acting during manual search. In this article, we show that movement time costs can bias visual and manual search and that movement effort costs bias manual search.
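As an illustration of how such costs could enter the choice of where to search next, the sketch below scores candidate objects by a weighted sum of effort (resistive force times distance) and time (distance divided by cursor speed) and picks the cheapest. The cost model, weights, and numbers are assumptions for the sketch, not the authors' analysis.

```python
# Cost-weighted choice of which object to inspect next. All values are
# illustrative: (distance, local resistive force, local cursor speed).
def movement_cost(dist: float, resistance: float, speed: float,
                  w_effort: float = 1.0, w_time: float = 1.0) -> float:
    effort = resistance * dist   # effort grows with force and distance
    time = dist / speed          # travel time at the local cursor speed
    return w_effort * effort + w_time * time

candidates = {"A": (0.10, 2.0, 0.5),
              "B": (0.25, 0.5, 0.5),
              "C": (0.15, 1.0, 0.2)}
best = min(candidates, key=lambda k: movement_cost(*candidates[k]))
print(best)  # "A": the lowest combined effort-plus-time cost here
```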
Affiliation(s)
- Joshua B Moskowitz: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Sarah A Berger: Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jolande Fooken: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Monica S Castelhano: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
6. Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. PMID: 35942922; DOI: 10.1177/09567976221091838.
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow: Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
7. Sullivan B, Ludwig CJH, Damen D, Mayol-Cuevas W, Gilchrist ID. Look-ahead fixations during visuomotor behavior: Evidence from assembling a camping tent. J Vis 2021; 21(3):13. PMID: 33688920; PMCID: PMC7961111; DOI: 10.1167/jov.21.3.13.
Abstract
Eye movements can support ongoing manipulative actions, but a class of so-called look-ahead fixations (LAFs) is related to future tasks. We examined LAFs in a complex natural task: assembling a camping tent. Tent assembly is a relatively uncommon task and requires the completion of multiple subtasks in sequence over a 5- to 20-minute duration. Participants wore a head-mounted camera and eye tracker, and subtasks and LAFs were annotated. We document four novel aspects of LAFs. First, LAFs were not random: their frequency was biased toward certain objects and subtasks. Second, latencies were longer than previously noted, with 35% of LAFs occurring within 10 seconds before motor manipulation and 75% within 100 seconds. Third, LAF behavior extends far into future subtasks: only 47% of LAFs were made to objects relevant to the current subtask, while 75% were to objects used within the next five steps. Last, LAFs were often directed repeatedly to the target before manipulation, suggesting memory volatility. LAFs with short fixation-action latencies have been hypothesized to benefit future visual search and/or motor manipulation. However, the diversity of LAFs suggests they may also reflect scene exploration and task relevance, as well as longer-term problem solving and task planning.
Affiliation(s)
- Brian Sullivan: School of Psychological Sciences, University of Bristol, Bristol, UK
- Dima Damen: Department of Computer Science, University of Bristol, Bristol, UK
- Iain D Gilchrist: School of Psychological Sciences, University of Bristol, Bristol, UK
8. Hu Z, Bulling A, Li S, Wang G. FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments. IEEE Trans Vis Comput Graph 2021; 27:2681-2690. PMID: 33750707; DOI: 10.1109/tvcg.2021.3067779.
Abstract
Human visual attention in immersive virtual reality (VR) is key for many important applications, such as content design, gaze-contingent rendering, or gaze-based interaction. However, prior work has typically focused on free-viewing conditions that have limited relevance for practical applications. We first collected eye tracking data from 27 participants performing a visual search task in four immersive VR environments. Based on this dataset, we provide a comprehensive analysis of the collected data and reveal correlations between users' eye fixations and other factors, i.e., users' historical gaze positions, task-related objects, saliency information of the VR content, and users' head rotation velocities. Based on this analysis, we propose FixationNet, a novel learning-based model to forecast users' eye fixations in the near future in VR. We evaluate the performance of our model for free-viewing and task-oriented settings and show that it outperforms the state of the art by a large margin of 19.8% (from a mean error of 2.93° to 2.35°) in free-viewing and of 15.1% (from 2.05° to 1.74°) in task-oriented situations. As such, our work provides new insights into task-oriented attention in virtual environments and guides future work on this important topic in VR research.
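As a rough illustration of the forecasting setup (not the authors' architecture), the sketch below maps the four feature groups identified in the analysis, past gaze positions, task-related object positions, content saliency, and head rotation velocity, to a predicted near-future fixation position. All shapes and layer sizes are assumptions.

```python
# Illustrative fixation-forecasting model; feature dimensions, layer sizes,
# and the plain MLP architecture are assumptions, not FixationNet itself.
import torch
import torch.nn as nn

class FixationForecaster(nn.Module):
    def __init__(self, gaze_steps: int = 10, n_objects: int = 4):
        super().__init__()
        # past gaze (x, y), object positions, saliency features, head velocity
        in_dim = gaze_steps * 2 + n_objects * 2 + 16 + 3
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),  # predicted (x, y) fixation in the near future
        )

    def forward(self, gaze_hist, obj_pos, saliency_feat, head_vel):
        x = torch.cat([gaze_hist.flatten(1), obj_pos.flatten(1),
                       saliency_feat, head_vel], dim=1)
        return self.mlp(x)

model = FixationForecaster()
pred = model(torch.randn(8, 10, 2), torch.randn(8, 4, 2),
             torch.randn(8, 16), torch.randn(8, 3))
print(pred.shape)  # torch.Size([8, 2])
```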
9. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. PMID: 33791942; PMCID: PMC8084831; DOI: 10.3758/s13414-021-02256-7.
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson: School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
10. When Natural Behavior Engages Working Memory. Curr Biol 2020; 31:869-874.e5. PMID: 33278355; PMCID: PMC7902904; DOI: 10.1016/j.cub.2020.11.013.
Abstract
Working memory (WM) enables temporary storage and manipulation of information, supporting tasks that require bridging between perception and subsequent behavior. Its properties, such as its capacity, have been thoroughly investigated in highly controlled laboratory tasks. Much less is known about the utilization and properties of WM in natural behavior, when reliance on WM emerges as a natural consequence of interactions with the environment. We measured the trade-off between reliance on WM and gathering information externally during immersive behavior in an adapted object-copying task. By manipulating the locomotive demands required for task completion, we could investigate whether and how WM utilization changed as gathering information from the environment became more effortful. Reliance on WM was lower than WM capacity measures in typical laboratory tasks. A clear trade-off also occurred. As sampling information from the environment required increasing locomotion and time investment, participants relied more on their WM representations. This reliance on WM increased in a shallow and linear fashion and was associated with longer encoding durations. Participants' avoidance of WM usage showcases a fundamental dependence on external information during ecological behavior, even if the potentially storable information is well within the capacity of the cognitive system. These foundational findings highlight the importance of using immersive tasks to understand how cognitive processes unfold within natural behavior. Our novel VR approach effectively combines the ecological validity, experimental rigor, and sensitive measures required to investigate the interplay between memory and perception in immersive behavior.
Highlights
- Gaze provides a measure of working-memory (WM) usage during natural behavior
- Natural reliance on WM is low even when searching for objects externally is effortful
- WM utilization increases linearly as searching for objects requires more locomotion
- The trade-off between using WM versus external sampling affects performance
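One way to picture the reported trade-off is a simple linear model in which the number of items encoded per visit to the to-be-copied model grows shallowly with the locomotion cost of re-sampling it. The functional form and numbers below are assumptions for illustration only.

```python
# Illustrative linear trade-off: rely more on WM (encode more items per
# visit) as external sampling becomes more effortful. Parameters are made up.
def items_encoded_per_visit(locomotion_cost_s: float,
                            baseline_items: float = 1.5,
                            slope_items_per_s: float = 0.1) -> float:
    """Shallow linear increase in WM reliance with sampling effort."""
    return baseline_items + slope_items_per_s * locomotion_cost_s

for cost_s in (0.0, 5.0, 10.0):
    print(f"{cost_s:>4.1f} s detour -> "
          f"{items_encoded_per_visit(cost_s):.2f} items per visit")
```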
11. Baxter R, Smith AD. Searching for individual determinants of probabilistic cueing in large-scale immersive virtual environments. Q J Exp Psychol (Hove) 2020; 75:328-347. PMID: 33089735; DOI: 10.1177/1747021820969148.
Abstract
Large-scale search behaviour is an everyday occurrence, yet its underlying mechanisms are not commonly examined within experimental psychology. Key to efficient search behaviour is the sensitivity to environmental cues that might guide exploration, such as a target appearing with greater regularity in one region than another. Spatial cueing by probability has been examined in visual search paradigms, but the few studies that have addressed its contribution to large-scale search and foraging present contrasting accounts of the conditions under which a cueing effect can be reliably observed. In the present study, participants physically searched a virtual arena by inspecting identical locations until they found the target. The target was always present, although its location was probabilistically defined so that it appeared in the cued hemispace on 80% of trials. In Experiment 1, when participants' starting positions were stable, a probabilistic cueing effect was observed, with a strong bias towards searching the cued side. In Experiment 2, the starting position changed across the experiment, such that the cued region was defined in allocentric co-ordinates only. In this case, a probabilistic cueing effect was not observed across the sample. Analysis of individual differences in Experiment 2 suggests, however, that some participants may have learned the contingency underpinning the target's location, although these differences were unrelated to other tests of visuospatial ability. These results suggest that the ability to learn the likelihood of an item's fixed location when starting from different perspectives is driven by individual differences in other cognitive or perceptual factors.
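The cueing contingency itself is simple to state in code: on each trial the target falls in the cued hemispace with probability 0.8, and in the uncued hemispace otherwise. A minimal sketch (labels and structure are illustrative):

```python
# Generate target sides under the 80% probabilistic cueing contingency.
import random

def target_side(cued: str = "left", p_cued: float = 0.8) -> str:
    """Return the hemispace containing the target on one trial."""
    other = "right" if cued == "left" else "left"
    return cued if random.random() < p_cued else other

random.seed(1)
trials = [target_side() for _ in range(20)]
print(trials.count("left") / len(trials))  # close to 0.8 over many trials
```

Experiment 2's manipulation amounts to fixing the cued side in allocentric (room-centered) coordinates while the participant's starting position, and hence the cue's egocentric direction, varies.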
Affiliation(s)
- Rory Baxter: School of Psychology, University of Plymouth, Plymouth, UK
12. Rubin M, Minns S, Muller K, Tong MH, Hayhoe MM, Telch MJ. Avoidance of social threat: Evidence from eye movements during a public speaking challenge using 360°-video. Behav Res Ther 2020; 134:103706. PMID: 32920165; PMCID: PMC7530106; DOI: 10.1016/j.brat.2020.103706.
Abstract
Social anxiety (SA) is thought to be maintained in part by avoidance of social threat, which exacerbates fear of negative evaluation. Yet relatively little research has evaluated the connection between social anxiety and attentional processes in realistic contexts. The current pilot study examined patterns of attention (eye movements) in a commonly feared social context: public speaking. Participants (N = 84) with a range of social anxiety symptoms gave an impromptu five-minute speech in an immersive 360°-video environment while wearing a virtual reality headset equipped with eye-tracking hardware. We found evidence for the expected interaction between fear of public speaking and social threat (uninterested vs. interested audience members). Consistent with prediction, participants with greater fear of public speaking looked fewer times at uninterested members of the audience (high social threat) than at interested members of the audience (low social threat), b = 0.418, p = 0.046, 95% CI [0.008, 0.829]. Analyses of attentional indices over the course of the speech revealed that the interaction between fear of public speaking and gaze on audience members was significant only in the first three minutes. Our results support theoretical models implicating avoidance of social threat as a maintaining factor in social anxiety. Future research is needed to test whether guided attentional training targeting in vivo attentional avoidance may improve clinical outcomes for those presenting with social anxiety.
Affiliation(s)
- Mikael Rubin: Department of Psychology, The University of Texas at Austin, TX, USA
- Sean Minns: Department of Psychology, The University of Texas at Austin, TX, USA
- Karl Muller: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Matthew H Tong: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Michael J Telch: Department of Psychology, The University of Texas at Austin, TX, USA
13. Castelhano MS, Krzyś K. Rethinking Space: A Review of Perception, Attention, and Memory in Scene Processing. Annu Rev Vis Sci 2020; 6:563-586. PMID: 32491961; DOI: 10.1146/annurev-vision-121219-081745.
Abstract
Scene processing is fundamentally influenced and constrained by spatial layout and spatial associations with objects. However, semantic information has played a vital role in propelling our understanding of real-world scene perception forward. In this article, we review recent advances in assessing how spatial layout and spatial relations influence scene processing. We examine the organization of the larger environment and how we take full advantage of spatial configurations independently of semantic information. We demonstrate that a clear differentiation of spatial from semantic information is necessary to advance research in the field of scene processing.
Affiliation(s)
- Monica S Castelhano: Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Karolina Krzyś: Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
14. Does active learning benefit spatial memory during navigation with restricted peripheral field? Atten Percept Psychophys 2020; 82:3033-3047. PMID: 32346822; DOI: 10.3758/s13414-020-02038-7.
Abstract
Spatial learning of real-world environments is impaired with a severely restricted peripheral field of view (FOV). In prior research, the effects of restricted FOV on spatial learning have been studied using passive learning paradigms: learners walk along pre-defined paths and are told the location of targets to be remembered. Our research has shown that mobility demands and environmental complexity may contribute to impaired spatial learning with restricted FOV through attentional mechanisms. Here, we examine the role of active navigation, both in locomotion and in target search. First, we compared the effects of active versus passive locomotion (walking with a physical guide versus being pushed in a wheelchair) on a task of pointing to remembered targets in participants with a simulated 10° FOV. We found similar performance between active and passive locomotion conditions in both simpler (Experiment 1) and more complex (Experiment 2) spatial learning tasks. Experiment 3 required active search for named targets to remember while navigating, using both a mild and a severe FOV restriction. We observed no difference in pointing accuracy between the two FOV restrictions but an increase in attentional demands with the severely restricted FOV. Experiment 4 compared active and passive search with severe FOV restriction, within subjects. We found no difference in pointing accuracy but observed an increase in cognitive load in active versus passive search. Taken together, in the context of navigating with restricted FOV, neither locomotion method nor level of active search affected spatial learning. However, the greater cognitive demands could have counteracted the potential advantage of the active learning conditions.
15. Domínguez-Zamora FJ, Lajoie K, Miller AB, Marigold DS. Age-related changes in gaze sampling strategies during obstacle navigation. Gait Posture 2020; 76:252-258. PMID: 31877549; DOI: 10.1016/j.gaitpost.2019.11.015.
Abstract
BACKGROUND: Appropriate coordination of gaze behavior and body motion is essential for navigating cluttered environments. This is often complicated by having to contend with a concurrent secondary task, like engaging in a conversation or looking for relevant landmarks. However, there is little evidence of how aging and multitasking affect how gaze is deployed during obstacle navigation to guide our movements.
RESEARCH QUESTION: How do gaze patterns differ between young and older adults when navigating around a series of obstacles under dual-task conditions?
METHODS: 17 young adults and 17 older adults navigated around vertically oriented obstacles in isolation (i.e., single-task condition) and while engaging in a concurrent backward-counting or visual search task (i.e., dual-task conditions). In the visual search condition, participants had to identify the location of an object (i.e., a black shape on a tile) along the perimeter of the pathway, simulating a landmark. We quantified the spatial-temporal pattern of gaze to obstacles relative to body position, as well as the frequency and duration of gaze fixations to obstacles, route-planning features, and landmarks.
RESULTS: We found that older adults transferred gaze away from obstacles earlier and contacted obstacles more frequently than young adults. However, the proportion of fixation number and duration to obstacles did not differ between groups in any condition. In addition, older adults had to allocate gaze to landmarks to a greater extent in the visual search condition, at the expense of fixating route-planning areas, to maintain similar search performance in the dual-task condition compared to the single-task condition.
SIGNIFICANCE: Older adults use different gaze strategies and have greater difficulty under dual-tasking conditions than young adults when navigating around a series of obstacles. We suggest that deficits in visual working memory and/or divided attention may explain these results.
Affiliation(s)
- F Javier Domínguez-Zamora: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
- Kim Lajoie: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
- Andreas B Miller: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
- Daniel S Marigold: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
16. Domínguez-Zamora FJ, Marigold DS. Motor cost affects the decision of when to shift gaze for guiding movement. J Neurophysiol 2019; 122:378-388. PMID: 31141440; DOI: 10.1152/jn.00027.2019.
Abstract
Frequent gait modifications are often required to navigate our world. These can involve long or wide steps or changes in direction. People generally prefer to minimize the motor cost (or effort) of a movement, although with changes in gait this is not always possible. The decision of when and where to shift gaze is critical for controlling motor actions, since vision informs the brain about the available choices for movement (in this case, where to step). Here we asked how motor cost influences the allocation of gaze. To address this, we had participants walk and step to the center of sequential targets on the ground. We manipulated the motor cost associated with controlling foot placement by varying the location of one target in the lateral direction on a trial-to-trial basis within environments with different numbers of targets. Costlier steps caused a switch from a gaze strategy of planning future steps to one favoring visual feedback of the current foot placement when participants had to negotiate another target immediately after. Specifically, costlier steps delayed gaze shifts away from the manipulated target. We show that this relates to the cost of moving the leg and redirecting the body's center of mass from target to target. Overall, our results suggest that temporal gaze decisions are affected by motor costs associated with step-to-step demands of the environment. Moreover, they provide insight into what affects the coordination between the eyes and feet for the control of stable and accurate foot placement while walking.

NEW & NOTEWORTHY: Changes in gait allow us to navigate our world. For instance, one may step long or wide to avoid a spilled drink. The brain can direct gaze to gather relevant information for making these types of motor decisions; however, the factors affecting gaze allocation in natural behaviors are poorly understood. We show how the motor cost associated with a step influences the decision of when to redirect gaze to ensure accurate foot placement while walking.
Affiliation(s)
- F Javier Domínguez-Zamora: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, Canada
- Daniel S Marigold: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, Canada; Behavioural and Cognitive Neuroscience Institute, Simon Fraser University, Burnaby, British Columbia, Canada
17. Hayhoe MM, Matthis JS. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection. Interface Focus 2018; 8:20180009. PMID: 29951189; DOI: 10.1098/rsfs.2018.0009.
Abstract
The development of better eye and body tracking systems and more flexible virtual environments has allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information for achieving those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world, and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.
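One way to summarize the reviewed factors is as a value computation over candidate gaze targets that trades off task reward, current uncertainty, and the cost of the gaze shift. The scoring rule and numbers below are assumptions for illustration, not a model from the paper.

```python
# Illustrative gaze-target selection: look where reward is high, the current
# state estimate is uncertain (so vision adds information), and the shift is
# cheap. Candidate names, weights, and values are made up.
locations = {
    # name: (task reward, current uncertainty, gaze-shift cost)
    "path_ahead": (1.0, 0.8, 0.1),
    "obstacle":   (0.9, 0.3, 0.2),
    "landmark":   (0.4, 0.9, 0.4),
}

def gaze_value(reward: float, uncertainty: float, cost: float,
               w_u: float = 1.0, w_c: float = 1.0) -> float:
    return reward * (w_u * uncertainty) - w_c * cost

target = max(locations, key=lambda k: gaze_value(*locations[k]))
print(target)  # "path_ahead" under these illustrative numbers
```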
Affiliation(s)
- Mary M Hayhoe: Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA