1. Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024; 24:1. [PMID: 39226069 PMCID: PMC11373708 DOI: 10.1167/jov.24.9.1]
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
Affiliation(s)
- M Pilar Aivar: Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain. https://www.psicologiauam.es/aivar/
- Chia-Ling Li: Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA (present address: Apple Inc., Cupertino, California, USA)
- Matthew H Tong: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA (present address: IBM Research, Cambridge, Massachusetts, USA)
- Dmitry M Kit: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA (present address: F5, Boston, Massachusetts, USA)
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA

2. Rubin M, Muller K, Hayhoe MM, Telch MJ. Attentional heterogeneity in social anxiety disorder: Evidence from Hidden Markov Models. Behav Res Ther 2024; 173:104461. [PMID: 38134499 PMCID: PMC10872338 DOI: 10.1016/j.brat.2023.104461]
Abstract
There is some evidence for heterogeneity in attentional processes among individuals with social anxiety. However, there is limited work considering how attentional processes may differ as a mechanism in a naturalistic task-based context (e.g., public speaking). In this secondary analysis we tested attentional heterogeneity among individuals diagnosed with social anxiety disorder (N = 21) in the context of a virtual reality exposure treatment study. Participants completed a public speaking challenge in an immersive 360°-video virtual reality environment with eye tracking at pre-treatment, post-treatment, and at 1-week follow-up. Using a Hidden Markov Model (HMM) approach with clustering, we tested whether there were distinct profiles of attention at pre-treatment and whether there were changes following the intervention. As a secondary aim we tested whether the distinct attentional profiles at pre-treatment predicted differential treatment outcomes. We found two distinct attentional profiles at pre-treatment that we characterized as audience-focused and audience-avoidant. However, by the 1-week follow-up the two profiles were no longer meaningfully different. We found a meaningful difference between HMM groups for fear of public speaking at post-treatment (b = -8.54, 95% Highest Density Interval (HDI) [-16.00, -0.90], Bayes Factor (BF) = 8.31) but not at one-week follow-up (b = -5.83, 95% HDI [-13.25, 1.81], BF = 2.28). These findings provide support for heterogeneity in attentional processes among socially anxious individuals, but our findings indicate that this may change following treatment. Moreover, our results offer preliminary mechanistic evidence that patterns of avoidance may be specifically related to poorer treatment outcomes for virtual reality exposure therapy.
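For readers wanting to prototype a similar analysis, the sketch below fits one Gaussian hidden Markov model per participant's gaze trace and then clusters participants by their transition matrices. It is a minimal illustration using standard Python tooling (hmmlearn, scikit-learn); the state count, features, and clustering criterion are assumptions for demonstration, not the authors' exact pipeline.

```python
# Minimal sketch: per-participant Gaussian HMMs on 2D gaze samples,
# then k-means over flattened transition matrices to find attentional profiles.
import numpy as np
from hmmlearn import hmm
from sklearn.cluster import KMeans

def fit_gaze_hmm(gaze_xy, n_states=2, seed=0):
    """gaze_xy: (n_samples, 2) array of gaze coordinates for one participant."""
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="full",
                            n_iter=100, random_state=seed)
    model.fit(gaze_xy)
    return model

def cluster_participants(gaze_traces, n_states=2, n_profiles=2):
    """gaze_traces: list of (n_samples, 2) arrays, one per participant (hypothetical data)."""
    models = [fit_gaze_hmm(g, n_states) for g in gaze_traces]
    features = np.vstack([m.transmat_.ravel() for m in models])
    labels = KMeans(n_clusters=n_profiles, n_init=10, random_state=0).fit_predict(features)
    return labels  # e.g., 0 ~ "audience-focused", 1 ~ "audience-avoidant"
```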
Affiliation(s)
- Mikael Rubin: Department of Psychology, The University of Texas at Austin, TX, USA; Department of Psychology, Palo Alto University, CA, USA
- Karl Muller: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Michael J Telch: Department of Psychology, The University of Texas at Austin, TX, USA

3. Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE Trans Vis Comput Graph 2023; 29:4417-4425. [PMID: 37788210 DOI: 10.1109/tvcg.2023.3320259]
Abstract
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, guiding attention towards relevant areas based on the task or goal of the viewer. While this is well-known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodology, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior in a novel within-subjects design scheme. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were being recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.

4. Nachtnebel SJ, Cambronero-Delgadillo AJ, Helmers L, Ischebeck A, Höfler M. The impact of different distractions on outdoor visual search and object memory. Sci Rep 2023; 13:16700. [PMID: 37794077 PMCID: PMC10551016 DOI: 10.1038/s41598-023-43679-6]
Abstract
We investigated whether and how different types of search distractions affect visual search behavior and target memory while participants searched in a real-world environment. They either searched undistracted (control condition), listened to a podcast while searching (auditory distraction), counted down aloud at intervals of three while searching (executive working memory load), or were forced to stop the search on half of the trials (time pressure). In line with findings from laboratory settings, participants searched longer but made fewer errors when the target was absent than when it was present, regardless of distraction condition. Furthermore, compared to the auditory distraction condition, the executive working memory load led to higher error rates (but not longer search times). In a surprise memory test after the end of the search tasks, recognition was better for previously present targets than for absent targets. Again, this was regardless of the previous distraction condition, although significantly fewer targets were remembered by the participants in the executive working memory load condition than by those in the control condition. The findings suggest that executive working memory load, but likely not auditory distraction or time pressure, affected visual search performance and target memory in a real-world environment.
Affiliation(s)
- Linda Helmers: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria
- Anja Ischebeck: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria
- Margit Höfler: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria; Department for Dementia Research, University for Continuing Education Krems, Dr.-Karl-Dorrek-Straße 30, 3500 Krems, Austria

5. Segraves MA. Using Natural Scenes to Enhance our Understanding of the Cerebral Cortex's Role in Visual Search. Annu Rev Vis Sci 2023; 9:435-454. [PMID: 37164028 DOI: 10.1146/annurev-vision-100720-124033]
Abstract
Using natural scenes is an approach to studying the visual and eye movement systems that approximates how these systems function in everyday life. This review examines results from behavioral and neurophysiological studies using natural scene viewing in humans and monkeys. The use of natural scenes for the study of cerebral cortical activity is relatively new and presents challenges for data analysis. Methods and results from the use of natural scenes for the study of the visual and eye movement cortex are presented, with emphasis on the new insights this method provides and how they extend what is known about these cortical regions from conventional methods.
Affiliation(s)
- Mark A Segraves: Department of Neurobiology, Northwestern University, Evanston, Illinois, USA

6. Moskowitz JB, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. Visual search for reach targets in actionable space is influenced by movement costs imposed by obstacles. J Vis 2023; 23:4. [PMID: 37289172 PMCID: PMC10257340 DOI: 10.1167/jov.23.6.4]
Abstract
Real world search tasks often involve action on a target object once it has been located. However, few studies have examined whether movement-related costs associated with acting on located objects influence visual search. Here, using a task in which participants reached to a target object after locating it, we examined whether people take into account obstacles that increase movement-related costs for some regions of the reachable search space but not others. In each trial, a set of 36 objects (4 targets and 32 distractors) were displayed on a vertical screen and participants moved a cursor to a target after locating it. Participants had to fixate on an object to determine whether it was a target or distractor. A rectangular obstacle, of varying length, location, and orientation, was briefly displayed at the start of the trial. Participants controlled the cursor by moving the handle of a robotic manipulandum in a horizontal plane. The handle applied forces to simulate contact between the cursor and the unseen obstacle. We found that search, measured using eye movements, was biased to regions of the search space that could be reached without moving around the obstacle. This result suggests that when deciding where to search, people can incorporate the physical structure of the environment so as to reduce the movement-related cost of subsequently acting on the located target.
Affiliation(s)
- Joshua B Moskowitz: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jolande Fooken: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Monica S Castelhano: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada

7. Schuetz I, Karimpur H, Fiehler K. vexptoolbox: A software toolbox for human behavior studies using the Vizard virtual reality platform. Behav Res Methods 2023; 55:570-582. [PMID: 35322350 PMCID: PMC10027796 DOI: 10.3758/s13428-022-01831-6]
Abstract
Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have also started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard now enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox, designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for common tasks like creating, randomizing, and presenting trial-based experimental designs or saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
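As a rough illustration of the trial-handling boilerplate such a toolbox is meant to remove, the sketch below builds a randomized factorial trial list and writes results to a CSV file in plain Python. All names here are hypothetical; this is not the vexptoolbox or Vizard API, only the generic pattern it automates.

```python
# Minimal sketch of a trial-based experiment loop: build a randomized trial
# list from factor levels, run each trial, and save results to CSV.
import csv
import itertools
import random

def build_trials(factors, repetitions=2, seed=42):
    """factors: dict mapping factor name -> list of levels."""
    levels = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
    trials = levels * repetitions
    random.Random(seed).shuffle(trials)
    return trials

def save_results(rows, path="results.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    trials = build_trials({"target_side": ["left", "right"], "depth_cm": [30, 45, 60]})
    results = []
    for i, trial in enumerate(trials):
        # A real experiment would present stimuli here and record eye/hand data.
        results.append({"trial": i, **trial, "rt_ms": None})
    save_results(results)
```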
Affiliation(s)
- Immo Schuetz: Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Harun Karimpur: Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler: Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany

8. Moskowitz JB, Berger SA, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. The influence of movement-related costs when searching to act and acting to search. J Neurophysiol 2023; 129:115-130. [PMID: 36475897 DOI: 10.1152/jn.00305.2022]
Abstract
Real-world search behavior often involves limb movements, either during search or after search. Here we investigated whether movement-related costs influence search behavior in two kinds of search tasks. In our visual search tasks, participants made saccades to find a target object among distractors and then moved a cursor, controlled by the handle of a robotic manipulandum, to the target. In our manual search tasks, participants moved the cursor to perform the search, placing it onto objects to reveal their identity as either a target or a distractor. In all tasks, there were multiple targets. Across experiments, we manipulated either the effort or time costs associated with movement such that these costs varied across the search space. We varied effort by applying different resistive forces to the handle, and we varied time costs by altering the speed of the cursor. Our analysis of cursor and eye movements during manual and visual search, respectively, showed that effort influenced manual search but did not influence visual search. In contrast, time costs influenced both visual and manual search. Our results demonstrate that, in addition to perceptual and cognitive factors, movement-related costs can also influence search behavior. NEW & NOTEWORTHY: Numerous studies have investigated the perceptual and cognitive factors that influence decision making about where to look, or move, in search tasks. However, little is known about how search is influenced by movement-related costs associated with acting on an object once it has been visually located or acting during manual search. In this article, we show that movement time costs can bias visual and manual search and that movement effort costs bias manual search.
Affiliation(s)
- Joshua B Moskowitz: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Sarah A Berger: Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jolande Fooken: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Monica S Castelhano: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada

9. Rubin M, Muller K, Hayhoe MM, Telch MJ. Attention guidance augmentation of virtual reality exposure therapy for social anxiety disorder: a pilot randomized controlled trial. Cogn Behav Ther 2022; 51:371-387. [PMID: 35383544 PMCID: PMC9458616 DOI: 10.1080/16506073.2022.2053882]
Abstract
Biased attention to social threats has been implicated in social anxiety disorder. Modifying visual attention during exposure therapy offers a direct test of this mechanism. We developed and tested a brief virtual reality exposure therapy (VRET) protocol using 360°-video and eye tracking. Participants (N = 21) were randomized to either standard VRET or VRET + attention guidance training (AGT). Multilevel Bayesian models were used to test (1) whether there was an effect of condition over time and (2) whether post-treatment changes in gaze patterns mediated the effect of condition at follow-up. There was a large overall effect of the intervention on symptoms of social anxiety, as well as an effect of the AGT augmentation on changes in visual attention to audience members. There was weak evidence against an effect of condition on fear of public speaking and weak evidence supporting a mediation effect; however, these estimates were strongly influenced by model priors. Taken together, our findings suggest that attention can be modified within and during VRET and that modification of visual gaze avoidance may be causally linked to reductions in social anxiety. Replication with a larger sample size is needed.
Affiliation(s)
- Mikael Rubin: Department of Psychology, The University of Texas at Austin, TX, USA
- Karl Muller: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Michael J Telch: Department of Psychology, The University of Texas at Austin, TX, USA

10. Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. [PMID: 35942922 DOI: 10.1177/09567976221091838]
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow: Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt

11. Selective visual attention during public speaking in an immersive context. Atten Percept Psychophys 2022; 84:396-407. [PMID: 35064557 PMCID: PMC8993214 DOI: 10.3758/s13414-021-02430-x]
Abstract
It has recently become feasible to study selective visual attention to social cues in increasingly ecologically valid ways. In this secondary analysis, we examined gaze behavior in response to the actions of others in a social context. Participants (N = 84) were asked to give a 5-minute speech to a five-member audience that had been filmed in 360° video, displayed in a virtual reality headset containing a built-in eye tracker. Audience members were coached to make movements that would indicate interest or lack of interest (e.g., nodding vs. looking away). The goal of this paper was to analyze whether these actions influenced the speaker's gaze. We found that participants showed reliable evidence of gaze towards audience member actions in general, and towards audience member actions involving their phone specifically (compared with other actions like looking away or leaning back). However, there were no differences in gaze towards actions reflecting interest (like nodding) compared with actions reflecting lack of interest (like looking away). Participants were more likely to look away from audience member actions as well, but there were no specific actions that elicited looking away more or less. Taken together, these findings suggest that the actions of audience members are broadly influential in motivating gaze behaviors in a realistic, contextually embedded (public speaking) setting. Further research is needed to examine the ways in which these findings can be elucidated in more controlled laboratory environments as well as in the real world.

12. Traner MR, Bromberg-Martin ES, Monosov IE. How the value of the environment controls persistence in visual search. PLoS Comput Biol 2021; 17:e1009662. [PMID: 34905548 PMCID: PMC8714092 DOI: 10.1371/journal.pcbi.1009662]
Abstract
Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence this behavior is associated with a complementary increased motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target was already found, and was associated with increased exploratory gaze to objects in the environment. A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.
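As background for the "maximum reward per unit time" criterion mentioned above, the standard marginal value theorem can be written as follows; this is the classic formulation, not the authors' novel foraging model.

```latex
% Long-run reward rate over patches i, with travel time \tau_i and
% gain g_i(t_i) accumulated after t_i seconds in patch i:
R = \frac{\sum_i g_i(t_i)}{\sum_i \left( \tau_i + t_i \right)}
% An optimal forager leaves patch i when the instantaneous gain rate
% drops to the maximized long-run rate R^*:
\frac{d g_i(t_i)}{d t_i} \le R^*
```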
Affiliation(s)
- Michael R. Traner: Department of Biomedical Engineering, Washington University, St. Louis, Missouri, United States of America
- Ethan S. Bromberg-Martin: Department of Neuroscience, Washington University School of Medicine, St. Louis, Missouri, United States of America
- Ilya E. Monosov: Department of Biomedical Engineering, Washington University, St. Louis, Missouri, United States of America; Department of Neuroscience, Washington University School of Medicine, St. Louis, Missouri, United States of America; Department of Neurosurgery, Washington University, St. Louis, Missouri, United States of America; Pain Center, Washington University, St. Louis, Missouri, United States of America; Department of Electrical Engineering, Washington University, St. Louis, Missouri, United States of America

13. Plewan T, Rinkenauer G. Visual search in virtual 3D space: the relation of multiple targets and distractors. Psychol Res 2021; 85:2151-2162. [PMID: 33388993 PMCID: PMC8357743 DOI: 10.1007/s00426-020-01392-3]
Abstract
Visual search and attentional alignment in 3D space are potentially modulated by information in unattended depth planes. The number of relevant and irrelevant items as well as their spatial relations may be regarded as factors which contribute to such effects. On a behavioral level, it might be different whether multiple distractors are presented in front of or behind target items. However, several studies revealed that attention cannot be restricted to a single depth plane. To further investigate this issue, two experiments were conducted. In the first experiment, participants searched for (multiple) targets in one depth plane, while non-target items (distractors) were simultaneously presented in this or another depth plane. In the second experiment, an additional spatial cue was presented with different validities to highlight the target position. Search durations were generally shorter when the search array contained two additional targets and were markedly longer when three distractors were displayed. The latter effect was most pronounced when a single target and three distractors coincided in the same depth plane and this effect persisted even when the target position was validly cued. The study reveals that the depth relation of target and distractor stimuli was more important than the absolute distance between these objects. Furthermore, the present findings suggest that within an attended depth plane, irrelevant information elicits strong interference. In sum, this study provides further evidence that allocation of attention is a flexible process which may be modulated by a variety of perceptual and cognitive factors.
Affiliation(s)
- Thorsten Plewan: Department of Ergonomics, Leibniz Research Centre for Working Environment and Human Factors Dortmund, Ardeystr. 67, 44139 Dortmund, Germany; Psychology School, Hochschule Fresenius - University of Applied Sciences Düsseldorf, Düsseldorf, Germany
- Gerhard Rinkenauer: Department of Ergonomics, Leibniz Research Centre for Working Environment and Human Factors Dortmund, Ardeystr. 67, 44139 Dortmund, Germany

14. Enders LR, Smith RJ, Gordon SM, Ries AJ, Touryan J. Gaze Behavior During Navigation and Visual Search of an Open-World Virtual Environment. Front Psychol 2021; 12:681042. [PMID: 34434140 PMCID: PMC8380848 DOI: 10.3389/fpsyg.2021.681042]
Abstract
Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, an increasing number of vision researchers are employing virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while subjects freely navigated through a rich, open-world virtual environment. Within this environment, subjects completed a visual search task where they were asked to find and count occurrence of specific targets among numerous distractor items. We assigned each participant into one of four target conditions: Humvees, motorcycles, aircraft, or furniture. Our results show a statistically significant relationship between gaze behavior and target objects across Target Conditions with increased visual attention toward assigned targets. Specifically, we see an increase in the number of fixations and an increase in dwell time on target relative to distractor objects. In addition, we included a divided attention task to investigate how search changed with the addition of a secondary task. With increased cognitive load, subjects slowed their speed, decreased gaze on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings and support that complex virtual environments can be used for active visual search experimentation, maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic (open-world) virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an un-timed visual search task while actively navigating.
Affiliation(s)
- Anthony J Ries: DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States; Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, United States
- Jonathan Touryan: DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States

15. Franchak JM, McGee B, Blanch G. Adapting the coordination of eyes and head to differences in task and environment during fully-mobile visual exploration. PLoS One 2021; 16:e0256463. [PMID: 34415981 PMCID: PMC8378697 DOI: 10.1371/journal.pone.0256463]
Abstract
How are eyes and head adapted to meet the demands of visual exploration in different tasks and environments? In two studies, we measured the horizontal movements of the eyes (using mobile eye tracking in Studies 1 and 2) and the head (using inertial sensors in Study 2) while participants completed a walking task and a search and retrieval task in a large, outdoor environment. We found that the spread of visual exploration was greater while searching compared with walking, and this was primarily driven by increased movement of the head as opposed to the eyes. The contributions of the head to gaze shifts of different eccentricities was greater when searching compared to when walking. Findings are discussed with respect to understanding visual exploration as a motor action with multiple degrees of freedom.
Affiliation(s)
- John M. Franchak: Department of Psychology, University of California, Riverside, Riverside, California, United States of America
- Brianna McGee: Department of Psychology, University of California, Riverside, Riverside, California, United States of America
- Gabrielle Blanch: Department of Psychology, University of California, Riverside, Riverside, California, United States of America

16. David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021; 21:3. [PMID: 34251433 PMCID: PMC8287039 DOI: 10.1167/jov.21.7.3]
Abstract
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. The segregation of the visual fields has been established primarily by on-screen experiments. We conducted a gaze-contingent experiment in virtual reality in order to test how the perceived roles of central and peripheral vision translated to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention by implementing a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
Affiliation(s)
- Erwan Joël David: Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Julia Beitner: Department of Psychology, Goethe-Universität, Frankfurt, Germany

17. Sullivan B, Ludwig CJH, Damen D, Mayol-Cuevas W, Gilchrist ID. Look-ahead fixations during visuomotor behavior: Evidence from assembling a camping tent. J Vis 2021; 21:13. [PMID: 33688920 PMCID: PMC7961111 DOI: 10.1167/jov.21.3.13]
Abstract
Eye movements can support ongoing manipulative actions, but a class of so-called look ahead fixations (LAFs) are related to future tasks. We examined LAFs in a complex natural task—assembling a camping tent. Tent assembly is a relatively uncommon task and requires the completion of multiple subtasks in sequence over a 5- to 20-minute duration. Participants wore a head-mounted camera and eye tracker. Subtasks and LAFs were annotated. We document four novel aspects of LAFs. First, LAFs were not random and their frequency was biased to certain objects and subtasks. Second, latencies are larger than previously noted, with 35% of LAFs occurring within 10 seconds before motor manipulation and 75% within 100 seconds. Third, LAF behavior extends far into future subtasks, because only 47% of LAFs are made to objects relevant to the current subtask. Seventy-five percent of LAFs are to objects used within five upcoming steps. Last, LAFs are often directed repeatedly to the target before manipulation, suggesting memory volatility. LAFs with short fixation–action latencies have been hypothesized to benefit future visual search and/or motor manipulation. However, the diversity of LAFs suggest they may also reflect scene exploration and task relevance, as well as longer term problem solving and task planning.
Affiliation(s)
- Brian Sullivan: School of Psychological Sciences, University of Bristol, Bristol, UK
- Dima Damen: Department of Computer Science, University of Bristol, Bristol, UK
- Iain D Gilchrist: School of Psychological Sciences, University of Bristol, Bristol, UK

18. Hu Z, Bulling A, Li S, Wang G. FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments. IEEE Trans Vis Comput Graph 2021; 27:2681-2690. [PMID: 33750707 DOI: 10.1109/tvcg.2021.3067779]
Abstract
Human visual attention in immersive virtual reality (VR) is key for many important applications, such as content design, gaze-contingent rendering, or gaze-based interaction. However, prior works typically focused on free-viewing conditions that have limited relevance for practical applications. We first collect eye tracking data of 27 participants performing a visual search task in four immersive VR environments. Based on this dataset, we provide a comprehensive analysis of the collected data and reveal correlations between users' eye fixations and other factors, i.e. users' historical gaze positions, task-related objects, saliency information of the VR content, and users' head rotation velocities. Based on this analysis, we propose FixationNet - a novel learning-based model to forecast users' eye fixations in the near future in VR. We evaluate the performance of our model for free-viewing and task-oriented settings and show that it outperforms the state of the art by a large margin of 19.8% (from a mean error of 2.93° to 2.35°) in free-viewing and of 15.1% (from 2.05° to 1.74°) in task-oriented situations. As such, our work provides new insights into task-oriented attention in virtual environments and guides future work on this important topic in VR research.
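For reference, the quoted improvement margins follow directly from the reported mean angular errors; this is simply the relative-error calculation, shown here for clarity.

```latex
\frac{2.93^\circ - 2.35^\circ}{2.93^\circ} \approx 0.198 \; (19.8\%),
\qquad
\frac{2.05^\circ - 1.74^\circ}{2.05^\circ} \approx 0.151 \; (15.1\%)
```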

19. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942 PMCID: PMC8084831 DOI: 10.3758/s13414-021-02256-7]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson: School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK

20. Bruning AL, Lewis-Peacock JA. Long-term memory guides resource allocation in working memory. Sci Rep 2020; 10:22161. [PMID: 33335170 PMCID: PMC7747625 DOI: 10.1038/s41598-020-79108-1]
Abstract
Working memory capacity is incredibly limited and thus it is important to use this resource wisely. Prior knowledge in long-term memory can aid in efficient encoding of information by allowing for the prioritization of novel stimuli over familiar ones. Here we used a full-report procedure in a visual working memory paradigm, where participants reported the location of six colored circles in any order, to examine the influence of prior information on resource allocation in working memory. Participants learned that one of the items appeared in a restricted range of locations, whereas the remaining items could appear in any location. We found that participants' memory performance benefited from learning this prior information. Specifically, response precision increased for all items when prior information was available for one of the items. Responses for both familiar and novel items were systematically ordered from highest to lowest precision. Participants tended to report the familiar item in the second half of the six responses and did so with greater precision than for novel items. Moreover, novel items that appeared near the center of the prior location were reported with worse precision than novel items that appeared elsewhere. This shows that people strategically allocated working memory resources by ignoring information that appeared in predictable locations and prioritizing the encoding of information that appeared in unpredictable locations. Together these findings demonstrate that people rely on long-term memory not only for remembering familiar items, but also for the strategic allocation of their limited capacity working memory resources.
Affiliation(s)
- Allison L Bruning: Department of Psychology, Center for Learning and Memory, University of Texas at Austin, 108 E Dean Keeton St, Stop A8000, Austin, TX 78712, USA
- Jarrod A Lewis-Peacock: Department of Psychology, Center for Learning and Memory, University of Texas at Austin, 108 E Dean Keeton St, Stop A8000, Austin, TX 78712, USA

21. Rubin M, Minns S, Muller K, Tong MH, Hayhoe MM, Telch MJ. Avoidance of social threat: Evidence from eye movements during a public speaking challenge using 360°-video. Behav Res Ther 2020; 134:103706. [PMID: 32920165 PMCID: PMC7530106 DOI: 10.1016/j.brat.2020.103706]
Abstract
Social anxiety (SA) is thought to be maintained in part by avoidance of social threat, which exacerbates fear of negative evaluation. Yet, relatively little research has been conducted to evaluate the connection between social anxiety and attentional processes in realistic contexts. The current pilot study examined patterns of attention (eye movements) in a commonly feared social context - public speaking. Participants (N = 84) with a range of social anxiety symptoms gave an impromptu five-minute speech in an immersive 360°-video environment, while wearing a virtual reality headset equipped with eye-tracking hardware. We found evidence for the expected interaction between fear of public speaking and social threat (uninterested vs. interested audience members). Consistent with prediction, participants with greater fear of public speaking looked fewer times at uninterested members of the audience (high social threat) compared to interested members of the audience (low social threat) b = 0.418, p = 0.046, 95% CI [0.008, 0.829]. Analyses of attentional indices over the course of the speech revealed that the interaction between fear of public speaking and gaze on audience members was only significant in the first three-minutes. Our results provide support for theoretical models implicating avoidance of social threat as a maintaining factor in social anxiety. Future research is needed to test whether guided attentional training targeting in vivo attentional avoidance may improve clinical outcomes for those presenting with social anxiety.
Affiliation(s)
- Mikael Rubin: Department of Psychology, The University of Texas at Austin, TX, USA
- Sean Minns: Department of Psychology, The University of Texas at Austin, TX, USA
- Karl Muller: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Matthew H Tong: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Michael J Telch: Department of Psychology, The University of Texas at Austin, TX, USA

22. Castelhano MS, Krzyś K. Rethinking Space: A Review of Perception, Attention, and Memory in Scene Processing. Annu Rev Vis Sci 2020; 6:563-586. [PMID: 32491961 DOI: 10.1146/annurev-vision-121219-081745]
Abstract
Scene processing is fundamentally influenced and constrained by spatial layout and spatial associations with objects. However, semantic information has played a vital role in propelling our understanding of real-world scene perception forward. In this article, we review recent advances in assessing how spatial layout and spatial relations influence scene processing. We examine the organization of the larger environment and how we take full advantage of spatial configurations independently of semantic information. We demonstrate that a clear differentiation of spatial from semantic information is necessary to advance research in the field of scene processing.
Affiliation(s)
- Monica S Castelhano: Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Karolina Krzyś: Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada

23. Harada Y, Ohyama J. The effect of task-irrelevant spatial contexts on 360-degree attention. PLoS One 2020; 15:e0237717. [PMID: 32810159 PMCID: PMC7437462 DOI: 10.1371/journal.pone.0237717]
Abstract
The effect of spatial contexts on attention is important for evaluating the risk of human errors and the accessibility of information in different situations. In traditional studies, this effect has been investigated using display-based and non-laboratory procedures. However, these two procedures are inadequate for measuring attention directed toward 360-degree environments and for controlling exogenous stimuli. In order to resolve these limitations, we used a virtual-reality-based procedure and investigated how spatial contexts of 360-degree environments influence attention. In the experiment, 20 students were asked to search for and report a target that was presented at any location in 360-degree virtual spaces as accurately and quickly as possible. Spatial contexts comprised a basic context (a grey and objectless space) and three specific contexts (a square grid floor, a cubic room, and an infinite floor). We found that response times for the task and eye movements were influenced by the spatial context of 360-degree surrounding spaces. In particular, although total viewing times for the contexts did not match the saliency maps, the differences in total viewing times between the basic and specific contexts did resemble the maps. These results suggest that attention comprises basic and context-dependent characteristics, and the latter are influenced by the saliency of 360-degree contexts even when the contexts are irrelevant to a task.
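One way to make the "resemblance" between viewing times and saliency maps concrete is to correlate a fixation-duration map with a saliency map defined over the same 360-degree grid. The sketch below is a generic illustration; the equirectangular grid, Gaussian smoothing, and Pearson correlation are assumptions, not the authors' analysis.

```python
# Minimal sketch: build a smoothed viewing-time map from fixations and
# correlate it with a saliency map of the same shape.
import numpy as np
from scipy.ndimage import gaussian_filter

def duration_map(fixations, shape=(90, 180), sigma=3.0):
    """fixations: iterable of (row, col, duration) on an equirectangular grid."""
    m = np.zeros(shape)
    for r, c, dur in fixations:
        m[int(r) % shape[0], int(c) % shape[1]] += dur
    return gaussian_filter(m, sigma)

def map_correlation(a, b):
    """Pearson correlation between two maps of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())
```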
Affiliation(s)
- Yuki Harada: National Institute of Advanced Industrial Science and Technology, Human Augmentation Research Center, Tsukuba, Ibaraki, Japan; Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Junji Ohyama: National Institute of Advanced Industrial Science and Technology, Human Augmentation Research Center, Tsukuba, Ibaraki, Japan

24. Saeedpour-Parizi MR, Hassan SE, Baniasadi T, Baute KJ, Shea JB. Hierarchical goal effects on center of mass velocity and eye fixations during gait. Exp Brain Res 2020; 238:2433-2443. [PMID: 32776171 DOI: 10.1007/s00221-020-05900-0]
Abstract
The purpose of this study was to determine the effect of the hierarchical goal structure of a yet-to-be performed task on gait and eye fixation behavior while walking to the location where the task was to be performed. Subjects performed different goal-directed tasks representing three hierarchical levels of planning. The first level of planning consisted of having the subject walk to a bookcase on which an object (a cup) was located in the middle of a shelf. The second level of planning consisted of walking to the bookcase and picking up the cup, which was in the middle, on the right side, or on the left side of the bookcase shelf. The third level of planning consisted of walking to the bookcase, picking up the cup, which was located in the middle of the bookcase shelf, and moving it to a higher shelf. Findings showed that hierarchical goals do affect center of mass velocity and eye fixation behavior. Center of mass velocity to the bookcase increased with an increase in the number of goals. Subjects decreased gait velocity as they approached the bookcase and adjusted their last steps to accommodate picking up the cup. The findings also demonstrated the important role of vision in controlling gait velocity in goal-directed tasks. Eye fixation duration was more important than the number of eye fixations in controlling gait velocity. Thus, the amount of information gained through object fixation duration is of greater importance than the number of fixations on the object for effective goal achievement.
Affiliation(s)
- Mohammad R Saeedpour-Parizi: Department of Kinesiology, School of Public Health, Indiana University, 1025 E 7th Street, Bloomington, IN 47405, USA
- Shirin E Hassan: School of Optometry, Indiana University, 800 E Atwater Avenue, Bloomington, IN 47405, USA
- Tayebeh Baniasadi: Department of Kinesiology, School of Public Health, Indiana University, 1025 E 7th Street, Bloomington, IN 47405, USA
- John B Shea: Department of Kinesiology, School of Public Health, Indiana University, 1025 E 7th Street, Bloomington, IN 47405, USA

25. Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760 DOI: 10.1016/j.cognition.2019.104147]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Collapse
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
| | - Dejan Draschkow
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; Department of Psychiatry, University of Oxford, Oxford, England, United Kingdom of Great Britain and Northern Ireland.
| | - Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
| |
Collapse
|
26
|
Nobre AC, Stokes MG. Premembering Experience: A Hierarchy of Time-Scales for Proactive Attention. Neuron 2019; 104:132-146. [PMID: 31600510 PMCID: PMC6873797 DOI: 10.1016/j.neuron.2019.08.030] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 08/07/2019] [Accepted: 08/20/2019] [Indexed: 12/30/2022]
Abstract
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold, that is, to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Collapse
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
| | - Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| |
Collapse
|
27
|
Shan S, Jia S, Lawson T, Yan L, Lin M, Liu Y. The Use of TAT Peptide-Functionalized Graphene as a Highly Nuclear-Targeting Carrier System for Suppression of Choroidal Melanoma. Int J Mol Sci 2019; 20:E4454. [PMID: 31509978 PMCID: PMC6769650 DOI: 10.3390/ijms20184454] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Revised: 09/04/2019] [Accepted: 09/04/2019] [Indexed: 12/14/2022] Open
Abstract
Tumor metastasis is a difficult challenge for researchers and clinicians. Targeted delivery of antitumor drugs toward tumor cell nuclei can be a practical approach to this issue. This work describes an efficient nuclear-targeting delivery system prepared from trans-activating transcriptional activator (TAT) peptide-functionalized graphene nanocarriers. The TAT peptide, originally observed in human immunodeficiency virus 1 (HIV-1), was incorporated with graphene via an edge-functionalized ball-milling method developed by the authors' research group. High tumor-targeting capability of the resulting nanocarrier was achieved through the strong affinity between TAT and the nuclei of cancer cells, along with the enhanced permeability and retention (EPR) effect of two-dimensional graphene nanosheets. Subsequently, a common antitumor drug, mitomycin C (MMC), was covalently linked to the TAT-functionalized graphene (TG) to form a nuclear-targeted nanodrug, MMC-TG. The presence of nanomaterials inside the nuclei of ocular choroidal melanoma (OCM-1) cells was shown using transmission electron microscopy (TEM) and confocal laser scanning microscopy. In vitro results from a Transwell co-culture system showed that most of the MMC-TG nanodrug was delivered in a targeted manner to the tumorous OCM-1 cells, while only a very small amount was delivered in a non-targeted manner to normal human retinal pigment epithelial (ARPE-19) cells. TEM results further confirmed that apoptosis of OCM-1 cells began with lysis of nuclear substances, followed by disappearance of the nuclear membrane and cytoplasm. This suggests that the as-synthesized MMC-TG is a promising nuclear-targeting nanodrug for resolving tumorous metastasis at its source.
Collapse
Affiliation(s)
- Suyan Shan
- Laboratory of Nanoscale Biosensing and Bioimaging, School of Ophthalmology and Optometry, School of Biomedical Engineering, Wenzhou Medical University, 270 Xueyuanxi Road, Wenzhou 325027, China.
| | - Shujuan Jia
- Laboratory of Nanoscale Biosensing and Bioimaging, School of Ophthalmology and Optometry, School of Biomedical Engineering, Wenzhou Medical University, 270 Xueyuanxi Road, Wenzhou 325027, China.
| | - Tom Lawson
- ARC Center of Excellence for Nanoscale Bio Photonics, Macquarie University, Sydney, NSW 2109, Australia.
| | - Lu Yan
- Laboratory of Nanoscale Biosensing and Bioimaging, School of Ophthalmology and Optometry, School of Biomedical Engineering, Wenzhou Medical University, 270 Xueyuanxi Road, Wenzhou 325027, China.
| | - Mimi Lin
- Laboratory of Nanoscale Biosensing and Bioimaging, School of Ophthalmology and Optometry, School of Biomedical Engineering, Wenzhou Medical University, 270 Xueyuanxi Road, Wenzhou 325027, China.
| | - Yong Liu
- Laboratory of Nanoscale Biosensing and Bioimaging, School of Ophthalmology and Optometry, School of Biomedical Engineering, Wenzhou Medical University, 270 Xueyuanxi Road, Wenzhou 325027, China.
| |
Collapse
|
28
|
Williams LH, Drew T. What do we know about volumetric medical image interpretation?: a review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. [PMID: 31286283 PMCID: PMC6614227 DOI: 10.1186/s41235-019-0171-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Accepted: 05/19/2019] [Indexed: 11/26/2022] Open
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
Collapse
|
29
|
Litchfield D, Donovan T. Expecting the initial glimpse: prior target knowledge activation or repeated search does not eliminate scene preview search benefits. J Cogn Psychol 2019. [DOI: 10.1080/20445911.2018.1555163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
| | - Tim Donovan
- Medical & Sport Sciences, University of Cumbria, Carlisle, UK
| |
Collapse
|
30
|
Hayhoe MM, Matthis JS. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection. Interface Focus 2018; 8:20180009. [PMID: 29951189 DOI: 10.1098/rsfs.2018.0009] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/08/2018] [Indexed: 11/12/2022] Open
Abstract
The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.
Collapse
Affiliation(s)
- Mary M Hayhoe
- Center for Perceptual Systems, University of Texas Austin, Austin, TX, USA
| | | |
Collapse
|
31
|
Olk B, Dinu A, Zielinski DJ, Kopper R. Measuring visual search and distraction in immersive virtual reality. R Soc Open Sci 2018; 5:172331. [PMID: 29892418 PMCID: PMC5990815 DOI: 10.1098/rsos.172331] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2017] [Accepted: 03/27/2018] [Indexed: 05/27/2023]
Abstract
An important issue of psychological research is how experiments conducted in the laboratory or theories based on such experiments relate to human performance in daily life. Immersive virtual reality (VR) allows control over stimuli and conditions at increased ecological validity. The goal of the present study was to accomplish a transfer of traditional paradigms that assess attention and distraction to immersive VR. To further increase ecological validity we explored attentional effects with daily objects as stimuli instead of simple letters. Participants searched for a target among distractors on the countertop of a virtual kitchen. Target-distractor discriminability was varied and the displays were accompanied by a peripheral flanker that was congruent or incongruent to the target. Reaction time was slower when target-distractor discriminability was low and when flankers were incongruent. The results were replicated in a second experiment in which stimuli were presented on a computer screen in two dimensions. The study demonstrates the successful translation of traditional paradigms and manipulations into immersive VR and lays a foundation for future research on attention and distraction in VR. Further, we provide an outline for future studies that should use features of VR that are not available in traditional laboratory research.
Collapse
Affiliation(s)
- Bettina Olk
- Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany
- HSD University of Applied Sciences, Waidmarkt 3 and 9, 50676 Cologne, Germany
| | - Alina Dinu
- Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistrasse 52, 20251 Hamburg, Germany
| | - David J. Zielinski
- Duke University, Pratt School of Engineering, FCiemas Building, 101 Science Dr., Durham, NC 27708-0271, USA
| | - Regis Kopper
- Duke University, Pratt School of Engineering, FCiemas Building, 101 Science Dr., Durham, NC 27708-0271, USA
| |
Collapse
|
32
|
Abstract
Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.
Collapse
Affiliation(s)
- Chia-Ling Li
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA.
| | - M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
| | | | - Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA
| |
Collapse
|
33
|
Of "what" and "where" in a natural search task: Active object handling supports object location memory beyond the object's identity. Atten Percept Psychophys 2017; 78:1574-84. [PMID: 27165170 DOI: 10.3758/s13414-016-1111-x] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Looking for as well as actively manipulating objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated whether physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped with a mobile eye tracker either searched for cued objects without object interaction (Find condition) or actively collected the objects they found (Handle condition). In the following free-recall task, identity memory was assessed, demonstrating superior memory for relevant compared to irrelevant objects, but no difference between the Handle and Find conditions. Subsequently, location memory was inferred via times to first fixation in a final object search task. Active object manipulation and task relevance interacted in that location memory for relevant objects was superior to that for irrelevant ones only in the Handle condition. Including previous object recall performance as a covariate in the linear mixed-model analysis of times to first fixation allowed us to explore the interaction between remembered/forgotten object identities and the execution of location memory. Identity memory performance predicted location memory in the Find but not the Handle condition, suggesting that active object handling leads to strong spatial representations independent of object identity memory. We argue that object handling facilitates the prioritization of relevant location information, but this might come at the cost of deprioritizing irrelevant information.
Collapse
|
34
|
Li CL, Aivar MP, Kit DM, Tong MH, Hayhoe MM. Memory and visual search in naturalistic 2D and 3D environments. J Vis 2017; 16:9. [PMID: 27299769 PMCID: PMC4913723 DOI: 10.1167/16.8.9] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.
Collapse
|
35
|
Abstract
Investigation of natural behavior has contributed a number of insights to our understanding of visual guidance of actions by highlighting the importance of behavioral goals and focusing attention on how vision and action play out in time. In this context, humans make continuous sequences of sensory-motor decisions to satisfy current behavioral goals, and the role of vision is to provide the relevant information for making good decisions in order to achieve those goals. This conceptualization of visually guided actions as a sequence of sensory-motor decisions has been formalized within the framework of statistical decision theory, which structures the problem and provides the context for much recent progress in vision and action. Components of a good decision include the task, which defines the behavioral goals, the rewards and costs associated with those goals, uncertainty about the state of the world, and prior knowledge.
Collapse
Affiliation(s)
- Mary M Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas 78712, USA
| |
Collapse
|
36
|
Tavakoli HR, Borji A, Laaksonen J, Rahtu E. Exploiting inter-image similarity and ensemble of extreme learners for fixation prediction using deep features. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.03.018] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
37
|
Tong MH, Zohar O, Hayhoe MM. Control of gaze while walking: Task structure, reward, and uncertainty. J Vis 2017; 17:28. [PMID: 28114501 PMCID: PMC5256682 DOI: 10.1167/17.1.28] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Accepted: 11/20/2016] [Indexed: 11/24/2022] Open
Abstract
While it is universally acknowledged that both bottom-up and top-down factors contribute to allocation of gaze, we currently have limited understanding of how top-down factors determine gaze choices in the context of ongoing natural behavior. One purely top-down model by Sprague, Ballard, and Robinson (2007) suggests that natural behaviors can be understood in terms of simple component behaviors, or modules, that are executed according to their reward value, with gaze targets chosen in order to reduce uncertainty about the particular world state needed to execute those behaviors. We explore the plausibility of the central claims of this approach in the context of a task where subjects walk through a virtual environment performing interceptions, avoidance, and path following. Many aspects of both walking direction choices and gaze allocation are consistent with this approach. Subjects use gaze to reduce uncertainty for task-relevant information that is used to inform action choices. Notably, the addition of motion to peripheral objects did not affect fixations when the objects were irrelevant to the task, suggesting that stimulus saliency was not a major factor in gaze allocation. The modular approach of independent component behaviors is consistent with the main aspects of performance, but there were a number of deviations suggesting that modules interact. Thus the model forms a useful, but incomplete, starting point for understanding top-down factors in active behavior.
Collapse
Affiliation(s)
- Matthew H Tong
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
| | - Oran Zohar
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
| | - Mary M Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
| |
Collapse
|
38
|
Meister MLR, Buffalo EA. Getting directions from the hippocampus: The neural connection between looking and memory. Neurobiol Learn Mem 2016; 134 Pt A:135-144. [PMID: 26743043 PMCID: PMC4927424 DOI: 10.1016/j.nlm.2015.12.004] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2015] [Revised: 12/12/2015] [Accepted: 12/16/2015] [Indexed: 01/29/2023]
Abstract
Investigations into the neural basis of memory in human and non-human primates have focused on the hippocampus and associated medial temporal lobe (MTL) structures. However, how memory signals from the hippocampus affect motor actions is unknown. We propose that approaching this question through eye movement, especially by assessing the changes in looking behavior that occur with experience, is a promising method for exposing neural computations within the hippocampus. Here, we review how looking behavior is guided by memory in several ways, some of which have been shown to depend on the hippocampus, and how hippocampal neural signals are modulated by eye movements. Taken together, these findings highlight the need for future research on how MTL structures interact with the oculomotor system. Probing how the hippocampus reflects and impacts motor output during looking behavior renders a practical path to advance our understanding of the hippocampal memory system.
Collapse
Affiliation(s)
- Miriam L R Meister
- Department of Physiology and Biophysics, University of Washington, USA; Washington National Primate Research Center, USA; University of Washington School of Medicine, USA
| | - Elizabeth A Buffalo
- Department of Physiology and Biophysics, University of Washington, USA; Washington National Primate Research Center, USA; University of Washington School of Medicine, USA
| |
Collapse
|
39
|
Affiliation(s)
- Gernot Horstmann
- Center for Interdisciplinary Research
- Department of Psychology
- CITEC, Bielefeld University, Bielefeld, Germany
| |
Collapse
|
40
|
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Collapse
Affiliation(s)
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany
| | | |
Collapse
|
41
|
Draschkow D, Wolfe JM, Võ MLH. Seek and you shall remember: scene semantics interact with visual search to build better memories. J Vis 2014; 14:10. [PMID: 25015385 DOI: 10.1167/14.8.10] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.
Collapse
Affiliation(s)
| | - Jeremy M Wolfe
- Harvard Medical School, Cambridge, MA, USA; Brigham and Women's Hospital, Boston, MA, USA
| | - Melissa L H Võ
- Harvard Medical School, Cambridge, MA, USA; Brigham and Women's Hospital, Boston, MA, USA; Johann Wolfgang Goethe-Universität, Frankfurt, Germany
| |
Collapse
|